
Electroweak milestones at CERN

Celebrating the 1973 discovery of weak neutral currents by the Gargamelle experiment and the 1983 discoveries of the W and Z bosons by the UA1 and UA2 experiments at the SppS, a highly memorable scientific symposium in the new CERN Science Gateway on 31 October brought the past, present and future of electroweak exploration into vivid focus. “Weak neutral currents were the foundation, the W and Z bosons the pillars, and the Higgs boson the crown of the 50-year-long journey that paved the electroweak way,” said former Gargamelle member Dieter Haidt (DESY) in his opening presentation.

History could have turned out differently, said Haidt, since both CERN and Brookhaven National Laboratory (BNL) were competing in the new era of high-energy neutrino physics: “The CERN beam was a flop initially, allowing BNL to snatch the muon-neutrino discovery in 1962, but a second attempt at CERN was better.” This led André Lagarrigue to dream of a giant bubble chamber, Gargamelle, financed and built by French institutes and operated by CERN with beams from the Proton Synchrotron (PS) from 1970 to 1976. Picking out the neutral-current signal from the neutron-cascade background was a major challenge, and a solution seemed hopeless until Haidt and his collaborators made a breakthrough regarding the meson component of the cascade.

The ten years between the discovery of neutral currents and the W and Z bosons are what took CERN from competent mediocrity to world leader

Lyn Evans

By early July 1973, it was realised that Gargamelle had seen a new effect. Paul Musset presented the results in the CERN auditorium on 19 July, yet by that autumn Gargamelle was “treated with derision” due to conflicting results from a competitor experiment in the US. “The Gargamelle claim is the worst thing to happen to CERN,” Director-General John Adams was said to have remarked. Jack Steinberger even wagered his cellar that it was wrong. Following further cross-checks by bombarding the detector with protons, the Gargamelle result stood firm. At the end of Haidt’s presentation, collaboration members who were present in the audience were recognised with a warm round of applause.

From the PS to the SPS
The neutral-current discovery and the subsequent Gargamelle measurement of the weak mixing angle made it clear not only that the electroweak theory was right but that the W and Z were within reach of the technology of the day. Moving from the PS to the SPS, Jean-Pierre Revol (Yonsei University) took the audience to the UA1 and UA2 experiments ten years later. Again, history could have taken a different turn. While CERN was working towards an e+e− collider to find the W and Z, said Revol, Carlo Rubbia proposed the radically different concept of a hadron collider — first to Fermilab, which, luckily for CERN, declined. All the ingredients were presented by Rubbia, Peter McIntyre and David Cline in 1976; the UA1 detector was proposed in 1978 and a second detector, UA2, was proposed by CERN six months later. UA1 was huge by the standards of the day, said Revol. “I was advised not to join, as there were too many people! It was a truly innovative project: the largest wire chamber ever built, with 4π coverage. The central tracker, which allowed online event displays, made UA1 the crucial stepping stone from bubble chambers to modern electronic detectors. The DAQ was also revolutionary. It was the beginning of computer clusters, with the same power as IBM mainframes.”

First SppS collisions took place on 10 July 1981, and by mid-January 1983 ten candidate W events had been spotted by the two experiments. The W discovery was officially announced at CERN on 25 January 1983. The search for the Z then started to ramp up, with the UA1 team monitoring the “express line” event display around the clock. On 30 April, Marie-Noëlle Minard called Revol to say she had seen the first Z. Rubbia announced the result at a seminar on 27 May, and UA2 confirmed the discovery on 7 June. “The SppS was a most unlikely project but was a game changer,” said Haidt. “It gave CERN tremendous recognition and paved the way for future collaborations, at LEP then LHC.”

Former UA2 member Pierre Darriulat (Vietnam National Space Centre) concurred: “It was not clear at all at that time if the collider would work, but the machine worked better than expected and the detectors better than we could dream of.” He also spoke powerfully about the competition between UA1 and UA2: “We were happy, but it was spoiled in a way because there was all this talk of who would be ‘first’ to discover. It was so childish, so ridiculous, so unscientific. Our competition with UA1 was intense, but friendly and somewhat fun. We were deeply conscious of our debt toward Carlo and Simon [van der Meer], so we shared their joy when they were awarded the Nobel prize two years later.” Darriulat emphasised the major role of the Intersecting Storage Rings and the input of theorists such as John Ellis and Mary K Gaillard, reserving particular praise for Rubbia. “Carlo did the hard work. We joined at the last moment. We regarded him as the King, even if we were not all in his court, and we enjoyed the rare times when we saw the King naked!”

Our competition with UA1 was intense, but friendly and somewhat fun

Pierre Darriulat

The ten years between the discovery of neutral currents and the W and Z bosons are what took CERN “from competent mediocrity to world leader”, said Lyn Evans in his account of the SppS feat. Simon van der Meer deserved special recognition, not just for his 1972 paper on stochastic cooling, but also for his earlier invention of the magnetic horn, which was pivotal in increasing the neutrino flux in Gargamelle. Evans explained the crucial roles of the Initial Cooling Experiment and the Antiproton Accumulator, and the many modifications needed to turn the SPS into a proton–antiproton collider. “All of this knowledge was put into the LHC, which worked from the beginning extremely well and continues to do so. One example was intrabeam scattering. Understanding this is what gives us the very long beam lifetimes at the LHC.”

Long journey
The electroweak adventure began long before CERN existed, pointed out Wolfgang Hollik, with 2023 also marking the 90th anniversary of Fermi’s four-fermion model. The incorporation of parity violation came in 1957 and the theory itself was constructed in the 1960s by Glashow, Salam, Weinberg and others. But it wasn’t until ‘t Hooft and Veltman showed that the theory is renormalizable in the early 1970s that it became a fully-fledged quantum field theory. This opened the door to precision electroweak physics and the ability to search for new particles, in particular the top quark and Higgs boson, that were not directly accessible to experiments. Electroweak theory also drove a new approach in theoretical particle physics based around working groups and common codes, noted Hollik.

The afternoon session of the symposium took participants deep into the myriad electroweak measurements at LEP and SLD (Guy Wilkinson, University of Oxford), the Tevatron and HERA (Bo Jayatilaka, Fermilab), and finally the LHC (Maarten Boonekamp, Université Paris-Saclay, and Elisabetta Manca, UCLA). The challenges of such measurements at a hadron collider, especially of the W-boson mass, were emphasised, as were their synergies with QCD measurements in improving the precision of parton distribution functions.

The electroweak journey is far from over, however, with the Higgs boson offering the newest exploration tool. Rounding off a day of excellent presentations and personal reflections, Rebeca Gonzalez Suarez (Uppsala University) imagined a symposium 40 years from now, when the proposed collider FCC-ee at CERN has been operating for 16 years and physicists have reconstructed nearly 10¹³ W and Z bosons. Such a machine would take the precision of electroweak physics into the keV realm, translating to a factor of seven increase in energy-scale reach. “All of this brings exciting challenges: accelerator R&D, machine–detector interface, detector design, software development, theory calculations,” she said. “If we want to make it happen, now is the time to join and contribute!”

Kaon physics at a turning point

Only two experiments worldwide are dedicated to the study of rare kaon decays: NA62 at CERN and KOTO at J-PARC in Japan. NA62 plans to conclude its efforts in 2025, and both experiments are aiming to reach important milestones on this timescale. The future experimental landscape for kaon physics beyond this date is by no means clear, however. With proposals for next-generation facilities such as HIKE at CERN and KOTO-II at J-PARC currently under scrutiny, more than 100 kaon experts met at CERN from 11 to 14 September for a hybrid workshop to take stock of the experimental and theoretical opportunities in kaon physics in the coming decades.

Kaons, which contain a strange quark and either a lighter up or down quark, have played a central role in the development of the Standard Model (SM). Augusto Ceccucci (CERN) pointed out that many of the SM’s salient features – including flavour mixing, parity violation, the charm quark and CP violation – were discovered through the study of kaons, leading to the Cabibbo–Kobayashi–Maskawa (CKM) quark mixing matrix. The full particle content of the SM was finally experimentally established at CERN with the Higgs-boson discovery in 2012, but many open questions remain.

The kaon’s special role in this context was the central topic of the workshop. The study of rare kaon decays provides unique sensitivity to new physics, up to energy scales higher than those directly accessible at collider experiments. In the SM, the rare decay of a charged or neutral kaon into a pion plus a pair of charged or neutral leptons is strongly suppressed, even more so than the analogous rare B-meson decays. This is due to the absence at tree level of flavour-changing neutral-current interactions (e.g. s → d) in the SM. Such a transition can only proceed at loop level, involving the creation of at least one very heavy (virtual) electroweak gauge boson (figure “Decayed”, left). While experimentally this suppression constitutes a formidable challenge in identifying the decay products amongst a variety of background signals, new-physics contributions could leave a measurable imprint through tree-level or virtual contributions. In contrast to rare B decays, the “gold-plated” rare kaon decay channels K+→π+νν and KL→π0νν do not suffer from large hadronic uncertainties and are experimentally clean due to the limited number of possible decay channels.

[Figure “Decayed”: diagrams of rare kaon decays]

The charged-kaon decay is currently being studied at NA62, and a measurement of its branching ratio with a precision of 15% is expected by 2025. However, as highlighted by NA62 physics coordinator Karim Massri (Lancaster University), to improve this measurement and thus significantly increase the likelihood of a discovery, the experimental precision must be reduced to the level of the theoretical prediction, i.e. 5%. This can only be achieved with a next-generation experiment. The HIKE experiment, a proposed high-intensity kaon factory at CERN currently under approval, would reach the 5% precision goal on the measurement of K+→π+νν during its first phase of operation. A second phase with a neutral KL beam, aiming at the first observation of the very rare decays KL→π0ℓ+ℓ−, is foreseen afterwards. With a setup and detectors optimised for the measurement of the most challenging processes, the HIKE programme would be able to achieve unprecedented precision on most K+ and KL decays.

For KOTO, Koji Shiomi and Hajime Nanjo reported on the experimental progress on KL→π0νν and presented a new bound on its branching ratio. A planned phase two of KOTO, if funded, aims to measure the branching ratio with a precision of 20%. Although principally designed for the study of (rare) bottom-quark decays, LHCb can also provide information about the rare decay of the shorter-lived KS. Radoslav Marchevski (EPFL Lausanne) presented the status and the prospects for a proposed LHCb Phase-II upgrade.

From the theory perspective, underpinned by impressive new perturbative, lattice-QCD and effective-field-theory calculations presented at the workshop, the planned measurement of K+→π+νν at HIKE clearly has discovery potential, remarked Gino Isidori (University of Zurich). Together with other rare decay channels such as KL→μ+μ−, KL→π0ℓ+ℓ− and K+→π+ℓ+ℓ− that would be measured by HIKE, added Giancarlo D’Ambrosio (INFN), combined global theory analyses of the experimental data will allow new physics to be discovered if it exists within the reach of the experiment, and solid constraints on new physics to be set otherwise.

A decision on HIKE and other proposed experiments in CERN’s North Area will take place in early December.

From the cosmos to the classroom

Although science education, communication and public outreach are distinct fields of research and practice, they often overlap and intertwine. Common to the three fields is their shared goal of increasing scientific literacy, improving attitudes to STEM (science, technology, engineering and mathematics) and empowering society to engage with and apply scientific knowledge in decision-making processes. In light of challenges such as climate change and rapid advances in artificial intelligence, achieving these goals has become more relevant than ever.

Science education, communication and outreach have developed from different origins, at different times, and in response to diverse public needs. The formation of science education as a proper discipline, for example, dates back to educational reform movements in the late 19th century. Science communication, on the other hand, is a relatively young field that only took a clear form in the second half of the 20th century in response to a growing awareness of the role and impact of scientific progress.

While it is true that practitioners often cross the disciplinary boundaries of these fields, education, communication and outreach today represent distinct professions, each with its own identity, methods and target groups. Whereas science educators tend to focus on individual learners, often in school settings, public-outreach professionals aim to inspire interest in and engagement with science among the general public in out-of-school settings. Moreover, the differences go beyond variations in target groups and domains. After all, the distinction between education and communication is substantial: many science journalists resist the suggestion that they serve a role in education, arguing that their primary goal is to provide information.

Questions then arise: how do these disciplines overlap, diverge and interact, and how have their practices evolved over time? And how do these evolutions affect our understanding of science and its place in society? As two academics whose career trajectories have spanned science, education and communication, we have experienced intersections and interactions between these fields up close and see exciting opportunities for the future of science engagement.

Magdalena: farewell to the deficit model

What stands out to me is the parallel development that science education and communication have undergone over the past decades. Despite their different origins, traditions, ideas, models and theories, all have seen a move away from simple one-way knowledge transmission to genuine and meaningful engagement with their respective target groups, whether that’s in a classroom, a public lecture or at a science festival.

In classrooms, there has been a noticeable shift from teacher-centred to student-centred instructional practices. In the past, science teachers used to be active (talking and explaining), while students were passive (listening). Today, the focus is the students and how to engage them actively in the learning process. A popular approach to engaging students is enquiry-based science education, where students take the lead (asking questions, formulating hypotheses, running experiments and drawing conclusions) and teachers act as facilitators.

Collaboration between science education researchers and practitioners is critical to improving science education

One excellent example of such an enquiry-based approach is Mission Gravity, an immersive virtual-reality (VR) programme for lower- and upper-secondary students (see “Mission gravity” image). Developed by the education and public outreach team at OzGrav, the Australian Research Council Centre of Excellence for Gravitational Wave Discovery, the programme aims to teach stellar evolution and scientific modelling by inviting students on a virtual field trip to nearby stars. The VR environment enables students to interact with stars, make measurements and investigate stellar remnants. By collecting data, forming hypotheses and trying to figure out how stars change over time, the students discover the laws of physics instead of merely hearing about them.

The shift towards student-centric education has been accompanied by an evolution in our understanding of student learning. Early learning theories leaned heavily on ideas of conditioning, treating learning as a predictable process that teachers could control through repetition and reinforcement. Contemporary models consider cognitive functions, including perception, problem-solving and imagination, and recognise the crucial role of social and cultural contexts in learning science. Nowadays, we acknowledge that education is most meaningful when students take responsibility for their learning and connect the subject matter to their own lives.

Virtual-reality experience

For instance, my PhD project on general-relativity education leveraged sociocultural learning theory to design an interactive learning environment, incorporating collaborative activities that encourage students to articulate and discuss physics concepts. This “talking physics” approach is great for fostering conceptual understanding in modern physics, and we refined the approach further through iterative trials with physics teachers to ensure an authentic learning experience. Again, collaboration between science education researchers and practitioners (in this case, physics teachers) is critical to improving science education.

Similarly, science communication has transitioned from deficit models to more dialogic and participatory ones. The earlier deficit models perceived society as lacking scientific understanding and envisaged a one-way flow of information from expert scientists to a passive audience – quite similar to the behaviourist approach prevalent in the early days of science education. Modern science communication practices foster a dialogue where scientists and the public engage in meaningful discussions. In particular, the participatory model positions scientists and the public as equal participants in an open conversation about science. Here, the interaction is as critical as the outcomes of the discussions. This places emphasis on the quality of communication and meaning-making, similar to what many consider the goals of good science education (see “Increasing interaction” figure).

To illustrate a participatory approach to science communication, consider the City-Lab: Galileo initiative in Zurich. This initiative integrates theatre, podcasts and direct interactions between scientists, actors and citizens to foster dynamic conversations about the role of science in society. A range of media and formats were employed to engage the public beyond traditional forms, ranging from audio-visual exhibits to experiences where the public could attend a play and then engage in a post-show discussion with scientists. By directly involving scientists and the public in such exchanges, City-Lab: Galileo invites everyone to shape a dialogue about science and society, underlining the shifting paradigms in science communication.

Urban: the power of semiotics

For me, a ground-breaking moment in how we communicate disciplinary knowledge came when I saw two astronomers in a coffee room discussing the evolution of a stellar cluster. They were using their hands to sign the so-called turn-off point in a Hertzsprung–Russell diagram in mid-air, indicating their individual perspectives on the age of the cluster. These gestures would most likely not mean anything to anyone outside the discipline, and I was intrigued by how powerful communication using such semiotic resources can be. The conclusion is that communicating science does not just involve speech or text.

Particularly intriguing are the challenges students and others have with visualising the world in 3D and 4D from 2D input, for example in astronomical images, which I started to notice while teaching astronomy. How hard can it be to “see”, in one’s head, the 3D structure of a nebula (see “Nebulous” image), a galaxy, or even the Sun–Earth–Moon system when looking at a 2D representation of it? It turns out to be very hard for most people. This led to an investigation of people’s ability to extrapolate 3D structure in their minds, which immediately raised another question: what do people actually “see” or discern when engaged in disciplinary communication, or when looking at the stunning images from the Hubble or Webb space telescopes? Nowadays this is referred to as disciplinary discernment in the literature.

The “rosette” model of science communication

Researching such questions relies on methods that are quite different from those used in the natural sciences. Often data exists in the form of transcripts of interviews, which are then read, coded and characterised for categories of discernment. In the case of spatial perception, this inductive process led to an anatomy of multidimensional discernment describing the different ways that the participants experience three-dimensionality in particular and disciplinary discernment in general. It also identified a deeper underlying challenge that all science learning depends upon: large and small spacetime scales. Spatial and temporal scales, in particular large and small, are identified as threshold concepts in science. As a result, the success of any teaching activity in schools, science centres and other outreach activities depends on how well students come to understand these scales. With very little currently known, there is much to explore.
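
The coding step described above can be sketched as a simple tally over coded utterances. The participant IDs and discernment categories below are hypothetical placeholders, purely to illustrate the inductive characterisation of categories:

```python
from collections import Counter

# Hypothetical coded interview excerpts: each utterance has already been
# assigned a discernment category by a human coder.
coded_utterances = [
    ("P1", "2D-only"), ("P1", "depth-cue"),
    ("P2", "2D-only"), ("P2", "2D-only"),
    ("P3", "full-3D"), ("P3", "depth-cue"),
]

# Characterise the categories: how often does each appear overall, and
# how many participants exhibit each at least once?
category_counts = Counter(cat for _, cat in coded_utterances)
participants_per_category = {
    cat: len({p for p, c in coded_utterances if c == cat})
    for cat in category_counts
}
```

Counting both utterance frequency and participant spread helps distinguish a category that one person uses often from one that is genuinely widespread.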

As an educational researcher in physics, one has to be humbled by the great diversity of ideas about what it means and entails to teach and learn physics. However, I’ve come to appreciate a particular theoretical framework based on studies of the disciplinary communication in a special group in society: physicists. This group, and indeed any group with shared interests, develops and shares a special way of communicating knowledge. In addition to highly specialised language, they use a whole setup of representations, tools and activities. These are often referred to as semiotic resources and are studied in a theoretical framework called social semiotics.

Social semiotics turns out to be a powerful way to study and analyse the disciplinary communication in physics and astronomy departments. I usually describe the framework as a jigsaw puzzle that we are still building. We have identified, and described in detail, certain pieces in this theory but there are more to explore. One such piece is embodiment and what the use of gestures means for communicating disciplinary knowledge, such as the hand-waving astronomers in the coffee room. It is similar to the theory-building processes in physics, where through empirical investigations physicists try to construct a solid theory for how nature works.

Joint conclusion

Understanding how we think about and communicate physics is as interesting and challenging as physics itself. We believe that they are inseparable, and as we explore the landscape of physics to understand the universe we are also exploring the human mind and how we can understand the universe. Physicists too have to be able to communicate with and engage other physicists. The scientific process of publishing research is an excellent example of how challenging this can be: researchers must convince their colleagues around the world that what they have found is correct, or else they fail. The history of science is full of such examples. For example, in the 1920s Swedish astronomer Knut Lundmark made observations of stars in distant galaxies and found that these galaxies seem to move away from us – in essence he had discovered the expansion of the universe. However, he was unable to convince (read: communicate this to) his colleagues, and a few years later Edwin Hubble did the same thing and made a more convincing case.

Finally, this article tries to shed light on the challenges in communicating physics not just to physicists but also to students and the public. The challenges are similar but arise at different levels, depending on the persons involved and engaged in this interchange of knowledge. What the physicist tries to communicate and what the audience discerns and ultimately learns about the universe are often two different things.

Supernovae probe neutrino self-interactions

Towards the end of the lifetime of a very massive (>8 M⊙) star, the nuclear fusion processes in its core are no longer sufficient to balance the constantly increasing pull of gravitational forces. This eventually causes the core to collapse, with the release of an enormous amount of matter and energy via shockwaves. Nearly 99% of such a core-collapse supernova’s energy is released in the form of neutrinos, usually leaving behind a compact proto-neutron star with a mass of about 1.5 M⊙ and a radius of about 10 km. For more massive remnant cores (>3 M⊙), a black hole is formed instead. The near-zero mass and electrical neutrality of neutrinos make their detection particularly challenging: when the famous 1987 supernova SN1987a occurred 168,000 light-years from Earth, the IMB observatory in the US detected just eight neutrinos, BNO in Russia detected 13 and Kamiokande II in Japan detected 11 (CERN Courier March/April 2021 p12).

Besides telling us about the astrophysical processes inside a core-collapse supernova, such neutrino detections might also tell us more about the particles themselves. The Standard Model (SM) predicts feeble self-interactions among neutrinos (νSI), but probing them remains beyond the reach of present-day laboratories on Earth. As outlined in a white paper published earlier this year by Jeffrey Berryman and co-workers, νSI (mediated, for example, by a new scalar or vector boson) enter many beyond-the-SM theories that attempt to explain the generation of neutrino masses and the origin of dark matter. One probe that can be used to explore such interactions is core-collapse supernovae, since the extreme conditions in these catastrophic events make it more likely for νSI to occur and therefore affect the behaviour of the emitted neutrinos.

Recently, Po-Wen Chang and colleagues at Ohio State University explored this possibility by considering the formation of a tightly coupled neutrino fluid that expands under relativistic hydrodynamics, thereby affecting the neutrino pulses detected on Earth. The team derives solutions to relativistic hydrodynamic equations for two cases: a “burst outflow” and a “wind outflow”. A burst outflow of a uniform neutrino fluid occurs when it undergoes free expansion in vacuum, while a wind outflow occurs when steady-state solutions to the hydrodynamic equations are sought. In their current work, the authors focus on the former.

In a scenario without νSI, the neutrinos escape and form a shell of thickness about 10⁵ times the radius of the proto-neutron star that freely travels away at the speed of light. On the other hand, in a scenario with νSI, the neutrinos don’t move freely immediately after escaping the proto-neutron star and instead undergo increased neutrino elastic scattering. As a result, the neutrino shell continues expanding radially until it reaches the point where the density becomes low enough for the neutrinos to decouple and begin free-flowing. The thickness of the shell at this instant depends on the strength of the νSI interactions and is expected to be much larger than that in the no-νSI case. This, in turn, would translate to longer neutrino signals in detectors on Earth.

The effects of neutrino self-interactions on SN1987a are starting to become clearer

Data from SN1987a, where the neutrino signal lasted for about 10 s, broadly agree with the no-νSI scenario and were used to set limits on very strong νSI. On the other hand, if νSI were to produce a burst outflow, the proposed model gives very robust predictions, with an estimated sensitivity of 3 s. Additionally, the authors argue that the steady-state wind-outflow case might be more likely to occur; a dedicated treatment of this has been left for future work.

For the first time since its observation 36 years ago, the effects of νSI on SN1987a are starting to become clearer. Further advances in this direction are much anticipated, so that when the next supernova occurs it could help clear the fog that surrounds our current understanding of neutrinos.

Antinuclei production in pp collisions

LHCb figure 1

At the European Physical Society Conference on High Energy Physics, held in Hamburg in August, the LHCb collaboration announced first results on the production of antihelium and antihypertriton nuclei in proton–proton (pp) collisions at the LHC. These promising results open a new research field, which up to now has been pioneered by ground-breaking work from the ALICE collaboration in the central rapidity interval |y| < 0.5. By extending the measurements into the so-far unexplored forward region 1.0 < y < 4.0, the LHCb results provide new experimental input to derive the production cross sections of antimatter particles formed in pp collisions, which are not calculable from first principles.

LHCb’s newly developed helium-identification technique mainly exploits information from energy losses through ionisation in the silicon sensors upstream (VELO and TT stations) and downstream (Inner Tracker) of the LHCb magnet. The amplitude measurements from up to ~50 silicon layers are combined for each subdetector into a log-likelihood estimator. In addition, timing information from the Outer Tracker and velocity measurements from the RICH detectors are used to improve the separation power between heavy helium nuclei (with charge Z = 2) and lighter, singly charged particles (mostly charged pions). With a signal efficiency of about 50%, a nearly background-free sample of 1.1 × 10⁵ helium and antihelium nuclei is identified in the data collected during LHC Run 2 from 2016 to 2018 (see figure, inset).
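
LHCb’s actual estimator is built from a detailed detector-response model, but the underlying idea of combining independent per-layer amplitude measurements into a log-likelihood discriminant can be sketched minimally. The Gaussian response and the mean amplitudes below are placeholder assumptions (ionisation energy loss scales roughly as Z², so a helium nucleus deposits about four times the charge of a singly charged pion):

```python
import math

def loglik_ratio(amplitudes, mu_he=4.0, mu_pi=1.0, sigma=0.5):
    """Toy log-likelihood ratio for a helium (Z = 2) vs pion (Z = 1) hypothesis.

    Each silicon-layer amplitude is treated as an independent Gaussian
    measurement around the expected mean of each hypothesis; the means
    and width are illustrative placeholders, not LHCb calibration values.
    """
    llr = 0.0
    for a in amplitudes:
        ll_he = -0.5 * ((a - mu_he) / sigma) ** 2
        ll_pi = -0.5 * ((a - mu_pi) / sigma) ** 2
        llr += ll_he - ll_pi  # positive values favour the helium hypothesis
    return llr

# A track with large ionisation deposits favours helium; small deposits favour a pion:
helium_like = loglik_ratio([3.8, 4.1, 4.0, 3.9])
pion_like = loglik_ratio([1.1, 0.9, 1.0, 1.2])
```

Summing per-layer log-likelihoods like this is what makes many modest individual measurements add up to strong separation power.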

The helium identification method proves the feasibility of new research fields at LHCb

As a first step towards a light-nuclei physics programme in LHCb, hypertritons are reconstructed via their two-body decay into an identified helium nucleus and a charged pion. The hypertriton (3ΛH) is a bound state of a proton, a neutron and a Λ hyperon that can be produced via coalescence in pp collisions. These states provide experimental access to the hyperon–nucleon interaction through the measurement of their lifetime and of their binding energy. Hyperon–nucleon interactions have significant implications for the understanding of astrophysical objects such as neutron stars. For example, the presence of hyperons in the dense inner core can significantly suppress the formation of high-mass neutron stars. As a result, there is some tension between the observation of neutron stars heavier than two solar masses and corresponding hypertriton results from the STAR collaboration at Brookhaven. ALICE seems to have resolved the tension between hypertriton measurements at colliders and neutron stars. An independent confirmation of the ALICE result has up to now been missing, and can be provided by LHCb.

The invariant-mass distribution of hypertriton and antihypertriton candidates is shown in figure 1. More than 100 signal decays are reconstructed, with a statistical uncertainty on the mass of 0.16 MeV, similar to that of STAR. As a next step, corrections for efficiencies and acceptance obtained from simulation, as well as systematic uncertainties on the mass scale and lifetime measurement, will be derived.
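The two-body reconstruction can be sketched numerically: the candidate mass is the invariant mass of the summed ³He and pion four-vectors. In the sketch below (not analysis code), the decay is built back to back in the hypertriton rest frame with an assumed two-body momentum p* ≈ 0.114 GeV, which reproduces the ³ΛH mass of about 2.991 GeV:

```python
import math

M_HE3 = 2.80892   # helium-3 mass in GeV (PDG)
M_PI  = 0.13957   # charged-pion mass in GeV (PDG)

def four_vector(px, py, pz, m):
    """(E, px, py, pz) for a particle of mass m with the given momentum."""
    return (math.sqrt(px * px + py * py + pz * pz + m * m), px, py, pz)

def invariant_mass(*vecs):
    """Invariant mass of the sum of any number of four-vectors."""
    e, px, py, pz = (sum(v[i] for v in vecs) for i in range(4))
    return math.sqrt(e * e - px * px - py * py - pz * pz)

# Back-to-back decay products in the hypertriton rest frame (illustrative):
p_star = 0.114  # GeV, assumed two-body decay momentum
m = invariant_mass(four_vector(0, 0,  p_star, M_HE3),
                   four_vector(0, 0, -p_star, M_PI))
print(f"{m:.4f} GeV")  # → 2.9914 GeV
```

In data, the same formula is applied to the measured laboratory-frame momenta of the helium and pion candidates, and the signal appears as a peak at the ³ΛH mass over combinatorial background.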

The new helium identification method from LHCb summarised here proves the feasibility of a rich programme of measurements in QCD and astrophysics involving light antinuclei in the coming years. The collaboration also plans to apply the method to other LHCb Run 2 datasets, such as proton–ion, ion–ion and SMOG collision data.

Collectivity in small systems produced at the LHC

ALICE figure 1

High-energy heavy-ion collisions at the LHC exhibit strong collective flow effects in the azimuthal angle distribution of final-state particles. Since these effects are governed by the initial collision geometry of the two colliding nuclei and the hydrodynamic evolution of the collision, the study of anisotropic flow is a powerful way to characterise the production of the quark–gluon plasma (QGP) – an extreme state of matter expected to have existed in the early universe.

To their surprise, researchers on the ALICE experiment have now revealed similar flow signatures in small systems encompassing proton–proton (pp) and proton–lead (pPb) collisions, where QGP formation was previously assumed not to occur. The origin of the flow signals in small systems (and in particular whether the mechanisms behind these correlations share commonalities with heavy-ion collisions) is not yet fully understood. To better interpret these results, and thus to understand the limit of the system size that exhibits fluid-like behaviour, it is important to carefully single out possible scenarios that can mimic the effect of collective flow.

Anisotropic-flow measurements become more difficult in small systems because non-flow effects, such as the presence of jets, become more dominant. It is therefore important to use methods in which non-flow effects are properly subtracted. One such method, the so-called low-multiplicity template fit, has been widely used in several experiments to determine and subtract the non-flow contributions.

The origin of the flow signals in small systems is not yet fully understood

The ALICE collaboration studied long-range angular correlations for pairs of charged particles produced in pp and pPb collisions at centre-of-mass energies of 13 TeV and 5.02 TeV, respectively. Flow coefficients were extracted from these correlations using the template-fit method in samples of events with different charged-particle multiplicities. The method accounts for the fact that the yield of jet fragments increases with particle multiplicity, and it allowed physicists to examine the assumptions made in the low-multiplicity template fit for the first time – demonstrating their validity, including that of a possible jet-shape modification.
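A minimal sketch of such a template fit, on entirely synthetic data (the template shape, injected flow coefficients and noise level are all assumptions made for illustration), might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Schematic low-multiplicity template fit: the high-multiplicity per-trigger
# yield Y(dphi) is modelled as a scaled low-multiplicity template (carrying
# the jet-like non-flow shape) plus a flow pedestal modulated by cos(2*dphi)
# and cos(3*dphi). All numbers below are invented.
dphi = np.linspace(-0.5 * np.pi, 1.5 * np.pi, 36)

# Toy low-multiplicity template: near-side jet peak plus away-side ridge.
y_lm = 1.0 + 0.8 * np.exp(-0.5 * (dphi / 0.4) ** 2) + 0.3 * np.cos(dphi - np.pi)

def template(x, F, G, v22, v33):
    """F * LM template + flow pedestal with second and third harmonics."""
    return F * np.interp(x, dphi, y_lm) + G * (
        1 + 2 * v22 * np.cos(2 * x) + 2 * v33 * np.cos(3 * x))

# Toy "high-multiplicity" data with injected flow coefficients.
rng = np.random.default_rng(1)
data = template(dphi, 0.9, 2.0, 0.05, 0.01) + rng.normal(0, 0.01, dphi.size)

popt, _ = curve_fit(template, dphi, data, p0=[1, 1, 0, 0])
print(popt)  # recovers approximately F = 0.9, G = 2.0, v22 = 0.05, v33 = 0.01
```

The fit disentangles the jet-like correlation (scaled by F) from the genuine flow modulation, which is the step that makes v2 and v3 meaningful in low-multiplicity events.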

Figure 1 shows the measurement of two components of anisotropic flow – elliptic (v2) and triangular (v3) – as a function of charged-particle multiplicity at midrapidity (Nch). The data show decreasing trends towards lower multiplicities. In pp collisions, the results suggest that the v2 signal disappears below Nch = 10. The results are then compared with hydrodynamic models. To accurately describe the data, especially for events with low multiplicities, a better understanding of initial conditions is needed.

These results can help to constrain the modelling of initial-state simulations, as the significance of initial-state effects increases for collisions resulting in low multiplicities. The measurements with larger statistics from Run 3 data will push down this multiplicity limit and reduce the associated uncertainties.

Measuring energy correlators inside jets

CMS figure 1

Quarks and gluons are the only known elementary particles that cannot be seen in isolation. Once produced, they immediately start a cascade of radiation (the parton shower), followed by confinement, when the partons bind into (colour-neutral) hadrons. These hadrons form the jets that we observe in detectors. The different phases of jet formation can help physicists understand various aspects of quantum chromodynamics (QCD), from parton interactions to hadron interactions – including the confinement transition leading to hadron formation, which is particularly difficult to model. However, jet formation cannot be directly observed. Recently, theorists proposed that the footprints of jet formation are encoded in the energy and angular correlations of the final particles, which can be probed through a set of observables called energy correlators. These observables record the largest angular distance between N particles within a jet (xL), weighted by the product of their energy fractions.
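For two particles the observable reduces to histogramming each pair's angular separation, weighted by the product of the energy fractions. A schematic sketch (not CMS code), using an invented three-constituent jet, is:

```python
import math
from itertools import combinations

# Schematic two-point energy correlator (E2C): for every particle pair in a
# jet, fill the pair's angular distance xL weighted by the product of the two
# energy fractions E_i * E_j / E_jet^2.
def delta_r(p1, p2):
    """Angular distance in the (eta, phi) plane, with phi wrap-around."""
    deta = p1["eta"] - p2["eta"]
    dphi = abs(p1["phi"] - p2["phi"])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(deta, dphi)

def e2c(particles, bins):
    """Return the E2C histogram over the given xL bin edges."""
    e_jet = sum(p["e"] for p in particles)
    hist = [0.0] * (len(bins) - 1)
    for p1, p2 in combinations(particles, 2):
        xl = delta_r(p1, p2)
        w = p1["e"] * p2["e"] / e_jet ** 2
        for k in range(len(hist)):
            if bins[k] <= xl < bins[k + 1]:
                hist[k] += w
                break
    return hist

# Toy jet with three constituents (energies in GeV, made-up kinematics):
jet = [{"e": 50, "eta": 0.02, "phi": 0.01},
       {"e": 30, "eta": -0.03, "phi": 0.05},
       {"e": 20, "eta": 0.10, "phi": -0.08}]
print(e2c(jet, [0.001, 0.01, 0.1, 1.0]))  # → [0.0, 0.15, 0.16]
```

The three-point correlator E3C generalises this to triplets, with xL the largest of the three pairwise distances; in the measurement the histograms are accumulated over many jets and normalised.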

The CMS collaboration recently reported a measurement of the energy correlators between two (E2C) and three (E3C) particles inside a jet, using jets with pT in the 0.1–1.8 TeV range. Figure 1 (top) shows the measured E2C distribution. In each jet pT range, three scaling regions can be seen, corresponding to three stages in jet-formation evolution: parton shower, colour confinement and free hadrons (from right to left). The opposite E2C trends in the low and high xL regions indicate that the interactions between partons and those between hadrons are rather different; the intermediate region reflects the confinement transition from partons to hadrons.

Theorists have recently calculated the dynamics of the parton shower with unprecedented precision. Given the high precision of the calculations and of the measurements, the CMS team used the E3C over E2C ratio, shown in figure 1 (bottom), to evaluate the strong coupling constant αS. The ratio reduces the theoretical and experimental uncertainties, and therefore minimises the challenge of distinguishing the effects of αS variations from those of changes in quark–gluon composition. Since αS depends on the energy scale of the process under consideration, the measured value is given for the Z-boson mass: αS = 0.1229 with an uncertainty of 4%, dominated by theory uncertainties and by the jet-constituent energy-scale uncertainty. This value, which is consistent with the world average, represents the most precise measurement of αS using a method based on jet evolution.
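The scale dependence of αS can be illustrated with the textbook leading-order running formula (a sketch only; the actual extraction uses higher-order evolution):

```python
import math

# Leading-order QCD running of the strong coupling, anchored to the CMS value
# quoted above at the Z mass. Everything beyond those two inputs is the
# standard one-loop formula, shown here purely for illustration.
M_Z = 91.1876          # Z-boson mass in GeV (PDG)
ALPHA_S_MZ = 0.1229    # CMS measurement at the Z mass

def alpha_s(q, nf=5):
    """One-loop running coupling at scale q (GeV), with nf active flavours."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return ALPHA_S_MZ / (1 + b0 * ALPHA_S_MZ * math.log(q ** 2 / M_Z ** 2))

for q in (10, M_Z, 1000):
    print(f"alpha_s({q} GeV) = {alpha_s(q):.4f}")
# The coupling grows towards low scales (confinement regime) and shrinks
# towards high scales (asymptotic freedom).
```

This is why every αS measurement, whatever the process energy, is evolved to and quoted at the common reference scale of the Z-boson mass before being compared with the world average.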

Pushing the intensity frontier at ECN3


Following a decision taken during the June session of the CERN Council to launch a technical design study for a new high-intensity physics programme at CERN’s North Area, a recommendation for experiment(s) that can best take advantage of the intense proton beam on offer is expected to be made by the end of 2023.

The design study concerns the extraction of a high-intensity beam from the Super Proton Synchrotron (SPS) to deliver up to a factor of approximately 20 more protons per year to ECN3 (Experimental Cavern North 3). It is an outcome of the Physics Beyond Colliders (PBC) initiative, which was launched in 2016 to explore ways to further diversify and expand the CERN scientific programme by covering kinematical domains that are complementary to those accessible to high-energy colliders, with a focus on programmes for the start of operations after Long Shutdown 3 towards the end of the decade.

CERN is confident in reaching the beam intensities required for all experiments

To employ a high-intensity proton beam at a fixed-target experiment in the North Area and to effectively exploit the protons accelerated by the SPS, the beam must be extracted slowly. In contrast to fast extraction within a single turn of the synchrotron, which utilises kicker magnets to change the path of a passing proton bunch, slow extraction gradually shaves the beam over several hundred thousand turns to produce a continuous flow of protons over a period of several seconds. One important limitation to overcome concerns particle losses during the extraction, foremost on the thin electrostatic extraction septum of the SPS but also along the transfer line leading to the North Area target stations. An R&D study backed by the PBC initiative has shown that it is possible to deflect the protons away from the blade of the electrostatic septum using thin, bent crystals. “Based on the technical feasibility study carried out in the PBC Beam Delivery ECN3 task force, CERN is confident in reaching the beam intensities required for all experiments,” says ECN3 project leader Matthew Fraser.

Currently, ECN3 hosts the NA62 experiment, which searches for ultra-rare kaon decays as well as for feebly-interacting particles (FIPs). Three experimental proposals that could exploit a high-intensity beam in ECN3 have been submitted to the SPS committee, and on 6 December the CERN research board is expected to decide which should be taken forward. The High-Intensity Kaon Experiment (HIKE), which requires an increase of the current beam intensity by a factor of between four and seven, aims to increase the precision on ultra-rare kaon decays to further constrain the Cabibbo–Kobayashi–Maskawa unitarity triangle and to search for decays of FIPs that may appear on the same axis as the dumped proton beam. Looking for off-axis FIP decays, the SHADOWS (Search for Hidden And Dark Objects With the SPS) programme could run alongside HIKE when operated in beam-dump mode. Alternatively, the SHiP (Search for Hidden Particles) experiment would investigate hidden sectors such as heavy neutral leptons in the GeV mass range and also enable access to muon- and tau-neutrino physics in a dedicated beam-dump facility installed in ECN3.

The ambitious programme to provide and prepare the high-intensity ECN3 facility for the 2030s onwards is driven in synergy with the North Area consolidation project, which has been ongoing since Long Shutdown 2. Works are planned to be carried out without impacting the other beamlines and experiments in the North Area, with first beam commissioning of the new facility expected from 2030.

“Once the experimental decision has been made, things will move quickly and the experimental groups will be able to form strong collaborations around a new ECN3 physics facility, upgraded with the help of CERN’s equipment and service groups,” says Markus Brugger, co-chair of the PBC ECN3 task force.

Highest-energy observation of quantum entanglement

ATLAS figure 1

Entanglement is an extraordinary feature of quantum mechanics: if two particles are entangled, the state of one particle cannot be described independently from the other. It has been observed in a wide variety of systems, ranging from microscopic particles such as photons or atoms to macroscopic diamonds, and over distances ranging from the nanoscale to hundreds of kilometres. Until now, however, entanglement has remained largely unexplored at the high energies accessible at hadron colliders, such as the LHC.

At the TOP 2023 workshop, which took place in Michigan this week, the ATLAS collaboration reported a measurement of entanglement using top-quark pairs with one electron and one muon in the final state selected from proton–proton collision data collected during LHC Run 2 at a centre-of-mass energy of 13 TeV, opening new ways to test the fundamental properties of quantum mechanics.

Two-qubit system
The simplest system that gives rise to entanglement is a pair of qubits, as in the case of two spin-1/2 particles. Since top quarks are typically generated in top–antitop pairs (tt̄) at the LHC, they represent a unique high-energy example of such a two-qubit system. The extremely short lifetime of the top quark (10⁻²⁵ s, shorter than the timescale for hadronisation and spin decorrelation) means that its spin information is directly transferred to its decay products. Close to the production threshold, the tt̄ pair produced through gluon fusion is almost in a spin-singlet state, i.e. maximally entangled. By measuring the angular distributions of the tt̄ decay products close to threshold, one can therefore determine whether the tt̄ pair is in an entangled state.

For this purpose, a single observable, D, can be used as an entanglement witness. It is measured from the distribution of cosφ, where φ is the angle between the charged-lepton directions in the parent top and antitop rest frames, with D = −3⟨cosφ⟩. The entanglement criterion is D = tr(C)/3 < −1/3, where tr(C) is the sum of the diagonal elements of the spin-correlation matrix C of the tt̄ pair before hadronisation effects occur. Intuitively, this criterion can be understood from the fact that tr(C) is the expectation value of the product of the spin polarisations, tr(C) = ⟨σ⋅σ̄⟩, with σ and σ̄ the t and t̄ polarisations, respectively (classically tr(C) ≤ 1, since spin polarisations are unit vectors). D is measured in a region where the invariant mass is approximately twice the top-quark mass, 340 < mtt̄ < 380 GeV, and the measurement is performed at particle level, after hadronisation effects occur.
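The relation between the witness and the angular distribution can be checked with a short toy study (not ATLAS code): sample cosφ from the normalised distribution (1 − D cosφ)/2 for an assumed entangled value of D, then recover D from the sample mean:

```python
import numpy as np

# Toy numerical check of the witness relation D = -3 * <cos(phi)>.
# D_TRUE is an assumed value chosen near the measured one; the differential
# distribution (1 - D * cos(phi)) / 2 is normalised on [-1, 1].
rng = np.random.default_rng(42)
D_TRUE = -0.55

def sample_cos_phi(n):
    """Rejection-sample cos(phi) from (1 - D_TRUE * cos(phi)) / 2 on [-1, 1]."""
    out = []
    while len(out) < n:
        x = rng.uniform(-1, 1, n)
        u = rng.uniform(0, (1 + abs(D_TRUE)) / 2, n)  # envelope = density maximum
        out.extend(x[u < (1 - D_TRUE * x) / 2])
    return np.array(out[:n])

cos_phi = sample_cos_phi(200_000)
D_measured = -3 * cos_phi.mean()
print(f"D = {D_measured:.3f}")                          # close to -0.55
print("entangled" if D_measured < -1 / 3 else "not entangled")
```

The same logic applies in data: the estimator needs only the sample mean of cosφ, which is why a single calibrated number can serve as the entanglement witness.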

This constitutes the first observation of entanglement between a pair of quarks and the highest-energy measurement of entanglement

The shape of the cosφ distribution is distorted by detector and event-selection effects, for which it has to be corrected. A calibration curve connecting the value of D before and after event reconstruction is extracted from simulation and used to derive D from the corresponding measurement, which is then compared to predictions from state-of-the-art Monte Carlo simulations. The measured value D = −0.547 ± 0.002 (stat.) ± 0.021 (syst.) is well beyond 5σ from the non-entanglement hypothesis. This constitutes the first-ever observation of entanglement between a pair of quarks and the highest-energy measurement of entanglement.

Apart from the intrinsic interest of testing entanglement under unprecedented conditions, this measurement paves the way to use the LHC as a novel facility to study quantum information. Prime examples are quantum discord, which is the most basic form of quantum correlations; quantum steering, which is how one subsystem can steer the state of the other one; and tests of Bell’s inequalities, which explore non-locality.  Furthermore, borrowing concepts from quantum information theory inspires new approaches to search for physics beyond the Standard Model.

Beauty in the Auvergne

The 20th International Conference on B-Physics at Frontier Machines, Beauty 2023, was held in Clermont-Ferrand, France, from 3–7 July, hosted by the Laboratoire de Physique de Clermont (IN2P3/CNRS, Université Clermont Auvergne). It was the first in-person edition of the series since the pandemic, and attracted 75 participants from all over the world. The programme comprised 53 invited talks, of which 13 were theoretical overviews. Another important element was the Young Scientist Forum, with seven short presentations of recent results.

The key focus of the conference series is to review the latest results in heavy-flavour physics and to discuss future directions. Heavy-flavour decays, in particular those of hadrons that contain b quarks, offer powerful probes of physics beyond the Standard Model (SM). Beauty 2023 took place 30 years after the opening meeting in the series. A dedicated session was devoted to reflections on the developments in flavour physics over this period, and to celebrating the life of Sheldon Stone, who passed away in October 2021. Sheldon was an inspirational figure in flavour physics as a whole, a driving force behind the CLEO, BTeV and LHCb experiments, and a long-term supporter of the Beauty conference series.

LHC results
Many important results have emerged from the LHC since the last Beauty conference. One concerns the CP-violating parameter sin2β, for which measurements by the BaBar and Belle experiments at the start of the millennium marked the dawn of the modern flavour-physics era. LHCb has now measured sin2β with a precision better than any other experiment, to match its achievement for ϕs, the analogous parameter in Bs⁰ decays, where ATLAS and CMS have also made major contributions. Continued improvements in the knowledge of these fundamental parameters will be vital in probing for other sources of CP violation beyond the SM.

Over the past decade, the community has been intrigued by strong hints of the breakdown of lepton-flavour universality, one of the guiding tenets of the SM, in B decays. Following a recent update from LHCb, it seems that lepton universality may remain a good symmetry, at least in the class of electroweak-penguin decays such as B→K(*)ℓ⁺ℓ⁻, where much of the excitement was focused (CERN Courier January/February 2023 p7). Nonetheless, there remain puzzles to be understood in this sector of flavour physics, and anomalies are emerging elsewhere. For example, non-leptonic decays of the kind Bs⁰→Ds∓K± show intriguing patterns in CP-violation and decay-rate information.

The July conference was noteworthy as a showcase for the first major results to emerge from the Belle II experiment. Belle II has now collected 362 fb⁻¹ of integrated luminosity on the Υ(4S) resonance, a dataset similar in size to that accumulated by BaBar and the original Belle experiment, and results were shown from early tranches of this sample. In some cases, these results already match or exceed in sensitivity and precision what was achieved at the first generation of B-factory experiments, or indeed elsewhere. These advances can be attributed to improved instrumentation and analysis techniques. For example, world-leading measurements of the lifetimes of several charm hadrons were presented, including the D⁰, D⁺, Ds⁺ and Λc⁺. Belle II and its accelerator, SuperKEKB, will emerge from a year-long shutdown in December with the goal of increasing the dataset by a factor of 10–20 in the coming half decade.

Full of promise
The future experimental programme of flavour physics is full of promise. In addition to the upcoming riches expected from Belle II, an upgraded LHCb detector is being commissioned in order to collect significantly larger event samples over the coming decade. Upgrades to ATLAS and CMS will enhance these experiments’ capabilities in flavour physics during the High-Luminosity LHC era, for which a second upgrade to LHCb is also foreseen. Conference participants also learned of the exciting possibilities for flavour physics at the proposed future collider FCC-ee, where samples of several 10¹² Z⁰ decays will open the door to ultra-precise measurements in an analysis environment much cleaner than at the LHC. These projects will be complemented by continued exploration of the kaon sector, and by studies at the charm threshold, for which a high-luminosity Super Tau Charm Factory is proposed in China.

The scientific programme of Beauty 2023 was complemented by outreach events in the city, including a `Pints of Science’ evening and a public lecture, as well as a variety of social events. These and the stimulating presentations made the conference a huge success, demonstrating that flavour remains a vibrant field and continues to be a key player in the search for new physics beyond the Standard Model.
