
Dielectrons take the temperature of Pb–Pb collisions

ALICE figure 1

Collisions between lead ions at the LHC produce the hottest system ever created in the lab, exceeding the temperatures in stellar interiors by about a factor of 10⁵. At such temperatures, nucleons no longer exist and quark–gluon plasma (QGP) is formed. Yet a precise measurement of the initial temperature of the QGP created in these collisions remains challenging. Information about the early stage of the collision gets washed out because the system's constituents continue to interact as it evolves. As a result, deriving the initial temperature from the hadronic final state requires a model-dependent extrapolation of system properties (such as energy density) by more than an order of magnitude.

In contrast, electromagnetic radiation in the form of real and virtual photons escapes the strongly interacting system. Moreover, virtual photons – emerging in the final state as electron–positron pairs (dielectrons) – carry mass, which allows early and late emission stages to be separated.

Radiation from the late hadronic phase dominates the thermal dielectron spectrum at invariant masses below 1 GeV. The yield and spectral shape in this mass window reflect the in-medium properties of vector mesons, mainly the ρ, and can be connected to the restoration of chiral symmetry in hot and dense matter. In the intermediate-mass region (IMR) between about 1 and 3 GeV, thermal radiation is expected to originate predominantly from the QGP, and an estimate of the initial QGP temperature can be derived from the slope of the exponential mass spectrum. This makes dielectrons a unique tool to study the properties of the system at its hottest and densest stage.
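For orientation, the slope-to-temperature logic can be made explicit with the Boltzmann-like parameterisation often used for thermal dilepton yields in this mass region (a sketch of the fit form, not necessarily ALICE's exact implementation):

\[
\frac{\mathrm{d}N_{ee}}{\mathrm{d}M_{ee}} \;\propto\; M_{ee}^{3/2}\,\exp\!\left(-\frac{M_{ee}}{T_{\rm fit}}\right),
\]

so that an exponential fit to the intermediate-mass spectrum yields an inverse-slope parameter T_fit that acts as an effective, emission-time-averaged temperature of the radiating QGP.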

A new approach to separate the heavy-flavour contribution experimentally has been employed for the first time at the LHC

At the LHC, this measurement is challenging because the expected thermal dielectron yield in the IMR is outshone by a physical background that is about 10 times larger, mainly from semileptonic decays of correlated pairs of charm (cc̄) or beauty (bb̄) hadrons. In ALICE, electron and positron candidates are selected in the central barrel using complementary information from the inner tracking system (ITS), the time projection chamber and time-of-flight measurements. Figure 1 (left) shows the dielectron invariant-mass spectrum in central lead–lead (Pb–Pb) collisions. The measured distribution is compared with a “cocktail” of all known contributions from hadronic decays. At masses below 0.5 GeV, an enhancement of the dielectron yield over the cocktail expectation is observed, consistent with calculations that include thermal radiation from the hadronic phase and an in-medium modification of the ρ meson. Between 0.5 GeV and the ρ mass (0.77 GeV), a small discrepancy between the data and the calculations is observed.

In the IMR, however, systematic uncertainties on the cocktail contributions from charm and beauty prevent any conclusion being drawn about thermal radiation from QGP. To overcome this limitation, a new approach to separate the heavy-flavour contribution experimentally has been employed for the first time at the LHC. This approach exploits the high-precision vertexing capabilities of the ITS to measure the displaced vertices of heavy-quark pairs. Figure 1 (right) shows the dielectron distribution in the IMR compared to template distributions from Monte Carlo simulations. The best fit includes templates from heavy-quark pairs and an additional prompt dielectron contribution, presumably from thermal radiation. This is the first experimental hint of thermal radiation from the QGP in Pb–Pb collisions at the LHC, albeit with a significance of 1σ.

Ongoing measurements with the upgraded ALICE detector will provide an unprecedented improvement in precision, paving the way for a detailed study of thermal radiation from hot QGP.

Resolving asymmetries in B0 and B0s oscillations

In the Standard Model (SM), CP violation originates from a single complex phase in the 3 × 3 Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix. The unitarity condition of the CKM matrix (Vud V*ub + Vcd V*cb + Vtd V*tb = 0, where Vij are the CKM matrix elements) can be represented as a triangle in the complex plane, with an area proportional to the amount of CP violation in the quark sector. One angle of this triangle, γ = arg(–Vud V*ub / Vcd V*cb), is of particular interest as it can be probed both indirectly under the assumption of unitarity and in tree-level processes that make no such assumption. Its most sensitive direct experimental determination currently comes from a combination of LHCb measurements of B+, B0 and B0s decays to final states containing a D(s) meson and one or more light mesons. Decay-time-dependent analyses of tree-level B0s → Ds∓K± and B0 → D∓π± decays are sensitive to the angle γ through CP violation in the interference between mixing and decay amplitudes. Comparing the value of γ obtained from tree-level processes with indirect determinations of γ and the other unitarity-triangle parameters in loop-level processes therefore provides an important consistency check of the SM.

LHCb figure 1

Measurements using neutral B0 and B0s mesons are particularly powerful because they resolve ambiguities that other measurements cannot. Due to the interference between B0(s)–B̄0(s) mixing and decay amplitudes, the physical CP-violating parameters in these decays are functions of a combination of γ and the relevant mixing phase, namely γ + 2β in the B0 system, where β = arg(–Vcd V*cb / Vtd V*tb), and γ – 2βs in the B0s system, where βs = arg(–Vts V*tb / Vcs V*cb). Measurements of these physical quantities can therefore be interpreted in terms of the angles γ and β(s), and γ can be derived using independent determinations of the other parameter as input.
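For reference, the relations quoted above can be collected in one place (this simply restates the text in equation form):

\[
V_{ud}V^{*}_{ub} + V_{cd}V^{*}_{cb} + V_{td}V^{*}_{tb} = 0,\qquad
\gamma = \arg\!\left(-\frac{V_{ud}V^{*}_{ub}}{V_{cd}V^{*}_{cb}}\right),\qquad
\beta = \arg\!\left(-\frac{V_{cd}V^{*}_{cb}}{V_{td}V^{*}_{tb}}\right),\qquad
\beta_{s} = \arg\!\left(-\frac{V_{ts}V^{*}_{tb}}{V_{cs}V^{*}_{cb}}\right),
\]

with the time-dependent analyses measuring the combinations γ + 2β (B0 system) and γ – 2βs (B0s system), so that an external determination of β or βs converts the measured phase into γ.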

The LHCb collaboration recently presented a new measurement of B0s → Ds∓K± decays collected during Run 2. This is a challenging analysis, as it requires a decay-time-dependent fit to extract the CP-violating observables, expressed as amplitudes of the four different decay paths from B0s and B̄0s to the Ds∓K± final states. Previously, LHCb measured γ in this decay using the Run 1 dataset, obtaining γ = (128 +17/−22)°. The B0s–B̄0s oscillation frequency ∆ms must be precisely constrained in order to determine the phase differences between the amplitudes. In the Run 2 measurement, the established uncertainty on ∆ms would have been a limiting systematic uncertainty, which motivated the recent LHCb measurement of ∆ms using flavour-specific B0s → Ds−π+ decays from the same dataset. Combined with Run 1 measurements of ∆ms, this has provided the most precise contribution to the world average and has greatly improved the precision on γ in the B0s → Ds∓K± analysis. Indeed, for the first time the four amplitudes are resolved with sufficient precision to show the decay rates separately (see figure 1).

The angle γ is determined using inputs from other LHCb measurements of the CP-violating weak phase –2βs, along with measurements of the decay width and decay-width difference. The final result, γ = 74 ± 11°, is compatible with the SM and is the most precise determination of γ using B0s meson decays to date.

Scrutinising g-2 from all angles

The anomalous magnetic moment of the muon has long exhibited an intriguing tension between experiment and theory. The latest measurement from Fermilab is around 5σ higher than the official Standard Model prediction, but newer calculations based on lattice QCD reduce the gap significantly. Confusion surrounds how best to determine the leading hadronic correction to the muon’s magnetic moment: a process called hadronic vacuum polarisation (HVP), whereby a virtual photon briefly transforms into a hadronic blob before being reabsorbed.

While theorists work to resolve this tension, the MUonE project aims to provide an independent determination of HVP using an intense muon beam from the CERN Super Proton Synchrotron. Whereas HVP is traditionally determined via hadron-production cross sections in e+e− data, or via theory-based estimates from recent lattice calculations, MUonE would make a very precise measurement of the shape of the differential cross section of μ+e− → μ+e− elastic scattering. This will enable a direct measurement of the hadronic contribution to the running of the electromagnetic coupling constant α, which governs the HVP process.
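Concretely, the connection exploited by MUonE is usually written as a space-like integral (quoted here from the standard literature; Δα_had(t) denotes the hadronic part of the running of α at momentum transfer t):

\[
a_{\mu}^{\rm HVP,\,LO} \;=\; \frac{\alpha}{\pi}\int_{0}^{1}\mathrm{d}x\,(1-x)\,\Delta\alpha_{\rm had}\!\big[t(x)\big],
\qquad t(x) = -\frac{x^{2}m_{\mu}^{2}}{1-x} < 0,
\]

so a precise measurement of the shape of the elastic μe cross section, which depends on Δα_had(t), determines the leading-order HVP contribution to the muon anomaly.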

MUonE was first proposed in 2017 as part of the Physics Beyond Colliders initiative, and a test run in 2018 validated the basic detector concept. Following a decision by CERN in 2019 to carry out a three-week pilot run, the MUonE team collected data at the M2 beamline from 21 August to 10 September 2023, using a 160 GeV/c muon beam fired at atomic electrons in a fixed target located in CERN’s North Area. The main purpose of the run was to verify the system’s engineering and to attempt a measurement of the leptonic corrections to the running of α, for which an analysis is in progress.

The full experiment would comprise 40 stations, each consisting of a 1.5 cm-thick beryllium target followed by a tracking system that measures the scattering angles with high precision; further downstream lie an electromagnetic calorimeter and a muon detector. During the 2023 run, two MUonE stations followed by a calorimeter were installed, together with an additional tracking station without a target placed upstream of the apparatus to track the incoming muons. The next step is to install further detector stations in stages.

“The original schedule has been delayed, partly due to the COVID pandemic, and the final measurement is expected to be performed after Long Shutdown 3,” explains MUonE collaboration board chair Clara Matteuzzi (INFN Milano Bicocca). “A first stage with a scaled detector, comprising a few stations followed by a calorimeter and a muon identifier, which could provide a very first measurement of HVP with low accuracy and a demonstration of the whole concept before the full final run, is under consideration.”

The overall goal of the experiment is to gather around 3.5 × 10¹² elastic-scattering events with an electron energy larger than 1 GeV during three years of data-taking at the M2 beamline. This would allow the team to achieve a statistical error of 0.3% and thus make MUonE competitive with the latest HVP results obtained by other means. The challenge, however, is to keep the systematic error at the level of the statistical one.

“This successful test run gives MUonE confidence that the final goal can be reached, and we are very much looking forward to submitting the proposal for the full run,” adds Matteuzzi.

Webb sheds light on oldest black holes

JWST image of distant galaxies

While each galaxy, including our own, is believed to contain a supermassive black hole (SMBH) at its centre, much remains unknown about the origin of these extreme objects. The seeds of SMBHs are thought to have existed as early as 200 million years after the Big Bang, after which they accreted mass for 13 billion years to become black holes with masses of up to tens of billions of solar masses. But what were the seeds of these massive black holes? Some theories hold that they formed from the collapse of the first generation of stars, making them tens to hundreds of solar masses, while others attribute their origin to the collapse of massive gas clouds, which could produce seeds with masses of 10⁴–10⁵ solar masses.

The recent joint detection of an SMBH dating from 500 million years after the Big Bang by the James Webb Space Telescope (JWST) and the Chandra X-ray Observatory provides new input to this debate. The JWST, sensitive to highly redshifted emission from the early universe, observed a gravitationally lensed region to provide images of some of the oldest galaxies. One such galaxy, called UHZ1, has a redshift corresponding to 13.2 billion years ago, or 500 million years after the Big Bang. Apart from its age, the observations allow an estimate of its stellar mass, while the SMBH expected to lie at its centre remains hidden at these wavelengths. This is where Chandra, which is sensitive in the 0.2 to 10 keV energy range, came in.
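A rough cross-check of those numbers (not from the article, and assuming a redshift of about 10 for UHZ1 purely for illustration) can be done with the Planck 2018 cosmology shipped with astropy:

    # Rough illustration only: the assumed z is not quoted in the text above.
    from astropy.cosmology import Planck18

    z = 10.0
    print(Planck18.lookback_time(z))   # about 13.3 Gyr, i.e. "13.2 billion years ago" within rounding
    print(Planck18.age(z))             # about 0.47 Gyr, i.e. roughly 500 million years after the Big Bang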

Chandra observations of the region around the cluster lens Abell 2744, which magnifies UHZ1, show an excess at energies of 2–7 keV. The measured emission spectrum and luminosity correspond to those of an accreting black hole with a mass of 10⁷ to 10⁸ solar masses – about half of the total mass of the galaxy. This can be compared to our own galaxy, where the SMBH is estimated to make up only about 0.1% of the total mass.

Such a mass can be explained by a seed black hole of 10⁴ to 10⁵ solar masses accreting matter for 300 million years. A smaller seed is more difficult to explain, because such a source would have to accrete matter continuously at twice its Eddington limit (the point at which the outward radiation pressure generated by the accretion balances the gravitational pull of the object on the surrounding matter). Although super-Eddington accretion is possible – the limit assumes, for example, spherical emission of the radiation, which is not necessarily correct – the sustained accretion rates required for light seeds are difficult to explain.
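For reference, the Eddington limit invoked above corresponds to the luminosity at which radiation pressure on ionised hydrogen balances gravity (the standard textbook expression):

\[
L_{\rm Edd} = \frac{4\pi G M m_{p} c}{\sigma_{T}} \;\approx\; 1.3\times10^{38}\left(\frac{M}{M_{\odot}}\right)\,\mathrm{erg\,s^{-1}},
\]

so sustained accretion above this luminosity requires the limit's assumptions, such as spherical symmetry of the emission, to be violated.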

The measurements of a single early galaxy already provide strong hints regarding the source of SMBHs. As JWST continues to observe the early universe, more such sources will likely be revealed. This will allow us to better understand the masses of the seeds, as well as how they grew over a period of 13 billion years.

Electroweak milestones at CERN

Celebrating the 1973 discovery of weak neutral currents by the Gargamelle experiment and the 1983 discoveries of the W and Z bosons by the UA1 and UA2 experiments at the Spp̄S, a highly memorable scientific symposium in the new CERN Science Gateway on 31 October brought the past, present and future of electroweak exploration into vivid focus. “Weak neutral currents were the foundation, the W and Z bosons the pillars, and the Higgs boson the crown of the 50-year-long journey that paved the electroweak way,” said former Gargamelle member Dieter Haidt (DESY) in his opening presentation.

History could have turned out differently, said Haidt, since both CERN and Brookhaven National Laboratory (BNL) were competing in the new era of high-energy neutrino physics: “The CERN beam was a flop initially, allowing BNL to snatch the muon-neutrino discovery in 1962, but a second attempt at CERN was better.” This led André Lagarrigue to dream of a giant bubble chamber, Gargamelle, financed and built by French institutes and operated by CERN with beams from the Proton Synchrotron (PS) from 1970 to 1976. Picking out the neutral-current signal from the neutron-cascade background was a major challenge, and a solution seemed hopeless until Haidt and his collaborators made a breakthrough regarding the meson component of the cascade.

The ten years between the discovery of neutral currents and the W and Z bosons are what took CERN from competent mediocrity to world leader

Lyn Evans

By early July 1973, it was realised that Gargamelle had seen a new effect. Paul Musset presented the results in the CERN auditorium on 19 July, yet by that autumn Gargamelle was “treated with derision” due to conflicting results from a competitor experiment in the US. “The Gargamelle claim is the worst thing to happen to CERN,” Director-General John Adams was said to have remarked. Jack Steinberger even wagered his cellar that it was wrong. Following further cross-checks by bombarding the detector with protons, the Gargamelle result stood firm. At the end of Haidt’s presentation, collaboration members present in the audience were recognised with a warm round of applause.

From the PS to the SPS
The neutral-current discovery and the subsequent Gargamelle measurement of the weak mixing angle made it clear not only that the electroweak theory was right but also that the W and Z were within reach of the technology of the day. Moving from the PS to the SPS, Jean-Pierre Revol (Yonsei University) took the audience to the UA1 and UA2 experiments ten years later. Again, history could have taken a different turn. While CERN was working towards an e+e− collider to find the W and Z, said Revol, Carlo Rubbia proposed the radically different concept of a hadron collider – first to Fermilab, which, luckily for CERN, declined. All the ingredients were presented by Rubbia, Peter McIntyre and David Cline in 1976; the UA1 detector was proposed in 1978 and a second detector, UA2, was proposed six months later. UA1 was huge by the standards of the day, said Revol. “I was advised not to join, as there were too many people! It was a truly innovative project: the largest wire chamber ever built, with 4π coverage. The central tracker, which allowed online event displays, made UA1 the crucial stepping stone from bubble chambers to modern electronic ones. The DAQ was also revolutionary. It was the beginning of computer clusters, with the same power as IBM mainframes.”

First Spp̄S collisions took place on 10 July 1981, and by mid-January 1983 ten candidate W events had been spotted by the two experiments. The W discovery was officially announced at CERN on 25 January 1983. The search for the Z then ramped up, with the UA1 team monitoring the “express line” event display around the clock. On 30 April, Marie-Noëlle Minard called Revol to say she had seen the first Z. Rubbia announced the result at a seminar on 27 May, and UA2 confirmed the discovery on 7 June. “The Spp̄S was a most unlikely project but was a game changer,” said Haidt. “It gave CERN tremendous recognition and paved the way for future collaborations, at LEP then LHC.”

Former UA2 member Pierre Darriulat (Vietnam National Space Centre) concurred: “It was not clear at all at that time if the collider would work, but the machine worked better than expected and the detectors better than we could dream of.” He also spoke powerfully about the competition between UA1 and UA2: “We were happy, but it was spoiled in a way because there was all this talk of who would be ‘first’ to discover. It was so childish, so ridiculous, so unscientific. Our competition with UA1 was intense, but friendly and somewhat fun. We were deeply conscious of our debt toward Carlo and Simon [van der Meer], so we shared their joy when they were awarded the Nobel prize two years later.” Darriulat emphasised the major role of the Intersecting Storage Rings and the input of theorists such as John Ellis and Mary K Gaillard, reserving particular praise for Rubbia. “Carlo did the hard work. We joined at the last moment. We regarded him as the King, even if we were not all in his court, and we enjoyed the rare times when we saw the King naked!”

Our competition with UA1 was intense, but friendly and somewhat fun

Pierre Darriulat

The ten years between the discovery of neutral currents and the W and Z bosons are what took CERN “from competent mediocrity to world leader”, said Lyn Evans in his account of the Spp̄S feat. Simon van der Meer deserved special recognition, not just for his 1972 paper on stochastic cooling but also for his earlier invention of the magnetic horn, which was pivotal in increasing the neutrino flux in Gargamelle. Evans explained the crucial roles of the Initial Cooling Experiment and the Antiproton Accumulator, and the many modifications needed to turn the SPS into a proton–antiproton collider. “All of this knowledge was put into the LHC, which worked from the beginning extremely well and continues to do so. One example was intrabeam scattering. Understanding this is what gives us the very long beam lifetimes at the LHC.”

Long journey
The electroweak adventure began long before CERN existed, pointed out Wolfgang Hollik, with 2023 also marking the 90th anniversary of Fermi’s four-fermion model. The incorporation of parity violation came in 1957 and the theory itself was constructed in the 1960s by Glashow, Salam, Weinberg and others. But it wasn’t until ’t Hooft and Veltman showed in the early 1970s that the theory is renormalisable that it became a fully fledged quantum field theory. This opened the door to precision electroweak physics and the ability to search for new particles, in particular the top quark and the Higgs boson, that were not yet directly accessible to experiments. Electroweak theory also drove a new approach in theoretical particle physics based around working groups and common codes, noted Hollik.

The afternoon session of the symposium took participants deep into the myriad electroweak measurements at LEP and SLD (Guy Wilkinson, University of Oxford), the Tevatron and HERA (Bo Jayatilaka, Fermilab), and finally the LHC (Maarten Boonekamp, Université Paris-Saclay, and Elisabetta Manca, UCLA). The challenges of such measurements at a hadron collider, especially of the W-boson mass, were emphasised, as were their synergies with QCD measurements in improving the precision of parton distribution functions.

The electroweak journey is far from over, however, with the Higgs boson offering the newest exploration tool. Rounding off a day of excellent presentations and personal reflections, Rebeca Gonzalez Suarez (Uppsala University) imagined a symposium 40 years from now, when the proposed FCC-ee collider at CERN has been operating for 16 years and physicists have reconstructed nearly 10¹³ W and Z bosons. Such a machine would take the precision of electroweak physics into the keV realm, translating into a factor-of-seven increase in sensitivity to new-physics energy scales. “All of this brings exciting challenges: accelerator R&D, machine-detector interface, detector design, software development, theory calculations,” she said. “If we want to make it happen, now is the time to join and contribute!”

Kaon physics at a turning point

Only two experiments worldwide are dedicated to the study of rare kaon decays: NA62 at CERN and KOTO at J-PARC in Japan. NA62 plans to conclude its efforts in 2025, and both experiments are aiming to reach important milestones on this timescale. The future experimental landscape for kaon physics beyond this date is by no means clear, however. With proposals for next-generation facilities such as HIKE at CERN and KOTO-II at J-PARC currently under scrutiny, more than 100 kaon experts met at CERN from 11 to 14 September for a hybrid workshop to take stock of the experimental and theoretical opportunities in kaon physics in the coming decades.

Kaons, which contain one strange quark and either an up or a down quark, have played a central role in the development of the Standard Model (SM). Augusto Ceccucci (CERN) pointed out that many of the SM’s salient features – including flavour mixing, parity violation, the charm quark and CP violation – were discovered through the study of kaons, leading to the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix. The full particle content of the SM was finally established experimentally at CERN with the Higgs-boson discovery in 2012, but many open questions remain.

The kaon’s special role in this context was the central topic of the workshop. The study of rare kaon decays provides unique sensitivity to new physics, up to energy scales higher than those directly accessible at collider experiments. In the SM, the rare decay of a charged or neutral kaon into a pion plus a pair of charged or neutral leptons is strongly suppressed, even more so than the analogous rare B-meson decays. This is due to the absence at tree level of flavour-changing neutral-current interactions (e.g. s → d) in the SM. Such a transition can only proceed at loop level, involving the creation of at least one very heavy (virtual) electroweak gauge boson (figure “Decayed”, left). While experimentally this suppression makes it a formidable challenge to identify the decay products amongst a variety of background signals, new-physics contributions could leave a measurable imprint through tree-level or virtual contributions. In contrast to rare B decays, the “gold-plated” rare kaon decay channels K+→π+νν̄ and KL→π0νν̄ do not suffer from large hadronic uncertainties and are experimentally clean due to the limited number of possible decay channels.
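Schematically (a textbook-level sketch rather than a result presented at the workshop), the short-distance part of the s → d transition behind these decays can be written as an effective operator,

\[
\mathcal{H}_{\rm eff} \;\sim\; \frac{G_{F}}{\sqrt{2}}\,\frac{\alpha}{2\pi\sin^{2}\theta_{W}}\,
V^{*}_{ts}V_{td}\,X(x_{t})\,(\bar{s}_{L}\gamma^{\mu}d_{L})(\bar{\nu}_{L}\gamma_{\mu}\nu_{L}) \;+\; \text{(charm-loop terms)},
\]

where the loop function X(x_t) and the tiny CKM factor |V*ts Vtd| of order 4 × 10⁻⁴ drive the SM branching ratios down to the 10⁻¹¹–10⁻¹⁰ level while keeping hadronic uncertainties under control.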

Figure “Decayed”: diagrams of rare kaon decays

The charged-kaon decay is currently being studied at NA62, and a measurement of its branching ratio with a precision of 15% is expected by 2025. However, as highlighted by NA62 physics coordinator Karim Massri (Lancaster University), to improve this measurement and thus significantly increase the likelihood of a discovery, the experimental uncertainty must be reduced to the level of the theoretical one, i.e. 5%. This can only be achieved with a next-generation experiment. The HIKE experiment, a proposed high-intensity kaon factory at CERN currently under approval, would reach the 5% precision goal on the measurement of K+→π+νν̄ during its first phase of operation. Afterwards, a second phase with a neutral KL beam, aiming at the first observation of the very rare decays KL→π0ℓ+ℓ−, is foreseen. With a setup and detectors optimised for the measurement of the most challenging processes, the HIKE programme would be able to achieve unprecedented precision on most K+ and KL decays.

For KOTO, Koji Shiomi and Hajime Nanjo reported on the experimental progress on KL→π0νν̄ and presented a new bound on its branching ratio. A planned second phase of KOTO, if funded, aims to measure the branching ratio with a precision of 20%. Although principally designed for the study of (rare) bottom-quark decays, LHCb can also provide information about rare decays of the shorter-lived KS. Radoslav Marchevski (EPFL Lausanne) presented the status of and prospects for the proposed LHCb Phase-II upgrade.

From the theory perspective, underpinned by impressive new perturbative, lattice-QCD and effective-field-theory calculations presented at the workshop, the planned measurement of K+→π+νν̄ at HIKE clearly has discovery potential, remarked Gino Isidori (University of Zurich). Together with other rare decay channels that would be measured by HIKE, such as KL→μ+μ−, KL→π0ℓ+ℓ− and K+→π+ℓ+ℓ−, added Giancarlo D’Ambrosio (INFN), combined global theory analyses of the experimental data would allow new physics to be discovered if it lies within the reach of the experiment, and would otherwise provide solid constraints on it.

A decision on HIKE and other proposed experiments in CERN’s North Area is expected in early December.

From the cosmos to the classroom

Although science education, communication and public outreach are distinct fields of research and practice, they often overlap and intertwine. Common to the three fields is their shared goal of increasing scientific literacy, improving attitudes to STEM (science, technology, engineering and mathematics) and empowering society to engage with and apply scientific knowledge in decision-making processes. In light of challenges such as climate change and rapid advances in artificial intelligence, achieving these goals has become more relevant than ever.

Science education, communication and outreach have developed from different origins, at different times, and in response to diverse public needs. The formation of science education as a proper discipline, for example, dates back to educational reform movements in the late 19th century. Science communication, on the other hand, is a relatively young field that only took a clear form in the second half of the 20th century in response to a growing awareness of the role and impact of scientific progress.

While it is true that practitioners often cross the disciplinary boundaries of these fields, education, communication and outreach today represent distinct professions, each with its own identity, methods and target groups. Whereas science educators tend to focus on individual learners, often in school settings, public-outreach professionals aim to inspire interest in and engagement with science among the general public in out-of-school settings. Moreover, the differences go beyond variations in target groups and domains: the distinction between education and communication is substantial, and many science journalists resist the suggestion that they serve a role in education, arguing that their primary goal is to provide information.

Questions then arise: how do these disciplines overlap, diverge and interact, and how have their practices evolved over time? And how do these evolutions affect our understanding of science and its place in society? As two academics whose career trajectories have spanned science, education and communication, we have experienced intersections and interactions between these fields up close and see exciting opportunities for the future of science engagement.

Magdalena: farewell to the deficit model

What stands out to me is the parallel development that science education and communication have undergone over the past decades. Despite their different origins, traditions, ideas, models and theories, both fields have seen a move away from simple one-way knowledge transmission towards genuine and meaningful engagement with their respective target groups, whether in a classroom, at a public lecture or at a science festival.

In classrooms, there has been a noticeable shift from teacher-centred to student-centred instructional practices. In the past, science teachers were active (talking and explaining) while students were passive (listening). Today, the focus is on the students and how to engage them actively in the learning process. A popular approach is enquiry-based science education, where students take the lead (asking questions, formulating hypotheses, running experiments and drawing conclusions) and teachers act as facilitators.

Collaboration between science education researchers and practitioners is critical to improving science education

One excellent example of such an enquiry-based approach is Mission Gravity, an immersive virtual-reality (VR) programme for lower- and upper-secondary students (see “Mission gravity” image). Developed by the education and public outreach team at OzGrav, the Australian Research Council Centre of Excellence for Gravitational Wave Discovery, the programme aims to teach stellar evolution and scientific modelling by inviting students on a virtual field trip to nearby stars. The VR environment enables students to interact with stars, make measurements and investigate stellar remnants. By collecting data, forming hypotheses and trying to figure out how stars change over time, the students discover the laws of physics instead of merely hearing about them.

The shift towards student-centred education has been accompanied by an evolution in our understanding of student learning. Early learning theories leaned heavily on ideas of conditioning, treating learning as a predictable process that teachers could control through repetition and reinforcement. Contemporary models consider cognitive functions, including perception, problem-solving and imagination, and recognise the crucial role of social and cultural contexts in learning science. Nowadays, we acknowledge that education is most meaningful when students take responsibility for their learning and connect the subject matter to their own lives.

Virtual-reality experience

For instance, my PhD project on general-relativity education leveraged sociocultural learning theory to design an interactive learning environment, incorporating collaborative activities that encourage students to articulate and discuss physics concepts. This “talking physics” approach is great for fostering conceptual understanding in modern physics, and we refined the approach further through iterative trials with physics teachers to ensure an authentic learning experience. Again, collaboration between science education researchers and practitioners (in this case, physics teachers) is critical to improving science education.

Similarly, science communication has transitioned from deficit models to more dialogic and participatory ones. The earlier deficit models perceived society as lacking scientific understanding and envisaged a one-way flow of information from expert scientists to a passive audience – quite similar to the behaviourist approach prevalent in the early days of science education. Modern science communication practices foster a dialogue where scientists and the public engage in meaningful discussions. In particular, the participatory model positions scientists and the public as equal participants in an open conversation about science. Here, the interaction is as critical as the outcomes of the discussions. This places emphasis on the quality of communication and meaning-making, similar to what many consider the goals of good science education (see “Increasing interaction” figure).

To illustrate a participatory approach to science communication, consider the City-Lab: Galileo initiative in Zurich. This initiative integrates theatre, podcasts and direct interactions between scientists, actors and citizens to foster dynamic conversations about the role of science in society. A range of media and formats was employed to engage the public beyond traditional forms, from audio-visual exhibits to experiences where the public could attend a play and then join a post-show discussion with scientists. By directly involving scientists and the public in such exchanges, City-Lab: Galileo invites everyone to shape a dialogue about science and society, underlining the shifting paradigms in science communication.

Urban: the power of semiotics

For me, a ground-breaking moment in how we communicate disciplinary knowledge came when I saw two astronomers in a coffee room discussing the evolution of a stellar cluster. They were using their hands to sign the so-called turn-off point in a Hertzsprung–Russell diagram in mid-air, indicating their individual perspective on the age of the cluster. These hand-wavings would most likely not mean anything to anyone outside the discipline and I was intrigued by how powerful communication using such semiotic resources can be. The conclusion is that communicating science does not just involve speech or text.

Particularly intriguing are the challenges students and others have with visualising the world in 3D and 4D from 2D input, for example in astronomical images, which I started to notice while teaching astronomy. How hard can it be to “see”, in one’s head, the 3D structure of a nebula (see “Nebulous” image), a galaxy or even the Sun–Earth–Moon system when looking at a 2D representation of it? It turns out to be very hard for most people. This led to an investigation of people’s ability to extrapolate 3D structure in their minds, which immediately raised another question: what do people actually “see”, or discern, when engaged in disciplinary communication, or when looking at the stunning images from the Hubble or Webb space telescopes? In the literature this is now referred to as disciplinary discernment.

The “rosette” model of science communication

Researching such questions relies on methods that are quite different from those used in the natural sciences. Often data exists in the form of transcripts of interviews, which are then read, coded and characterised for categories of discernment. In the case of spatial perception, this inductive process led to an anatomy of multidimensional discernment describing the different ways that the participants experience three-dimensionality in particular and disciplinary discernment in general. It also identified a deeper underlying challenge that all science learning depends upon: large and small spacetime scales. Spatial and temporal scales, in particular large and small, are identified as threshold concepts in science. As a result, the success of any teaching activity in schools, science centres and other outreach activities depends on how well students come to understand these scales. With very little currently known, there is much to explore.

As an educational researcher in physics, one has to be humbled by the great diversity of ideas about what it means and entails to teach and learn physics. However, I’ve come to appreciate a particular theoretical framework based on studies of disciplinary communication in a special group in society: physicists. This group, and indeed any group with shared interests, develops and shares a special way of communicating knowledge. In addition to highly specialised language, its members use a whole setup of representations, tools and activities. These are often referred to as semiotic resources and are studied in a theoretical framework called social semiotics.

Social semiotics turns out to be a powerful way to study and analyse the disciplinary communication in physics and astronomy departments. I usually describe the framework as a jigsaw puzzle that we are still building. We have identified, and described in detail, certain pieces in this theory but there are more to explore. One such piece is embodiment and what the use of gestures means for communicating disciplinary knowledge, such as the hand-waving astronomers in the coffee room. It is similar to the theory-building processes in physics, where through empirical investigations physicists try to construct a solid theory for how nature works.

Joint conclusion

Understanding how we think about and communicate physics is as interesting and challenging as physics itself. We believe that the two are inseparable: as we explore the landscape of physics to understand the universe, we are also exploring the human mind and how it comes to understand the universe. Physicists, too, have to be able to communicate with and engage other physicists. The scientific process of publishing research is an excellent example of how challenging this can be: researchers must convince their colleagues around the world that what they have found is correct, or else they fail. The history of science is full of such examples. In the 1920s, for instance, the Swedish astronomer Knut Lundmark observed stars in distant galaxies and found that these galaxies seem to move away from us – in essence, he had discovered the expansion of the universe. However, he was unable to convince (read: communicate this to) his colleagues, and a few years later Edwin Hubble did the same thing and made a more convincing case.

Finally, this article tries to shed light on the challenges of communicating physics not just to physicists but also to students and the public. The challenges are similar but arise at different levels, depending on the people involved in this exchange of knowledge. What the physicist tries to communicate and what the audience discerns, and ultimately learns, about the universe are often two different things.

Supernovae probe neutrino self-interactions

Towards the end of the lifetime of a very massive (>8 M☉) star, the nuclear fusion processes in its core are no longer sufficient to balance the ever-increasing pull of gravity. This eventually causes the core to collapse, releasing an enormous amount of matter and energy via shockwaves. Nearly 99% of such a core-collapse supernova’s energy is released in the form of neutrinos, usually leaving behind a compact proto-neutron star with a mass of about 1.5 M☉ and a radius of about 10 km. For more massive remnant cores (>3 M☉), a black hole is formed instead. The near-zero mass and electrical neutrality of neutrinos make their detection particularly challenging: when the famous supernova SN1987a occurred 168,000 light-years from Earth in 1987, the IMB observatory in the US detected just eight neutrinos, BNO in Russia detected 13 and Kamiokande II in Japan detected 11 (CERN Courier March/April 2021 p12).

Besides telling us about the astrophysical processes inside a core-collapse supernova, such neutrino detections might also tell us more about the particles themselves. The Standard Model (SM) predicts feeble self-interactions among neutrinos (νSI), but probing them remains beyond the reach of present-day laboratories on Earth. As outlined in a white paper published earlier this year by Jeffrey Berryman and co-workers, νSI (mediated, for example, by a new scalar or vector boson) enter many beyond-the-SM theories that attempt to explain the generation of neutrino masses and the origin of dark matter. One probe that can be used to explore such interactions is core-collapse supernovae, since the extreme conditions in these catastrophic events make νSI more likely to occur and therefore to affect the behaviour of the emitted neutrinos.

Recently, Po-Wen Chang and colleagues at Ohio State University explored this possibility by considering the formation of a tightly coupled neutrino fluid that expands according to relativistic hydrodynamics, thereby affecting the neutrino pulses detected on Earth. The team derives solutions to the relativistic hydrodynamic equations for two cases: a “burst outflow”, in which a uniform neutrino fluid undergoes free expansion in vacuum, and a “wind outflow”, which corresponds to steady-state solutions of the hydrodynamic equations. In the current work, the authors focus on the former.

In a scenario without νSI, the neutrinos escape and form a shell, with a thickness of about 10⁵ times the radius of the proto-neutron star, that travels away freely at the speed of light. In a scenario with νSI, by contrast, the neutrinos do not move freely immediately after escaping the proto-neutron star but instead undergo repeated elastic scattering among themselves. As a result, the neutrino shell continues expanding radially until its density becomes low enough for the neutrinos to decouple and begin free-streaming. The thickness of the shell at this instant depends on the strength of the νSI and is expected to be much larger than in the no-νSI case. This, in turn, would translate into longer neutrino signals in detectors on Earth.
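A rough consistency check (numbers assumed here for illustration, not taken from the paper): without self-interactions the shell thickness is set by the emission time,

\[
\Delta r \;\approx\; c\,\Delta t_{\rm emission} \;\approx\; 3\times10^{6}\ {\rm km}
\quad\text{for}\quad \Delta t_{\rm emission}\approx 10\ {\rm s},
\]

which is a few times 10⁵ proto-neutron-star radii for R ≈ 10 km, in line with the scale quoted above; νSI stretch this shell, and hence the detected pulse, further.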

The effects of neutrino self-interactions on SN1987a are starting to become clearer

Data from SN1987a, where the neutrino signal lasted for about 10 s, broadly agree with the no-νSI scenario and were used to set limits on very strong νSI. On the other hand, if νSI were present in the burst-outflow scenario, the proposed model gives very robust results, with an estimated sensitivity of 3 s. Additionally, the authors argue that the steady-state wind-outflow case might be more likely to occur, a dedicated treatment of which is left for future work.

For the first time since its observation 36 years ago, the effects of νSI on SN1987a are starting to become clearer. Further advances in this direction are much anticipated, so that when the next supernova occurs it can help clear the fog that surrounds our current understanding of neutrinos.

Antinuclei production in pp collisions

LHCb figure 1

At the European Physical Society Conference on High Energy Physics, held in Hamburg in August, the LHCb collaboration announced first results on the production of antihelium and antihypertriton nuclei in proton–proton (pp) collisions at the LHC. These promising results open a new research field for LHCb, one pioneered up to now by ground-breaking work from the ALICE collaboration in the central rapidity interval |y| < 0.5. By extending the measurements into the so-far unexplored forward region 1.0 < y < 4.0, the LHCb results provide new experimental input for deriving the production cross sections of antimatter particles formed in pp collisions, which are not calculable from first principles.

LHCb’s newly developed helium-identification technique mainly exploits the energy losses through ionisation in the silicon sensors upstream (VELO and TT stations) and downstream (Inner Tracker) of the LHCb magnet. The amplitude measurements from up to about 50 silicon layers are combined for each subdetector into a log-likelihood estimator. In addition, timing information from the Outer Tracker and velocity measurements from the RICH detectors are used to improve the separation between heavy helium nuclei (with charge Z = 2) and lighter, singly charged particles (mostly charged pions). With a signal efficiency of about 50%, a nearly background-free sample of 1.1 × 10⁵ helium and antihelium nuclei is identified in the data collected during LHC Run 2, from 2016 to 2018 (see figure, inset).
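The layer-by-layer combination can be pictured with a toy calculation (a minimal sketch, not LHCb code; the Gaussian response curves and all numbers are invented for illustration, standing in for the calibrated response templates used in the real analysis):

    import numpy as np

    def log_likelihood_ratio(amplitudes, pdf_signal, pdf_background):
        """Sum of per-layer log-likelihood differences (helium vs pion hypothesis)."""
        a = np.asarray(amplitudes, dtype=float)
        return float(np.sum(np.log(pdf_signal(a)) - np.log(pdf_background(a))))

    def gaussian(mean, sigma):
        return lambda a: np.exp(-0.5 * ((a - mean) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    pdf_pi = gaussian(1.0, 0.3)   # singly charged particle: unit ionisation (arbitrary units)
    pdf_he = gaussian(4.0, 0.8)   # helium: roughly Z^2 = 4 times more ionisation on average

    # A track with ~50 layer measurements clustered near 4 gets a large positive score
    # and would be kept as a helium candidate; a pion-like track gets a negative score.
    rng = np.random.default_rng(0)
    print(log_likelihood_ratio(rng.normal(4.0, 0.8, size=50), pdf_he, pdf_pi))  # large positive
    print(log_likelihood_ratio(rng.normal(1.0, 0.3, size=50), pdf_he, pdf_pi))  # large negative

Summing per-layer log-likelihood differences in this way is the kind of discriminant that, with real detector response templates, yields the roughly 50%-efficient, nearly background-free helium selection described above.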

The helium identification method proves the feasibility of new research fields at LHCb

As a first step towards a light-nuclei physics programme at LHCb, hypertritons are reconstructed via their two-body decay into an identified helium nucleus and a charged pion. The hypertriton (3ΛH) is a bound state of a proton, a neutron and a Λ hyperon that can be produced via coalescence in pp collisions. These states provide experimental access to the hyperon–nucleon interaction through measurements of their lifetime and binding energy. Hyperon–nucleon interactions have significant implications for the understanding of astrophysical objects such as neutron stars. For example, the presence of hyperons in the dense inner core can significantly suppress the formation of high-mass neutron stars. As a result, there is some tension between the observation of neutron stars heavier than two solar masses and corresponding hypertriton results from the STAR collaboration at Brookhaven. ALICE seems to have resolved this tension between hypertriton measurements at colliders and neutron-star observations, but an independent confirmation of the ALICE result has so far been missing, and can be provided by LHCb.

The invariant-mass distribution of hypertriton and antihypertriton candidates is shown in figure 1. More than 100 signal decays are reconstructed, with a statistical uncertainty on the mass of 0.16 MeV, similar to that of STAR. As a next step, corrections for efficiency and acceptance obtained from simulation, as well as systematic uncertainties on the mass scale and on the lifetime measurement, will be derived.

The new helium identification method from LHCb summarised here proves the feasibility of a rich programme of measurements in QCD and astrophysics involving light antinuclei in the coming years. The collaboration also plans to apply the method to other LHCb Run 2 datasets, such as proton–ion, ion–ion and SMOG collision data.

Collectivity in small systems produced at the LHC

ALICE figure 1

High-energy heavy-ion collisions at the LHC exhibit strong collective flow effects in the azimuthal angle distribution of final-state particles. Since these effects are governed by the initial collision geometry of the two colliding nuclei and the hydrodynamic evolution of the collision, the study of anisotropic flow is a powerful way to characterise the production of the quark–gluon plasma (QGP) – an extreme state of matter expected to have existed in the early universe.
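Anisotropic flow is conventionally quantified through a Fourier decomposition of the azimuthal distribution of produced particles,

\[
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\; 1 + 2\sum_{n} v_{n}\cos\!\big[n(\varphi - \Psi_{n})\big],
\]

where v2 (elliptic) and v3 (triangular) are the coefficients discussed below and Ψn are the corresponding symmetry-plane angles.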

To their surprise, researchers on the ALICE experiment have now revealed similar flow signatures in small systems such as proton–proton (pp) and proton–lead (pPb) collisions, where QGP formation was previously assumed not to occur. The origin of the flow signals in small systems – and in particular whether the mechanisms behind these correlations share commonalities with those in heavy-ion collisions – is not yet fully understood. To better interpret these results, and thus to understand the limit of the system size that exhibits fluid-like behaviour, it is important to carefully single out possible scenarios that can mimic the effect of collective flow.

Anisotropic-flow measurements are more difficult in small systems because non-flow effects, such as the presence of jets, become more prominent. It is therefore important to use methods in which the non-flow contributions are properly subtracted. One such method, the so-called low-multiplicity template fit, has been widely used by several experiments to determine and subtract the non-flow contribution.
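In the form commonly used at the LHC (a sketch of the method; F, G and the v_{n,n} are free parameters of the fit), the template fit describes the per-trigger pair yield in a given multiplicity class as a scaled low-multiplicity (LM) yield plus a flow-modulated pedestal:

\[
Y^{\rm templ}(\Delta\varphi) \;=\; F\,Y^{\rm LM}(\Delta\varphi)
\;+\; G\Big[1 + 2\sum_{n=2,3} v_{n,n}\cos(n\,\Delta\varphi)\Big],
\]

with single-particle coefficients obtained as v_n = √v_{n,n} under the assumption of factorisation; the scaled LM term is what absorbs the jet-like non-flow contribution.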

The origin of the flow signals in small systems is not yet fully understood

The ALICE collaboration studied long-range angular correlations for pairs of charged particles produced in pp and pPb collisions at centre-of-mass energies of 13 TeV and 5.02 TeV, respectively. Flow coefficients were extracted from these correlations using the template-fit method in samples of events with different charged-particle multiplicities. The method accounts for the fact that the yield of jet fragments increases with particle multiplicity, and allowed physicists to examine, for the first time, the assumptions made in the low-multiplicity template fit – including a possible jet-shape modification – and to demonstrate their validity.

Figure 1 shows the measurement of two components of anisotropic flow – elliptic (v2) and triangular (v3) – as a function of charged-particle multiplicity at midrapidity (Nch). The data show decreasing trends towards lower multiplicities. In pp collisions, the results suggest that the v2 signal disappears below Nch = 10. The results are then compared with hydrodynamic models. To accurately describe the data, especially for events with low multiplicities, a better understanding of initial conditions is needed.

These results can help to constrain the modelling of initial-state simulations, as the significance of initial-state effects increases for collisions resulting in low multiplicities. The measurements with larger statistics from Run 3 data will push down this multiplicity limit and reduce the associated uncertainties.
