The Science of Learning Physics

A greying giant of the field speaks to the blackboard for 45 minutes before turning, dismissively seizing paper and scissors, and cutting a straight slit. The sheet is twisted to represent the conical space–time described by the symbols on the board. A lecture theatre of students is transfixed in admiration.

This is not the teaching style advocated by José Mestre and Jennifer Docktor in their new book The Science of Learning Physics. And it’s no longer typical, say the authors, who suggest that approximately half of physics lecturers use at least one “evidence-based instructional practice” – jargon, most often, for an interactive teaching method. As colleagues joked when I questioned them on their teaching styles, there is still a performative aspect to lecturing, but these days it is just as likely to reflect the rock-star feeling of having a hundred camera phones pointed at you – albeit so the students can snap a QR code on your slide to take part in an interactive mid-lecture quiz.

Mestre and Docktor, who are both educational psychologists with a background in physics, offer intriguing tips to maximise the impact of such practices. After answering a snap poll, they say, students should discuss with their neighbour before being polled again. The goal is not just to allow the lecturer to tailor their teaching, but also to allow students to “construct” their knowledge. Lecturing, they say, gives piecemeal information, but does not connect it. Neurons fire, but synaptic connections are not trained. And as the list of neurotransmitters that reinforce synaptic connections includes dopamine and serotonin, making students feel good by answering questions correctly may be worth the time investment.

Relative to other sciences, physics lecturers are leading the way in implementing evidence-based instructional practices, but far too few are well trained, say Mestre and Docktor, who want to bring the tools and educational philosophies of the high-school physics teacher to the lecture theatre. Swiss and Soviet developmental psychologists Jean Piaget and Lev Vygotsky are duly namechecked. “Think–pair–share”, mini whiteboards and flipping the classroom (not a discourteous gesture but the advance viewing of pre-recorded lectures before a more participatory lecture), are the order of the day. Students are not blank slates, they write, but have strong attachments to deeply ingrained and often erroneous intuitions that they have previously constructed. Misconceptions cannot be supplanted wholesale, but must be unknotted strand by strand. Lecturers should therefore explicitly describe their thought processes and encourage students to reflect on “metacognition”, or “thinking about thinking”. Here the text is reminiscent of Nobelist Daniel Kahneman’s seminal text Thinking, Fast and Slow, which divides thinking into two types: “system 1”, which is instinctive and emotional, and “system 2”, which is logical but effortful. Lecturers must fight against “knee-jerk” reasoning, say Mestre and Docktor, by modelling the time-intensive construction of knowledge, rather than aspiring to misleading virtuoso displays of mathematical prowess. Wherever possible, this should be directly assessed by giving marks not just for correct answers, but also for identifying the “big idea” and showing your working.

Disappointingly, examples are limited to pulleys and ramps, and, somewhat ironically, the book’s dusty academic tone may prove ineffective at teaching teachers to teach. But no other book comes close to The Science of Learning Physics as a means for lecturers to reflect on and enrich their teaching strategies, and it is highly recommended on that basis. That said, my respect for my old general-relativity lecturer remained undimmed as I finished the last page. Those old-fashioned lectures were hugely inspiring – a “non-cognitive aspect” that Mestre and Docktor admit their book does not consider.

In search of WISPs

The ALPS II experiment at DESY

The Standard Model (SM) cannot be the complete theory of particle physics. Neutrino masses evade it. No viable dark-matter candidate is contained within it. And under its auspices the electric dipole moment of the neutron, experimentally compatible with zero, requires the cancellation of two non-vanishing SM parameters that are seemingly unrelated – the strong-CP problem. The physics explaining these mysteries may well originate from new phenomena at energy scales inaccessible to any collider in the foreseeable future. Fortunately, models involving such scales can be probed today and in the next decade by a series of experiments dedicated to searching for very weakly interacting slim particles (WISPs).

WISPs are pseudo Nambu–Goldstone bosons (pNGBs) that arise automatically in extensions of the SM from global symmetries which are broken both spontaneously and explicitly. NGBs are best known for being “eaten” by the W and Z bosons in electroweak gauge-symmetry breaking, supplying their longitudinal degrees of freedom via the Higgs mechanism, but theorists have also postulated a bevy of pNGBs that get their tiny masses by explicit symmetry breaking and are potentially discoverable as physical particles. Typical examples arising in theoretically well-motivated grand-unified theories are axions, flavons and majorons. Axions arise from a broken “Peccei–Quinn” symmetry and could potentially explain the strong-CP problem, while flavons and majorons arise from broken flavour and lepton symmetries.

The Morpurgo magnet

Being light and very weakly interacting, WISPs would be non-thermally produced in the early universe and thus remain non-relativistic during structure formation. Such particles would inevitably contribute to the dark matter of the universe. WISPs are now the target of a growing number and variety of experimental searches that are complementary to new-physics searches at colliders.

Among theorists and experimentalists alike, the axion is probably the most popular WISP. Recently, massive efforts have been undertaken to improve the calculations of model-dependent relic-axion production in the early universe. This has led to a considerable broadening of the mass range compatible with the explanation of dark matter by axions. The axion could make up all of the dark matter in the universe for a symmetry-breaking scale fₐ between roughly 10⁸ and 10¹⁹ GeV (the lower limit being imposed by astrophysical arguments, the upper one by the Planck scale), corresponding to axion masses from 10⁻¹³ eV to 10 meV. For other light pNGBs, generically dubbed axion-like particles (ALPs), the parameter range is even broader. With many plausible relic-ALP-production mechanisms proposed by theorists, experimentalists need to cover as much of the unexplored parameter range as possible.
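
For orientation, the quoted window follows from the standard approximate relation between the QCD-axion mass and its symmetry-breaking scale; the numerical coefficient below comes from the chiral-perturbation-theory literature and is supplied here as an assumption, not taken from the article itself.

```latex
% Approximate QCD-axion mass versus decay constant f_a:
m_a \;\simeq\; 5.7\,\mu\mathrm{eV} \times \frac{10^{12}\,\mathrm{GeV}}{f_a}
% e.g. f_a = 10^{19} GeV gives m_a of order 10^{-13} eV,
% reproducing the lower end of the quoted mass range.
```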

Although the strengths of the interactions between axions or ALPs and SM particles are very weak, being inversely proportional to fₐ, several strategies for observing them are available. Limits and projected sensitivities span several orders of magnitude in the mass–coupling plane (see “The field of play” figure).

Since axions or ALPs can usually decay to two photons, an external static magnetic field can substitute for one of the two photons and induce axion-to-photon conversion. Originally proposed by Pierre Sikivie, this inverse Primakoff effect can classically be described by adding source terms proportional to B and E to Maxwell’s equations. Practically, this means that inside a static homogeneous magnetic field the presence of an axion or ALP field induces electric-field oscillations – an effect readily exploited by many experiments searching for WISPs. Other processes exploited in some experimental searches are the interactions of axions with electrons, which lead to axion bremsstrahlung, and their interactions with nucleons or nuclei, which lead to nucleon–axion bremsstrahlung or to oscillations of the electric dipole moments of nuclei or nucleons.
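
Schematically, the coupling that drives both the Primakoff and inverse-Primakoff processes can be written as below; this is the standard textbook form, included here for orientation rather than quoted from the article.

```latex
% Axion-photon interaction and its E.B form:
\mathcal{L}_{a\gamma} \;=\; -\tfrac{1}{4}\, g_{a\gamma}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu}
\;=\; g_{a\gamma}\, a\, \vec{E}\cdot\vec{B}, \qquad g_{a\gamma} \propto \frac{1}{f_a}
% In a static field B_0, an oscillating axion field a(t) acts as a source
% term in Maxwell's equations, driving electric-field oscillations at the
% axion's Compton frequency.
```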

The potential to make fundamental discoveries with small-scale experiments is a significant appeal of experimental WISP physics. However, the most solidly theoretically motivated WISP parameter regions and physics questions require setups that go well beyond “table-top” dimensions. These experiments target WISPs that flow through the galactic halo, shine from the Sun, or spring into existence when laser light passes through strong magnetic fields in the laboratory.

Dark-matter halo

Haloscopes target the detection of dark-matter WISPs in the halo of our galaxy, where non-relativistic cold-dark-matter axions or ALPs induce electric-field oscillations as they pass through a magnetic field. The frequency of the oscillations corresponds to the axion mass, and the amplitude to B/fₐ. When limits or projections are given for these kinds of experiments, it is assumed that the particle under scrutiny homogeneously makes up all of the dark matter in the universe, introducing significant cosmological model dependence.

Axion–photon coupling versus axion mass plane

The most advanced haloscopes currently in operation are based on resonant enhancement of the axion-induced electric-field oscillations in tunable resonant cavities. Using this method, the presently running ADMX project at the University of Washington has the sensitivity to discover dark-matter axions with masses of a few µeV. Nuclear-resonance methods could be sensitive to halo dark-matter axions with masses below 1 neV and “fuzzy” dark-matter ALPs down to 10⁻²² eV within the next decade, for example at the CASPEr experiments being developed at the University of Mainz and Boston University. Meanwhile, experiments based on classical LC circuits, such as ABRACADABRA at MIT, are being designed to measure ALP- or axion-induced magnetic-field oscillations in the centre of a toroidal magnet. These could be sensitive in a mass range between 10 neV and 1 µeV.

For dark-matter axions with masses up to approximately 50 µeV, promising developments in cavity technologies such as multiple matched cavities and superconducting or dielectric cavities are ongoing at several locations, including at CAPP in South Korea, the University of Western Australia, INFN Legnaro and the RADES detector, which has taken data as part of the CAST experiment at CERN. Above ~40 µeV, however, the cavity concept becomes more and more challenging, as sensitivity scales with the volume of the resonant cavity, which decreases dramatically with increasing mass (as roughly 1/mₐ³). To reach sensitivity at higher masses, in the region of a few hundred µeV, a novel “dielectric haloscope” is being developed by the MADMAX (Magnetized Disk and Mirror Axion experiment) collaboration for potential installation at DESY. It exploits the fact that static magnetic-field boundaries between media with different dielectric constants lead to tiny power emissions that compensate the discontinuity in the axion-induced electric fields in neighbouring media. If multiple surfaces are stacked in front of each other, this should lead to constructive interference, boosting the emitted power from the expected axion dark matter in the desired mass range to detectable levels. Other novel haloscope concepts, based on meta-materials (“plasma haloscopes”, for example) and topological insulators, are also currently being developed. These could have sensitivity to even higher axion masses, up to a few meV.
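
The unfavourable scaling quoted above follows directly from the resonance condition: the cavity must ring at the axion’s Compton frequency, so its linear dimension tracks the corresponding wavelength. A minimal sketch:

```latex
% Resonance condition and cavity-volume scaling:
\nu_{\rm res} \simeq \frac{m_a c^2}{h}, \qquad
L \sim \frac{c}{\nu_{\rm res}} \propto \frac{1}{m_a}
\;\;\Rightarrow\;\; V \sim L^3 \propto \frac{1}{m_a^{3}}
```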

Staying in tune

In principle, axion-dark-matter detection should be relatively simple, given the very high number density of particles – approximately 3 × 10¹³ axions/cm³ for an axion mass of 10 µeV – and the well-established technique of resonant axion-to-photon conversion. But, as the axion mass is unknown, the experiments must be painstakingly tuned to each possible mass value in turn. After about 15 years of steady progress, the ADMX experiment has reached QCD-axion dark-matter sensitivity in the mass regime of a few µeV.
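
As a back-of-the-envelope check of the numbers quoted above, here is a minimal sketch assuming a local dark-matter density of about 0.3 GeV/cm³; the density value and helper names are illustrative assumptions, not taken from the article.

```python
# Rough check of the axion number density and haloscope tuning frequency.
H_EV_S = 4.136e-15      # Planck constant in eV s
RHO_DM_EV_CM3 = 0.3e9   # assumed local dark-matter density, eV per cm^3

def number_density(m_a_ev: float) -> float:
    """Axions per cm^3 if particles of mass m_a_ev make up all local dark matter."""
    return RHO_DM_EV_CM3 / m_a_ev

def resonance_frequency_hz(m_a_ev: float) -> float:
    """Frequency a haloscope cavity must match: nu = m_a c^2 / h."""
    return m_a_ev / H_EV_S

m_a = 10e-6  # 10 micro-eV, as in the example above
print(f"n  ~ {number_density(m_a):.1e} axions/cm^3")       # ~3e13, as quoted
print(f"nu ~ {resonance_frequency_hz(m_a)/1e9:.2f} GHz")   # ~2.4 GHz microwaves
```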

ADMX uses tunable microwave resonators inside a strong solenoidal magnetic field, and modern quantum sensors for readout. Unfortunately, however, this technology is not scalable to the higher axion-mass region preferred, for example, by cosmological models in which Peccei–Quinn symmetry breaking happened after an inflationary phase of the universe. That’s where MADMAX comes in. The collaboration is working on the dielectric-haloscope concept – initiated and led by scientists at the Max Planck Institute for Physics in Munich – to investigate the mass region around 100 µeV.

Astrophysical hints

Globular clusters

Weakly interacting slim particles (WISPs) could be produced in hot astrophysical plasmas and transport energy out of stars, including the Sun, stellar remnants and other dense sources. Observed lifetimes and energy-loss rates can therefore probe their existence. For the axion, or an axion-like particle (ALP) with sub-MeV mass that couples to nucleons, the most stringent limit, fₐ > ~10⁸ GeV, stems from the duration of the neutrino signal from the progenitor neutron star of Supernova 1987A.

Tantalisingly, there are stellar hints from observations of red giants, helium-burning stars, white dwarfs and pulsars that seem to indicate energy losses with slight excesses with respect to those expected from standard energy emission by neutrinos. These hints may be explained by axions with masses below 100 meV or sub-keV-mass ALPs with a coupling to both electrons and photons.

Other observations suggest that TeV photons from distant blazars are less absorbed than expected by standard interactions with extragalactic background light – the so-called transparency hint. This could be explained by the conversion of photons into ALPs in the magnetic field of the source, and back to photons in astrophysical magnetic fields. Interestingly, these would have about the same ALP–photon coupling strength as indicated by the observed stellar anomalies, though with a mass that is incompatible both with ALPs that could explain dark matter and with QCD axions (see “The field of play” figure).

MADMAX will use a huge ~9 T superconducting dipole magnet with a bore of about 1.35 m and a stored energy of roughly 480 MJ. Such a magnet has never been built before. The MADMAX collaboration teamed up with CEA-IRFU and Bilfinger-Noell and successfully worked out a conceptual design. First steps towards qualifying the conductor are under way. The plan is for the magnet to be installed at DESY inside the old iron yoke of the former HERA experiment H1. DESY is already preparing the required infrastructure, including the liquid-helium supply necessary to cool the magnet. R&D for the dielectric booster, with up to 80 adjustable 1.25 m² disks, is in full swing.

A first prototype, containing a more modest 20 disks of 30 cm diameter, will be tested in the “Morpurgo” magnet at CERN during future accelerator shutdowns (see “Haloscope home” figure). With a peak field strength of 1.6 T, its dipole field will allow new ALP-dark-matter parameter regions to be probed, though the main purpose of the prototype is to demonstrate the operation of the booster system in cryogenic surroundings inside a magnetic field. The MADMAX collaboration is extremely happy to have found a suitable magnet at CERN for such tests. If sufficient funds can be acquired within the next two to three years for magnet construction, and provided that the prototype efforts at CERN are successful, MADMAX could start data taking at DESY in 2028.

While direct dark-matter search experiments like ADMX and MADMAX offer by far the highest sensitivity for axion searches, this sensitivity rests on the assumption that the dark-matter problem is solved by axions; if no signal is discovered, any claimed exclusion limit must rely on specific cosmological assumptions. Other, less model-dependent experiments, such as helioscopes or light-shining-through-a-wall (LSW) experiments, are therefore an extremely beneficial complement to direct dark-matter searches.

Solar axions

In contrast to dark-matter axions or ALPs, those produced in the Sun or in the laboratory should have considerable momentum. Indeed, solar axions or ALPs should have energies of a few keV, corresponding to the temperature at which they are produced. These could be detected by helioscopes, which seek to use the inverse Primakoff effect to convert solar axions or ALPs into X-rays in a magnet pointed towards the Sun, as at the CERN Axion Solar Telescope (CAST) experiment. Helioscopes could cover the mass range compatible with the simplest axion models, in the vicinity of 10 meV, and could be sensitive to ALPs with masses below 1 eV without any tuning at all.

The CAST helioscope, which reused an LHC prototype dipole magnet, has driven this field in the past decade, and provides the most sensitive exclusion limits to date. Going beyond CAST calls for a much larger magnet. For the next-generation International Axion Observatory (IAXO) helioscope, CERN members of the international collaboration worked out a conceptual design for a 20 m-long toroidal magnet with eight 60 cm-diameter bores. IAXO’s design profited greatly from experience with the ATLAS toroid.

BabyIAXO helioscope

In the past three years, the collaboration, led by the University of Zaragoza, has been concentrating its activities on the BabyIAXO prototype in order to finesse the magnet concept, the X-ray telescopes necessary to focus photons from solar axion conversion, and the low-background detectors. BabyIAXO will increase the signal-to-noise ratio of CAST by two orders of magnitude; IAXO by a further two orders of magnitude.

In December 2020 the directorates of CERN and DESY signed a collaboration agreement regarding BabyIAXO: CERN will provide the detailed design of the prototype magnet including its cryostat, while DESY will design and prepare the movable platform and infrastructure (see “Prototype” figure). BabyIAXO will be located at DESY in Hamburg. The collaboration hopes to attract the remaining funds for BabyIAXO so that construction can begin in 2021 and first science runs can take place in 2025. The timeline for IAXO will depend strongly on experiences during the construction and operation of BabyIAXO, with first light potentially possible in 2028.

Light shining through a wall

In contrast to haloscopes, helioscopes do not rely on the assumption that all dark matter is made up of axions. But light-shining-through-wall (LSW) experiments are even less model dependent with respect to ALP production. Here, intense laser light could be converted to axions or ALPs inside a strong magnetic field by the Primakoff effect. Behind a light-impenetrable wall they would be re-converted to photons and detected at the same wavelength as the laser light. The disadvantage of LSW experiments is that they only reach sensitivity to ALPs with masses up to a few hundred µeV and comparatively high couplings to photons. However, this is sensitive enough to test the parameter range consistent with the transparency hint and parts of the mass range consistent with the stellar hints (see “Astrophysical hints” panel).
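
For orientation, in the simplest case of a nearly massless ALP and a single pass through a magnet of field strength B and length L, the textbook conversion and reconversion probabilities take the form below (natural units; the resonant-enhancement factors used by real experiments are omitted from this sketch):

```latex
% Single-pass light-shining-through-a-wall probability:
P_{\gamma\to a} \;\simeq\; \left(\frac{g_{a\gamma} B L}{2}\right)^{2},
\qquad
P_{\rm LSW} \;=\; P_{\gamma\to a}\, P_{a\to\gamma}
\;\simeq\; \left(\frac{g_{a\gamma} B L}{2}\right)^{4}
```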

The Any Light Particle Search (ALPS II) at DESY follows this approach. Because any ALPs would be generated in the experiment itself, no assumptions about their production are needed. ALPS II is based on 24 modified superconducting dipole magnets that have been straightened by brute-force deformation, following their former existence in the proton accelerator of the HERA complex. With the help of two 124 m-long high-finesse optical resonators, embedded in the magnet strings on both sides of the wall, ALPS II is also the first laser-based setup to fully exploit resonance techniques. Two readout systems capable of measuring a 1064 nm photon flux down to a rate of 2 × 10⁻⁵ s⁻¹ have been developed by the collaboration. Compared to the present best LSW limits, provided by OSQAR at CERN, the signal-to-noise ratio will rise by no less than 12 orders of magnitude at ALPS II. Nevertheless, MADMAX would surpass ALPS II in sensitivity to the axion–photon coupling strength by more than three orders of magnitude. This is the price to pay for a model-independent experiment – however, ALPS II principally targets not dark-matter candidates but ALPs indicated by astrophysical phenomena.

Tunnelling ahead

The installation of the 24 dipole magnets in a straight section of the HERA tunnel was completed in 2020. Three clean rooms at both ends and in the centre of the experiment were also installed, and optics commissioning is under way. A first science run is expected for autumn 2021.

ALPS II

In the overlapping mass region up to 0.1 meV, the sensitivities of ALPS II and BabyIAXO are roughly equal. In the event of a discovery, this would provide a unique opportunity to study the new WISP. Excitingly, a similar case might be realised for IAXO: combining the optics and detectors of ALPS II with simplified versions of the dipole magnets being studied for FCC-hh would provide an LSW experiment with “IAXO sensitivity” regarding the axion-photon coupling, albeit in a reduced mass range. This has been outlined as the putative JURA (Joint Undertaking on Research for Axions) experiment in the context of the CERN-led Physics Beyond Colliders study.

The past decade has delivered significant developments in axion and ALP theory and phenomenology. This has been complemented by progress in experimental methods to cover a large fraction of the interesting axion and ALP parameter range. In close collaboration with universities and institutes across the globe, CERN, DESY and the Max Planck Society will together pave the way to the exciting results that are expected this decade.

Still seeking solutions

How did winning a Special Breakthrough Prize last year compare with the Nobel Prize?

Steven Weinberg

It came as quite a surprise because as far as I know, none of the people who have been honoured with the Breakthrough Prize had already received the Nobel Prize. Of course nothing compares with the Nobel Prize in prestige, if only because of the long history of great scientists to whom it has been awarded in the past. But the Breakthrough Prize has its own special value to me because of the calibre of the young – well, I think of them as young – theoretical physicists who are really dominating the field and who make up the selection committee.

The prize committee stated that you would be a recognised leader in the field even if you hadn’t made your seminal 1967 contribution to the genesis of the Standard Model. What do you view as Weinberg’s greatest hits?

There’s no way I can answer that and maintain modesty! That work on the electroweak theory leading to the mass of the W and Z, and the existence and properties of the Higgs, was certainly the biggest splash. But it was rather untypical of me. My style is usually not to propose specific models that will lead to specific experimental predictions, but rather to interpret in a broad way what is going on and make very general remarks, like with the development of the point of view associated with effective field theory. Doing this I hope to try and change the way my fellow physicists look at things, without usually proposing anything specific. I have occasionally made predictions, some of which actually worked, like calculating the pion–nucleon and pion–pion scattering lengths in the mid-1960s using the broken symmetry that had been proposed by Nambu. There were other things, like raising the whole issue of the cosmological constant before the discovery of the accelerated expansion of the universe. I worried about that – I gave a series of lectures at Harvard in which I finally concluded that the only way I can understand why there isn’t an enormous vacuum energy is because of some kind of anthropic selection. Together with two guys here at Austin, Paul Shapiro and Hugo Martel, we worked out the most likely value that would be found, in terms of order of magnitude, which was later found to be correct. So I was very pleased that the Breakthrough Prize acknowledged some of those things that didn’t lead to specific predictions but changed a general framework.

You coined the term effective field theory (EFT) and recently inaugurated the online lecture series All Things EFT. What is the importance of EFT today?

My thinking about EFTs has always been in part conditioned by thinking about how we can deal with a quantum theory of gravitation. You can’t represent gravity by a simple renormalisable theory like the Standard Model, so what do you do? In fact, you treat general relativity the same way you treat low-energy pions, which are described by a low-energy non-renormalisable theory. (You could say it’s a low-energy limit of QCD but its ingredients are totally different – instead of quarks and gluons you have pions). I showed how you can generate a power series for any given scattering amplitude in powers of energy rather than some small coupling constant. The whole idea of EFT is that any possible interaction is there: if it’s not forbidden it’s compulsory. But the higher, more complicated terms are suppressed by negative powers of some very large mass because the dimensionality of the coupling constants is such that they have negative powers of mass, like the gravitational constant. That’s why they’re so weak.
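
In equations, the picture Weinberg sketches is usually summarised as follows; this schematic form is standard in the EFT literature rather than something spelled out in the interview.

```latex
% Effective Lagrangian: renormalisable terms plus a tower of higher-dimension
% operators suppressed by powers of a large scale Lambda:
\mathcal{L}_{\rm eff} \;=\; \mathcal{L}_{d\le 4}
\;+\; \sum_{d>4}\sum_i \frac{c_i^{(d)}}{\Lambda^{\,d-4}}\,\mathcal{O}_i^{(d)}
% Amplitudes then organise as a power series in E/Lambda, which is why the
% non-renormalisable interactions (gravity among them) look so weak at low energy.
```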

If you recognise that the Standard Model is probably a low-energy limit of some more general theory, then you can consider terms that make the theory non-renormalisable and generate corrections to it. In particular, the Standard Model has this beautiful feature that in its simplest renormalisable version there are symmetries that are automatic: at least to all orders of perturbation theory, it can’t violate the conservation of baryon or lepton number. But if the Standard Model just generates the first term in a power series in energy and you allow for more complicated non-renormalisable terms in the Lagrangian, then you find it’s very natural that there would be baryon and lepton non-conservation. In fact, the leading term of this sort is a term that violates lepton number and gives neutrinos the masses we observe. I wish I could claim that I had predicted the neutrino mass, but there already was evidence from the solar neutrino deficit and also it’s not certain that this is the explanation of neutrino masses. We could have Dirac neutrinos in which you have left and right neutrinos and antineutrinos coupling to the Higgs, and in that way get masses without any violation of lepton-number conservation. But I find that thoroughly repulsive because there’s no reason in that case why the neutrino masses should be so small, whereas in the EFT case we have Majorana neutrinos whose small masses are much more natural.
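
The leading lepton-number-violating term Weinberg refers to is the unique dimension-five operator that now bears his name; schematically (the numerical example is an illustration, not a figure from the interview):

```latex
% Weinberg operator and the neutrino mass it induces:
\mathcal{O}_5 \;=\; \frac{c}{\Lambda}\,(LH)(LH)
\quad\Rightarrow\quad
m_\nu \;\sim\; \frac{c\, v^2}{\Lambda}, \qquad v \simeq 246\ \mathrm{GeV}
% e.g. c ~ 1 and Lambda ~ 10^{15} GeV give m_nu ~ 0.06 eV,
% in the ballpark of the observed mass splittings.
```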

On this point, doesn’t the small value of the cosmological constant and Higgs mass undermine the EFT view by pointing to extreme fine-tuning?

Yes, they are a warning about things we don’t understand. The Higgs mass less so; after all, it’s only about a hundred times larger than the proton mass, and we know why the proton mass is so small compared to the GUT or Planck scale: the proton gets its mass not from the quark masses, which have to do with the Higgs, but from the QCD forces, and we know that those become strong very slowly as you come down from high energy. We don’t understand this for the Higgs mass, which, after all, is a term in the Lagrangian, not like the proton mass. But it may be similar – that’s the old technicolour idea, that there is another coupling alongside QCD that becomes strong at some energy where it leads to a potential for the Higgs field, which then breaks electroweak symmetry. Now, I don’t have such a theory, and if I did I wouldn’t know how to test it. But there’s at least a hope for that. Whereas with regard to the cosmological constant, I can’t think of anything along that line that would explain it. I think it was Nima Arkani-Hamed who said to me, “If the anthropic effect works for the cosmological constant, maybe that’s the answer with the Higgs mass – maybe it’s got to be small for anthropic reasons.” That’s very disturbing if it’s true, as we’re going to be left waving our hands. But I don’t know.

Early last year you posted a preprint “Models of lepton and quark masses” in which you returned to the problem of the fermion mass hierarchy. How was it received?

Even in the abstract I advertise how this isn’t a realistic theory. It’s a problem that I first worked on almost 50 years ago. Just looking at the table of elementary particle masses I thought that the electron and the muon were crying out for an explanation. The electron mass looks like a radiative correction to the muon mass, so I spent the summer of 1972 on the back deck of our house in Cambridge, where I said, “This summer I am going to solve the problem of calculating the electron mass as an order-alpha correction to the muon mass.” I was able to prove that if, in a theory, it was natural in the technical sense that the electron would be massless in the tree approximation as a result of an accidental symmetry, then at higher order the mass would be finite. I wrote a paper, but then I just gave it up after no progress, until now when I went back to it, no longer young, and again I found models in which you do have an accidental symmetry. Now the idea is not just the muon and the electron, but the third generation feeding down to give masses to the second, which would then feed down to give masses to the first. Others have proposed what might be a more promising idea, that the only mass that isn’t zero in the tree approximation is the top mass, which is so much bigger than the others, and everything else feeds down from that. I just wanted to show the kinds of cancellations of infinities that can occur, and I worked out the calculations. I was hoping that when this paper came out some bright young physicist would come up with more realistic models, and use these calculational techniques – that hasn’t happened so far but it’s still pretty early.

What other inroads are there to the mass/flavour hierarchy problem?

The hope would be that experimentalists discover some correction to the Standard Model. The problem is that we don’t have a theory that goes beyond the Standard Model, so what we’re doing is floundering around looking for corrections in the model. So far, the only one discovered was the neutrino mass and that’s a very valuable piece of data which we so far have not figured out how to interpret. It definitely goes beyond the Standard Model – as I mentioned, I think it is a dimension-five operator in the effective field theory of which the Standard Model is the renormalisable part.

Weinberg delivering a seminar at CERN in 1979

The big question is whether we can cut off some sub-problem that we can actually solve with what we already know. That’s what I was trying to do in my recent paper and did not succeed in getting anywhere realistically. If that is not possible, it may be that we can’t make progress without a much deeper theory where the constituents are much more massive, something like string theory or an asymptotically safe theory. I still think string theory is our best hope for the future, but this future seems to be much further away than we had hoped it would be. Then I keep being reminded of Democritus, who proposed the existence of atoms in around 400 BCE. Even as late as 1900 physicists like Mach doubted the existence of atoms. They didn’t become really nailed down until the first years of the 20th century. So maybe we have 2500 years ahead of us before we get to the next big step.

Recently the LHC produced the first evidence that the Higgs boson couples to a second-generation fermion, the muon. Is there reason to think the Higgs might not couple to all three generations?

Before the Higgs was discovered it seemed quite possible that the explanation of the hierarchy problem was that there was some new technicolour force that gradually became strong as you came from very high energy to lower energy, and that somewhere in the multi-TeV range it became strong enough to produce a breakdown of the electroweak symmetry. This was pushed by Lenny Susskind and myself, independently. The problem with that theory was then: how did the quarks and leptons get their masses? Because while it gave a very natural and attractive picture of how the W and Z get their masses, it left it really mysterious for the quarks and leptons. It’s still possible that something like technicolour is true. Then the Higgs coupling to the quarks and leptons gives them masses just as expected. But in the old days, when we took technicolour seriously as the mechanism for breaking electroweak symmetry, which, since the discovery of the Higgs we don’t take seriously anymore, even then there was the question of how, without a scalar field, can you give masses to the quarks and leptons. So, I would say today, it would be amazing if the quarks and leptons were not getting their masses from the expectation value of the Higgs field. It’s important now to see a very high precision test of all this, however, because small effects coming from new physics might show up as corrections. But these days any suggestion for future physics facilities gets involved in international politics, which I don’t include in my area of expertise.

Any more papers or books in the pipeline?

I have a book that’s in press at Cambridge University Press called Foundations of Modern Physics. It’s intended to be an advanced undergraduate textbook that takes you from the earliest work on atoms, through thermodynamics, transport theory, Brownian motion, to early quantum theory; then relativity and quantum mechanics, and I even have two chapters that probably go beyond what any undergraduate would want, on nuclear physics and quantum field theory. It unfortunately doesn’t fit into what would normally be the plan for an undergraduate course, so I don’t know if it will be widely adopted as a textbook. It was the result of a lecture course I was asked to give called “thermodynamics and quantum physics” that has been taught at Austin for years. So, I said “alright”, and it gave me a chance to learn some thermodynamics and transport theory.

Martinus J G Veltman 1931–2021

Martinus Veltman

Eminent physicist and Nobel laureate Martinus Veltman passed away in his home town of Bilthoven, the Netherlands, on 4 January. Martinus (Tini) Veltman graduated in physics at Utrecht University, opting first for experimental physics but later switching to theory. After Veltman completed his conscript military service in 1959, Léon Van Hove offered him a position as a PhD student. Veltman started in Utrecht, but later followed Van Hove to the CERN theory division.

CERN opened up a new world, and Tini often mentioned how he benefited from contacts with John Bell, Gilberto Bernardini and Sam Berman. The latter got him interested in weak interactions and in particular neutrino physics. During his time there, Tini spent a short period at SLAC where he started to work on his computer algebra program “Schoonschip”. He correctly foresaw that practical calculations of Feynman diagrams would become more and more complicated, particularly for theories with vector bosons. Nowadays extended calculations beyond one loop are unthinkable without computer algebra.

In 1964 Murray Gell-Mann proposed an algebra of conserved current operators for hadronic matter, which included the weak and electromagnetic currents. He argued that commutators of two currents taken at the same instant in time should “close”, meaning that these commutators can be written as linear combinations of the same set of currents. From this relation one could derive so-called sum rules that can be compared to experiments. Facing the technical problems with this approach, Tini came up with an alternative proposal. In a 1966 paper he simply conjectured that the hadronic currents for the electromagnetic and weak interactions had to be covariantly conserved, where he assumed that the weak interactions were mediated by virtual vector bosons, just as electromagnetic processes are mediated by virtual photons. The current conservation laws therefore had to contain extra terms depending on the photon field and the fields associated with the weak intermediate vector bosons. Quite surprisingly, he could then demonstrate that these new conservation equations suffice to prove the same sum rules. A more important aspect of his approach was only gradually realised, namely that the conservation laws for these currents are characteristic of a non-abelian gauge theory as had been written down more than 10 years earlier by Yang and Mills. Hence Veltman started to work on the possible renormalisability of Yang–Mills theory.

Meanwhile, Veltman had left CERN towards the end of 1966 to accept a professorship at Utrecht. At the end of 1969 a prospective PhD student insisted on working on Yang–Mills theories. Veltman, who was already well aware of many of the pitfalls, only gradually agreed, and so Gerard ’t Hooft joined the programme. This turned out to be a very fruitful endeavour and the work was proudly presented in the summer of 1971. Veltman and ’t Hooft continued to collaborate on Yang–Mills theory. Their 1972 papers are among the finest that have been written on the subject. In 1999 they shared the Nobel Prize in Physics “for elucidating the quantum structure of electroweak interactions”.

With the renormalisability of the electroweak theory established, precision comparisons with experiment were within reach, and Veltman started to work on these problems with postdocs and PhD students. One important tool was the so-called rho parameter (a combination of the W- and Z-boson masses and the weak mixing angle). Its experimental value was close to one, which showed that only models in which the Higgs field starts as a doublet are allowed. From the small deviations from one, it was possible to estimate the mass of the top quark, which had not yet been discovered. Later, when CERN was planning to build the LEP collider, the emphasis changed to the calculation of one-loop corrections for various processes in e⁺e⁻ collisions. As a member of the CERN Scientific Policy Committee (SPC), Veltman strongly argued that LEP should operate at the highest possible energy, well above the W⁺W⁻ threshold, to study the electroweak theory with precision. The Standard Model has since passed all of these tests.
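
For the record, the parameter in question is conventionally defined as follows, with the dominant one-loop correction indicated; the formulas are standard results rather than text from the obituary.

```latex
% Tree-level rho parameter (equal to one for Higgs doublets) and the
% leading top-quark correction, quadratic in m_t:
\rho \;\equiv\; \frac{m_W^2}{m_Z^2 \cos^2\theta_W},
\qquad
\Delta\rho \;\simeq\; \frac{3\, G_F\, m_t^2}{8\sqrt{2}\,\pi^2}
% The quadratic sensitivity to m_t is what allowed the top mass to be
% estimated from precision data before the quark's discovery.
```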

From his early days at CERN it was clear that Tini had a strong interest in confronting theoretical predictions with experimental results, and in the organisation needed to do so. To this end, he was one of a small group of colleagues in the Netherlands to push for a national institute for subatomic physics – Nikhef, which was founded in 1975. In 1981 Tini moved to the University of Michigan in Ann Arbor, returning to the Netherlands after his retirement in 1996.

Veltman made a lasting impact on the field of particle physics, and inspired many students. Until recently he followed what was happening in the field, regularly attending the September meetings of the SPC. Our community will miss his sharp reasoning and his clear-eyed view of particle physics – qualities that are crucial for its development.

Connecting physics with society

Student analysing ATLAS collisions

Science and basic research are drivers of technologies and innovations, which in turn are key to solving global challenges such as climate change and energy. The United Nations has summarised these challenges in 17 “sustainable development goals”, but it is striking how little connection with science they include. Furthermore, as found by a UNESCO study in 2017, the interest of the younger generation in studying science, technology, engineering and mathematics is falling, despite jobs in these areas growing at a rate three times faster than in any other sector. Clearly, there is a gulf between scientists and non-scientists when it comes to the perception of the importance of fundamental research in their lives – to the detriment of us all.

Try asking your neighbours, kids, family members or the mayor of your city whether they know about the medical and other applications that come from particle physics, or the stream of highly qualified people trained at CERN who bring their skills to business and industry. While the majority of young people are attracted to physics by its mindboggling findings and intriguing open questions, our subject appeals even more when individuals find out about its usefulness outside academia. This was one of the key outcomes of a recent survey, Creating Ambassadors for Science in Society, organised by the International Particle Physics Outreach Group (IPPOG).

Do most “Cernois” even know about the numerous start-ups based on CERN technologies or the hundreds of technology disclosures from CERN, 31 of which came in 2019 alone? Or about the numerous success stories contained within the CERN impact brochure and the many resources of CERN’s knowledge-transfer group? Even though “impact” is gaining attention, anecdotally, when I presented these facts to my research colleagues, they were not fully aware of them. Yet who else will be our ambassadors, if not us?

Some in the community are reluctant to communicate physics spin-offs because this is not our primary purpose. Yet millions of people who have lost their income as a result of COVID-19 are rather more concerned about where their next rent and food payments are coming from than they are about the couplings of the Higgs boson. Reaching out to non-physicists is more important than ever, especially to those with an indifferent or even negative attitude to science. When addressing people without a scientific education, differentiating audiences into students, general public and politicians is not relevant. Strategic information should be proactively communicated to all stakeholders in society in a relatable way, via eye-opening, surprising and emotionally charged stories about the practical applications of curiosity-driven discoveries.

Barbora Bruant Gulejova

IPPOG has been working to provide such stories since 2017 – and there is no shortage of examples. Take the touchscreen technology first explored at CERN 40 years ago, or humanitarian satellite mapping carried out for almost 20 years by UNOSAT, which is hosted at CERN. Millions of patients are diagnosed daily thanks to tools like PET and MRI, while more recent medical developments include innovative radioisotopes from MEDICIS for precision medicine, the first 3D colour X-ray images, and novel cancer treatments based on superconducting accelerator technology. In the environmental arena, recent CERN spin-offs include a global network of air-quality sensors and fibre-optic sensors for improved water and pesticide management, while CERN open-source software is used for digital preservation in libraries and its computing resources have been heavily deployed in fighting the pandemic.

Building trust

Credibility and trust in science can only be built by scientists themselves, working hand in hand with professional communicators but not relying only on them. Extracurricular activities, such as those offered by IPPOG, CERN, other institutions and individual initiatives, are crucial in changing public misperceptions and in instilling fact-based decision-making in the younger generation. Scientists should develop a proactive strategic approach and even consider becoming active in policy making, following the shining examples of those who helped realise the SESAME light source in the Middle East and the South East European International Institute for Sustainable Technologies.

Particle physics already inspires some of the brightest minds to enter science. But audiences never look at our subject with the same eyes once they’ve learned about its applications and science-for-peace initiatives.

Jack Steinberger 1921–2020

Jack Steinberger

Jack Steinberger, a giant of the field who witnessed and shaped the evolution of particle physics from its beginnings to the confirmation of the Standard Model, passed away on 12 December aged 99. He was born in the Bavarian town of Bad Kissingen in 1921; his father was a cantor and religious teacher to the small Jewish community, and his mother gave English and French lessons to supplement the family income. In 1934, after new Nazi laws had excluded Jewish children from higher education, Jack’s parents applied for him and his brother to take part in a charitable scheme that saw 300 German refugee children transferred to the US. Jack found a home as a foster child, and was reunited with his parents and younger brother in 1938.

Jack studied chemistry at the University of Chicago until 1942, when he joined the army and was sent to the MIT radiation laboratory to work on radar bomb sights. He was assigned to the antenna group, where his attention turned to physics. After the war he returned to Chicago to embark on a career in theoretical physics. Under the guidance of Enrico Fermi, however, he switched to the experimental side of the field, conducting mountaintop investigations into cosmic rays. He was awarded a PhD in 1948. Fermi, who was probably Jack’s most influential physics teacher, described him as “direct, confident, without complication, he concentrated on physics, and that was enough”.

In 1949 Steinberger went to the Radiation Lab at the University of California at Berkeley, where he performed an experiment at the electron synchrotron that demonstrated the production of neutral pions and their decay to photon pairs. He stayed only one year in Berkeley, partly because he declined to sign the anti-communist loyalty oath, and moved on to Columbia University.

In the 1960s the construction of a high-energy, high-flux proton accelerator at Brookhaven opened the door to the study of weak interactions using neutrino-beam experiments. This marked the beginning of Jack’s interest in neutrino physics. Along with Mel Schwartz and Leon Lederman, he designed and built the experiment that established the difference between neutrinos associated with muons and those associated with electrons, for which they received the 1988 Nobel Prize in Physics.

Jack joined CERN in 1968, working on experiments at the Proton Synchrotron exploring CP violation in neutral kaons. In the 1970s, with the advent of new neutrino beams at the Super Proton Synchrotron, Jack became a founding member of the CERN–Dortmund–Heidelberg–Saclay (CDHS) collaboration. Running from 1976 to 1984, CDHS produced a string of important results using neutrino beams to probe the structure of the nucleon and the Standard Model in general. In particular, the collaboration confirmed the predicted variation of the structure function of the valence quarks with Q² (nicknamed “scaling violations”), a milestone in the establishment of QCD.

When the Large Electron–Positron (LEP) collider was first proposed, a core group from CDHS joined physicists from other institutions to develop a detector for CERN’s new flagship collider. This initiative grew into the ALEPH experiment, and Jack, a curious and imaginative physicist with an extraordinary rigour, was the natural choice to become its first spokesperson in 1980, a position he held until 1990. From the outset, he stipulated that standard solutions should be adopted across the whole detector as far as possible. This led to the end-caps reflecting the design of the central detector, for example. Jack was also insistent that all technologies considered for the detector first had to be completely understood. As the LEP era got underway, this level of discipline was reflected in ALEPH’s results.

Next to physics, music formed an important part of Jack’s life. He organised gatherings of amateur, and occasionally professional, musicians at his house. These were usually marathons of Bach, starting in the late afternoon and continuing until the late evening. In his autobiography, Jack summarised: “I play the flute, unfortunately not very well, and have enjoyed tennis, mountaineering and sailing, passionately.”

Jack retired from CERN in 1986 and went on to become a professor at the Scuola Normale Superiore di Pisa. President Ronald Reagan awarded him the National Medal of Science in 1988. In 2001, on the occasion of his 80th birthday, the city of Bad Kissingen named its gymnasium in his honour. Jack continued his association with CERN throughout his 90s. He leaves his mark not just on particle physics but on all of us who had the opportunity to collaborate with him.

Accelerating talent at CERN

Natalia Magdalena Koziol

CERN enjoys a world-class reputation as a scientific laboratory, with the start-up of the Large Hadron Collider and the discovery of the Higgs boson propelling the organisation into the public spotlight. Less tangible, and less well understood by the public, is that achieving this level of success in cutting-edge research requires the infrastructure and tools to perform it. CERN is an incredible hub for engineering and technology – hosting a vast complex of accelerators, detectors, experiments and computing infrastructure. CERN therefore needs to attract candidates from across a wide spectrum of engineering and technical disciplines to fulfil its objectives.

CERN employs around 2600 staff members who design, build, operate, maintain and support an infrastructure used by a much larger worldwide community of physicists. Of these, only 3% are research physicists. The core hiring needs are for engineers, technicians and support staff in a wide variety of domains: mechanical and electrical engineering, vacuum, cryogenics, civil engineering, radiation protection, radiofrequency, computing, software, hardware, data acquisition, materials science, health and safety… the list goes on. Competences are also needed in human resources, legal matters, communications, knowledge transfer and finance, as well as firefighters, medical professionals and other support functions.

On the radar

CERN’s hiring challenge takes on even greater meaning when one considers the drive to attract students, graduates and professionals from across CERN’s 32 Member and Associate Member States. In what is already a competitive market, attracting people from a multitude of disciplines to an organisation whose reputation revolves around particle physics can be a challenge. So how is this challenge tackled? CERN now has a well-established “employer brand”, developed in 2010 to promote its opportunities in an increasingly digitalised environment. The brand centres around the factors that make working at CERN the rich experience that it is, namely challenge, purpose, imagination, collaboration, integrity and quality of life – underpinned by the slogan “Take part”. This serves as an identity for devising attractive campaigns through web content, video, online and social media, and job-portal advertisements to promote CERN as an employer of choice to the audiences we seek to reach: from students to professionals, apprenticeships to PhDs, across all diversity dimensions. The intention is to put CERN “on the radar” of people who wouldn’t normally identify CERN as a possibility in their chosen career path.

As no single channel exists that will allow targeting of, for example, a mechanical technician in all CERN Member States, creative and innovative approaches have to be utilised. The varying landscapes, cultural preferences and languages come into play, and this is compounded by the different job-seeking behaviours of students, graduates and experienced professionals through a constantly evolving ecosystem of channels and solutions. A widespread presence is key. The cornerstones are: an attractive careers website; professional networks such as LinkedIn to promote CERN’s employment opportunities and proactively search for candidates; social media to increase visibility of hiring campaigns; and being present on various job portals, for example in the oil, gas and energy arenas. Outreach events, presence at university career fairs and online webinars further serve to present CERN and its diverse opportunities to the targeted audiences.

Storytelling is an essential ingredient in promoting our opportunities, as are the experiences of those already working at CERN. In the words of Håvard, an electromechanical technician from Norway: “I get to challenge myself in areas and with technology you don’t see any other place in the world.” Gunnar, a firefighter from Germany, describes: “I am working as a firefighter in one of the most international fire brigades at CERN in what is a very complex, challenging and interesting environment.” Katarina, a computing engineer from Sweden, says: “The diversity of skills needed at CERN is so much larger than what most people know!” Julia, a former mechanical-engineering technical student from the UK, put it simply: “I never knew that CERN recruited students for internships.” And Natasha, a former software-engineering fellow from Pakistan, summed it up: “Here I am, living my dreams, being a part of an organisation that’s helping me grow every single day.” Each individual experience is a rich insight for potential candidates to identify with and recognise the possibility of joining CERN in their own right.

CERN doesn’t just bring together people from a large scope of fields but unites people from all over the world. Working as a summer, technical or doctoral student, or as a graduate or professional, builds skills and knowledge that are highly transferable in today’s demanding and competitive job market, along with lasting connections. As the cherry on the cake, a job at CERN paves the way to becoming one of CERN’s future alumni and joining the ever-growing High-Energy Network. Take part!

Quantum sensing for particle physics

AION’s 10 m stage

A particle physics-led experiment called AION (Atomic Interferometric Observatory and Network) is one of several multidisciplinary projects selected for funding by the UK’s new Quantum Technologies for Fundamental Physics programme. The successful projects, announced in January following a £31 million call for proposals from UK Research and Innovation (UKRI), will exploit recent advances in quantum technologies to tackle outstanding questions in fundamental physics, astrophysics and cosmology.

UKRI and university funding of about £10 million (UKRI part £7.2 million) will enable the AION team to prepare the construction of a 10 m-tall atomic interferometer at the University of Oxford to explore ultra-light dark matter and provide a pathway towards detecting gravitational waves in the unexplored mid-frequency band ranging from several mHz to a few Hz. The setup will use lasers to drive transitions between the ground and excited states of a cloud of cold strontium atoms in free fall, effectively acting as beam splitters and mirrors for the atomic de Broglie waves (see figure). Ultralight dark matter and exotic light bosons would be expected to have differential effects on the atomic transition frequencies, while a passing gravitational wave would generate a strain in the space through which the atoms fall freely. Either would create a difference between the phases of atomic beams following different paths – the greater their separations, the greater the sensitivity of the experiment.
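
Schematically, the dark-matter signal sought works as follows: an ultralight scalar field oscillating in the galactic halo has an amplitude fixed by the local dark-matter density, and a linear coupling to Standard Model fields makes atomic transition frequencies oscillate in step. The parametrisation below is a common one from the ultralight dark-matter literature, given here as an assumption rather than AION’s published analysis.

```latex
% Ultralight scalar dark matter of mass m_phi and local density rho_DM:
\phi(t) \;\simeq\; \frac{\sqrt{2\rho_{\rm DM}}}{m_\phi}\,\cos(m_\phi t),
\qquad
\frac{\delta\omega(t)}{\omega} \;\propto\; d\,\phi(t)
% d is a dimensionless coupling to electrons or photons; the induced phase
% difference between atomic beams grows with their separation, which is why
% larger baselines mean greater sensitivity.
```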

“AION is a uniquely interdisciplinary mission that will harness cold-atom quantum technologies to address key issues in fundamental physics, astrophysics and cosmology that can be realised in the next few decades,” says AION principal investigator Oliver Buchmueller of Imperial College London, who is also a member of the CMS collaboration. “The AION project will also significantly contribute to MAGIS, a 100 m-scale partner experiment being prepared at Fermilab, and we are exploring the possibility of utilising a shaft in the UK or at the LHC for a similar second 100 m detector.”

Six other quantum-technology projects involving UK institutes are under way thanks to the UKRI scheme. One, led by experimental particle physicist Ruben Saakyan of University College London, will use ultra-precise B-field mapping and microwave spectrometry to determine the absolute neutrino mass in tritium beta-decay beyond the 0.2 eV sensitivity projected for the KATRIN experiment. Others include the use of new classes of detectors and coherent quantum amplifiers to search for hidden structure in the vacuum state; the development of ultra-low-noise quantum electronics to underpin searches for axions and other light hidden particles; quantum simulators to mimic the extreme conditions of the early universe and black holes; and the development of quantum-enhanced superfluid technologies for cosmology.

The UKRI call is part of a global effort to develop quantum technologies that could bring about a “second quantum revolution”. Several major international public and private initiatives are under way. Last autumn, CERN launched its own quantum technologies initiative.

“With the application of emerging quantum technologies, I believe we have an opportunity to change the way we search for answers to some of the biggest mysteries of the universe,” said Mark Thomson, executive chair of the UK’s Science and Technology Facilities Council. “These include exploring what dark matter is made of, finding the absolute mass of neutrinos and establishing how quantum mechanics fits with gravity.”

Iodine aerosol production could accelerate Arctic melting

Sea ice

Researchers at CERN’s CLOUD experiment have uncovered a new mechanism that could accelerate the loss of Arctic sea ice. In a paper published in Science on 5 February, the team showed that aerosol particles made of iodic acid can form extremely rapidly in the marine boundary layer – the portion of the atmosphere that is in direct contact with the ocean. Aerosol particles are important for the climate because they provide the seeds on which cloud droplets form. New-particle formation matters especially over the oceans, where particle concentrations are low and the area involved is vast. However, how new aerosol particles form, and how they influence clouds and climate, remains relatively poorly understood.

In polar regions, aerosols and clouds have a warming effect because they absorb infrared radiation otherwise lost to space and then radiate it back down to the surface

Jasper Kirkby

“Our measurements are the first to show that the part-per-trillion-by-volume iodine levels found in marine regions will lead to rapid formation and growth of iodic acid particles,” says CLOUD spokesperson Jasper Kirkby of CERN, adding that the particle formation rate is also strongly enhanced by ions from galactic cosmic rays. “Although most atmospheric particles form from sulphuric acid, our study shows that iodic acid – which is produced by the action of sunlight and ozone on molecular iodine emitted by the sea surface, sea ice and exposed seaweed – may be the main driver in pristine marine regions.”

CLOUD is a one-of-a-kind experiment that uses an ultraclean cloud chamber to measure the formation and growth of aerosol particles from a mixture of vapours under precisely controlled atmospheric conditions, including the use of a high-energy beam from the Proton Synchrotron to simulate cosmic rays anywhere up to the top of the troposphere. Last year, the team found that small inhomogeneities in the concentrations of ammonia and nitric acid can play a major role in driving winter smog episodes in cities. The latest result is similarly important but in a completely different area, says Kirkby.

“In polar regions, aerosols and clouds have a warming effect because they absorb infrared radiation otherwise lost to space and then radiate it back down to the surface, whereas they reflect no more incoming sunlight than the snow-covered surface. As more sea surface is exposed by melting ice, the increased iodic-acid aerosol and cloud-seed formation could provide a previously unaccounted-for positive feedback that accelerates the loss of sea ice. However, the effect has not yet been modelled, so we cannot quantify it.”

ALICE shines light inside lead nuclei

An ultra-relativistic charged projectile carries a strongly contracted electromagnetic field that can be thought of as a flux of quasi-real photons. This equivalent-photon approximation was proposed by Fermi and later developed by Weizsäcker and Williams. In practice, it means that the proton or lead (Pb) beams of the LHC, moving at ultra-relativistic energies, also carry a quasi-real photon beam, which can be used to look inside protons or nuclei. The ALICE collaboration is in this way using the LHC as a photon–hadron collider, shining light inside lead nuclei to measure the photoproduction of charmonia and provide constraints on nuclear shadowing.

The intensity of the electromagnetic field, and hence the corresponding photon flux, is proportional to the square of the electric charge, so this type of interaction is greatly enhanced in collisions of lead ions (Z = 82). Ultra-peripheral collisions (UPCs), in which the impact parameter is larger than the sum of the radii of the two Pb nuclei, are a particularly useful way to study photonuclear reactions: purely hadronic interactions are suppressed by the short range of the strong force, and photonuclear interactions dominate. The photoproduction of vector mesons in these reactions has a clean experimental signature – the decay products of the vector meson are the only signals in an otherwise empty detector.
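The Z² scaling can be made explicit with the textbook leading-logarithm form of the equivalent-photon spectrum (a schematic expression in natural units, with $b_{\min}$ roughly the sum of the nuclear radii, rather than the full flux used in UPC analyses):

\[
\frac{\mathrm{d}N_\gamma}{\mathrm{d}\omega} \;\approx\; \frac{2Z^2\alpha}{\pi\,\omega}\,\ln\frac{\gamma}{\omega\, b_{\min}} ,
\]

where $\omega$ is the photon energy, $\gamma$ the Lorentz factor of the beam and $\alpha$ the fine-structure constant. For lead, $Z^2 \approx 6700$, which is why photon-induced reactions that are negligible with proton beams become readily measurable in Pb–Pb running.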

Nuclear shadowing was first observed by the European Muon Collaboration at CERN in 1982

Coherent heavy-vector–meson photoproduction, wherein the photon interacts coherently with all the nucleons in a nucleus, is of particular interest because of its connection with the gluon parton distribution functions (PDFs) of protons and nuclei. At low Bjorken-x values, gluon PDFs are significantly suppressed in the nucleus relative to free-proton PDFs – a phenomenon known as nuclear shadowing that was first observed by the European Muon Collaboration at CERN in 1982, by comparing the structure functions of iron and deuterium in the deep inelastic scattering of muons.

Figure 1

Heavy-vector–meson photoproduction measurements provide a powerful tool to study these poorly known gluon-shadowing effects at low x: for heavy charmonium states, the four-momentum-transfer scale of the interaction lies in the perturbative regime of QCD. The gluon shadowing factor – the ratio of the nuclear gluon PDF per nucleon to the free-proton gluon PDF – can be evaluated by measuring the nuclear suppression factor, defined as the square root of the ratio of the coherent vector–meson photoproduction cross-section on the nucleus to the photonuclear cross-section expected in the impulse approximation, which is built from exclusive photoproduction measurements on a proton target.
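Put compactly (a restatement of the definitions above; the notation is illustrative):

\[
S_{\mathrm{Pb}} \;=\; \sqrt{\frac{\sigma^{\mathrm{coh}}_{\gamma \mathrm{Pb} \to J/\psi\,\mathrm{Pb}}}{\sigma^{\mathrm{IA}}_{\gamma \mathrm{Pb} \to J/\psi\,\mathrm{Pb}}}} \;\approx\; R_g(x, Q^2) \;=\; \frac{g_{\mathrm{Pb}}(x, Q^2)}{A\, g_p(x, Q^2)} ,
\]

where $\sigma^{\mathrm{IA}}$ is the impulse-approximation cross-section built from exclusive $\gamma p \to J/\psi\, p$ data, $g_{\mathrm{Pb}}$ and $g_p$ are the nuclear and free-proton gluon densities, and $A$ is the mass number. The square root arises because, at leading order, the coherent cross-section scales as the square of the gluon density.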

Ultra-peripheral collisions

The ALICE collaboration recently submitted for publication a measurement of the coherent photoproduction of J/ψ and ψ(2S) at midrapidity, |y| < 0.8, in Pb–Pb UPCs at 5.02 TeV. The J/ψ is reconstructed using the dilepton (ℓ⁺ℓ⁻) and proton–antiproton decay channels, while for the ψ(2S) the dilepton and ℓ⁺ℓ⁻π⁺π⁻ decay channels are studied. These data complement the ALICE measurement of the coherent J/ψ cross-section at forward rapidity, –4 < y < –2.5, providing stringent constraints on nuclear gluon shadowing.

A nuclear gluon shadowing factor of about 0.65 for Bjorken-x between 0.3 × 10⁻³ and 1.4 × 10⁻³ is estimated by comparing the measured coherent J/ψ cross-section at midrapidity with the impulse approximation, implying moderate nuclear shadowing. The measured rapidity dependence of the coherent cross-section is not completely reproduced by models across the full rapidity range. The leading-twist approximation of Glauber–Gribov shadowing (LTA-GKZ) and the energy-dependent hot-spot model (GG-HS (CCK)) give the best overall description of the rapidity dependence, but show tension with the data at semi-forward rapidities, 2.5 < |y| < 3.5 (figure 1). The data might be better explained by a model in which shadowing has a smaller effect at Bjorken-x ≈ 10⁻² or x ≈ 5 × 10⁻⁵, the values corresponding to this rapidity range.
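The two x values quoted for a single rapidity range follow from the two-fold ambiguity inherent to UPCs: either nucleus can act as the photon emitter. For coherent production of a vector meson of mass $M$ at rapidity $y$, the gluon momentum fraction probed is, to good approximation,

\[
x \;=\; \frac{M}{\sqrt{s_{NN}}}\, e^{\pm y} ,
\]

so for the J/ψ ($M \approx 3.1$ GeV) at $\sqrt{s_{NN}} = 5.02$ TeV and $|y| \approx 2.5$–3.5, the two solutions are $x \approx 10^{-2}$ and $x \approx 5 \times 10^{-5}$, matching the values cited above.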

The ratio of the ψ(2S) to J/ψ cross-sections at midrapidity is consistent with the ratio of photoproduction cross-sections measured by the H1 and LHCb collaborations, with the leading-twist-approximation predictions for Pb–Pb UPCs, and with the ALICE measurement at forward rapidities. This leads to the conclusion that shadowing effects are similar for the 2S (ψ(2S)) and 1S (J/ψ) charmonium states.

In LHC Runs 3 and 4, ALICE expects to collect a data sample roughly ten times larger than in Run 2, taking data in continuous mode and thus with higher efficiency. UPC physics will profit from the larger integrated luminosity as well as from lower systematic uncertainties on the measurement, and will be able to provide the shadowing factor differentially over a wide range of Bjorken-x.
