The CERN Quantum Technology Initiative

Quantum technologies have the potential to revolutionise science and society, but are still in their infancy. In recent years, the growing importance and potential impact of quantum technology development have been highlighted by increasing investments in R&D worldwide, in both academia and industry.

Cutting-edge research in quantum systems has been performed at CERN for many years to investigate the many open questions in quantum mechanics and particle physics. Only recently, however, have the different ongoing activities in quantum computing, sensing, communications and theory been brought under a common strategy to assess their potential impact on future CERN experiments.

This webinar, presented by Alberto Di Meglio, will introduce the new CERN Quantum Technology Initiative, give an overview of the Laboratory’s R&D activities and plans in this field, and give examples of the potential impact on research. It will also touch upon the rich international network of activities and how CERN fosters research collaborations.

Alberto Di Meglio is the head of CERN openlab in the IT Department at CERN and co-ordinator of the CERN Quantum Technology Initiative. Alberto is an aerospace engineer (MEng) and electronic engineer (PhD) by education and has extensive experience in the design, development and deployment of distributed computing and data infrastructures and software services for both commercial and research applications.

He joined CERN in 1998 as a data centre systems engineer. In 2004, he took part in the early stages of development of the High-Energy Physics Computing Grid. From 2010 to 2013, Alberto was project director of the European Middleware Initiative (EMI), a project responsible for developing and maintaining most of the software services powering the Worldwide LHC Computing Grid.

Since 2013, Alberto has been leading CERN openlab, a long-term initiative to organise public–private collaborative R&D projects between CERN, academia and industry in ICT, computer and data science, covering many aspects of today’s technology, from heterogeneous architectures and distributed computing to AI and quantum technologies.

Roger J N Phillips 1931–2020

R Phillips

The eminent theoretical physicist Roger Julian Noel Phillips died peacefully on 4 September 2020, aged 89, at his home in Abingdon, UK. Roger was educated at Trinity College, Cambridge, where he received his PhD in 1955. His thesis advisor was Paul Dirac. Roger transferred from the Harwell theory group to the Rutherford Appleton Laboratory (RAL) in 1962 where he led the theoretical high-energy physics group to international prominence. He also held visiting appointments at CERN, Berkeley, Madison and Riverside.

Roger was a giant in particle physics phenomenology and his book “Collider Physics” (Addison-Wesley, 1987), co-authored with his longstanding collaborator Vernon Barger, remains a classic. In 1990 Roger was awarded the Rutherford Medal and Prize of the UK Institute of Physics. To experimenters, he was one of the rocks upon whom the UK high-energy physics community was built. To theorists, he was renowned for his deep understanding of particle-physics models. A career-long collaboration across the Atlantic with Barger ensued from their sharing an office at CERN in 1967. Their initial focus was the Regge-pole model to describe high-energy scattering of hadrons. Subsequently they inferred the momentum distribution of the light quarks and gluons from deep-inelastic scattering data and made studies to identify the charm-quark signal in a Fermilab neutrino experiment.

To experimenters, he was one of the rocks upon whom the UK high-energy physics community was built

In 1980, Phillips and collaborators discovered the resonance that occurs in neutrino oscillations when neutrinos propagate long distances through matter. This work is the basis of the ongoing Fermilab long-baseline neutrino programme, which will make precision determinations of neutrino masses and mixing. From 1983, Phillips and his collaborators developed pioneering strategies in collider physics for finding the W boson, the top quark and the Higgs boson, and for searches for physics beyond the Standard Model. In an influential 1990 publication, Phillips, Hewett and Barger showed that the decay of a b quark to an s quark and a photon is a highly sensitive probe of a charged Higgs boson through its one-loop virtual contribution.

After retiring in 1997, Roger maintained an active interest in particle physics. He struggled with Parkinson’s disease in recent years but continued to live with determination, wit and cheer. He joked that his Parkinson’s tremor made his mouse and keyboard run wild: “I know that an infinite number of random monkeys can eventually write Shakespeare, but I can’t wait that long!” One of his very last whispers to his son David was: “There are symmetries in mathematics which are like aspects of dreaming”. He did great things with his brain during his life, and that legacy will continue: he donated it to the Parkinson’s UK Brain Bank.

Roger was highly respected for his intellectual brilliance, physics leadership and immense integrity, but also for his modesty and generosity in going out of his way to help others. He was a delight to work with and an inspiration to all who knew him. He is missed by his many friends around the world.

Odderon discovered

The TOTEM collaboration at the LHC, together with the DØ collaboration at the former Tevatron collider at Fermilab, has announced the discovery of the odderon – an elusive three-gluon state predicted almost 50 years ago. The result was presented in a “discovery talk” on Friday 5 March during the LHC Forward Physics meeting at CERN, and follows the joint publication in December 2020 of a CERN/Fermilab preprint by TOTEM and DØ reporting the observation.

This result probes the deepest features of quantum chromodynamics

Simone Giani

“This result probes the deepest features of quantum chromodynamics, notably that gluons interact between themselves and that an odd number of gluons are able to be ‘colourless’, thus shielding the strong interaction,” says TOTEM spokesperson Simone Giani of CERN. “A notable feature of this work is that the results are produced by joining the LHC and Tevatron data at different energies.”

States comprising two, three or more gluons are usually called “glueballs”, and are peculiar objects made only of the carriers of the strong force. The advent of quantum chromodynamics (QCD) led theorists to predict the existence of the odderon in 1973. Proving its existence has been a major experimental challenge, however, requiring detailed measurements of protons as they glance off one another in high-energy collisions.

While most high-energy collisions cause protons to break into their constituent quarks and gluons, roughly 25% are elastic collisions where the protons remain intact but emerge on slightly different paths (deviating by around a millimetre over a distance of 200 m at the LHC). TOTEM measures these small deviations in proton–proton (pp) scattering using two detectors located 220 m on either side of the CMS experiment, while DØ employed a similar setup at the Tevatron proton–antiproton (pp̄) collider.
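As a rough back-of-the-envelope figure (my own arithmetic, not a number quoted by the collaborations), a millimetre of displacement over 200 m corresponds to a scattering angle of only a few microradians:

\[ \theta \;\approx\; \frac{\Delta x}{L} \;=\; \frac{1\ \mathrm{mm}}{200\ \mathrm{m}} \;=\; 5\times10^{-6}\ \mathrm{rad} \;\approx\; 5\ \mu\mathrm{rad}. \]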

Pomerons and odderons

At low energies, differences in pp vs pp̄ scattering are due to the exchange of different virtual mesons. At multi-TeV energies, on the other hand, proton interactions are expected to be mediated purely by gluons. In particular, elastic scattering at low-momentum transfer and high energies has long been explained by the exchange of a pomeron – a colour-neutral virtual glueball made up of an even number of gluons.

However, in 2018 TOTEM reported measurements at high energies that could not easily be explained by this traditional picture. Instead, a further QCD object seemed to be at play, supporting models in which a three-gluon compound, or one containing higher odd numbers of gluons, was being exchanged. The discrepancy came to light via measurements of a parameter called ρ, which represents the ratio of the real and imaginary parts of the forward elastic-scattering amplitude when there is minimal gluon exchange between the colliding protons and thus almost no deviation in their trajectories. The results were sufficient to claim evidence for the odderon, although not yet its definitive observation.

The DØ experiment

The new work is based on a model-independent analysis of data at medium-range momentum transfer. The TOTEM and DØ teams compared LHC pp data (recorded at collision energies of 2.76, 7, 8 and 13 TeV and extrapolated to 1.96 TeV) with Tevatron pp̄ data measured at 1.96 TeV. The odderon would be expected to contribute with different signs to pp and pp̄ scattering. Supporting this picture, the two data sets disagree at the 3.4σ level, providing evidence for the t-channel exchange of a colourless, C-odd gluonic compound.
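The role of the odderon’s sign can be made explicit with the usual textbook decomposition (not specific to the TOTEM–DØ paper) of the elastic amplitude into C-even and C-odd exchange contributions:

\[ A_{pp} \;=\; A^{+} + A^{-}, \qquad A_{p\bar p} \;=\; A^{+} - A^{-}, \]

where \(A^{+}\) contains pomeron (C-even) exchange and \(A^{-}\) odderon (C-odd) exchange. At TeV energies, where meson exchanges have died away, any persistent difference between pp and pp̄ elastic scattering therefore points to a non-zero \(A^{-}\).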

“When combined with the ρ and total cross-section result at 13 TeV, the significance is in the range 5.2–5.7σ and thus constitutes the first experimental observation of the odderon,” said Christophe Royon of the University of Kansas, who presented the results on behalf of DØ and TOTEM last week. “This is a major discovery by CERN/Fermilab.”

In addition to the new TOTEM-DØ model-independent study, several theoretical papers based on data from the ISR, SPS, Tevatron and LHC, and model-dependent inputs, provide additional evidence supporting the conclusion that the odderon exists.

Precision leap for B⁰s fragmentation and decay

How likely is it for a b quark to partner with an s quark rather than a light d or u quark? This question is key for understanding the physics of fragmentation and decay following the production of a b quark in proton–proton collisions. In addition, the number of B⁰s mesons produced, each formed by a pair of b and s quarks, is required for measuring the meson’s decay probabilities, most notably to final states that are sensitive to physics beyond the Standard Model, such as the B⁰s → μ⁺μ⁻ decay.

Figure 1

Knowledge of fs/fd – the ratio of the probabilities for a b quark to fragment into a B⁰s or a B⁰ meson – is thus a key parameter at the LHC. So far it has been measured with limited precision and has been the dominant systematic uncertainty for most B⁰s branching fractions. Now, however, the LHCb collaboration has, in a recent publication, combined five different analyses that carry information on this parameter. The fs/fd ratio was measured in previous publications through semileptonic decays, hadronic decays with D mesons and hadronic decays with J/ψ mesons in the final state. Some of these measurements are only sensitive to the product of the fragmentation fraction and the branching fractions. The new work analyses these results simultaneously, obtaining a precise measurement of fs/fd as well as branching-fraction measurements of two important decays, B⁰s → D⁻s π⁺ and B⁰s → J/ψ φ. These are golden channels for mixing and CP-violation measurements in the B⁰s sector.
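Schematically (a standard way of writing it, with selection efficiencies omitted for clarity), the measured yield ratios constrain only the product of fragmentation and branching fractions:

\[ \frac{N(B^0_s \to f_s)}{N(B^0 \to f_d)} \;\propto\; \frac{f_s}{f_d}\,\frac{\mathcal{B}(B^0_s \to f_s)}{\mathcal{B}(B^0 \to f_d)}, \]

which is why an external input for one branching fraction, or a simultaneous fit across several channels as done here, is needed to disentangle fs/fd from the individual branching fractions.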

Precision leap

The results reduce the uncertainty on fs/fd by roughly a factor of two for collisions at 7 TeV, and by a factor of 1.5 for collisions at 13 TeV, yielding a precision of about 3%. They also confirm the dependence of fs/fd on the transverse momentum of the B⁰s meson, and indicate a slight dependence on the centre-of-mass energy of the proton–proton collisions (figure 1). The results are used in this work to update the previous branching-fraction measurements of about 50 different B⁰s decay channels, significantly improving their precision and boosting several searches for new physics.

Tooling up to hunt dark matter

Bullet Cluster

The past century has seen ever stronger links forged between the physics of elementary particles and the universe at large. But the picture is mostly incomplete. For example, numerous observations indicate that 87% of the matter of the universe is dark, suggesting the existence of a new matter constituent. Given a plethora of dark-matter candidates, numerical tools are essential to advance our understanding. Fostering cooperation in the development of such software, the TOOLS 2020 conference attracted around 200 phenomenologists and experimental physicists for a week-long online workshop in November.

The viable mass range for dark matter spans 90 orders of magnitude, while the uncertainty about its interaction cross section with ordinary matter is even larger (see “Theoretical landscape” figure). Dark matter may consist of new particles belonging to theories beyond the Standard Model (BSM), an aggregate of new or SM particles, or very heavy objects such as primordial black holes (PBHs). On the latter subject, Jérémy Auffinger (IP2I Lyon) updated TOOLS 2020 delegates on codes for very light PBHs, noting that “BlackHawk” is the first open-source code for Hawking-radiation calculations.

Flourishing models

Weakly interacting massive particles (WIMPs) have enduring popularity as dark-matter candidates, and are amenable to search strategies ranging from colliders to astrophysical observations. In the absence of any clear detection of WIMPs at the electroweak scale, the number of models has flourished. Above the TeV scale, these include general hidden-sector models, FIMPs (feebly interacting massive particles), SIMPs (strongly interacting massive particles), super-heavy and/or composite candidates and PBHs. Below the GeV scale, besides FIMPs, candidates include the QCD axion, more generic ALPs (axion-like particles) and ultra-light bosonic candidates. ALPs are a class of models that received particular attention at TOOLS 2020, and are now being sought in fixed-target experiments across the globe.

For each dark-matter model, astroparticle physicists must compute the theoretical predictions and characteristic signatures of the model and confront those predictions with the experimental bounds to select the model parameter space that is consistent with observations. To this end, the past decade has seen the development of a huge variety of software – a trend mapped and encouraged by the TOOLS conference series, initiated by Fawzi Boudjema (LAPTh Annecy) in 1999, which has brought the community together every couple of years since.

Models connecting dark matter with collider experiments are becoming ever more optimised to the needs of users

Three continuously tested codes currently dominate generic BSM dark-matter model computations. Each allows for the computation of the relic density from freeze-out and predictions for direct and indirect detection, often up to next-to-leading-order corrections. Agreement between them is kept below the per-cent level. “micrOMEGAs” is by far the most used code, and is capable of predicting observables for any generic model of WIMPs, including those with multiple dark-matter candidates. “DarkSUSY” is more oriented towards supersymmetric theories, but it can be used for generic models as the code has a very convenient modular structure. Finally, “MadDM” can compute WIMP observables for any BSM model from MeV to hundreds of TeV. As MadDM is a plugin of MadGraph, it inherits unique features such as its automatic computation of new dark-matter observables, including indirect-detection processes with an arbitrary number of final-state particles and loop-induced processes. This is essential for analysing sharp spectral features in indirect-detection gamma-ray measurements that cannot be mimicked by any known astrophysical background.

Interaction cross sections versus mass

Both micrOMEGAs and MadDM permit the user to confront theories with recast experimental likelihoods for several direct and indirect detection experiments. Jan Heisig (UCLouvain) reported that this is a work in progress, with many more experimental data sets to be included shortly. Torsten Bringmann (University of Oslo) noted that a strength of DarkSUSY is the modelling of qualitatively different production mechanisms in the early universe. Alongside the standard freeze-out mechanism, several new scenarios can arise, such as freeze-in (FIMP models, as chemical and kinetic equilibrium cannot be achieved), dark freeze-out, reannihilation and “cannibalism”, to name just a few. Freeze-in is now supported by micrOMEGAs.

Models connecting dark matter with collider experiments are becoming ever more optimised to the needs of users. For example, micrOMEGAs interfaces with SModelS, which is capable of quickly applying all relevant LHC supersymmetry searches. The software also includes long-lived particles, as commonly found in FIMP models. As MadDM is embedded in MadGraph, noted Benjamin Fuks (LPTHE Paris), tools such as MadAnalysis may be used to recast CMS and ATLAS searches. Celine Degrande (UCLouvain) described another useful tool, FeynRules, which produces model files in both the MadDM and micrOMEGAs formats given the Lagrangian of a BSM model, providing a very useful automated chain from the model directly to the dark-matter observables, high-energy predictions and comparisons with experimental results. Meanwhile, MadDump extends MadGraph’s predictions and detector simulations from the high-energy collider regime to fixed-target experiments such as NA62. To complete a vibrant landscape of development efforts, Tomas Gonzalo (Monash) presented the GAMBIT collaboration’s work to provide tools for global fits to generic dark-matter models.

A phenomenologist’s dream

Huge efforts are underway to develop a computational platform to study new directions in experimental searches for dark matter, and TOOLS 2020 showed that we are already very close to the phenomenologist’s dream for WIMPs. TOOLS 2020 wasn’t just about dark matter either – it also covered developments in Higgs and flavour physics, precision tests and general fitting, and other tools. Interested parties are welcome to join in the next TOOLS conference due to take place in Annecy in 2022.

The Science of Learning Physics

A greying giant of the field speaks to the blackboard for 45 minutes before turning, dismissively seizing paper and scissors, and cutting a straight slit. The sheet is twisted to represent the conical space–time described by the symbols on the board. A lecture theatre of students is transfixed in admiration.

This is not the teaching style advocated by José Mestre and Jennifer Docktor in their new book The Science of Learning Physics. And it’s no longer typical, say the authors, who suggest that approximately half of physics lecturers use at least one “evidence-based instructional practice” – jargon, most often, for an interactive teaching method. As colleagues joked when I questioned them on their teaching styles, there is still a performative aspect to lecturing, but these days it is just as likely to reflect the rock-star feeling of having a hundred camera phones pointed at you – albeit so the students can snap a QR code on your slide to take part in an interactive mid-lecture quiz.

Swiss and Soviet developmental psychologists Jean Piaget and Lev Vygotsky are duly namechecked

Mestre and Docktor, who are both educational psychologists with a background in physics, offer intriguing tips to maximise the impact of such practices. After answering a snap poll, they say, students should discuss with their neighbour before being polled again. The goal is not just to allow the lecturer to tailor their teaching, but also to allow students to “construct” their knowledge. Lecturing, they say, gives piecemeal information, but does not connect it. Neurons fire, but synaptic connections are not trained. And as the list of neurotransmitters that reinforce synaptic connections includes dopamine and serotonin, making students feel good by answering questions correctly may be worth the time investment.

Relative to their counterparts in other sciences, physics lecturers are leading the way in implementing evidence-based instructional practices, but far too few are well trained, say Mestre and Docktor, who want to bring the tools and educational philosophies of the high-school physics teacher to the lecture theatre. Swiss and Soviet developmental psychologists Jean Piaget and Lev Vygotsky are duly namechecked. “Think–pair–share”, mini whiteboards and flipping the classroom (not a discourteous gesture but the advance viewing of pre-recorded lectures before a more participatory lecture) are the order of the day. Students are not blank slates, they write, but have strong attachments to deeply ingrained and often erroneous intuitions that they have previously constructed. Misconceptions cannot be supplanted wholesale, but must be unknotted strand by strand. Lecturers should therefore explicitly describe their thought processes and encourage students to reflect on “metacognition”, or “thinking about thinking”. Here the text is reminiscent of Nobel laureate Daniel Kahneman’s seminal book Thinking, Fast and Slow, which divides thinking into two types: “system 1”, which is instinctive and emotional, and “system 2”, which is logical but effortful. Lecturers must fight against “knee-jerk” reasoning, say Mestre and Docktor, by modelling the time-intensive construction of knowledge, rather than aspiring to misleading virtuoso displays of mathematical prowess. Wherever possible, this should be directly assessed by giving marks not just for correct answers, but also for identifying the “big idea” and showing your working.

Disappointingly, examples are limited to pulleys and ramps, and, somewhat ironically, the book’s dusty academic tone may prove ineffective at teaching teachers to teach. But no other book comes close to The Science of Learning Physics as a means for lecturers to reflect on and enrich their teaching strategies, and it is highly recommended on that basis. That said, my respect for my old general-relativity lecturer remained undimmed as I finished the last page. Those old-fashioned lectures were hugely inspiring – a “non-cognitive aspect” that Mestre and Docktor admit their book does not consider.

In search of WISPs

The ALPS II experiment at DESY

The Standard Model (SM) cannot be the complete theory of particle physics. Neutrino masses evade it. No viable dark-matter candidate is contained within it. And under its auspices the electric dipole moment of the neutron, experimentally compatible with zero, requires the cancellation of two non-vanishing SM parameters that are seemingly unrelated – the strong-CP problem. The physics explaining these mysteries may well originate from new phenomena at energy scales inaccessible to any collider in the foreseeable future. Fortunately, models involving such scales can be probed today and in the next decade by a series of experiments dedicated to searching for very weakly interacting slim particles (WISPs).

WISPs are pseudo Nambu–Goldstone bosons (pNGBs) that arise automatically in extensions of the SM from global symmetries which are broken both spontaneously and explicitly. NGBs are best known for being “eaten” by the longitudinal degrees of freedom of the W and Z bosons in electroweak gauge-symmetry breaking, which underpins the Higgs mechanism, but theorists have also postulated a bevy of pNGBs that get their tiny masses by explicit symmetry breaking and are potentially discoverable as physical particles. Typical examples arising in theoretically well-motivated grand-unified theories are axions, flavons and majorons. Axions arise from a broken “Peccei–Quinn” symmetry and could potentially explain the strong-CP problem, while flavons and majorons arise from broken flavour and lepton symmetries.

The Morpurgo magnet

Being light and very weakly interacting, WISPs would be non-thermally produced in the early universe and thus remain non-relativistic during structure formation. Such particles would inevitably contribute to the dark matter of the universe. WISPs are now the target of a growing number and type of experimental searches that are complementary to new-physics searches at colliders.

Among theorists and experimentalists alike, the axion is probably the most popular WISP. Recently, massive efforts have been undertaken to improve the calculations of model-dependent relic-axion production in the early universe. This has led to a considerable broadening of the mass range compatible with the explanation of dark matter by axions. The axion could make up all of the dark matter in the universe for a symmetry-breaking scale fa between roughly 10⁸ and 10¹⁹ GeV (the lower limit being imposed by astrophysical arguments, the upper one by the Planck scale), corresponding to axion masses from 10⁻¹³ eV to 10 meV. For other light pNGBs, generically dubbed axion-like particles (ALPs), the parameter range is even broader. With many plausible relic-ALP-production mechanisms proposed by theorists, experimentalists need to cover as much of the unexplored parameter range as possible.
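For the QCD axion the mass and the symmetry-breaking scale are inversely related; a commonly quoted approximate relation (not spelled out in the text, and only indicative at this level of precision) is

\[ m_a \;\simeq\; 5.7\ \mu\mathrm{eV}\left(\frac{10^{12}\ \mathrm{GeV}}{f_a}\right), \]

which maps the quoted range fa ≈ 10⁸–10¹⁹ GeV onto masses from roughly 10⁻¹³ eV up to the tens-of-meV scale, in line with the figures above.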

Although the strengths of the interactions between axions or ALPs and SM particles are very weak, being inversely proportional to fa, several strategies for observing them are available. Limits and projected sensitivities span several orders of magnitude in the mass-coupling plane (see “The field of play” figure).

IAXO’s design profited greatly from experience with the ATLAS toroid

Since axions or ALPs can usually decay to two photons, an external static magnetic field can substitute for one of the two photons and induce axion-to-photon conversion. Originally proposed by Pierre Sikivie, this inverse Primakoff effect can classically be described by adding source terms proportional to B and E to Maxwell’s equations. Practically, this means that inside a static homogeneous magnetic field the presence of an axion or ALP field induces electric-field oscillations – an effect readily exploited by many experiments searching for WISPs. Other processes exploited in some experimental searches, and expected to lead to axion production, are interactions with electrons, leading to axion bremsstrahlung, and interactions with nucleons or nuclei, leading to nucleon–axion bremsstrahlung or to oscillations of the electric dipole moments of nuclei or nucleons.
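In the standard notation (a textbook form, rather than anything specific to the experiments described here), the coupling behind this effect is

\[ \mathcal{L}_{a\gamma} \;=\; -\tfrac{1}{4}\, g_{a\gamma}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu} \;=\; g_{a\gamma}\, a\, \mathbf{E}\cdot\mathbf{B}, \]

so in an external static field B an oscillating axion field a(t) acts as a source for an oscillating electric field – the modification to Maxwell’s equations exploited by the searches below.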

The potential to make fundamental discoveries with small-scale experiments is a significant appeal of experimental WISP physics. However, the most solidly motivated WISP parameter regions and physics questions require setups that go well beyond “table-top” dimensions. These experiments target WISPs that flow through the galactic halo, shine from the Sun, or spring into existence when laser light passes through strong magnetic fields in the laboratory.

Dark-matter halo

Haloscopes target the detection of dark-matter WISPs in the halo of our galaxy, where non-relativistic cold-dark-matter axions or ALPs induce electric field oscillations as they pass through a magnetic field. The frequency of the oscillations corresponds to the axion mass, and the amplitude to B/fa. When limits or projections are given for these kinds of experiments, it is assumed that the particle under scrutiny homogeneously makes up all of the dark matter in the universe, introducing significant cosmological model dependence.
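For orientation (a standard unit conversion, not a number taken from the article), the signal frequency corresponding to a given axion mass is

\[ \nu \;=\; \frac{m_a c^2}{h} \;\approx\; 2.42\ \mathrm{GHz}\times\left(\frac{m_a}{10\ \mu\mathrm{eV}}\right), \]

which is why µeV-scale axions are searched for with microwave-frequency resonators.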

Axion–photon coupling versus axion mass plane

The furthest developed currently operating haloscopes are based on resonant enhancement of the axion-induced electric-field oscillations in tunable resonant cavities. Using this method, the presently running ADMX project at the University of Washington has the sensitivity to discover dark-matter axions with masses of a few µeV. Nuclear-resonance methods could be sensitive to halo dark-matter axions with mass below 1 neV and “fuzzy” dark-matter ALPs down to 10⁻²² eV within the next decade, for example at the CASPEr experiments being developed at the University of Mainz and Boston University. Meanwhile, experiments based on classical LC circuits, such as ABRACADABRA at MIT, are being designed to measure ALP- or axion-induced magnetic-field oscillations in the centre of a toroidal magnet. These could be sensitive in a mass range between 10 neV and 1 µeV.

ALPS II is the first laser-based setup to fully exploit resonance techniques

For dark-matter axions with masses up to approximately 50 µeV, promising developments in cavity technologies such as multiple matched cavities and superconducting or dielectric cavities are ongoing at several locations, including at CAPP in South Korea, the University of Western Australia, INFN Legnaro and the RADES detector, which has taken data as part of the CAST experiment at CERN. Above ~40 µeV, however, the cavity concept becomes more and more challenging, as sensitivity scales with the volume of the resonant cavity, which decreases dramatically with increasing mass (as roughly 1/ma³). To reach sensitivity at higher masses, in the region of a few hundred µeV, a novel “dielectric haloscope” is being developed by the MADMAX (Magnetized Disk and Mirror Axion experiment) collaboration for potential installation at DESY. It exploits the fact that static magnetic-field boundaries between media with different dielectric constants lead to tiny power emissions that compensate for the discontinuity in the axion-induced electric fields in neighbouring media. If multiple surfaces are stacked in front of each other, this should lead to constructive interference, boosting the emitted power from the expected axion dark matter in the desired mass range to detectable levels. Other novel haloscope concepts, based on meta-materials (“plasma haloscopes”, for example) and topological insulators, are also currently being developed. These could have sensitivity to even higher axion masses, up to a few meV.

Staying in tune

In principle, axion-dark-matter detection should be relatively simple, given the very high number density of particles – approximately 3 × 10¹³ axions/cm³ for an axion mass of 10 µeV – and the well-established technique of resonant axion-to-photon conversion. But, as the axion mass is unknown, the experiments must be painstakingly tuned to each possible mass value in turn. After about 15 years of steady progress, the ADMX experiment has reached QCD-axion dark-matter sensitivity in the mass regime of a few µeV.
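The quoted number density follows directly from dividing an assumed local dark-matter density by the axion mass (the typical value of about 0.3 GeV/cm³ used here is an illustrative assumption, not a figure from the article):

\[ n_a \;\approx\; \frac{\rho_{\rm halo}}{m_a} \;\approx\; \frac{0.3\ \mathrm{GeV\,cm^{-3}}}{10\ \mu\mathrm{eV}} \;\approx\; 3\times10^{13}\ \mathrm{cm^{-3}}. \]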

ADMX uses tunable microwave resonators inside a strong solenoidal magnetic field, and modern quantum sensors for readout. Unfortunately, however, this technology is not scalable to the higher axion-mass regions as preferred, for example, by cosmological models where Peccei–Quinn symmetry breaking happened after an inflationary phase of the universe. That’s where MADMAX comes in. The collaboration is working on the dielectric-haloscope concept – initiated and led by scientists at the Max Planck Institute for Physics in Munich – to investigate the mass region around 100 µeV.

Astrophysical hints

Globular clusters

Weakly interacting slim particles (WISPs) could be produced in hot astrophysical plasmas and transport energy out of stars, including the Sun, stellar remnants and other dense sources. Observed lifetimes and energy-loss rates can therefore probe their existence. For the axion, or an axion-like particle (ALP) with sub-MeV mass that couples to nucleons, the most stringent limit, fa > ~10⁸ GeV, stems from the duration of the neutrino signal from the proto-neutron star formed in Supernova 1987A.

Tantalisingly, there are stellar hints from observations of red giants, helium-burning stars, white dwarfs and pulsars that seem to indicate energy losses with slight excesses with respect to those expected from standard energy emission by neutrinos. These hints may be explained by axions with masses below 100 meV or sub-keV-mass ALPs with a coupling to both electrons and photons.

Other observations suggest that TeV photons from distant blazars are less absorbed than expected from standard interactions with extragalactic background light – the so-called transparency hint. This could be explained by the conversion of photons into ALPs in the magnetic field of the source, and back to photons in astrophysical magnetic fields. Interestingly, these would have about the same ALP–photon coupling strength as indicated by the observed stellar anomalies, though with a mass that is incompatible both with ALPs that can explain dark matter and with QCD axions (see “The field of play” figure).

MADMAX will use a huge ~9 T superconducting dipole magnet with a bore of about 1.35 m and a stored energy of roughly 480 MJ. Such a magnet has never been built before. The MADMAX collaboration teamed up with CEA-IRFU and Bilfinger Noell and successfully worked out a conceptual design. First steps towards qualifying the conductor are under way. The plan is for the magnet to be installed at DESY inside the old iron yoke of the former HERA experiment H1. DESY is already preparing the required infrastructure, including the liquid-helium supply necessary to cool the magnet. R&D for the dielectric booster, with up to 80 adjustable 1.25 m² discs, is in full swing.

A first prototype, containing a more modest 20 discs of 30 cm diameter, will be tested in the “Morpurgo” magnet at CERN during future accelerator shutdowns (see “Haloscope home” figure). With a peak field strength of 1.6 T, its dipole field will allow new ALP-dark-matter parameter regions to be probed, though the main purpose of the prototype is to demonstrate the operation of the booster system in cryogenic surroundings inside a magnetic field. The MADMAX collaboration is extremely happy to have found a suitable magnet at CERN for such tests. If sufficient funds can be acquired within the next two to three years for magnet construction, and provided that the prototype efforts at CERN are successful, MADMAX could start data taking at DESY in 2028.

While direct dark-matter search experiments like ADMX and MADMAX offer by far the highest sensitivity for axion searches, this sensitivity rests on the assumption that the dark-matter problem is solved by axions, and if no signal is discovered any claim of an exclusion limit must rely on specific cosmological assumptions. Less model-dependent experiments, such as helioscopes and light-shining-through-a-wall (LSW) experiments, are therefore extremely valuable complements to direct dark-matter searches.

Solar axions

In contrast to dark-matter axions or ALPs, those produced in the Sun or in the laboratory should have considerable momentum. Indeed, solar axions or ALPs should have energies of a few keV, corresponding to the temperature at which they are produced. These could be detected by helioscopes, which seek to use the inverse Primakoff effect to convert solar axions or ALPs into X-rays in a magnet pointed towards the Sun, as at the CERN Axion Solar Telescope (CAST) experiment. Helioscopes could cover the mass range compatible with the simplest axion models, in the vicinity of 10 meV, and could be sensitive to ALPs with masses below 1 eV without any tuning at all.

The CAST helioscope, which reused an LHC prototype dipole magnet, has driven this field in the past decade, and provides the most sensitive exclusion limits to date. Going beyond CAST calls for a much larger magnet. For the next-generation International Axion Observatory (IAXO) helioscope, CERN members of the international collaboration worked out a conceptual design for a 20 m-long toroidal magnet with eight 60 cm-diameter bores. IAXO’s design profited greatly from experience with the ATLAS toroid.

BabyIAXO helioscope

In the past three years, the collaboration, led by the University of Zaragoza, has been concentrating its activities on the BabyIAXO prototype in order to finesse the magnet concept, the X-ray telescopes necessary to focus photons from solar axion conversion and the low-background detectors. BabyIAXO will increase the signal-to-noise ratio of CAST by two orders of magnitude; IAXO by a further two orders of magnitude.

In December 2020 the directorates of CERN and DESY signed a collaboration agreement regarding BabyIAXO: CERN will provide the detailed design of the prototype magnet including its cryostat, while DESY will design and prepare the movable platform and infrastructure (see “Prototype” figure). BabyIAXO will be located at DESY in Hamburg. The collaboration hopes to attract the remaining funds for BabyIAXO so construction can begin in 2021 and first science runs could take place in 2025. The timeline for IAXO will depend strongly on experiences during the construction and operation of BabyIAXO, with first light potentially possible in 2028.

Light shining through a wall

In contrast to haloscopes, helioscopes do not rely on the assumption that all dark matter is made up of axions. LSW experiments are even less model-dependent with respect to ALP production: intense laser light could be converted to axions or ALPs inside a strong magnetic field by the Primakoff effect, and behind a light-impenetrable wall they would be re-converted to photons and detected at the same wavelength as the laser light. The disadvantage of LSW experiments is that they only reach sensitivity to ALPs with masses up to a few hundred µeV and with comparatively high couplings to photons. However, this is sensitive enough to test the parameter range consistent with the transparency hint and parts of the mass range consistent with the stellar hints (see “Astrophysical hints” panel).

The Any Light Particle Search (ALPS II) at DESY follows this approach. Because it seeks to observe light shining through a wall, any ALPs would be generated in the experiment itself, removing the need for assumptions about their production. ALPS II is based on 24 modified superconducting dipole magnets that have been straightened by brute-force deformation, following their former use in the proton accelerator of the HERA complex. With the help of two 124 m-long high-finesse optical resonators, encompassed by the magnets on both sides of the wall, ALPS II is also the first laser-based setup to fully exploit resonance techniques. Two readout systems capable of measuring a 1064 nm photon flux down to a rate of 2 × 10⁻⁵ s⁻¹ have been developed by the collaboration. Compared to the present best LSW limits, provided by OSQAR at CERN, the signal-to-noise ratio will rise by no less than 12 orders of magnitude at ALPS II. Nevertheless, MADMAX would surpass ALPS II in sensitivity to the axion–photon coupling strength by more than three orders of magnitude. This is the price to pay for a model-independent experiment – however, ALPS II principally targets not dark-matter candidates but ALPs indicated by astrophysical phenomena.

Tunnelling ahead

The installation of the 24 dipole magnets in a straight section of the HERA tunnel was completed in 2020. Three clean rooms at both ends and in the centre of the experiment were also installed, and optics commissioning is under way. A first science run is expected for autumn 2021.

ALPS II

In the overlapping mass region up to 0.1 meV, the sensitivities of ALPS II and BabyIAXO are roughly equal. In the event of a discovery, this would provide a unique opportunity to study the new WISP. Excitingly, a similar case might be realised for IAXO: combining the optics and detectors of ALPS II with simplified versions of the dipole magnets being studied for FCC-hh would provide an LSW experiment with “IAXO sensitivity” regarding the axion-photon coupling, albeit in a reduced mass range. This has been outlined as the putative JURA (Joint Undertaking on Research for Axions) experiment in the context of the CERN-led Physics Beyond Colliders study.

The past decade has delivered significant developments in axion and ALP theory and phenomenology. This has been complemented by progress in experimental methods to cover a large fraction of the interesting axion and ALP parameter range. In close collaboration with universities and institutes across the globe, CERN, DESY and the Max Planck society will together pave the road to the exciting results that are expected this decade.

Still seeking solutions

How did winning a Special Breakthrough Prize last year compare with the Nobel Prize?

Steven Weinberg

It came as quite a surprise because as far as I know, none of the people who have been honoured with the Breakthrough Prize had already received the Nobel Prize. Of course nothing compares with the Nobel Prize in prestige, if only because of the long history of great scientists to whom it has been awarded in the past. But the Breakthrough Prize has its own special value to me because of the calibre of the young – well, I think of them as young – theoretical physicists who are really dominating the field and who make up the selection committee.

The prize committee stated that you would be a recognised leader in the field even if you hadn’t made your seminal 1967 contribution to the genesis of the Standard Model. What do you view as Weinberg’s greatest hits?

There’s no way I can answer that and maintain modesty! That work on the electroweak theory leading to the mass of the W and Z, and the existence and properties of the Higgs, was certainly the biggest splash. But it was rather untypical of me. My style is usually not to propose specific models that will lead to specific experimental predictions, but rather to interpret in a broad way what is going on and make very general remarks, like with the development of the point of view associated with effective field theory. Doing this I hope to try and change the way my fellow physicists look at things, without usually proposing anything specific. I have occasionally made predictions, some of which actually worked, like calculating the pion–nucleon and pion–pion scattering lengths in the mid-1960s using the broken symmetry that had been proposed by Nambu. There were other things, like raising the whole issue of the cosmological constant before the discovery of the accelerated expansion of the universe. I worried about that – I gave a series of lectures at Harvard in which I finally concluded that the only way I can understand why there isn’t an enormous vacuum energy is because of some kind of anthropic selection. Together with two guys here at Austin, Paul Shapiro and Hugo Martel, we worked out what was the most likely value that would be found in terms of order of magnitude, which was later found to be correct. So I was very pleased that the Breakthrough Prize acknowledged some of those things that didn’t lead to specific predictions but changed a general framework.

I wish I could claim that I had predicted the neutrino mass

You coined the term effective field theory (EFT) and recently inaugurated the online lecture series All Things EFT. What is the importance of EFT today?

My thinking about EFTs has always been in part conditioned by thinking about how we can deal with a quantum theory of gravitation. You can’t represent gravity by a simple renormalisable theory like the Standard Model, so what do you do? In fact, you treat general relativity the same way you treat low-energy pions, which are described by a low-energy non-renormalisable theory. (You could say it’s a low-energy limit of QCD but its ingredients are totally different – instead of quarks and gluons you have pions). I showed how you can generate a power series for any given scattering amplitude in powers of energy rather than some small coupling constant. The whole idea of EFT is that any possible interaction is there: if it’s not forbidden it’s compulsory. But the higher, more complicated terms are suppressed by negative powers of some very large mass because the dimensionality of the coupling constants is such that they have negative powers of mass, like the gravitational constant. That’s why they’re so weak.
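In the generic notation used for such expansions (a schematic sketch, not Weinberg’s own formulae), the effective Lagrangian is organised as

\[ \mathcal{L}_{\rm eff} \;=\; \mathcal{L}_{\rm SM} \;+\; \sum_{d>4}\sum_{i} \frac{c_i^{(d)}}{\Lambda^{\,d-4}}\, \mathcal{O}_i^{(d)}, \]

where Λ is the large mass scale of the underlying theory, so operators of higher dimension d are suppressed by more powers of 1/Λ and their effects grow with energy roughly as (E/Λ)^(d−4).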

If you recognise that the Standard Model is probably a low-energy limit of some more general theory, then you can consider terms that make the theory non-renormalisable and generate corrections to it. In particular, the Standard Model has this beautiful feature that in its simplest renormalisable version there are symmetries that are automatic: at least to all orders of perturbation theory, it can’t violate the conservation of baryon or lepton number. But if the Standard Model just generates the first term in a power series in energy and you allow for more complicated non-renormalisable terms in the Lagrangian, then you find it’s very natural that there would be baryon and lepton non-conservation. In fact, the leading term of this sort is a term that violates lepton number and gives neutrinos the masses we observe. I wish I could claim that I had predicted the neutrino mass, but there already was evidence from the solar neutrino deficit and also it’s not certain that this is the explanation of neutrino masses. We could have Dirac neutrinos in which you have left and right neutrinos and antineutrinos coupling to the Higgs, and in that way get masses without any violation of lepton-number conservation. But I find that thoroughly repulsive because there’s no reason in that case why the neutrino masses should be so small, whereas in the EFT case we have Majorana neutrinos whose small masses are much more natural.
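The leading lepton-number-violating term Weinberg refers to is the unique dimension-five operator; written schematically (with numbers inserted purely for illustration),

\[ \mathcal{O}_5 \;\sim\; \frac{c}{\Lambda}\,(LH)(LH) \;\;\Rightarrow\;\; m_\nu \;\sim\; \frac{c\, v^2}{\Lambda} \;\approx\; 0.03\ \mathrm{eV} \quad \text{for}\ c\sim1,\ v\simeq174\ \mathrm{GeV},\ \Lambda\sim10^{15}\ \mathrm{GeV}, \]

which is why sub-eV Majorana masses point naturally to a very high new-physics scale.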

On this point, doesn’t the small value of the cosmological constant and Higgs mass undermine the EFT view by pointing to extreme fine-tuning?

Yes, they are a warning about things we don’t understand. The Higgs mass less so – after all, it’s only about a hundred times larger than the proton mass, and we know why the proton mass is so small compared to the GUT or Planck scale; it is because the proton gets its mass not from the quark masses, which have to do with the Higgs, but from the QCD forces, and we know that those become strong very slowly as you come down from high energy. We don’t understand this for the Higgs mass, which, after all, is a term in the Lagrangian, not like the proton mass. But it may be similar – that’s the old technicolour idea, that there is another coupling alongside QCD that becomes strong at some energy where it leads to a potential for the Higgs field, which then breaks electroweak symmetry. Now, I don’t have such a theory, and if I did I wouldn’t know how to test it. But there’s at least a hope for that. Whereas with regard to the cosmological constant, I can’t think of anything along those lines that would explain it. I think it was Nima Arkani-Hamed who said to me, “If the anthropic effect works for the cosmological constant, maybe that’s the answer with the Higgs mass – maybe it’s got to be small for anthropic reasons.” That’s very disturbing if it’s true, as we’re going to be left waving our hands. But I don’t know.

Maybe we have 2500 years ahead of us before we get to the next big step

Early last year you posted a preprint “Models of lepton and quark masses” in which you returned to the problem of the fermion mass hierarchy. How was it received?

Even in the abstract I advertise how this isn’t a realistic theory. It’s a problem that I first worked on almost 50 years ago. Just looking at the table of elementary particle masses I thought that the electron and the muon were crying out for an explanation. The electron mass looks like a radiative correction to the muon mass, so I spent the summer of 1972 on the back deck of our house in Cambridge, where I said, “This summer I am going to solve the problem of calculating the electron mass as an order-alpha correction to the muon mass.” I was able to prove that if in a theory it was natural in the technical sense that the electron would be massless in the tree approximation as a result of an accidental symmetry, then at higher order the mass would be finite. I wrote a paper, but then I just gave it up after no progress, until now when I went back to it, no longer young, and again I found models in which you do have an accidental symmetry. Now the idea is not just the muon and the electron, but the third generation feeding down to give masses to the second, which would then feed down to give masses to the first. Others have proposed what might be a more promising idea, that the only mass that isn’t zero in the tree approximation is the top mass, which is so much bigger than the others, and everything else feeds down from that. I just wanted to show the kinds of cancellations of infinities that can occur, and I worked out the calculations. I was hoping that when this paper came out some bright young physicist would come up with more realistic models, and use these calculational techniques – that hasn’t happened so far but it’s still pretty early.

What other inroads are there to the mass/flavour hierarchy problem?

The hope would be that experimentalists discover some correction to the Standard Model. The problem is that we don’t have a theory that goes beyond the Standard Model, so what we’re doing is floundering around looking for corrections in the model. So far, the only one discovered was the neutrino mass and that’s a very valuable piece of data which we so far have not figured out how to interpret. It definitely goes beyond the Standard Model – as I mentioned, I think it is a dimension-five operator in the effective field theory of which the Standard Model is the renormalisable part.

Weinberg delivering a seminar at CERN in 1979

The big question is whether we can cut off some sub-problem that we can actually solve with what we already know. That’s what I was trying to do in my recent paper and did not succeed in getting anywhere realistically. If that is not possible, it may be that we can’t make progress without a much deeper theory where the constituents are much more massive, something like string theory or an asymptotically safe theory. I still think string theory is our best hope for the future, but this future seems to be much further away than we had hoped it would be. Then I keep being reminded of Democritus, who proposed the existence of atoms in around 400 BCE. Even as late as 1900 physicists like Mach doubted the existence of atoms. They didn’t become really nailed down until the first years of the 20th century. So maybe we have 2500 years ahead of us before we get to the next big step.

Recently the LHC produced the first evidence that the Higgs boson couples to a second-generation fermion, the muon. Is there reason to think the Higgs might not couple to all three generations?

Before the Higgs was discovered it seemed quite possible that the explanation of the hierarchy problem was that there was some new technicolour force that gradually became strong as you came from very high energy to lower energy, and that somewhere in the multi-TeV range it became strong enough to produce a breakdown of the electroweak symmetry. This was pushed by Lenny Susskind and myself, independently. The problem with that theory was then: how did the quarks and leptons get their masses? Because while it gave a very natural and attractive picture of how the W and Z get their masses, it left it really mysterious for the quarks and leptons. It’s still possible that something like technicolour is true. Then the Higgs coupling to the quarks and leptons gives them masses just as expected. But in the old days, when we took technicolour seriously as the mechanism for breaking electroweak symmetry, which, since the discovery of the Higgs we don’t take seriously anymore, even then there was the question of how, without a scalar field, can you give masses to the quarks and leptons. So, I would say today, it would be amazing if the quarks and leptons were not getting their masses from the expectation value of the Higgs field. It’s important now to see a very high precision test of all this, however, because small effects coming from new physics might show up as corrections. But these days any suggestion for future physics facilities gets involved in international politics, which I don’t include in my area of expertise.

It’s still possible that something like technicolour is true

Any more papers or books in the pipeline?

I have a book that’s in press at Cambridge University Press called Foundations of Modern Physics. It’s intended to be an advanced undergraduate textbook that takes you from the earliest work on atoms, through thermodynamics, transport theory, Brownian motion, to early quantum theory; then relativity and quantum mechanics, and I even have two chapters that probably go beyond what any undergraduate would want, on nuclear physics and quantum field theory. It unfortunately doesn’t fit into what would normally be the plan for an undergraduate course, so I don’t know if it will be widely adopted as a textbook. It was the result of a lecture course I was asked to give called “thermodynamics and quantum physics” that has been taught at Austin for years. So, I said “alright”, and it gave me a chance to learn some thermodynamics and transport theory.

Martinus J G Veltman 1931–2021

Martinus Veltman

Eminent physicist and Nobel laureate Martinus Veltman passed away in his home town of Bilthoven, the Netherlands, on 4 January. Martinus (Tini) Veltman graduated in physics at Utrecht University, opting first for experimental physics but later switching to theory. After completing his conscript military service in 1959, he was offered a position as a PhD student by Léon Van Hove. Veltman started in Utrecht, but later followed Van Hove to the CERN theory division.

CERN opened up a new world, and Tini often mentioned how he benefited from contacts with John Bell, Gilberto Bernardini and Sam Berman. The latter got him interested in weak interactions and in particular neutrino physics. During his time there, Tini spent a short period at SLAC where he started to work on his computer algebra program “Schoonschip”. He correctly foresaw that practical calculations of Feynman diagrams would become more and more complicated, particularly for theories with vector bosons. Nowadays extended calculations beyond one loop are unthinkable without computer algebra.

In 1964 Murray Gell-Mann proposed an algebra of conserved current operators for hadronic matter, which included the weak and electromagnetic currents. He argued that commutators of two currents taken at the same instant in time should “close”, meaning that these commutators can be written as linear combinations of the same set of currents. From this relation one could derive so-called sum rules that can be compared to experiments. Facing the technical problems with this approach, Tini came up with an alternative proposal. In a 1966 paper he simply conjectured that the hadronic currents for the electromagnetic and weak interactions had to be covariantly conserved, where he assumed that the weak interactions were mediated by virtual vector bosons, just as electromagnetic processes are mediated by virtual photons. The current conservation laws therefore had to contain extra terms depending on the photon field and the fields associated with the weak intermediate vector bosons. Quite surprisingly, he could then demonstrate that these new conservation equations suffice to prove the same sum rules. A more important aspect of his approach was only gradually realised, namely that the conservation laws for these currents are characteristic of a non-abelian gauge theory as had been written down more than 10 years earlier by Yang and Mills. Hence Veltman started to work on the possible renormalisability of Yang–Mills theory.

From his early days at CERN it was clear that Tini had a strong interest in confronting theoretical predictions with experimental results

Meanwhile, Veltman had left CERN towards the end of 1966 to accept a professorship at Utrecht. At the end of 1969 a prospective PhD student insisted on working on Yang–Mills theories. Veltman, who was already well aware of many of the pitfalls, only gradually agreed, and so Gerard ’t Hooft joined the programme. This turned out to be a very fruitful endeavour and the work was proudly presented in the summer of 1971. Veltman and ’t Hooft continued to collaborate on Yang–Mills theory. Their 1972 papers are among the finest that have been written on the subject. In 1999 they shared the Nobel Prize in Physics “for elucidating the quantum structure of electroweak interactions”.

With the renormalisability of the electroweak theory established, precision comparisons with experiment were within reach, and Veltman started to work on these problems with postdocs and PhD students. One important tool was the so-called rho parameter (a combination of the W and Z boson masses and the weak mixing angle). Its experimental value was close to one, which showed that only models in which the Higgs field is a doublet are allowed. From the small deviations from one, it was possible to estimate the mass of the top quark, which had not yet been discovered. Later, when CERN was planning to build the LEP collider, the emphasis changed to the calculation of one-loop corrections for various processes in e⁺e⁻ collisions. As a member of the CERN Scientific Policy Committee (SPC), Veltman strongly argued that LEP should operate at the highest possible energy, well above the W⁺W⁻ threshold, to study the electroweak theory with precision. The Standard Model has since passed all of these tests.
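For reference (the standard definition, not spelled out in the obituary), the parameter in question is

\[ \rho \;=\; \frac{m_W^2}{m_Z^2\,\cos^2\theta_W}, \]

which equals 1 at tree level whenever electroweak symmetry is broken only by Higgs doublets. The leading loop correction grows with the square of the top mass, \(\Delta\rho \approx 3 G_F m_t^2/(8\sqrt{2}\,\pi^2)\), which is how measured deviations from unity could be turned into an estimate of the top mass before its discovery.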

From his early days at CERN it was clear that Tini had a strong interest in confronting theoretical predictions with experimental results, and in the organisation needed to do so. To this end, he was one of a small group of colleagues in the Netherlands to push for a national institute for subatomic physics – Nikhef, which was founded in 1975. In 1981 Tini moved to the University of Michigan in Ann Arbor, returning to the Netherlands after his retirement in 1996.

Veltman made a lasting impact on the field of particle physics, and inspired many students. Until recently he followed what was happening in the field, regularly attending the September meetings of the SPC. Our community will miss his sharp reasoning and clear-eyed look at particle physics that are crucial for its development.

Connecting physics with society

Student analysing ATLAS collisions

Science and basic research are drivers of technologies and innovations, which in turn are key to solving global challenges such as climate change and energy. The United Nations has summarised these challenges in 17 “sustainable development goals”, but it is striking how little connection with science they include. Furthermore, as found by a UNESCO study in 2017, the interest of the younger generation in studying science, technology, engineering and mathematics is falling, despite jobs in these areas growing at a rate three times faster than in any other sector. Clearly, there is a gulf between scientists and non-scientists when it comes to the perception of the importance of fundamental research in their lives – to the detriment of us all.

Some in the community are resistant to communicating physics spin-offs because this is not our primary purpose

Try asking your neighbours, kids, family members or the mayor of your city whether they know about the medical and other applications that come from particle physics, or the stream of highly qualified people trained at CERN who bring their skills to business and industry. While the majority of young people are attracted to physics by its mind-boggling findings and intriguing open questions, our subject appeals even more when individuals find out about its usefulness outside academia. This was one of the key outcomes of a recent survey, Creating Ambassadors for Science in Society, organised by the International Particle Physics Outreach Group (IPPOG).

Do most “Cernois” even know about the numerous start-ups based on CERN technologies, or the hundreds of technology disclosures from CERN, 31 of which came in 2019 alone? Or about the numerous success stories contained within the CERN impact brochure and the many resources of CERN’s knowledge-transfer group? Even though “impact” is gaining attention, anecdotally, when I presented these facts to my research colleagues they were not fully aware of them. Yet who else will be our ambassadors, if not us?

Some in the community are resistant to communicating physics spin-offs because this is not our primary purpose. Yet millions of people who have lost their income as a result of COVID-19 are rather more concerned about where their next rent and food payments are coming from than they are about the couplings of the Higgs boson. Reaching out to non-physicists is more important than ever, especially to those with an indifferent or even negative attitude to science. Differentiating between audiences such as students, the general public and politicians is of little relevance when addressing people without a scientific education. Strategic information should be proactively communicated to all stakeholders in society in a relatable way, via eye-opening, surprising and emotionally charged stories about the practical applications of curiosity-driven discoveries.

Barbora Bruant Gulejova

IPPOG has been working to provide such stories since 2017 – and there is no shortage of examples. Take the touchscreen technology first explored at CERN 40 years ago, or humanitarian satellite mapping carried out for almost 20 years by UNOSAT, which is hosted at CERN. Millions of patients are diagnosed daily thanks to tools like PET and MRI, while more recent medical developments include innovative radioisotopes from MEDICIS for precision medicine, the first 3D colour X-ray images, and novel cancer treatments based on superconducting accelerator technology. In the environmental arena, recent CERN spin-offs include a global network of air-quality sensors and fibre-optic sensors for improved water and pesticide management, while CERN open-source software is used for digital preservation in libraries and its computing resources have been heavily deployed in fighting the pandemic.

Building trust

Credibility and trust in science can only be built by scientists themselves, working hand in hand with professional communicators but not relying solely on them. Extracurricular activities, such as those offered by IPPOG, CERN, other institutions and individual initiatives, are crucial in changing public misperceptions and in bringing fact-based decision-making to the younger generation. Scientists should develop a proactive strategic approach and even consider becoming active in policy making, following the shining examples of those who helped realise the SESAME light source in the Middle East and the South East European International Institute for Sustainable Technologies.

Particle physics already inspires some of the brightest minds to enter science. But audiences never look at our subject with the same eyes once they’ve learned about its applications and science-for-peace initiatives.
