Energy efficiency – a new frontier

A household freezer consumes about 1 kWh of electrical energy per day. An average household in Western Europe uses about 6 MWh per year. CERN’s total daily electricity consumption on average is about 3.5 GWh, about half of which is needed for the LHC. For reference, the total daily energy consumption of humankind is about 440 TWh – some three quarters of which is currently produced from a finite source of fossil fuels that is driving global temperature rises.
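
As a rough, back-of-the-envelope comparison based on the figures quoted above (the arithmetic is illustrative, not from the talk):

$$\frac{3.5\ \mathrm{GWh/day} \times 365\ \mathrm{days}}{6\ \mathrm{MWh\ per\ household\ per\ year}} \approx 2\times 10^{5},$$

i.e. CERN's annual electricity consumption of roughly 1.3 TWh is comparable to that of about 200,000 Western European households.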

Speaking on the second day of the update of the European Strategy for Particle Physics, which is taking place this week in Granada, Erk Jensen of CERN used these striking figures to illustrate the importance of energy efficiency in high-energy physics. In some proposals for post-LHC colliders, the energy consumed by radio-frequency (RF) and cryogenics systems is in the region of gigawatt hours per day, he said. This puts accelerators into the range where they become relevant for society and public discussion.

The production of renewable energy is enjoying huge growth. Recently, the SESAME light source in Jordan became the first major accelerator facility to be powered entirely by renewable energy (solar). For larger research infrastructures, in the absence of better energy-storage technologies, this approach is not yet realistic. The only alternative is to make facilities much more energy efficient. “This is our duty to society, but also a necessity for acceptance!” stated Jensen. The large scale of projects in high-energy physics allows dedicated R&D into more efficient technologies and practices to take place. Not only would this bring significant cost savings, explained Jensen, but concepts and designs developed to improve energy efficiency in accelerators will be relevant for society at large.

A new energy-management panel established at CERN in 2015 has already led to actions that significantly reduce energy consumption in specific areas. These include 5 GWh/y from free cooling and air-flow optimisation, 20 GWh/y from better optimised LHC cryogenics, and 40 GWh/y from the implementation of SPS magnetic cycles and stand-by modes. Recovering waste heat is another line of attack. A project at CERN that is now in its final phase will see thermal energy from LHC Point 8 (where the LHCb experiment is located) supplied to a heating network in the nearby town of Ferney-Voltaire.

For a collider that requires an RF power of 105 MW, such as the proposed electron–positron Future Circular Collider, a 10-percentage-point increase (from 70% to 80%) in the efficiency of technologies such as high-efficiency klystrons could reduce energy consumption by around 1 TWh over a period of 10 years. This corresponds to a saving of tens of millions of Swiss francs. The adoption of novel neon–helium refrigeration cycles to cool the magnets of a future hadron collider could save up to 3 TWh in 10 years, offering even greater cost reductions. Such savings could, for example, go into R&D for higher-performance power converters, better designed magnets and RF cavities, and other technologies. Novel accelerator schemes such as energy-recovery linacs are another way in which the field can reduce the energy consumption, and thus the cost, of its machines. “Energy efficiency is not an option, it is a must!” concluded Jensen. “A few million investment, to my mind, is well worth it.”
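
A simple estimate, with an assumed operating time that is illustrative rather than taken from the talk, shows where a figure of order 1 TWh comes from. Delivering 105 MW of RF power at 70% efficiency requires about 105/0.70 ≈ 150 MW of wall-plug power, compared with 105/0.80 ≈ 131 MW at 80%, a saving of roughly 19 MW. Assuming around 5500 hours of RF operation per year,

$$19\ \mathrm{MW} \times 5500\ \mathrm{h/yr} \times 10\ \mathrm{yr} \approx 1\ \mathrm{TWh}.$$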

Sustainable future

Energy efficiency is one of several important factors in making high-energy physics more sustainable in the long term. In one of the 160 written inputs to the ESPP, Véronique Boisvert of Royal Holloway, University of London, and colleagues made three recommendations to ensure a more sustainable future in view of climate change.

The first is that European laboratories and funding agencies should include, as part of their grant-giving process, criteria evaluating the energy efficiency and carbon footprint of particle physics proposals. The second is that designs for major experiments and their associated buildings should consider plans for reduction of energy consumption, increased energy efficiency, energy recovery and carbon offset mechanisms. The third is that European laboratories should invest in the development and affordable deployment of next-generation digital meeting spaces such as virtual-reality tools, to minimise travel.

“Following the Paris agreement, it will be imperative to have a climate-neutral Europe by 2050,” says Boisvert. “It is therefore vital that big-science initiatives lead the way in greening their technologies and facilities.”

Addressing the outstanding questions

The success of the Standard Model (SM) in describing elementary particles and their interactions is beyond doubt. Yet, as an all-encompassing theory of nature, it falls short. Why are the fermions arranged into three neat families? Why do neutrinos have a vanishingly small but non-zero mass? Why does the Higgs boson that has been discovered fit the simplest “toy model” of itself so well? And what lies beneath the SM’s 26 free parameters? Similarly profound questions persist in the universe at large: the mechanism of inflation; the matter–antimatter asymmetry; and the nature of dark energy and dark matter.

Surveying outstanding questions in particle physics during the opening session of the update of the European Strategy for Particle Physics (ESPP) on Monday, theorist Pilar Hernández of the University of Valencia discussed the SM’s unique weirdness. Quoting Newton’s assertion “that truth is ever to be found in simplicity, and not in the multiplicity and confusion of things”, she argued that a deeper theory is needed to solve the model’s many puzzles. “At some energy scale the SM stops making sense, so there is a cut off,” she stated. “The question is where?”

This known unknown has occupied theorists ever since the SM came into existence. If it is assumed that the natural cut-off is the Planck scale – some 15 orders of magnitude above LHC energies, where gravity becomes relevant to the quantum world – then fine tuning is necessary to explain why the Higgs boson (which generates mass via its interactions) is so light. Traditional theoretical solutions to this hierarchy problem – such as supersymmetry or large extra dimensions – imply the existence of new phenomena at scales not far above the mass of the Higgs boson. While initial results from the LHC severely constrain the most natural parameter spaces, the 10–100 TeV region is still an interesting scale to explore, says Hernández. At the same time, continues Hernández, there is a shift to more “bottom-up, rather than top-down” approaches to beyond-SM (BSM) physics. “Particle physics could be heading to crisis or revolution. New BSM avenues focus on solving open problems such as the flavour puzzle, the origin of neutrino masses and the baryon asymmetry at lower scales.”

Introducing a “motivational toolkit” to plough the new territories ahead, Hernández named targets such as axion-like and long-lived particles, and the search for connections between the SM’s various puzzles. She noted in particular that 23 of the 26 free parameters of the SM are related in one way or another to the Higgs boson. “If we are looking for the suspect that could be hiding some secret, obviously the Higgs is the one!”

Linear versus circular

The accelerator, detector and computing technology needed for future fundamental exploration was the main focus of the scientific plenary session on day one of the ESPP update. Reviewing Higgs-factory programmes, Vladimir Shiltsev, head of Fermilab’s Accelerator Physics Center, weighed up the pros and cons of linear versus circular machines. The former include the International Linear Collider (ILC) and the Compact Linear Collider (CLIC); the latter a future circular electron–positron collider at CERN (FCC-ee) and the Circular Electron Positron Collider in China (CEPC). All need a high luminosity at the Higgs energy scale.

Linear colliders, said Shiltsev, are based on mature designs and organisation, are expandable to higher energies, and draw a wall-plug power similar to that of the LHC. On the other hand, they face potential challenges linked to their luminosity spectrum and beam current. Circular Higgs factories are also based on mature technology, with a strong global collaboration in the case of FCC. They offer a higher luminosity and more interaction points than linear options, but require strategic R&D into high-efficiency RF sources and superconducting cavities, said Shiltsev. He also described a potential muon collider with a centre-of-mass energy of 126 GeV, which could be realised in a machine as short as 10 km. Although the cost would be relatively low, he said, the technology is not yet ready.

coffee break at open symposium of the European Strategy for Particle Physics

For energy-frontier colliders, the three current options – CERN’s HE-LHC (27 TeV) and FCC-hh (100 TeV), and China’s SppC (75 TeV) – demand high-field superconducting dipole magnets. These machines also present challenges such as extreme levels of synchrotron radiation, collimation, injection, and the overall machine design and energy efficiency. In a talk about the state of the art and challenges in accelerator technology, Akira Yamamoto of CERN/KEK argued that, while a lepton collider could begin construction in the next few years, the dipoles necessary for a hadron collider might take 10 to 15 years of R&D before construction could start. There are natural constraints in such advanced-magnet development regardless of budget and manpower, he remarked.

Concerning more futuristic acceleration technologies based on plasma wakefields, which offer accelerating gradients roughly a factor of 1000 higher than today’s RF systems, impressive results have been achieved recently at facilities such as BELLA at Berkeley and AWAKE at CERN. Responding to a question about when these technologies might supersede current ones, Shiltsev said: “Hopefully 20–30 years from now we should be able to know how many thousands of TeV will be possible by the end of the century.”

Recognising detectors and computing

An energy-frontier hadron collider would produce radiation environments that current detectors cannot deal with, said Francesco Forti of INFN and the University of Pisa in his talk about the technological challenges of particle-physics experiments. Another difficulty for detectors is how to handle non-standard physics signals, such as long-lived particles and monopoles. Like accelerators, detectors require long timescales – the first conceptual design reports for the LHC detectors were written in the very early 1990s. From colliders to fixed-target to astrophysics experiments, detectors in high-energy physics face a huge variety of operating conditions and employ technologies that are often deeply entwined with developments in industry. The environmental credentials of detectors are also increasingly in the spotlight.

The focus of detector R&D should follow a “70–20–10” model, whereby 70% of efforts go into current detectors, 20% into future detectors and 10% into blue-sky R&D, argued Forti. Given that detector expertise is distributed among many institutions, the field also needs solid coordination. Forti cited CERN’s “RD” projects in diamond detectors, silicon radiation-hard devices, micro-pattern gas detectors and pixel readout chips for ATLAS and CMS as good examples of coordination towards common goals. Finally, he argued strongly for greater consideration of the “human factor”, stating that the current career model “just doesn’t work very well”. Your average particle physicist cannot be expert and innovative simultaneously in analysis, detectors, computing, teaching, outreach and other areas, he reasoned. “Career opportunities for detector physicists must be greatly strengthened and kept open in a systematic way,” he said. “Invest in the people and in the murky future.”

Computing for high-energy physics faces similar challenges. “There is an increasing gap between early-career physicists and the profile needed to program new architectures, such as greater parallelisation,” said Simone Campana of CERN and the HEP Software Foundation in a presentation about future computing challenges. “We should recognise the efforts of those who specialise in software because they can really change things like the speed of analyses and simulations.”

In terms of data processing, the HL-LHC presents a particular challenge. DUNE, FAIR, BELLE II and other experiments will also create massive data samples. Then there is the generation of Monte Carlo samples. “Computing resources in HEP will be more constrained in the future,” said Campana. “We enter a regime where existing projects are entering a challenging phase, and many new projects are competing for resources – not just in HEP but in other sciences, too.” At the same time, the rate of advances in hardware performance has slowed in recent years, encouraging the community to adapt to take advantage of developments such as GPUs, high-performance computing and commercial cloud services.

The HEP Software Foundation released a community white paper in 2018 setting out the radical changes in computing and software – not just for processing but also for data storage and management – required to ensure the success of the LHC and other high-energy physics experiments into the 2020s.

Closing out

Closer examination of linear and circular colliders took place during subsequent parallel sessions on the first day of the ESPP update. Dark matter, flavour physics and electroweak and Higgs measurements were the other parallel themes. A final discussion session focusing on the capability of future machines for precision Higgs physics generated particularly lively exchanges between participants. It illuminated both the immensity of efforts to evaluate the physics reach of the high-luminosity LHC and future colliders, and the unenviable task faced by ESPP committees in deciding which post-LHC project is best for the field. It’s a point summed up well in the opening address by the chair of the ESPP strategy secretariat, Halina Abramowicz: “This is a very strange symposium. Normally we discuss results at conferences, but here we are discussing future results.”

Communicating the next collider

These days, in certain parts of the world at least, “hadron collider” and “Higgs boson” are practically household names. This is a consequence of the LHC, and the global communications that surrounded its construction, switch-on and eventual discovery of the Higgs boson. How should the next major project in particle physics be communicated to ensure its reach and success?

Communication is increasingly seen as integral to the research process, and is one of the strands of the open symposium of the European Strategy for Particle Physics (ESPP) update, which takes place from today in Granada, Spain. The ESPP update takes on board worldwide activities in particle physics and related topics, and is due to conclude early next year. It aims to reach a consensus on the scientific goals of the community and assess the proposed projects and technologies to achieve these goals. Though no decisions will be made now, it is hoped that the process will bring clarity to the question of which project will succeed the LHC following the end of its high-luminosity operations in the mid-2030s.

The landscape of communications has changed dramatically since the pre-web/mobile days of the nascent LHC. In one of 160 written contributions to the ESPP update, the International Particle Physics Outreach Group (IPPOG) emphasises the strategic relevance of concerted, global outreach activities for future colliders, stating that the success of such endeavours “depends greatly on the establishment of broad public support, as well as the commitment of key stakeholders and policymakers throughout Europe and the world”. IPPOG proposes that particle physics outreach and communication be explicitly recognised as strategic pillars in the final ESPP update document in 2020.

A contribution from the European Particle Physics Communication Network with support from the Interactions Collaboration highlights specific challenges that communicators face. These include the pace of change in social media, the speed of dissemination of good news, bad news and rumours, and the need to maintain trust and transparency in an era where there appears to be a popular backlash against expert opinion. The document notes the complexities of maintaining press interest over long timescales, and of conveying the costs involved: “Proposals for major international particle-physics experiments are infrequent, and when they are proposed, they seem disproportionately expensive when compared to other science disciplines”. A plenary talk on education, communication and outreach will take place on Wednesday at this week’s symposium. The European Strategy Group has also established a working group to recognise and support researchers who devote their time to such activities.

ESPP participants

Consensus in the community is a further factor for communications. In the early 1990s, when the LHC was seeking approval, there was broad agreement that a circular hadron collider was the right step for the field. The machine had a new energy territory to explore and a clear physics target (the mechanism of electroweak symmetry breaking), around which narratives could be built. The ability to witness the construction of the LHC and its four detectors was itself a massive draw. On the big-collider menu today, against a backdrop of the LHC’s discovery of a light Higgs boson but no particles beyond the Standard Model, are an International Linear Collider in Japan, a Compact Linear Collider or Future Circular Collider at CERN, and a Circular Electron Positron Collider in China. The projects would span decades and may require international collaboration on an entirely new scale. While not all equally mature, each has its own detailed physics and technology case that will be dissected this week.

Whether straight or circular, European or Asian, the next big collider requires a fresh narrative if it is to inspire the wider world. The rosy picture of eager experimentalists uncovering new elementary particles and wispy-haired theorists travelling to Stockholm to pick up prizes seems antiquated, now that all the particles of the Standard Model have been found. Short of major new theoretical insights, the best signposts in the dark and possibly hidden sectors ahead may come from experimental exploration. As Nima Arkani-Hamed put it recently in an interview with the Courier: “When theorists are more confused, it’s the time for more, not less experiments.”

Interrogating the Higgs boson – a completely new form of scalar matter with connections to the dynamics of the vacuum and other deep puzzles in the Standard Model – is the focus of all future collider proposals. Direct and indirect searches for new physics at much higher energy scales are another. However, as the range of contributions to the ESPP update illustrates – a point that is integral to communication efforts – frontier colliders are only one tool for enabling progress. Enigmas such as dark matter and dark energy are being probed from multiple angles, both on the ground and in space; gravitational-wave astronomy is revolutionising astroparticle physics. Experiments large and small are closing in on the neutrino’s unique properties; heavy-ion, flavour, antimatter, fixed-target and numerous other programmes are thriving. A CERN initiative to specifically explore experimental programmes beyond high-energy colliders is advancing rapidly.

The LHC has demonstrated that there is a huge public appetite for the abstract, mind-expanding science made possible by awesomely large machines. There is no reason to think that the next leg of the journey in fundamental exploration is any less inspiring, and every reason to shout about its impact. Above and beyond the knowledge it creates and the advanced technologies that it drives, particle physics is one of the fields that attracts young people into STEM subjects, many of whom go on to pursue more applied research or careers in industry. Large research infrastructures also have direct, though little reported, economic and societal benefits. Last but not least, the success of big science sends a positive message about human progress and global collaboration at a time when many nations are looking inwards. Clearly, engaging the public, politicians and fellow scientists in the next high-energy physics adventure presents a golden opportunity for those of us in the comms business.

For now, though, it’s over to the 600 or so physicists here in Granada to carve out the new physics avenues ahead. The Courier will be following discussions throughout the week in an attempt to unravel the big picture.

Science communication: a new frontier

In the world of communication, everyone has a role to play. During the past two decades, the ability of researchers to communicate their work to funding agencies, policymakers, entrepreneurs and the public at large has become an increasingly important part of their job. Scientists play a fundamental role in society, generally enjoying an authoritative status, and this makes us accountable.

Science communication is not just a way to share knowledge; it is also about educating new generations in the scientific approach and attracting young people to scientific careers. In addition, fundamental research drives the development of technology and innovation, playing an important role in providing solutions in challenging areas such as health care, food provision and safety. This obliges researchers to disseminate the results of their work.

Evolving attitudes

Although science communication is becoming increasingly unavoidable, the skills it requires are not yet universal and some scientists are not prepared to do it. Of course there are risks involved. Communication can distract individuals from research and its objectives, or, if done badly, can undermine the very messages that the scientist needs to convey. The European Researchers’ Night – a highly successful annual event initiated in 2005 as a European Commission Marie Skłodowska-Curie Action, held on the last Friday of September each year – offers an opportunity for scientists to get more involved in science communication, and illustrates how quickly attitudes are evolving.

In 2006, with a small group of researchers from the Italian National Institute for Nuclear Physics (INFN) located close to Frascati, we took part in one of the first Researchers’ Night events. Frascati is surrounded by important scientific institutions and universities, and from the start the Italian National Agency for New Technologies, Energy and Sustainable Economic Development, the European Space Agency and the National Institute for Astrophysics joined the collaboration with INFN, along with the Municipality of Frascati and the Cultural and Research Department of the Lazio region, which co-funded the initiative.

Since then, thousands of researchers, citizens, public and private institutions have worked together to change the public perception of science and of the infrastructure in the Frascati and Lazio regions, supported by the programme. Today, after 13 editions, it involves more than 60 scientific partners spread from the north to the south of Italy in 30 cities, and attracts more than 50,000 attendees, with significant media impact (figure 1). Moreover, it has now evolved to become a week-long event, is linked to many related events throughout the year, and has triggered many institutions to develop their own science-communication projects.

Analysing the successive Frascati Researchers’ Night projects allows a better understanding of the evolution of science-communication methodology. Back in 2006, scientists started to open their laboratories and research infrastructures to present their work in the most comprehensible way, with a view to increasing the scientific literacy of the public and filling their “deficit” of knowledge. They then tried to create a direct dialogue by meeting people in public spaces such as squares and bars, discussing the more practical aspects of science, such as how public money is spent and how much responsibility researchers bear for their work. Those were the years in which the socio-economic crisis started to unfold. It was also the beginning of the European Union’s Horizon 2020 programme, when economic growth and terms such as “innovation” started to replace scientific progress and discovery. It was therefore becoming more important than ever to engage with the public and keep the science flag flying.

In recent years, this approach has changed. Two biennial projects that are also part of a Marie Skłodowska-Curie Action – Made in Science and BEES (BE a citizEn Scientist) – embody a different vision of science and of the methodology of communication. Made in Science (which ran between 2016 and 2017) was intended to represent the “trademark” of research, aiming to communicate to society the importance of the science production chain in terms of quality, identity, creativity, know-how and responsibility. In this chain, which starts from fundamental research and ends with social benefits, no one is excluded: everyone can take part in the decision process and, where possible, in the research itself. Its successor, BEES (2018–2019), on the other hand, aims to bring citizens up close to the discovery process, showing how long it takes and how tough and frustrating it can be. Both projects follow the most recent trends in science communication, based on a participative or “public engagement” model rather than the traditional “deficit” model. Here, researchers are not the main actors but facilitators of the learning process, with a specific role: that of the expert.

Nerd or not a nerd?

Nevertheless, this evolution of science communication isn’t all positive. There are many problems in science communication: the explosion of concerns about science (vaccines, autism, GMOs, homeopathy, etc.); the avoidance of science and technology in preference for a return to a more “natural” life; the exploitation of scientific results (positive or negative) to support conspiracy theories or influence democracies; and the overplaying of the benefits for knowledge and technology transfer, to list a few examples. Last but not least, some strong biases still remain among both scientists and audiences, limiting the effectiveness of communication.

The first, and probably the hardest, is the stereotype bias: are you a “nerd”, or do you feel like a nerd? Often scientists refer to themselves as a category that can’t be understood by society, consequently limiting their capacity to interact with the public. On the other hand, scientists are sometimes real nerds, and seen by the public as nerds. This is true for all job categories, but in the case of scientists this strongly conditions their ability to communicate.

Age, gender and technological bias also still play a fundamental role, especially in the most developed European countries. Young people may understand science and technology more easily, while women still do not seem to have full access to scientific careers and to the exploitation of technology. Although the transition from a deficit to a participative model is already common in education and democratic societies, it is not yet completed in science, which is likely because of the strong bias that still seems to exist among researchers and audiences. The Marie Skłodowska-Curie European Researchers’ Night is a powerful way in which scientists can address such issues.

Artistic encounters of the quantum kind

Take a leap and enter, past the chalkboard wall filled with mathematical equations written, erased and written again, into the darkened room of projected questions where it all begins. What is reality? How do we describe nature? And for that matter, what is science and what is art?

Quàntica, which opened on 9 April at the Centre de Cultura Contemporània de Barcelona, invites you to explore quantum physics through the lens of both art and science. Curated by Mónica Bello, head of Arts at CERN, and art curator José-Carlos Mariátegui, with particle physicist José Ignacio Latorre serving as its scientific adviser, Quàntica is the second iteration of an exhibition that brings together 10 artworks resulting from Collide International art residencies at CERN.

The exhibition illustrates how interdisciplinary intersections can present scientific concepts regarded by the wider public as esoteric, in ways that bridge the gap, engage the senses and create meaning. Punctuating each piece is the idea that the principles of quantum physics, whether we like it or not, are pervasive in our lives today – from technological applications in smart phones and satellites to our philosophies and world views.

Nine key concepts – “scales”, “quantum states”, “overlap”, “intertwining”, “indeterminacy”, “randomness”, “open science”, “everyday quantum” and “change-evolution” – guide visitors through the meandering hallway. Each display point prompts visitors to pause and consider a question that underlies the fundamental principles of quantum physics. Juxtaposed in the shared space are an artist-made particle detector and parts of experiments displayed as artistic objects. Video-art installations are interspersed with video interviews of CERN physicists, including Helga Timko, who asks: if we were to teach children quantum physics at a very young age, would they perceive the world as we do? On the ceiling above is a projection of a spiral galaxy, part of Juan Cortés’ Supralunar. Inspired by Vera Rubin’s work on dark matter and the rotational motion of galaxies, Cortés made a two-part multisensorial installation: a lens through which you see flashing lights, and vibrating plates on which to rest your chin and experience, on some level, the intensity of a galaxy’s formation.

From the very large scale, move to the very small. A recording of Richard Feynman explaining the astonishing double-slit experiment plays next to a standing demonstration allowing you to observe the counterintuitive possibilities that exist at the subatomic level. You can put on goofy glasses for Lea Porsager’s Cosmic Strike, an artwork with a sense of humour, which offers an immersive 3D animation described as “hard science and loopy mysticism”. She engages the audience’s imagination to meditate on being a neutrino as it travels through the neutrino horn, one of the many scientific artefacts from CERN’s archives that pepper the path.

Around the corner is Erwin Schrödinger’s 1935 article where he first used the word “Verschränkung” (or entanglement) and Anton Zeilinger’s notes explaining the protocol for quantum teleportation. Above these is projected a scene from Star Trek, which popularised the idea of teleportation.

The most visually striking piece in the exhibition is Cascade by Yunchul Kim, made up of three live elements. The first part is Argos (see image), splayed metallic hands that hang like lamps from the ceiling – an operational muon detector made of 41 channels blinking light as it records the particles passing through the gallery. Each signal triggers the second element, Impulse, a chandelier-like fluid-transfer system that sends drops of liquid through microtubes that flow into transparent veins of the final element, Tubular. Kim, who won the 2016 Arts at CERN Collide International Award, is an artist who employs rigorous methods and experiments in his laboratory with liquid and materials. Cascade encapsulates the surprising results knowledge-sharing can yield.

Quàntica is a must-see for anyone who views art and science as opposite ends of the academic spectrum. The first version of the exhibition was held at Liverpool in the UK last year. Co-produced by the ScANNER network (CERN, FACT, CCCB, iMAL and Le Lieu Unique), the exhibition continues until 24 September in Barcelona, before travelling to Brussels.

Rutherford in three movements

Professor Radium, the Atom Splitter, the Crocodile. Each is a nickname for Ernest Rutherford, who made history by explaining radioactivity, discovering the proton and splitting the atom. All his scientific and personal milestones are described in great detail in the three-part documentary Rutherford, produced by Spacegirls Production Ltd in 2011.

Accompanied by physics historian John Campbell, the viewer learns about this great scientist from his ordinary childhood as a “Kiwi boy” to his untimely death in 1937. Historical reconstructions and trips to the places (New Zealand, the UK and Canada) that characterised his life bring Rutherford back to life.

When it was still heresy to think that there existed objects smaller than an atom, Rutherford was exploring the secrets of the invisible. During his first stay in Cambridge (UK), he discovered that uranium emits two types of radiation, which he named alpha and beta. Then, continuing his research at McGill University (Canada), he discovered that radioactivity has to do with the instability of the atom. He was awarded the Nobel Prize in Chemistry in 1908, and called Professor Radium after a comic-book character of that name. In those years, people did not know the effects of radiation, and “radio-toothpaste” was available to buy.

Then, in Manchester (UK), he conducted the first artificially induced nuclear reaction and described a new model of the atom, in which the nucleus is like a fly in the middle of an empty cathedral. He fired alpha particles at nitrogen gas and obtained oxygen plus hydrogen, earning the epithet of the world’s first “atom splitter”.

In between these big discoveries, the documentary points out that Rutherford blew tobacco smoke into his ionisation chamber, providing the groundwork for modern smoke detectors; proposed a more accurate dating system for the Earth’s age based on the rate of decay of uranium atoms; and campaigned for opportunities for women and for rescuing scientists from war.

The name “Crocodile” came later, from the Soviet physicist Pyotr Kapitza, because it is an animal that never turns back – or perhaps a reference to Rutherford’s loud voice, which preceded his visits. The carving of a crocodile on the outer wall of the Mond Laboratory at the Cavendish site, commissioned by Kapitza, still reminds Cambridge students and tourists of this outstanding physicist.

  • Spacegirls Production Ltd

Multi-messenger adventures

Recent years have seen enormous progress in astroparticle physics, with the detection of gravitational waves, very-high-energy neutrinos, combined neutrino–gamma observation and the discovery of a binary neutron-star merger, which was seen across the electromagnetic spectrum by some 70 observatories. These important advances opened a new and fascinating era for multi-messenger astronomy, which is the study of astronomical phenomena based on the coordinated observation and interpretation of disparate “messenger” signals.

This book, first published in 2015, has now been released in an updated edition to include these recent discoveries and to describe current lines of research.

The Standard Model (SM) of particle physics and the lambda-cold-dark-matter theory, also referred to as the SM of cosmology, have both proved to be tremendously successful. However, they leave a few important unsolved puzzles. One issue is that we are still missing a description of the main ingredients of the universe from an energy-budget perspective. This volume provides a clear and updated description of the field, preparing and possibly inspiring students towards a solution to these puzzles.

The book introduces particle physics together with astrophysics and cosmology, starting from experiments and observations. Written by experimentalists actively working on astroparticle physics and with extensive experience in sub-nuclear physics, it provides a unified view of these fields, reflecting the very rapid advances that are being made.

The first eight chapters are devoted to the construction of the SM of particle physics, beginning from the Rutherford experiment up to the discovery of the Higgs particle and the study of its decay channels. The next chapter describes the SM of cosmology and the dark universe. Starting from the observational pillars of cosmology (the expansion of the universe, the cosmic microwave background and primordial nucleosynthesis), it moves on to a discussion about the origins and the future of our universe. Astrophysical evidence for dark matter is presented and its possible constituents and their detection are discussed. A separate chapter is devoted to neutrinos, covering natural and man-made sources; it presents the state of the art and the future prospects in a detailed way. Next, the “messengers from the high-energy universe”, such as high-energy charged cosmic rays, gamma rays, neutrinos and gravitational waves, are explored. A final chapter is devoted to astrobiology and the relations between fundamental physics and life.

This book offers a well-balanced introduction to particle and astroparticle physics, requiring only a basic background in classical and quantum physics. It is certainly a valuable resource that can be used as a self-study book, a reference or a textbook. In the preface, the authors suggest how different parts of the book can serve as introductory courses on particle physics and astrophysics, and for advanced classes on high-energy astroparticle physics. Its 700-plus pages allow for a detailed and clear presentation of the material, contain many useful references and include proposed exercises.

DESY’s astroparticle aspirations

What is your definition of astroparticle physics?

There is no general definition, but let me try nevertheless. Astroparticle physics addresses astrophysical questions through particle-physics experimental methods and, vice versa, questions from particle physics are addressed via astronomical methods. This approach has enabled many scientific breakthroughs and opened new windows to the universe in recent years. In Germany, what drives us is the question of the influence of neutrinos and high-energy processes in the development of our universe, and the direct search for dark matter. There are differences to particle physics both in the physics questions and in the approach: we observe high-energy radiation from our cosmos or rare events in underground laboratories. But there are also many similarities between the two fields of research that make a fruitful exchange possible.

What was your path into the astroparticle field?

I grew up in particle physics: I did my PhD on b-physics at the OPAL experiment at CERN’s LEP collider and then worked for a few years on the HERA-B experiment at DESY. I was not only fascinated by particle physics, but also by the international cooperation at CERN and DESY. Particle physics and astroparticle physics overcome borders, and this is a feat that is particularly important again today. Around 20 years ago I switched to ground-based gamma astronomy. I became fascinated in understanding how nature manages to accelerate particles to such enormous energies as we see them in cosmic rays and what role they play in the development of our universe. I experienced very closely how astroparticle physics has developed into an independent field. Seven years ago, I became head of the DESY site in Zeuthen near Berlin. My task is to develop DESY and in particular the Zeuthen site into an international centre for astroparticle physics. The new research division is also a recognition of the work of the people in Zeuthen and an important step for the future.

What are DESY’s strengths in astroparticle research?

Astroparticle physics began in Zeuthen with neutrino astronomy around 20 years ago. It has evolved from humble beginnings, from a small stake in the Lake Baikal experiment to a major role in the km3-sized IceCube array deep in the Antarctic ice. Having entered high-energy gamma-ray astronomy only a few years ago, the Zeuthen location is now a driving force behind the next-generation gamma-ray observatory the Cherenkov Telescope Array (CTA). The campus in Zeuthen will host the CTA Science Data Management Centre and we are participating in almost all currently operating major gamma-ray experiments to prepare for the CTA science harvest. A growing theoretical group supports all experimental activities. The combination of high-energy neutrinos and gamma rays offers unique opportunities to study processes at energies far beyond those reachable by human-made particle accelerators.

Why did DESY establish a dedicated division?

A dedicated research division underlines the importance of astroparticle physics in general and in DESY’s scientific programme in particular, and offers promising opportunities for the future. Astroparticle physics with cosmic messengers has experienced a tremendous development in recent years. The discovery of a large number of gamma-ray sources, the observation of cosmic neutrinos in 2013, the direct detection of gravitational waves in 2015, the observation of the merger of two neutron stars with more than 40 observatories worldwide triggered by its gravitational waves in August 2017, and the simultaneous observation of neutrinos and high-energy gamma radiation from the direction of a blazar the following month are just a few prominent examples. We are on the threshold of a golden age of multi-messenger astronomy, with gamma rays, neutrinos, gravitational waves and cosmic rays together promising completely new insights into the origins and evolution of our universe.

What are the division’s scale and plans?

The next few years will be exciting for us. We have just completed an architectural competition, new buildings will be built and the entire campus will be redesigned in the coming years. We expect well over 350 people to work on the Zeuthen campus, and hosting the CTA data centre will make us a contact point for astroparticle physicists globally. In addition to the growth through CTA, we are expanding our scientific portfolio to include radio detection of high-energy neutrinos and increased activities in astronomical-transient-event follow-up. We are also establishing close cooperation with other partners. Together with the Weizmann Institute in Israel, the University of Potsdam and the Humboldt University in Berlin, we are currently establishing an international doctoral school for multi-messenger astronomy funded by the Helmholtz Association.

How can we realise the full potential of multi-messenger astronomy?

Our potential lies primarily in committed scientists who use their creativity and ideas to take advantage of existing opportunities. For years we have experienced a large number of young people moving into astroparticle physics. We need new, highly sensitive instruments and there is a whole series of outstanding project proposals waiting to be implemented. CTA is being built, the upgrade of the Pierre Auger Observatory is progressing and the first steps for the further upgrade of IceCube have been taken. The funding for the next generation of gravitational-wave experiments, the Einstein Telescope in Europe, is not yet secured. We are currently discussing a possible participation of DESY in gravitational-wave astronomy. Multi-messenger astronomy promises a breathtaking amount of new discoveries. However, the findings will only be possible if, in addition to the instruments, the data are also made available in a form that allows scientists to jointly analyse the information from the various instruments. DESY will play an important role in all these tasks – from the construction of instruments to the training of young scientists. But we will also be involved in the development of the research-data infrastructure required for multi-messenger astronomy.

How would you describe the astroparticle physics landscape?

The community in Europe is growing – not only in terms of the number of scientists, but also in the size and variety of experiments. In many areas, European astroparticle physics is in transition from medium-sized experiments to large research infrastructures. CTA is the outstanding example of this. The large number of new scientists and the ideas for new research infrastructures show the great appeal of astroparticle physics as a young and exciting field. The proposed Einstein Telescope will cross the threshold of projects requiring investments of more than one billion euros, which calls for coordination at the European and international level. With the Astroparticle Physics European Consortium (APPEC) we have taken a step towards improved coordination. DESY is one of the founding members of APPEC and I have been elected vice-chairman of the APPEC general assembly for the next two years. In this area, too, we can learn something from particle physics, and we are very pleased that CERN is an associate member of APPEC.

What implication does the update of the European strategy for particle physics have for your field?

European astroparticle physics provides a wide range of input to the European Strategy for Particle Physics, from concrete proposals for experiments to contributions from national committees for astroparticle physics. The contribution on the construction of the Einstein Telescope deserves special attention, and my personal wish is that CERN will coordinate the Einstein Telescope, as suggested in that contribution. With the LHC, CERN has again demonstrated in an outstanding way that it can successfully implement major research projects. With the first gravitational-wave events, we have seen only the first flashes of a completely unknown part of our universe. The Einstein Telescope would revolutionise this new view of the universe.

Building scientific resilience

Brest-Litovsk, Utrecht, Westphalia… at first sight, intergovernmental treaties belong more to the world of Bismarck and Napoleon than that of modern science. Yet, in March this year we celebrated the signing of a new treaty establishing the world’s largest radio telescope, the Square Kilometre Array (SKA). Why use a tool of 19th-century great-power politics to organise a 21st century big-science project?

Big-science projects like SKA require multi-billion budgets and decades-long commitment. Their resources must come from many countries, and all contributors need mutual assurance that none will renege. The board for SKA, of which I was formerly chair, rapidly concluded that only an intergovernmental organisation could give the necessary stability. It is a very European approach, born of our need to bring together many smaller countries. But it is flexible and resilient.

Of course there are other ways to do this. A European Research Infrastructure Consortium (ERIC) is a lighter weight, faster way to set up an intergovernmental research organisation and is the model that we have used for the European Spallation Source (ESS) in Sweden. The ERIC is part of European Union (EU) legislation and provides many of the benefits in VAT and purchasing rules that an international convention or treaty would, without a convoluted approval process. Once the UK (one of the 13 ESS member nations) withdraws from the EU, it will need legislation to recognise the status of ERICs, just as non-EU Switzerland and Norway have done.

Research facilities can also be run by organisations without any intergovernmental authority: charities, not-for-profit companies or university consortia. This may seem quick and agile, but it is risky. For example, the large US telescope projects TMT and GMT are university-led and have been able to get started, but it seems that US federal involvement will now be essential for their success.

In fact, US participation in international organisations is often an issue because it requires senate approval. The last time this happened for a science project was the ITER fusion experiment, which today is making good progress but had a rocky start. The EU is one of ITER’s seven member entities and its involvement is facilitated via EUROfusion – one of eight European intergovernmental research organisations that are members of EIROforum. Most were established decades ago, and their stable structure has helped them invest in major new facilities such as ESO’s European Extremely Large Telescope.

So international treaty-based science organisations are great for delivering big-science projects, while also promoting understanding between the science communities of different countries. In the aftermath of the Second World War that was really important, and was a founding motivation for CERN. More recently, the SESAME light source in Jordan adopted the CERN model to bring the Middle East’s scientific communities together.

Today the world faces new political challenges, and international treaties don’t do much to address the growing gap between angry, disenfranchised voters and an educated, internationally minded “elite”. We scientists often see nationalism as the problem, but the issue is more one of populism – and by being international we merely seem remote. We are used to speaking about outreach, but we also need to think seriously about “in-reach” within our own countries and regions, to engage better with groups such as Trump voters and Brexit supporters.

There’s also the risk that too much stability can become rigidity. Organisations like SKA or ESS aim to provide room for negotiation and for substantial contributions to be made in kind. They are free of commitments such as pension schemes and, in the case of SKA, membership levels are tied to the size of a country’s astronomy community and not to GDP. Were a future global project like a Future Circular Collider to be hosted at CERN, a purpose-built intergovernmental agreement would surely be the best way to manage it. CERN is the archetype of intergovernmental organisations in science, and offers great stability in the face of political upheavals such as Brexit. Its challenge today is to think outside the box.

The same applies to all big projects in physics today. Our future prosperity and ability to address major challenges depend on investments in large, cutting-edge research infrastructures. Intergovernmental organisations provide the framework for those investments to flourish.

The proton laid bare

Every student of physics learns that the nucleus was discovered by firing alpha particles at atoms. The results of this famous experiment by Rutherford in 1911 indicated the existence of a hard-scattering core of positive charge, and, within a few years, led to his discovery of the proton (see Rutherford, transmutation and the proton). Decades later, similar experiments with electrons revealed point-like scattering centres inside the proton itself. Today we know these to be quarks, antiquarks and gluons, but the glorious complexity of the proton is often swept under the carpet. Undergraduate physicists are more often introduced to quarks as objects with flavour quantum numbers that build up mesons and baryons in bound states of twos and threes. Indeed, in the 1960s, many people regarded quarks simply as a useful book-keeping device to classify the many new “elementary” particles that had been discovered in cosmic rays and bubble-chamber experiments. Few people were aware of the inelastic-scattering experiments at SLAC with 20 GeV electrons, which were beginning to reveal a much richer picture of the proton.

The results of these experiments in the 1960s and early 1970s were remarkable. Elastic scattering by the point-like electrons revealed the spatial distribution of the proton’s charge, and cross sections had to be modified by form factors as a result. These varied strongly depending on how hard the proton was struck – a hardness called the scale of the process, Q², defined as the negative squared four-momentum transfer between the incoming and outgoing electrons. At high enough scales the proton broke up, a phenomenon that can be quantified by x, a kinematic variable related to the inelasticity of the interaction. Both the scale and the inelasticity could be determined from the kinematics of the outgoing electron. Physicists anticipated a complicated dependence on both variables. Studies of scattering at ever higher and lower scales continue to bear fruit to this day.
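
In the standard notation (spelled out here for convenience rather than quoted from the article), if k and k′ are the four-momenta of the incoming and outgoing electron, P that of the proton and q = k − k′ the four-momentum transfer, then

$$Q^2 \equiv -q^2 = -(k-k')^2, \qquad x = \frac{Q^2}{2P\cdot q},$$

with x = 1 corresponding to elastic scattering and x < 1 to increasingly inelastic collisions.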

A surprise at SLAC

The big surprise from the SLAC experiments was that the cross section did not depend strongly on Q², a phenomenon called “scaling”. The only explanation for scaling was that the electrons were scattering from point-like centres within the proton. Feynman worked out the formalism to understand this by picturing the electron as hitting a point-like “parton” inside the proton. With elegant simplicity, he deduced that the partons each carried a fraction x of the proton’s longitudinal momentum.
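
In the parton model this scaling is conventionally expressed (the formula is standard, and added here for illustration) by writing the structure function F2 as an incoherent sum over partons of charge e_i carrying momentum fraction x:

$$F_2(x,Q^2)\;\to\;F_2(x)=\sum_i e_i^2\, x\, f_i(x),$$

where f_i(x) is the probability density for finding parton i with momentum fraction x; the absence of any Q² dependence is exactly the scaling seen at SLAC.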

Gell-Mann and Zweig had proposed the existence of quarks in 1964, but at first it was by no means obvious that they were partons. The SLAC experiments established that the scattering centres had spin ½ as required by the quark model, but there were two problems. On the one hand there appeared to be not only three, but many scattering centres. On the other, Feynman’s formalism required the partons to be “free” and independent of each other, yet they could hardly be independent if they remained confined in the proton.

Painting a picture

The picture became even more interesting in the late 1970s and 1980s when scattering experiments started to use neutrinos and antineutrinos as probes. Since neutrinos and antineutrinos have a definite handedness, or helicity, such that their spin is aligned against their direction of motion for neutrinos and with it for antineutrinos, their weak interaction with quarks and antiquarks gives different angular distributions. This showed that there must be antiquarks as well as quarks within the proton. In fact, it led to a picture in which the flavour properties of the proton are governed by three valence quarks immersed in a sea of quark–antiquark pairs. But this is not all: the same experiments indicated that the total momentum carried by the valence quarks and the sea still amounts to only around half of that of the proton. This missing momentum was termed an energy crisis, and was solved by the existence of gluons with spin 1, which bind the quarks together and confine them inside the proton.
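
The missing momentum can be stated as a sum rule (standard notation, added here for illustration): summing the momentum fractions carried by all quarks and antiquarks accounts for only about half of the proton’s momentum,

$$\sum_{q,\bar q} \int_0^1 x\, f_q(x)\,\mathrm{d}x \approx 0.5,$$

with the remainder attributed to gluons, which the electromagnetic and weak probes do not see directly.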

In fact, the SLAC experiments had been lucky to be making measurements in the kinematic region where scaling holds almost perfectly – where the cross section is independent of Q2. The quark–parton model had to be extended, and became the field theory of quantum chromodynamics (QCD), in which the gluons are field carriers, just like photons in quantum electrodynamics (QED). Formulated in 1973, QCD has a much richer structure than QED. There are eight kinds of gluons that are characterised in terms of a new quantum number called colour, which is carried by both quarks and the gluons themselves, in contrast to QED, where the field carrier is uncharged. The gluon can thus interact with itself as well as with quarks.

From the 1980s onwards, a series of experiments probed increasingly deeply into the proton. Deep-inelastic-scattering experiments using neutrino and muon beams were performed at CERN and Fermilab, before the HERA electron–proton collider at DESY made definitive measurements from 1992 to 2007 (figure 1). The aim was to test the predictions of QCD as much as to investigate the structure of the proton, the goal being not just to list the constituents of the proton, but also to understand the forces between them.

Meanwhile, the EMC experiment at CERN had unearthed a mystery concerning the origin of the proton’s spin (see “The proton spin crisis”), while elsewhere, entirely different experiments were placing increasingly tough limits on the proton’s lifetime (see “The pursuit of proton decay”).

The proton spin crisis

Among the many misconceptions in the description of the proton presented in undergraduate physics lectures is the origin of the proton’s spin. When we tell students about the three quarks in a proton, we usually say that its spin (equal to one half) comes from the arithmetic of three spin-½ quarks that align themselves such that two point “up” and one points “down”. However, as shown by measurements of the spin carried by quarks in deep-inelastic-scattering experiments in which both the lepton beam and the proton target are polarised, this is not the case. Rather, as first revealed in results from the European Muon Collaboration in CERN’s North Area in 1987, the quarks account for less than a third of the total proton spin. This was nicknamed the proton’s “spin crisis”, and attempts to fully resolve it remain the goal of experiments today.

Physicists had to develop cleverer experiments, for example looking at semi-inclusive measurements of fast pions and kaons in the final state, and using polarised proton–proton scattering, to determine where the missing spin comes from. It is now established that about 30% of the proton spin is in the valence quarks. Intriguingly, this is made up of +65% from up-valence and –35% from down-valence quarks. The sea seems to be unpolarised, and about 20% of the proton’s spin is in gluon polarisation, though it is not possible to measure this accurately across a wide kinematic range. Nevertheless, it seems unlikely that all of the missing spin is in gluons, and the puzzle is not yet solved.

What could the origin of the remaining ~50% of the proton’s spin be? The answer may lie in the orbital angular momentum of both the quarks and the gluons, but this is difficult to measure directly. Orbital angular momentum is certainly connected to the transverse structure of the proton: the partons’ transverse momentum, their transverse position, and their transverse (as opposed to longitudinal) spin all come into play. Multi-dimensional measurements of transverse momentum distributions and generalised parton distributions can give access to orbital angular momentum. Such measurements are underway at Jefferson Laboratory, and are also a core part of the future Electron-Ion Collider programme.
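
One common way of organising this bookkeeping is the angular-momentum sum rule (written here in the Jaffe–Manohar decomposition, in units of ħ):

\[
\frac{1}{2} = \frac{1}{2}\,\Delta\Sigma + \Delta G + L_q + L_g ,
\]

where the first term is the net quark-spin contribution (the roughly 30% of the total quoted above), ΔG is the gluon-spin contribution (the roughly 20%), and L_q and L_g are the quark and gluon orbital angular momenta that such measurements aim to pin down.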

Amanda Cooper-Sarkar, University of Oxford.

Quantum considerations

As with all quantum phenomena, what is in a proton depends on how you look at it. A more energetic probe has a smaller wavelength and can therefore reveal smaller structures, but it also injects energy into the system, which allows the creation of new particles. The question then is whether we regard these particles as having been inside the proton in the first place. At higher scales, quarks radiate gluons that then split into quark–antiquark pairs, which again radiate gluons; the gluons themselves can also radiate gluons. The valence quarks thus lose momentum, distributing it between the sea quarks and gluons – increasingly many of them, with smaller and smaller amounts of momentum. A proton at rest is therefore very different from a proton, say, circulating in the Large Hadron Collider (LHC) at an energy of 7 TeV.
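
This cascade of radiation is described quantitatively by the DGLAP evolution equations. Schematically, for a quark density q(x, Q²) and gluon density g(x, Q²), the change with resolution scale is a convolution with calculable splitting functions P(z), which encode the probability of finding a parton with momentum fraction z inside a parent parton after a splitting:

\[
\frac{\partial q(x, Q^2)}{\partial \ln Q^2}
= \frac{\alpha_s(Q^2)}{2\pi} \int_x^1 \frac{\mathrm{d}z}{z}
\left[ P_{qq}(z)\, q\!\left(\tfrac{x}{z}, Q^2\right) + P_{qg}(z)\, g\!\left(\tfrac{x}{z}, Q^2\right) \right] ,
\]

with an analogous equation for the gluon. Evolving to higher Q² shifts momentum from the valence quarks towards an ever-growing population of soft gluons and sea quarks, exactly as described above.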

The deep-inelastic-scattering data from muon, neutrino and electron collisions established that QCD was the correct theory of the strong interaction. Experiments found that the structure functions which describe the scattering cross sections are not completely independent of scale, but depend on it logarithmically – in exactly the way that QCD predicts. This allowed the determination of the strong coupling “constant” α_s, in analogy with the fine structure constant of QED, and it is now understood that both parameters vary with the scale of the process. In contrast with QED, the strong-coupling constant varies very quickly, from α_s ~ 1 at low energy to ~0.1 at the energy scale of the mass of the Z boson. Thus the quarks become “asymptotically free” when examined at high energy, but are strongly confined at low energy – an insight leading to the award of the 2004 Nobel Prize in Physics to Gross, Politzer and Wilczek.
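
At the crudest, one-loop level the running can be written in closed form,

\[
\alpha_s(Q^2) \simeq \frac{12\pi}{(33 - 2 n_f)\,\ln(Q^2/\Lambda^2)} ,
\]

where n_f is the number of quark flavours light enough to participate at the scale Q and Λ is a reference scale of a few hundred MeV. The coupling grows without bound as Q approaches Λ (confinement) and falls to roughly 0.1 by the time Q reaches the Z-boson mass (asymptotic freedom), in line with the numbers quoted above.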

Once QCD had emerged as the definitive theory, the focus turned to measuring the momentum distributions of the partons, dubbed parton distribution functions (PDFs, figure 2). Several groups work on these determinations using both deep-inelastic-scattering data and related scattering processes, and presently there is agreement between theory and experiment within a few percent across a very wide range of x and Q² values. However, this is not quite good enough. Today, knowledge of PDFs is increasingly vital for discovery physics at the LHC. Predictions of all cross sections measured at the LHC – whether Standard Model or beyond – need to use input PDFs. After all, when we are colliding protons it is actually the partons inside the proton that are having hard collisions, and the rates of these collisions can only be predicted if we know the PDFs in the proton very accurately.
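
The statement that every LHC prediction needs PDFs as input can be made precise through the factorisation theorem: up to corrections suppressed by powers of the hard scale, any cross section for producing a final state X in proton–proton collisions can be written as

\[
\sigma(pp \to X) = \sum_{a,b} \int_0^1 \mathrm{d}x_1\, \mathrm{d}x_2\;
f_a(x_1, \mu^2)\, f_b(x_2, \mu^2)\;
\hat{\sigma}_{ab \to X}(x_1, x_2, \mu^2) ,
\]

where f_a and f_b are the PDFs of the colliding partons, σ̂ is the perturbatively calculable partonic cross section and μ is the scale at which the PDFs are evaluated. Any uncertainty in the PDFs feeds directly into the prediction.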

The dominant uncertainty on the direct production of particles predicted by physics beyond the Standard Model now comes from the limited precision of the PDFs of high-x gluons. Indirect searches for new physics are also affected: precision measurements of Standard Model parameters, such as the mass of the W boson and the weak mixing angle sin²θW, are likewise limited by PDF uncertainties, even in the kinematic regions where the PDFs are currently best known.

The pursuit of proton decay

When Rutherford discovered the proton in 1919, the only other basic constituent of matter then known was the electron. There was no way that the proton could decay without violating charge conservation. Ten years later, Hermann Weyl went further, proposing the first version of what would become a law of baryon conservation. Even after the discoveries of the positron, and of positive muons and pions – all lighter than the proton – there was little reason to question the proton’s stability. As Maurice Goldhaber famously pointed out, were the proton lifetime to be less than 10¹⁶ years we should feel it in our bones, because our bodies would be lethally radioactive. In 1954 he improved on this estimate. Arguing that the disappearance of a nucleon would leave a nucleus in an excited state that could lead to fission, he used the observed absence of spontaneous fission in ²³²Th to calculate a lifetime for bound nucleons of > 10²⁰ years, which Georgy Flerov soon extended to > 3 × 10²³ years.
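
The reasoning behind Goldhaber’s “bones” remark is a simple order-of-magnitude estimate (the numbers here are purely illustrative). A human body of about 70 kg contains roughly

\[
N \approx \frac{70\ \mathrm{kg}}{1.7\times10^{-27}\ \mathrm{kg}} \approx 4\times10^{28}\ \text{nucleons},
\]

so a lifetime of 10¹⁶ years (about 3 × 10²³ s) would imply a decay rate of order N/τ ≈ 10⁵ per second. With roughly 1 GeV (1.6 × 10⁻¹⁰ J) liberated per decay, and assuming much of that energy were absorbed in the body, the dose would amount to several gray per year – well into lethal territory.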

Goldhaber also teamed up with Fred Reines and Clyde Cowan to test the possibility of directly observing proton decay using a 500 l tank of liquid scintillator surrounded by 90 photomultiplier tubes (PMTs) that was designed originally to detect reactor neutrinos. They found no signal, indicating that free protons must live for > 10²¹ years and bound nucleons for > 10²² years. By 1974, in a cosmic-ray experiment based on 20 tonnes of liquid scintillator, Reines and other colleagues had pushed the proton lifetime to > 10³⁰ years.

Meanwhile, in 1966, Andrei Sakharov had set out conditions that could yield the observed particle–antiparticle asymmetry of the universe. One of these was that baryon conservation is only approximate and could have been violated during the expansion phase of the early universe. The interactions that could violate baryon conservation would allow the proton to decay, but Sakharov’s suggested proton lifetime of > 10⁵⁰ years provided little encouragement for experimenters. This all changed around 1974, when proposals for grand unified theories (GUTs) came along. GUTs not only unified the strong, weak and electromagnetic forces, but also closely linked quarks and leptons, allowing for non-conservation of baryon number. In particular, the minimal SU(5) theory of Howard Georgi and Sheldon Glashow led to predicted lifetimes for the decay p → e⁺π⁰ of around 10³¹ years, give or take an order of magnitude – not so far beyond the observed lower limit of around 10³⁰ years.

This provided the justification for dedicated proton-decay experiments. By 1981 seven such experiments installed deep underground were using either totally active water Cherenkov detectors or sampling calorimeters to monitor large numbers of protons. These included the Irvine–Michigan–Brookhaven (IMB) detector based on 3300 tonnes of water and 2048 5-inch PMTs and KamiokaNDE in Japan with 1000 tonnes of water and 1000 20-inch PMTs. These experiments were able to push the lower limits on the proton lifetime to > 10³² years and so discount the viability of minimal SU(5) GUTs.
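
A rough count shows why kilotonne-scale detectors can reach such limits. One kilotonne of water (10⁹ g, with 10 protons in each 18 g mole of H₂O) contains

\[
N_p \approx \frac{10^{9}\ \mathrm{g}}{18\ \mathrm{g\,mol^{-1}}} \times 10 \times 6\times10^{23}\ \mathrm{mol^{-1}} \approx 3\times10^{32}\ \text{protons},
\]

so even a few years of exposure with no candidate events translates, once detection efficiencies and backgrounds are folded in, into lifetime limits of order 10³² years.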

However, in 1987 IMB and Kamiokande II achieved greater fame by each detecting a handful of neutrinos from the supernova SN1987a. Kamiokande II was already studying solar and atmospheric neutrinos, but it was its successor, Super-Kamiokande, that went on to make pioneering observations of atmospheric and solar neutrino oscillations. And it is Super-Kamiokande that currently has the highest lower limit for proton decay: 1.6 × 10³⁴ years for the decay to e⁺π⁰.

Today, the theoretical development of GUTs continues, with predictions in some models of proton lifetimes up to around 10³⁶ years. Future large neutrino experiments – such as DUNE, Hyper-Kamiokande and JUNO – feature proton decay among their goals, with the possibility of extending the limits on the proton lifetime to 10³⁵ years. So the study of proton stability goes on, continuing the symbiosis with neutrino research.

Chris Sutton, former CERN Courier editor.

Strange sightings at the LHC

Standard Model processes at the LHC are now able to contribute to our knowledge of the proton. As well as reducing the uncertainty on PDFs, however, the LHC data have led to a surprise: there seem to be more strange quark–antiquark pairs in the proton than we had thought (CERN Courier April 2017 p11). A recent study of the potential of the High-Luminosity LHC suggests that the present uncertainty on the gluon PDF can be reduced by more than a factor of two by studying jet production, direct photon production and top quark–antiquark pair production. Determinations of the W-boson mass and the weak mixing angle will benefit from precision measurements of W- and Z-boson production in previously unexplored kinematic regions, and strangeness can be further probed by measurements of these bosons in association with heavy quarks. We also look forward to possible future developments such as a Large Hadron-Electron Collider or a Future Circular Electron Hadron Collider – not least because new kinematic ranges continue to reveal more about the structure of QCD in the high-density regime.

In fact the HERA data already give hints that we may be entering a new phase of QCD at very low x, where the gluon density is very large (figure 3). Such large densities could lead to nonlinear effects in which gluons recombine. When the rate of recombination equals the rate of gluon splitting we may get gluon saturation. This state of matter has been described as a colour glass condensate (CGC) and has been further probed in heavy-ion experiments at the LHC and at RHIC at Brookhaven National Laboratory. The higher gluon densities involved in experiments with heavy nuclei enhance the impact of nonlinear gluon interactions. Interpretations of the data are consistent with the CGC but not definitive. A future electron–ion collider, such as that currently proposed in the US (CERN Courier October 2018, p31), will go further, enabling complete tomographic information about the proton and allowing us to directly connect fundamental partonic behaviour to the proton’s “bulk” properties such as its mass, charge and spin. Meanwhile, table-top spectroscopy experiments are shedding new light on a seemingly mundane yet key property of the proton: its radius (see “Solving the proton-radius puzzle”).
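
The onset of saturation is usually characterised by a saturation scale Q_s(x), defined roughly as the momentum scale at which gluon recombination balances gluon splitting. In typical dipole-model fits to the HERA data this scale grows slowly as x decreases, approximately as

\[
Q_s^2(x) \propto \left(\frac{1}{x}\right)^{\lambda}, \qquad \lambda \approx 0.2\text{–}0.3 ,
\]

so that nonlinear effects become important only at very small x – or, equivalently, at the higher gluon densities reached in collisions involving heavy nuclei.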

Together with the neutron, the proton constitutes practically all of the mass of the visible matter in the universe. A hundred years on from Rutherford’s discovery, it is clear that much remains to be learnt about the structure of this complex and ubiquitous particle.

Solving the proton-radius puzzle

How big is a proton? Experiments during the past decade have called well-established measurements of the proton’s radius into question – even prompting somewhat outlandish suggestions that new physics might be at play. Soon-to-be-published results promise to settle the proton-radius puzzle once and for all.

Contrary to popular depictions, the proton does not have a hard physical boundary like a snooker ball. Its radius was traditionally deduced from its charge distribution via electron-scattering experiments. Scattering from a charge distribution differs from scattering from a point-like charge: the extended charge distribution modifies the differential cross section by a form factor (the Fourier transform of the charge distribution). For the proton, the form factor is well described by a dipole function of the momentum transfer, corresponding to a charge distribution that falls exponentially with distance from the proton’s centre. Scattering experiments found the root mean square (RMS) radius to be about 0.88 fm.
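
In the standard conventions, the RMS charge radius is defined by the slope of the electric form factor G_E at zero momentum transfer,

\[
\langle r^2 \rangle = -6 \left. \frac{\mathrm{d}G_E(Q^2)}{\mathrm{d}Q^2} \right|_{Q^2 = 0} ,
\]

and a dipole form factor, G_E(Q²) = (1 + Q²/Λ²)⁻², corresponds to an exponentially falling charge density with ⟨r²⟩ = 12/Λ².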

Since the turn of the millennium, a modest increase in precision on the proton radius was made possible by comparing measurements of transitions in hydrogen with quantum electrodynamics (QED) calculations. Since atomic energy levels are shifted slightly because the electron’s wavefunction overlaps with the proton’s extended charge distribution, precise measurements of the transition frequencies provide a handle on the proton’s radius. A combination of these measurements yielded the most recent CODATA value of 0.8751(61) fm.

The surprise came in 2010, when the CREMA collaboration at the Paul Scherrer Institute (PSI) in Switzerland achieved a 10-fold improvement in precision via the Lamb shift (the 2S–2P transition) in muonic hydrogen, the bound state of a muon orbiting a proton. As the muon is about 200 times heavier than the electron, its Bohr radius is about 200 times smaller, and the energy shift caused by the overlap of its wavefunction with the proton’s charge distribution is far more substantial. CREMA observed an RMS proton radius of 0.8418(7) fm, which was five sigma below the world average, giving rise to the so-called “proton radius puzzle”. The team confirmed the measurement in 2013, reporting a radius of 0.8409(4) fm. These observations appeared to call into question the cherished principle of lepton universality.
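
The muon’s dramatic advantage can be seen from the leading-order expression for the finite-size shift of an S state (natural units, reduced mass m_r, principal quantum number n),

\[
\Delta E_{nS} = \frac{2}{3}\,\frac{(Z\alpha)^4\, m_r^3}{n^3}\, \langle r^2 \rangle ,
\]

which scales as the cube of the orbiting lepton’s reduced mass. For muonic hydrogen, m_r is nearly 200 times larger than for ordinary hydrogen, so the proton-size effect on the spectrum is some 10⁷ times bigger and correspondingly easier to pin down.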

More recent measurements have reinforced the proton’s slimmed-down nature. In 2016 CREMA reported a radius of 0.8356(20) fm by measuring the Lamb shift in muonic deuterium (the bound state of a muon orbiting a proton and a neutron). Most interestingly, in 2017 Axel Beyer of the Max Planck Institute of Quantum Optics in Garching and collaborators reported a similarly lithe radius of 0.8335(95) fm from observations of the 2S–4P transition in ordinary hydrogen. This low value is confirmed by soon-to-be-published measurements of the 1S–3S transition by the same group, and of the 2S–2P transition by Eric Hessels of York University, Canada, and colleagues. “We can no longer speak about a discrepancy between measurements of the proton radius in muonic and electronic spectroscopy,” says Krzysztof Pachucki of CODATA TGFC and the University of Warsaw.

But what of the discrepancy between spectroscopic and scattering experiments? The calculation of the RMS proton radius using scattering data is tricky due to the proton’s recoil, and analyses must extrapolate the form factor to a scale of Q² = 0. Model uncertainties can therefore be reduced by performing scattering experiments at increasingly low scales. Measurements may now be aligning with a lower value consistent with the latest results in electronic and muonic spectroscopy. In 2017 Miha Mihovilovic of the University of Mainz and colleagues reported an interestingly low value of 0.810(82) fm using the Mainz Microtron, and results due from the Proton Radius Experiment (pRad) at Jefferson Lab will access a similarly low scale with even smaller uncertainties. Preliminary pRad results presented in October 2018 at the 5th Joint Meeting of the APS Division of Nuclear Physics and the Physical Society of Japan in Hawaii indicate a proton radius of 0.830(20) fm. These electron-scattering results will be complemented by muon-scattering results from the COMPASS experiment at CERN, and the MUSE experiment at PSI.
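
The extrapolation issue can be seen by expanding the form factor at low momentum transfer,

\[
G_E(Q^2) = 1 - \frac{\langle r^2 \rangle}{6}\, Q^2 + \frac{\langle r^4 \rangle}{120}\, Q^4 - \dots ,
\]

where the radius enters only through the leading slope: the lower the Q² reached by an experiment, the less the extracted radius depends on how the higher-order terms (and hence the assumed shape of the charge distribution) are modelled.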

For now, says Pachucki, the latest CODATA recommendations published in 2016 list the higher value obtained from electron scattering and pre-2015 hydrogen-spectroscopy experiments. If the latest experiments continue to line up with the slimmed-down radius of CREMA’s 2010 result, however, the proton radius puzzle may soon be solved, and the world average revised downwards.

Mark Rayner, CERN.
