Professor Radium, the Atom Splitter, the Crocodile. Each is a nickname pointing to Ernest Rutherford, who made history by explaining radioactivity, discovering the proton and splitting the atom. All his scientific and personal milestones are described in great detail in the three-part documentary Rutherford, produced by Spacegirls Production Ltd in 2011.
Accompanied by physics historian John Campbell, the viewer learns about this great scientist from his ordinary childhood as a “Kiwi boy” to his untimely death in 1937. Historical reconstructions and trips to the places (New Zealand, the UK and Canada) that characterised his life bring Rutherford back to life.
When it was still heresy to think that there existed objects smaller than an atom, Rutherford was exploring the secrets of the invisible. During his first stay in Cambridge (UK), he discovered that uranium emits two types of radiation, which he named alpha and beta. Then, continuing his research at McGill University (Canada), he discovered that radioactivity has to do with the instability of the atom. He was awarded the Nobel Prize in Chemistry in 1908, and nicknamed Professor Radium after a comic-book character of that name. In those years, people did not know the effects of radiation and “radio-toothpaste” was available to buy.
Then in Manchester (UK), he conducted the first artificially induced nuclear reaction and described a new model of the atom, in which the nucleus is like a fly in the middle of an empty cathedral. He fired alpha particles at nitrogen gas and obtained oxygen plus hydrogen, earning him the epithet of the world’s first “atom splitter”.
In between these big discoveries, the documentary points out that Rutherford blew tobacco smoke into his ionisation chamber, providing the groundwork for modern smoke detectors; proposed a more accurate dating system for the Earth’s age based on the rate of decay of uranium atoms; and campaigned for opportunities for women and for saving scientists from war.
The name “Crocodile” came later, from Soviet physicist Pyotr Kapitza, as it is an animal that never turns back – or perhaps a reference to Rutherford’s loud voice that preceded his visits. The carving of a crocodile on the outer wall of the Mond Laboratory at the Cavendish site, commissioned by Kapitza, still reminds Cambridge students and tourists of this outstanding physicist.
Recent years have seen enormous progress in astroparticle physics, with the detection of gravitational waves, very-high-energy neutrinos, combined neutrino–gamma observation and the discovery of a binary neutron-star merger, which was seen across the electromagnetic spectrum by some 70 observatories. These important advances opened a new and fascinating era for multi-messenger astronomy, which is the study of astronomical phenomena based on the coordinated observation and interpretation of disparate “messenger” signals.
This book, first published in 2015, is now released in a new edition that includes these recent discoveries and describes present lines of research.
The Standard Model (SM) of particle physics and the lambda-cold-dark-matter theory, also referred to as the SM of cosmology, have both proved to be tremendously successful. However, they leave a few important unsolved puzzles. One issue is that we are still missing a description of the main ingredients of the universe from an energy-budget perspective. This volume provides a clear and updated description of the field, preparing and possibly inspiring students towards a solution to these puzzles.
The book introduces particle physics together with astrophysics and cosmology, starting from experiments and observations. Written by experimentalists actively working on astroparticle physics and with extensive experience in sub-nuclear physics, it provides a unified view of these fields, reflecting the very rapid advances that are being made.
The first eight chapters are devoted to the construction of the SM of particle physics, beginning from the Rutherford experiment up to the discovery of the Higgs particle and the study of its decay channels. The next chapter describes the SM of cosmology and the dark universe. Starting from the observational pillars of cosmology (the expansion of the universe, the cosmic microwave background and primordial nucleosynthesis), it moves on to a discussion about the origins and the future of our universe. Astrophysical evidence for dark matter is presented and its possible constituents and their detection are discussed. A separate chapter is devoted to neutrinos, covering natural and man-made sources; it presents the state of the art and future prospects in detail. Next, the “messengers from the high-energy universe”, such as high-energy charged cosmic rays, gamma rays, neutrinos and gravitational waves, are explored. A final chapter is devoted to astrobiology and the relations between fundamental physics and life.
This book offers a well-balanced introduction to particle and astroparticle physics, requiring only a basic background in classical and quantum physics. It is certainly a valuable resource that can be used as a self-study book, a reference or a textbook. In the preface, the authors suggest how different parts of the book can serve as introductory courses on particle physics and astrophysics, and for advanced classes in high-energy astroparticle physics. Its 700+ pages allow for a detailed and clear presentation of the material, contain many useful references and include proposed exercises.
There is no general definition, but let me try nevertheless. Astroparticle physics addresses astrophysical questions through particle-physics experimental methods and, vice versa, questions from particle physics are addressed via astronomical methods. This approach has enabled many scientific breakthroughs and opened new windows to the universe in recent years. In Germany, what drives us is the question of the influence of neutrinos and high-energy processes on the development of our universe, and the direct search for dark matter. There are differences from particle physics both in the physics questions and in the approach: we observe high-energy radiation from our cosmos or rare events in underground laboratories. But there are also many similarities between the two fields of research that make a fruitful exchange possible.
What was your path into the astroparticle field?
I grew up in particle physics: I did my PhD on b-physics at the OPAL experiment at CERN’s LEP collider and then worked for a few years on the HERA-B experiment at DESY. I was not only fascinated by particle physics, but also by the international cooperation at CERN and DESY. Particle physics and astroparticle physics overcome borders, and this is a feat that is particularly important again today. Around 20 years ago I switched to ground-based gamma astronomy. I became fascinated by the question of how nature manages to accelerate particles to the enormous energies we see in cosmic rays, and what role these particles play in the development of our universe. I experienced very closely how astroparticle physics has developed into an independent field. Seven years ago, I became head of the DESY site in Zeuthen near Berlin. My task is to develop DESY and in particular the Zeuthen site into an international centre for astroparticle physics. The new research division is also a recognition of the work of the people in Zeuthen and an important step for the future.
What are DESY’s strengths in astroparticle research?
Astroparticle physics began in Zeuthen with neutrino astronomy around 20 years ago. It has evolved from humble beginnings, from a small stake in the Lake Baikal experiment to a major role in the km³-sized IceCube array deep in the Antarctic ice. Having entered high-energy gamma-ray astronomy only a few years ago, the Zeuthen location is now a driving force behind the next-generation gamma-ray observatory, the Cherenkov Telescope Array (CTA). The campus in Zeuthen will host the CTA Science Data Management Centre and we are participating in almost all currently operating major gamma-ray experiments to prepare for the CTA science harvest. A growing theoretical group supports all experimental activities. The combination of high-energy neutrinos and gamma rays offers unique opportunities to study processes at energies far beyond those reachable by human-made particle accelerators.
Why did DESY establish a dedicated division?
A dedicated research division underlines the importance of astroparticle physics in general and in DESY’s scientific programme in particular, and offers promising opportunities for the future. Astroparticle physics with cosmic messengers has experienced a tremendous development in recent years. The discovery of a large number of gamma-ray sources, the observation of cosmic neutrinos in 2013, the direct detection of gravitational waves in 2015, the observation of the merger of two neutron stars with more than 40 observatories worldwide triggered by its gravitational waves in August 2017, and the simultaneous observation of neutrinos and high-energy gamma radiation from the direction of a blazar the following month are just a few prominent examples. We are on the threshold of a golden age of multi-messenger astronomy, with gamma rays, neutrinos, gravitational waves and cosmic rays together promising completely new insights into the origins and evolution of our universe.
The next few years will be exciting for us. We have just completed an architectural competition, new buildings will be built and the entire campus will be redesigned in the coming years. We expect well over 350 people to work on the Zeuthen campus, and hosting the CTA data centre will make us a contact point for astroparticle physicists globally. In addition to the growth through CTA, we are expanding our scientific portfolio to include radio detection of high-energy neutrinos and increased activities in astronomical-transient-event follow-up. We are also establishing close cooperation with other partners. Together with the Weizmann Institute in Israel, the University of Potsdam and the Humboldt University in Berlin, we are currently establishing an international doctoral school for multi-messenger astronomy funded by the Helmholtz Association.
How can we realise the full potential of multi-messenger astronomy?
Our potential lies primarily in committed scientists who use their creativity and ideas to take advantage of existing opportunities. For years we have seen large numbers of young people moving into astroparticle physics. We need new, highly sensitive instruments and there is a whole series of outstanding project proposals waiting to be implemented. CTA is being built, the upgrade of the Pierre Auger Observatory is progressing and the first steps for the further upgrade of IceCube have been taken. The funding for the next generation of gravitational-wave experiments, the Einstein Telescope in Europe, is not yet secured. We are currently discussing a possible participation of DESY in gravitational-wave astronomy. Multi-messenger astronomy promises a breathtaking number of new discoveries. However, the findings will only be possible if, in addition to the instruments, the data are also made available in a form that allows scientists to jointly analyse the information from the various instruments. DESY will play an important role in all these tasks – from the construction of instruments to the training of young scientists. But we will also be involved in the development of the research-data infrastructure required for multi-messenger astronomy.
How would you describe the astroparticle physics landscape?
The community in Europe is growing, not only in terms of the number of scientists, but also in the size and variety of experiments. In many areas, European astroparticle physics is in transition from medium-sized experiments to large research infrastructures. CTA is the outstanding example of this. The large number of new scientists and the ideas for new research infrastructures show the great appeal of astroparticle physics as a young and exciting field. The proposed Einstein Telescope will cross the threshold of projects requiring investments of more than one billion euros, which will demand coordination at the European and international levels. With the Astroparticle Physics European Consortium (APPEC) we have taken a step towards improved coordination. DESY is one of the founding members of APPEC and I have been elected vice-chairman of the APPEC general assembly for the next two years. In this area, too, we can learn something from particle physics and are very pleased that CERN is an associate member of APPEC.
What implication does the update of the European strategy for particle physics have for your field?
European astroparticle physics provides a wide range of input to the European Strategy for particle physics, from concrete proposals for experiments to contributions from national committees for astroparticle physics. The contribution concerning the construction of the Einstein Telescope deserves special attention, and my personal wish is that CERN will coordinate the Einstein Telescope, as suggested in the contribution. With the LHC, CERN has again demonstrated in an outstanding way that it can successfully implement major research projects. With the first gravitational-wave events, we saw only the first flashes of a completely unknown part of our universe. The Einstein Telescope would revolutionise this new view of the universe.
Brest-Litovsk, Utrecht, Westphalia… at first sight, intergovernmental treaties belong more to the world of Bismarck and Napoleon than that of modern science. Yet, in March this year we celebrated the signing of a new treaty establishing the world’s largest radio telescope, the Square Kilometre Array (SKA). Why use a tool of 19th-century great-power politics to organise a 21st-century big-science project?
Big-science projects like SKA require multi-billion budgets and decades-long commitment. Their resources must come from many countries, and all contributors need mutual assurance that none will renege. The SKA board, of which I was formerly chair, rapidly concluded that only an intergovernmental organisation could give the necessary stability. It is a very European approach, born of our need to bring together many smaller countries. But it is flexible and resilient.
Of course there are other ways to do this. A European Research Infrastructure Consortium (ERIC) is a lighter-weight, faster way to set up an intergovernmental research organisation and is the model that we have used for the European Spallation Source (ESS) in Sweden. The ERIC is part of European Union (EU) legislation and provides many of the benefits in VAT and purchasing rules that an international convention or treaty would, without a convoluted approval process. Once the UK (one of the 13 ESS member nations) withdraws from the EU, it will need legislation to recognise the status of ERICs, just as non-EU Switzerland and Norway have done.
Research facilities can also be run by organisations without any intergovernmental authority: charities, not-for-profit companies or university consortia. This may seem quick and agile, but it is risky. For example, the large US telescope projects TMT and GMT are university-led and have been able to get started, but it seems that US federal involvement will now be essential for their success.
In fact, US participation in international organisations is often an issue because it requires senate approval. The last time this happened for a science project was the ITER fusion experiment, which today is making good progress but had a rocky start. The EU is one of ITER’s seven member entities and its involvement is facilitated via EUROfusion – one of eight European intergovernmental research organisations that are members of EIROforum. Most were established decades ago, and their stable structure has helped them invest in major new facilities such as ESO’s European Extremely Large Telescope.
So international treaty-based science organisations are great for delivering big-science projects, while also promoting understanding between the science communities of different countries. In the aftermath of the Second World War that was really important, and was a founding motivation for CERN. More recently, the SESAME light source in Jordan adopted the CERN model to bring the Middle East’s scientific communities together.
Today the world faces new political challenges, and international treaties don’t do much to address the growing gap between angry, disenfranchised voters and an educated, internationally minded “elite”. We scientists often see nationalism as the problem, but the issue is more one of populism – and by being international we merely seem remote. We are used to speaking about outreach, but we also need to think seriously about “in-reach” within our own countries and regions, to engage better with groups such as Trump voters and Brexit supporters.
There’s also the risk that too much stability can become rigidity. Organisations like SKA or ESS aim to provide room for negotiation and for substantial contributions to be made in kind. They are free of commitments such as pension schemes and, in the case of SKA, membership levels are tied to the size of a country’s astronomy community and not to GDP. Were a future, global project like a Future Circular Collider to be hosted at CERN, a purpose-built intergovernmental agreement would surely be the best way to manage it. CERN is the archetype of intergovernmental organisations in science, and offers great stability in the face of political upheavals such as Brexit. Its challenge today is to think outside the box.
The same applies to all big projects in physics today. Our future prosperity and ability to address major challenges depend on investments in large, cutting-edge research infrastructures. Intergovernmental organisations provide the framework for those investments to flourish.
Every student of physics learns that the nucleus was discovered by firing alpha particles at atoms. The results of this famous experiment by Rutherford in 1911 indicated the existence of a hard scattering centre of positive charge and, within a few years, led to his discovery of the proton (see Rutherford, transmutation and the proton). Decades later, similar experiments with electrons revealed point-like scattering centres inside the proton itself. Today we know these to be quarks, antiquarks and gluons, but the glorious complexity of the proton is often swept under the carpet. Undergraduate physicists are more often introduced to quarks as objects with flavour quantum numbers that build up mesons and baryons in bound states of twos and threes. Indeed, in the 1960s, many people regarded quarks simply as a useful book-keeping device to classify the many new “elementary” particles that had been discovered in cosmic rays and bubble-chamber experiments. Few people were aware of the inelastic-scattering experiments at SLAC with 20 GeV electrons, which were beginning to reveal a much richer picture of the proton.
The results of these experiments in the 1960s and early 1970s were remarkable. Elastic scattering by the point-like electrons revealed the spatial distribution of the proton’s charge, and cross sections had to be modified by form factors as a result. These varied strongly depending on how hard the proton was struck – a hardness called the scale of the process, Q², defined as the negative squared four-momentum transfer between the incoming and outgoing electrons. At high enough scales the proton broke up, a phenomenon quantified by x, a kinematic variable related to the inelasticity of the interaction. Both the scale and the inelasticity could be determined from the kinematics of the outgoing electron. Physicists anticipated a complicated dependence on both variables. Studies of scattering at ever higher and lower scales continue to bear fruit to this day.
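(For orientation, in the standard deep-inelastic-scattering notation: if k and k′ are the four-momenta of the incoming and outgoing electron, P that of the proton and q = k − k′ the momentum transfer, then Q² = −q² and x = Q²/(2P·q), with x = 1 corresponding to elastic scattering and x < 1 to break-up of the proton.)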
A surprise at SLAC
The big surprise from the SLAC experiments was that the cross section did not depend strongly on Q², a phenomenon called “scaling”. The only explanation for scaling was that the electrons were scattering from point-like centres within the proton. Feynman worked out the formalism to understand this by picturing the electron as hitting a point-like “parton” inside the proton. With elegant simplicity, he deduced that the partons each carried a fraction x of the proton’s longitudinal momentum.
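In this quark–parton picture, scaling is manifest: denoting by qi(x) the number density of partons of species i carrying momentum fraction x, the structure function measured in electron scattering takes the textbook form F₂(x) = Σ ei² x qi(x), summed over parton species, where ei is the parton’s electric charge – a function of x alone, with no dependence on Q².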
Gell-Mann and Zweig had proposed the existence of quarks in 1964, but at first it was by no means obvious that they were partons. The SLAC experiments established that the scattering centres had spin ½, as required by the quark model, but there were two problems. On the one hand, there appeared to be not just three but many scattering centres. On the other, Feynman’s formalism required the partons to be “free” and independent of each other, yet they could hardly be independent if they remained confined in the proton.
Painting a picture
The picture became even more interesting in the late 1970s and 1980s when scattering experiments started to use neutrinos and antineutrinos as probes. Since neutrinos and antineutrinos have a definite handedness, or helicity, such that their spin is aligned against their direction of motion for neutrinos and with it for antineutrinos, their weak interaction with quarks and antiquarks gives different angular distributions. This showed that there must be antiquarks as well as quarks within the proton. In fact, it led to a picture in which the flavour properties of the proton are governed by three valence quarks immersed in a sea of quark–antiquark pairs. But this is not all: the same experiments indicated that the total momentum carried by the valence quarks and the sea still amounts to only around half of that of the proton. This missing momentum was termed an energy crisis, and was solved by the existence of gluons with spin 1, which bind the quarks together and confine them inside the proton.
In fact, the SLAC experiments had been lucky to be making measurements in the kinematic region where scaling holds almost perfectly – where the cross section is independent of Q². The quark–parton model had to be extended, and became the field theory of quantum chromodynamics (QCD), in which the gluons are field carriers, just like photons in quantum electrodynamics (QED). Formulated in 1973, QCD has a much richer structure than QED. There are eight kinds of gluons, characterised in terms of a new quantum number called colour, which is carried by both quarks and the gluons themselves, in contrast to QED, where the field carrier is uncharged. The gluon can thus interact with itself as well as with quarks.
From the 1980s onwards, a series of experiments probed increasingly deeply into the proton. Deep-inelastic-scattering experiments using neutrino and muon beams were performed at CERN and Fermilab, before the HERA electron–proton collider at DESY made definitive measurements from 1992 to 2007 (figure 1). The aim was to test the predictions of QCD as much as to investigate the structure of the proton, the goal being not just to list the constituents of the proton, but also to understand the forces between them.
Meanwhile, the EMC experiment at CERN had unearthed a mystery concerning the origin of the proton’s spin (see “The proton spin crisis”), while elsewhere, entirely different experiments were placing increasingly tough limits on the proton’s lifetime (see “The pursuit of proton decay”).
Among many misconceptions in the description of the proton presented in undergraduate physics lectures is the origin of the proton’s spin. When we tell students about the three quarks in a proton, we usually say that its spin (equal to one half) comes from the arithmetic of three spin-½ quarks that align themselves such that two point “up” and one points “down”. However, as shown by measurements of the spin carried by quarks in deep-inelastic-scattering experiments in which both the lepton beam and the proton target are polarised, this is not the case. Rather, as first revealed in results from the European Muon Collaboration in CERN’s North Area in 1987, the quarks account for less than a third of the total proton spin. This was nicknamed the proton’s “spin crisis”, and attempts to fully resolve it remain the goal of experiments today.
Physicists had to develop cleverer experiments, for example looking at semi-inclusive measurements of fast pions and kaons in the final state, and using polarised proton–proton scattering, to determine where the missing spin comes from. It is now established that about 30% of the proton spin is in the valence quarks. Intriguingly, this is made up of +65% from up-valence and –35% from down-valence quarks. The sea seems to be unpolarised, and about 20% of the proton’s spin is in gluon polarisation, though it is not possible to measure this accurately across a wide kinematic range. Nevertheless, it seems unlikely that all of the missing spin is in gluons, and the puzzle is not yet solved.
What could the origin of the remaining ~50% of the proton’s spin be? The answer may lie in the orbital angular momentum of both the quarks and the gluons, but it is difficult to measure this directly. Orbital angular momentum is certainly connected to the transverse structure of the proton. The partons’ transverse momentum must also be considered, and there is the transverse position of the partons, and the transverse, as opposed to longitudinal, spin. Multi-dimensional measurements of transverse momentum distributions and generalised parton distributions can give access to orbital angular momentum. Such measurements are underway at Jefferson Laboratory, and are also a core part of the future Electron-Ion Collider programme.
Amanda Cooper-Sarkar, University of Oxford.
Quantum considerations
As with all quantum phenomena, what is in a proton depends on how you look at it. A more energetic probe has a smaller wavelength and can therefore reveal smaller structures, but it also injects energy into the system, and this allows the creation of new particles. The question then is whether we regard these particles as having been inside the proton in the first place. At higher scales quarks radiate gluons that then split into quark–antiquark pairs, which again radiate gluons; the gluons themselves can also radiate gluons. The valence quarks thus lose momentum, distributing it between the sea quarks and gluons – increasingly many, with smaller and smaller amounts of momentum. A proton at rest is therefore very different to a proton, say, circulating in the Large Hadron Collider (LHC) at an energy of 7 TeV.
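This successive radiation is described quantitatively by the DGLAP evolution equations of QCD. Schematically, and quoted here at leading order purely for orientation, ∂fi(x, Q²)/∂ln Q² = (αs/2π) Σj ∫ (dz/z) Pij(z) fj(x/z, Q²), with the integral running over momentum fractions z from x to 1, where fi is the density of parton species i and the splitting functions Pij(z) give the probability for parton j to emit a parton i carrying a fraction z of its momentum.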
The deep-inelastic-scattering data from muon, neutrino and electron collisions established that QCD was the correct theory of the strong interaction. Experiments found that the structure functions that describe the scattering cross sections are not completely independent of scale, but depend on it logarithmically – in exactly the way that QCD predicts. This allowed the determination of the strong coupling “constant” αs, in analogy with the fine-structure constant of QED, and it is now understood that both parameters vary with the scale of the process. In contrast with QED, the strong coupling varies very quickly, from αs ~ 1 at low energy to ~0.1 at the energy scale of the mass of the Z boson. Thus the quarks become “asymptotically free” when examined at high energy, but are strongly confined at low energy – an insight leading to the award of the 2004 Nobel Prize in Physics to Gross, Politzer and Wilczek.
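At leading order the running takes the standard textbook form αs(Q²) = 12π/[(33 − 2nf) ln(Q²/Λ²)], where nf is the number of active quark flavours and Λ ≈ 0.2 GeV marks the scale at which the coupling becomes strong; evaluating this at the Z-boson mass of about 91 GeV indeed gives a value of order 0.1.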
Once QCD had emerged as the definitive theory, the focus turned to measuring the momentum distributions of the partons, dubbed parton distribution functions (PDFs, figure 2). Several groups work on these determinations using both deep-inelastic-scattering data and related scattering processes, and presently there is agreement between theory and experiment within a few percent across a very wide range of x and Q² values. However, this is not quite good enough. Today, knowledge of PDFs is increasingly vital for discovery physics at the LHC. Predictions of all cross sections measured at the LHC – whether Standard Model or beyond – need to use input PDFs. After all, when we are colliding protons it is actually the partons inside the proton that are having hard collisions, and the rates of these collisions can only be predicted if we know the PDFs in the proton very accurately.
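This dependence is the content of the QCD factorisation theorem. Schematically, the cross section for producing a final state X in proton–proton collisions is σ(pp → X) = Σ over partons i, j of ∫ dx₁ dx₂ fi(x₁, Q²) fj(x₂, Q²) σ̂(ij → X), where the fi are the PDFs and σ̂ is the calculable parton-level cross section – so any PDF uncertainty feeds directly into every prediction.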
The dominant uncertainty on the direct production of particles predicted by physics beyond the Standard Model now comes from the limited precision of the PDFs of high-x gluons. Indirect searches for new physics are also affected: precision measurements of Standard Model parameters, such as the mass of the W boson and the weak mixing angle sin²θW, are likewise limited by the precision of the PDFs, even in the kinematic regions where they are currently best determined.
When Rutherford discovered the proton in 1919, the only other basic constituent of matter that was known was the electron. There was no way that the proton could decay without violating charge conservation. Ten years later, Hermann Weyl went further, proposing the first version of what would become a law of baryon conservation. Even after the discoveries of the positron, and of positive muons and pions – all lighter than the proton – there was little reason to question the proton’s stability. As Maurice Goldhaber famously pointed out, were the proton lifetime to be less than 10¹⁶ years we should feel it in our bones, because our bodies would be lethally radioactive. In 1954 he improved on this estimate. Arguing that the disappearance of a nucleon would leave a nucleus in an excited state that could lead to fission, he used the observed absence of spontaneous fission in ²³²Th to calculate a lifetime for bound nucleons of > 10²⁰ years, which Georgy Flerov soon extended to > 3 × 10²³ years.
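Goldhaber’s bones argument is back-of-envelope arithmetic: a 70 kg human body contains of order 4 × 10²⁸ nucleons, so a lifetime of 10¹⁶ years would mean roughly 4 × 10¹² decays per year inside us – some 10⁵ every second, each releasing up to a nucleon’s rest energy of about 1 GeV.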
Goldhaber also teamed up with Fred Reines and Clyde Cowan to test the possibility of directly observing proton decay using a 500-litre tank of liquid scintillator surrounded by 90 photomultiplier tubes (PMTs) that was designed originally to detect reactor neutrinos. They found no signal, indicating that free protons must live for > 10²¹ years and bound nucleons for > 10²² years. By 1974, in a cosmic-ray experiment based on 20 tonnes of liquid scintillator, Reines and other colleagues had pushed the proton lifetime to > 10³⁰ years.
Meanwhile, in 1966, Andrei Sakharov had set out conditions that could yield the observed particle–antiparticle asymmetry of the universe. One of these was that baryon conservation is only approximate and could have been violated during the expansion phase of the early universe. The interactions that could violate baryon conservation would allow the proton to decay, but Sakharov’s suggested proton lifetime of > 10⁵⁰ years provided little encouragement for experimenters. This all changed around 1974, when proposals for grand unified theories (GUTs) came along. GUTs not only unified the strong, weak and electromagnetic forces, but also closely linked quarks and leptons, allowing for non-conservation of baryon number. In particular, the minimal SU(5) theory of Howard Georgi and Sheldon Glashow led to predicted lifetimes for the decay p → e⁺π⁰ in the region of 10³¹±¹ years – not so far beyond the observed lower limit of around 10³⁰ years.
This provided the justification for dedicated proton-decay experiments. By 1981 seven such experiments installed deep underground were using either totally active water Cherenkov detectors or sampling calorimeters to monitor large numbers of protons. These included the Irvine–Michigan–Brookhaven (IMB) detector based on 3300 tonnes of water and 2048 5-inch PMTs, and KamiokaNDE in Japan with 1000 tonnes of water and 1000 20-inch PMTs. These experiments were able to push the lower limits on the proton lifetime to > 10³² years and so discount the viability of minimal SU(5) GUTs.
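The required detector masses follow from straightforward counting: a tonne of water contains roughly 3 × 10²⁹ protons (free and bound), so IMB’s 3300 tonnes held about 10³³ of them; even for a lifetime of 10³² years, such a detector would expect of order ten decays per year.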
However, in 1987 IMB and Kamiokande II achieved greater fame by each detecting a handful of neutrinos from the supernova SN1987a. Kamiokande II was already studying solar and atmospheric neutrinos, but it was its successor, Super-Kamiokande, that went on to make pioneering observations of atmospheric and solar neutrino oscillations. And it is Super-Kamiokande that currently has the highest lower limit for proton decay: 1.6 × 10³⁴ years for the decay to e⁺π⁰.
Today, the theoretical development of GUTs continues, with predictions in some models of proton lifetimes up to around 10³⁶ years. Future large neutrino experiments – such as DUNE, Hyper-Kamiokande and JUNO – feature proton decay among their goals, with the possibility of extending the limits on the proton lifetime to 10³⁵ years. So the study of proton stability goes on, continuing the symbiosis with neutrino research.
Chris Sutton, former CERN Courier editor.
Strange sightings at the LHC
Standard Model processes at the LHC are now able to contribute to our knowledge of the proton. As well as reducing the uncertainty on PDFs, however, the LHC data have led to a surprise: there seem to be more strange quark–antiquark pairs in the proton than we had thought (CERN Courier April 2017 p11). A recent study of the potential of the High-Luminosity LHC suggests that we can improve the present uncertainty on the gluon PDF by more than a factor of two by studying jet production, direct photon production and top quark–antiquark pair production. Measurements of the W-boson mass or the weak mixing angle will be improved by precision measurements of W and Z-boson production in previously unexplored kinematic regions, and strangeness can be further probed by measurements of these bosons in association with heavy quarks. We also look forward to possible future developments such as a Large Hadron-Electron Collider or a Future Circular Electron Hadron Collider – not least because new kinematic ranges continue to reveal more about the structure of QCD in the high-density regime.
In fact the HERA data already give hints that we may be entering a new phase of QCD at very low x, where the gluon density is very large (figure 3). Such large densities could lead to nonlinear effects in which gluons recombine. When the rate of recombination equals the rate of gluon splitting we may get gluon saturation. This state of matter has been described as a colour glass condensate (CGC) and has been further probed in heavy-ion experiments at the LHC and at RHIC at Brookhaven National Laboratory. The higher gluon densities involved in experiments with heavy nuclei enhance the impact of nonlinear gluon interactions. Interpretations of the data are consistent with the CGC but not definitive. A future electron–ion collider, such as that currently proposed in the US (CERN Courier October 2018, p31), will go further, enabling complete tomographic information about the proton and allowing us to directly connect fundamental partonic behaviour to the proton’s “bulk” properties such as its mass, charge and spin. Meanwhile, table-top spectroscopy experiments are shedding new light on a seemingly mundane yet key property of the proton: its radius (see “Solving the proton-radius puzzle”).
Together with the neutron, the proton constitutes practically all of the mass of the visible matter in the universe. A hundred years on from Rutherford’s discovery, it is clear that much remains to be learnt about the structure of this complex and ubiquitous particle.
How big is a proton? Experiments during the past decade have called well-established measurements of the proton’s radius into question – even prompting somewhat outlandish suggestions that new physics might be at play. Soon-to-be-published results promise to settle the proton-radius puzzle once and for all.
Contrary to popular depictions, the proton does not have a hard physical boundary like a snooker ball. Its radius was traditionally deduced from its charge distribution via electron-scattering experiments. Scattering from a charge distribution is different from scattering from a point-like charge: the extended charge distribution modifies the differential cross section by a form factor (the Fourier transform of the charge distribution). For the proton the form factor is well described by a dipole as a function of the scale of the interaction, corresponding to a charge distribution that decays exponentially with distance from the centre of the proton. Scattering experiments found the root mean square (RMS) radius to be about 0.88 fm.
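More precisely, the quoted radius is defined by the standard relation ⟨r²⟩ = −6 dG(Q²)/dQ², evaluated at Q² = 0, i.e. the slope of the electric form factor at zero momentum transfer. For the traditional dipole fit, G(Q²) = (1 + Q²/Λ²)⁻² with Λ² ≈ 0.71 GeV², this gives ⟨r²⟩ = 12/Λ², an RMS radius of about 0.81 fm; more refined fits to the scattering data yielded the 0.88 fm figure.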
Since the turn of the millennium, a modest increase in precision on the proton radius was made possible by comparing measurements of transitions in hydrogen with quantum electrodynamics (QED) calculations. Since atomic energy levels are slightly shifted by the overlap of the electron’s wavefunction with the extended charge distribution of the proton, precise measurements of the transition frequencies provide a handle on the proton’s radius. A combination of these measurements yielded the most recent CODATA value of 0.8751(61) fm.
The surprise came in 2010, when the CREMA collaboration at the Paul Scherrer Institute (PSI) in Switzerland achieved a 10-fold improvement in precision via the Lamb shift (the 2S–2P transition) in muonic hydrogen, the bound state of a muon orbiting a proton. As the muon is about 200 times heavier than the electron, its Bohr radius is about 200 times smaller, and the correction due to the proton’s finite size is correspondingly much more substantial. CREMA observed an RMS proton radius of 0.8418(7) fm, which was five sigma below the world average, giving rise to the so-called “proton radius puzzle”. The team confirmed the measurement in 2013, reporting a radius of 0.8409(4) fm. These observations appeared to call into question the cherished principle of lepton universality.
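The gain in sensitivity can be made quantitative: the standard leading-order finite-size shift of an atomic S level is proportional to (Zα)⁴ mr³ ⟨r²⟩, where mr is the reduced mass of the orbiting lepton. In muonic hydrogen mr is about 186 times larger than in ordinary hydrogen, so the shift is enhanced by roughly 186³ ≈ 6 × 10⁶, which is what makes the muonic Lamb shift such a sharp probe of the radius.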
More recent measurements have reinforced the proton’s slimmed-down nature. In 2016 CREMA reported a radius of 0.8356(20) fm by measuring the Lamb shift in muonic deuterium (the bound state of a muon orbiting a proton and a neutron). Most interestingly, in 2017 Axel Beyer of the Max Planck Institute of Quantum Optics in Garching and collaborators reported a similarly lithe radius of 0.8335(95) fm from observations of the 2S–4P transition in ordinary hydrogen. This low value is confirmed by soon-to-be-published measurements of the 1S–3S transition by the same group, and of the 2S–2P transition by Eric Hessels of York University, Canada, and colleagues. “We can no longer speak about a discrepancy between measurements of the proton radius in muonic and electronic spectroscopy,” says Krzysztof Pachucki of CODATA TGFC and the University of Warsaw.
But what of the discrepancy between spectroscopic and scattering experiments? The calculation of the RMS proton radius using scattering data is tricky due to the proton’s recoil, and analyses must extrapolate the form factor to a scale of Q² = 0. Model uncertainties can therefore be reduced by performing scattering experiments at increasingly low scales. Measurements may now be aligning with a lower value consistent with the latest results in electronic and muonic spectroscopy. In 2017 Miha Mihovilovic of the University of Mainz and colleagues reported an interestingly low value of 0.810(82) fm using the Mainz Microtron, and results due from the Proton Radius Experiment (pRad) at Jefferson Lab will access a similarly low scale with even smaller uncertainties. Preliminary pRad results presented in October 2018 at the 5th Joint Meeting of the APS Division of Nuclear Physics and the Physical Society of Japan in Hawaii indicate a proton radius of 0.830(20) fm. These electron-scattering results will be complemented by muon-scattering results from the COMPASS experiment at CERN, and the MUSE experiment at PSI.
For now, says Pachucki, the latest CODATA recommendations published in 2016 list the higher value obtained from electron scattering and pre-2015 hydrogen-spectroscopy experiments. If the latest experiments continue to line up with the slimmed-down radius of CREMA’s 2010 result, however, the proton radius puzzle may soon be solved, and the world average revised downwards.
In 1971, at a Baskin-Robbins ice-cream store in Pasadena, California, Murray Gell-Mann and his student Harald Fritzsch came up with the term “flavour” to describe the different types of quarks. From the three types known at the time – up, down and strange – the list of quark flavours grew to six. A similar picture evolved for the leptons: the electron and the muon were joined by the unexpected discovery of the tau lepton at SLAC in 1975 and completed with the three corresponding neutrinos. These 12 elementary fermions are grouped into three generations of increasing mass.
The three flavours of charged leptons – electron, muon and tau – are the same in many respects. This “flavour universality” is deeply ingrained in the symmetry structure of the Standard Model (SM) and applies to both the electroweak and strong forces (though the latter is irrelevant for leptons). It directly follows from the assumption that the SM gauge group, SU(3) × SU(2) × U(1), is one and the same for all three generations of fermions. The Higgs field, on the other hand, distinguishes between fermions of different flavours and endows them with different masses – sometimes strikingly so. In other words, the gauge forces, such as the electroweak force, are flavour-universal in the SM, while the exchange of a Higgs particle is not.
Today, flavour physics is a major field of activity. A quick look at the Particle Data Group (PDG) booklet, with its long lists of the decays of B mesons, D mesons, kaons and other hadrons, gives an impression of the breadth and depth of the field. Even in the condensed version of the PDG booklet, such listings run to more than 170 pages. Still, the results can be summarised succinctly: all the measured decays agree with SM predictions, with the exception of measurements that probe lepton-flavour universality in two quark-level transitions: b → cτ–ν̅τ and b → sμ+μ–.
Oddities in decays to D mesons
In the SM the b → cτ–ν̅τ process is due to a tree-level exchange of a virtual W boson (figure 1, left). The W boson, being much heavier than the energy released in the decay of the b quark, can exist only virtually. Rather than materialising as a particle, it leaves its imprint as a very short-range potential that has the property of changing one quark (a b quark) into a different one (a c quark) with the simultaneous emission of a charged lepton and an antineutrino.
Flavour universality is probed by measuring the ratio of branching fractions: RD(*) = Br(B → D(*)τ–ν̅τ)/Br(B → D(*)l–ν̅l), where l = e, μ. Two ratios can be measured, since the charm quark is either bound inside a D meson or its excited version, the D*, and the two ratios, RD and RD*, have the very welcome property that they can be precisely predicted in the SM. Importantly, since the hadronic inputs that describe the b → c transition do not depend on which lepton flavour is in the final state, the induced uncertainties mostly cancel in the ratios. Currently, the SM prediction is roughly three standard deviations away from the global average of results from the LHCb, BaBar and Belle experiments (figure 2).
A possible explanation for this discrepancy is that there is an additional contribution to the decay rate, due to the exchange of a new virtual particle. For coupling strengths that are of order unity, such that they are appreciably large yet small enough to keep our calculations reliable, the mass of such a new particle needs to be about 3 TeV to explain the reported hints for the increased b → cτ–ν̅τ rates. This is light enough that the new particle could even be produced directly at the LHC. Even better, the options for what this new particle could be are quite restricted.
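The quoted mass scale follows from a back-of-envelope effective-field-theory estimate (a rough scaling, not a substitute for the full fits): the SM tree amplitude is governed by 4GF Vcb/√2 ≈ 1.3 × 10⁻⁶ GeV⁻², and the measured excess calls for a new contribution of roughly a tenth of this, so a mediator exchanged with coupling g ≈ 1 needs g²/M² ≈ 10⁻⁷ GeV⁻², i.e. M ≈ 3 TeV.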
There are two main possibilities. One is a colour singlet that does not feel the strong force, for which candidates include a new charged Higgs boson or a new vector boson commonly denoted W′ (figure 1, middle). However, both of these options are essentially excluded by other measurements that do agree with the SM: the lifetime of the Bc meson; searches at the LHC for anomalous signals with tau leptons in the final state; decays of the weak W and Z bosons into leptons; and Bs mixing and B → Kνν̅ decays.
The second possible type of new particle is a leptoquark that couples to one quark and one lepton at each vertex (figure 1, right). Typically, the constraints from other measurements are less severe for leptoquarks than they are for new colour-singlet bosons, making them the preferred explanation for the b → cτ–ν̅τ anomaly. For instance, they contribute to Bs mixing only at the one-loop level, making the resulting effect smaller than the present uncertainties. Since leptoquarks are charged under the strong force, in the same way as quarks are, they can be copiously produced at the LHC via strong interactions. Searches for pair- or singly-produced leptoquarks at the future high-luminosity LHC and at a proposed high-energy LHC will cover most of the available parameter space of current models.
Oddities in decays to kaons
The other decay showing interesting flavour deviations (b → sμ+μ–) is probed via the ratios RK(*) = Br(B → K(*)μ+μ–)/Br(B → K(*)e+e–), which test whether the rate for the b → sμ+μ– quark-level transition equals the rate for the b → se+e– one. The SM very precisely predicts RK(*) = 1, up to small corrections due to the very different masses of the muon and the electron. Measurements from LHCb, on the other hand, are consistently below 1, with statistical significances of about 2.5 standard deviations, while less precise measurements from Belle are consistent with both LHCb and the SM (figure 3). Further support for these discrepancies comes from other observables, for which theoretical predictions are more uncertain. These include the branching ratios for decays induced by the b → sμ+μ– quark-level transition, and the distributions of the final-state particles.
In contrast to the tree-level b → cτ–ν̅τ process underlying the semileptonic B decays to D mesons, the b → sμ+μ– decay is induced via quantum corrections at the one-loop level (figure 4, left) and is therefore highly suppressed in the SM. Potential new-physics contributions, on the other hand, can enter either at tree level or at one-loop level. This means that there is quite a lot of freedom in what kind of new physics could explain the b → sμ+μ– anomaly. The possible tree-level mediators are a Z′ or leptoquarks, with masses up to about 30 TeV for couplings of order unity, or lighter if the couplings are smaller. For loop-induced models the new particles are necessarily light, with masses in the TeV range or below. This means that searches for direct production of new particles at the LHC can probe a significant range of explanations for the LHCb anomalies. However, for many of the possibilities the high-energy upgrade of the LHC or a future circular collider with much higher energy would be required for the new particles to be discovered or ruled out.
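The larger mass reach compared with the b → cτ–ν̅τ case reflects the loop suppression (again, a rough scaling argument rather than a fit): the SM coefficient for b → sμ+μ– carries an extra factor of order α/4π ≈ 6 × 10⁻⁴ relative to a tree-level process, so a tree-level mediator with couplings of order unity needs only g²/M² ≈ 10⁻⁹ GeV⁻² to produce a comparable relative shift, corresponding to M of order 30 TeV.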
Taking stock
Could the two anomalies be due to a single new lepton non-universal force? Interestingly, a leptoquark dubbed U1 – a spin-one particle that is a colour triplet, charged under hypercharge but not weak isospin – can explain both anomalies. With some effort it can be embedded in consistent theoretical constructions, albeit those with very non-trivial flavour structures. These models are based on modified versions of grand unified theories (GUTs) from the 1980s. Since GUTs unify the leptons and quarks, some of the force carriers can change quarks to leptons and vice versa, i.e. some of the force carriers are leptoquarks. The U1 leptoquark could be one such force carrier, coupling predominantly to the third generation of fermions. In all cases the U1 leptoquark is accompanied by many other particles with masses not much above the mass of U1.
While intriguing, the two sets of B-physics anomalies are by no means confirmed. None of the measurements has separately reached the five standard deviations needed to claim a discovery and, indeed, most are hovering around the 1–3 sigma mark. However, taken together, they form an interesting and consistent picture that something is potentially going on. We are in the lucky position that new measurements are expected to be finished soon, some in a few months, others in a few years.
First of all, the observables showing the discrepancy from the SM, RD(*) and RK(*), will be measured more precisely at LHCb and at Belle II, which is currently ramping up at KEK in Japan. In addition, many related measurements are planned, at Belle II and LHCb as well as at ATLAS and CMS. For instance, measuring the same transitions, but with different initial- and final-state hadrons, should give further insights into the structure of new-physics contributions. If the anomalies are confirmed, this would set a clear target for the next collider, such as the high-energy LHC or the proposed proton–proton Future Circular Collider, since the new particles cannot be arbitrarily heavy.
If this exciting scenario plays out, it would not be the first time that indirect searches foretold the existence of new physics at the next energy scale. Nuclear beta decay and other weak transitions presaged the electroweak W and Z gauge bosons; the rare kaon decay KL → μ+μ– pointed to the existence of the charm quark, including the prediction of its mass from kaon mixing; and B-meson mixing and measurements of electroweak corrections accurately predicted the top-quark mass before it was discovered. Finally, the measurement of CP violation in kaons led to the prediction of the third generation of fermions. If the present flavour anomalies stand firm, they will become another important item on this historic list, offering a view of a new energy scale to explore.
In his early days, Ernest Rutherford was the right man in the right place at the right time. After obtaining three degrees from the University of New Zealand, and with two years’ original research at the forefront of the electrical technology of the day, in 1895 he won an Exhibition of 1851 Science Scholarship, which took him to the Cavendish Laboratory at the University of Cambridge in the UK. Just after his arrival, the discoveries of X-rays and radioactivity were announced and J J Thomson discovered the electron. Rutherford was an immediate believer in objects smaller than the atom. His life’s work changed to understanding radioactivity and he named the alpha and beta rays.
In 1898 Rutherford took a chair in physics at McGill University in Canada, where he achieved several seminal results. He discovered radon, demonstrated that radioactivity was just the natural transmutation of certain elements, showed that alpha particles could be deviated in electric and magnetic fields (and hence were likely to be helium atoms minus two electrons), dated minerals and determined the age of the Earth, among other achievements.
In 1901, the McGill Physical Society called a meeting titled “The existence of bodies smaller than an atom”. Its aim was to demolish the chemists. Rutherford spoke to the motion and was opposed by a young Oxford chemist, Frederick Soddy, who was at McGill by chance. Soddy’s address “Chemical evidence for the indivisibility of the atom” attacked physicists, especially Thomson and Rutherford, who “… have been known to give expression to opinions on chemistry in general and the atomic theory in particular which call for strong protest.” Rutherford invited Soddy, who specialised in gas analysis, to join him. It was a short but fruitful collaboration in which the pair determined the first few steps in the natural transmutation of the heavy elements.
Manchester days
For some years Rutherford had wished to be closer to the centre of research, which was Europe, and in 1907 he moved to the University of Manchester. Here he began to follow up on experiments at McGill in which he had noted that a beam of alpha particles became fuzzy if passed through air or a thin slice of mica. They were scattered by an angle of about two degrees, indicating the presence of electric fields of 100 MV/cm, prompting his statement that “the atoms of matter must be the seat of very intense electrical forces”.
At Manchester he inherited an assistant, Hans Geiger, who was soon put to work making accurate measurements of the number of alpha particles scattered by a gold foil over these small angles. Geiger, who trained the senior undergraduates in radioactive techniques, told Rutherford in 1909 that one of them, Ernest Marsden, was ready for a subject of his own. Everyone knew that beta particles could be scattered off a block of metal, but no one thought that alpha particles would be. So Rutherford told Marsden to examine this. Marsden quickly found that alpha particles are indeed scattered – even if the block of metal was replaced by Geiger’s gold foils. This was entirely unexpected. It was, as Rutherford later declared, as if you fired a 15-inch naval shell at a piece of tissue paper and it came back and hit you.
One day, a couple of years later, Rutherford exclaimed to Geiger that he knew what the atom looked like: a nuclear structure with most of the mass and all of one type of charge in a tiny nucleus only a thousandth the size of an atom. This is the work for which he is most famous today, eight decades after his death (CERN Courier May 2011 p20).
Around 1913, Rutherford asked Marsden to “play marbles” with alphas and light atoms, especially hydrogen. Classical calculations showed that an alpha colliding head-on with a hydrogen nucleus would cause the hydrogen to recoil with a speed 1.6 times, and a range four times, that of the alpha particle that struck it. The recoil of the less-massive, less-charged hydrogen could be detected as lighter flashes on the scintillation screen at much greater range than the alphas could travel. Marsden indeed observed such long-range “H” particles, as he named them, produced in hydrogen gas and in thin films of materials rich in hydrogen, such as paraffin wax. He also noticed that the long-ranged H particles were sometimes produced when alpha particles travelled through air, but he did not know where they came from: water vapour in the gas, absorbed water on the apparatus or even emission from the alpha source, were suggested.
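The factor of 1.6 is simply non-relativistic elastic kinematics: for a head-on collision between an alpha particle (mass 4 units) and a hydrogen nucleus (mass 1), conservation of momentum and energy give the struck nucleus a speed vH = 2mα vα/(mα + mH) = (2 × 4/5) vα = 1.6 vα, i.e. 1.6 times that of the incoming alpha.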
Mid-1914 brought an end to the collaboration. Marsden wrote up his work before accepting a job in New Zealand. Meanwhile, Rutherford had sailed to Canada and the US to give lectures, spending just a month back at Manchester before heading to Australia for the annual meeting of the British Association for the Advancement of Science. Three days before his arrival, war was declared in Europe.
Splitting the atom
Rutherford arrived back in Manchester in January 1915, via a U-boat-laced North Atlantic. It was a changed world, with the young men off fighting in the war. On behalf of the Admiralty, Rutherford turned his mind to one of the most pressing problems of the war: how to detect submarines when submerged. His directional hydrophone (patented by Bragg and Rutherford) was to be fitted to fleet ships. It was not until 1917 that Rutherford could return to his scientific research, specifically alpha-particle scattering from light atoms. By December of that year, he reported to Bohr that “I am also trying to break up the atom by this method. – Regard this as private.”
He studied the long-range hydrogen-particle recoils in several media (hydrogen gas, solid materials with a lot of hydrogen present and gases such as CO₂ and oxygen), and was surprised to find that the number of these “recoil” particles increased when air or nitrogen was present. He deduced that the alpha particle had entered the nucleus of the nitrogen atom and a hydrogen nucleus was emitted. This marked the discovery that the hydrogen nucleus – or the proton, to give it the name coined by Rutherford in 1920 – is a constituent of larger atomic nuclei.
Marsden was again available to help with the experiments for a few months from January 1919, whilst awaiting transport back to New Zealand after the war, and that year Rutherford accepted the position of director of the Cavendish Laboratory. Having delayed publication of the 1917 results until the war ended, Rutherford produced four papers on the light-atom work in 1919. In the fourth, “An anomalous effect in nitrogen”, he wrote: “we must conclude that the nitrogen atom disintegrated … and that the hydrogen atom which is liberated formed a constituent part of the nitrogen nucleus.” He also stated: “Considering the enormous intensity of the forces brought into play, it is not so much a matter of surprise that the nitrogen atom should suffer disintegration as that the α particle itself escapes disruption into its constituents”.
In 1920 Rutherford first proposed building up atoms from stable alphas and H ions. He also proposed that a particle of mass one but zero charge – the neutron – had to exist to account for isotopes. With Wilson’s cloud chamber he had observed branched tracks of alpha particles at the end of their range. A Japanese visitor, Takeo Shimizu, built an automated Wilson cloud chamber capable of being expanded several times per second, together with two cameras to photograph the tracks at right angles. Patrick Blackett, after graduating in 1921, took over the project when Shimizu returned to Japan. After modifications, by 1924 he had some 23,000 photographs showing some 400,000 tracks. Eight were forked, confirming Rutherford’s discovery. As Blackett later wrote: “The novel result deduced from these photographs was that the α was itself captured by the nitrogen nucleus with the ejection of a hydrogen atom, so producing a new and then unknown isotope of oxygen, 17O.”
As Blackett’s work confirmed, Rutherford had split the atom, and in doing so had become the world’s first successful alchemist – a term he did not care for. Indeed, he also preferred the word “disintegration” to “transmutation”. When Rutherford and Soddy realised that radioactivity caused one element to change naturally into another, Soddy later recalled yelling: “Rutherford, this is transmutation: the thorium is disintegrating and transmuting itself into argon (sic) gas.” Rutherford replied: “For Mike’s sake, Soddy, don’t call it transmutation. They’ll have our heads off as alchemists!”
In 1908 Rutherford had been awarded the Nobel Prize in Chemistry “for his investigations into the disintegration of the elements, and the chemistry of radioactive substances”. There was never a second prize for the detection of individual alpha particles, the unearthing of the nuclear structure of the atom or the discovery of the proton. But few would doubt the immense contributions of this giant of physics.
PHYSTAT-nu 2019 was held at CERN from 22 to 25 January. Counted among the 130 participants were LHC physicists and professional statisticians as well as neutrino physicists from across the globe. The inaugural meeting took place at CERN in 2000 and PHYSTAT has gone from strength to strength since, with meetings devoted to specific topics in data analysis in particle physics. The latest PHYSTAT-nu event is the third of the series to focus on statistical issues in neutrino experiments. The workshop focused on the statistical tools used in data analyses, rather than experimental details and results.
Modern neutrino physics is geared towards understanding the nature and mixing of the three neutrinos’ mass and flavour eigenstates. This mixing can be inferred by observing “oscillations” between flavours as neutrinos travel through space. Neutrino experiments come in many different types and scales, but they tend to have one calculation in common: whether the neutrinos are created in an accelerator, a nuclear reactor or any number of astrophysical sources, the number of events expected in the detector is the product of the neutrino flux and the interaction cross section. Given the ghostly nature of the neutrino, this calculation presents subtle statistical challenges. To cancel common systematics, many facilities have two or more detectors at different distances from the neutrino source. However, as was shown for the NOvA and T2K experiments, competitors in the search for CP violation using accelerator-neutrino beams, it is difficult to correlate the neutrino yields in the near and far detectors. A full cancellation of the systematic uncertainties is complicated by the different detector acceptances, possible differences in detector technology and the different compositions of neutrino-interaction modes. In the coming years these two experiments plan to combine their data in a global analysis to increase their discovery power – lessons can be learnt from the LHC experience.
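In schematic form, this common calculation is simply a product of factors. The Python sketch below illustrates it with invented placeholder numbers; real analyses integrate flux and cross section over energy and fold in detector acceptance:

```python
# Schematic expected-event count for a neutrino detector.
# Every number here is an invented placeholder, not a value from
# any experiment; real analyses use energy-dependent integrals.

flux       = 1.0e5     # neutrinos per cm^2 per second at the detector
xsec       = 1.0e-38   # interaction cross section per target, in cm^2
n_targets  = 6.0e31    # target nucleons in the fiducial volume
efficiency = 0.5       # reconstruction and selection efficiency
exposure   = 1.0e7     # seconds of data-taking

expected = flux * xsec * n_targets * efficiency * exposure
print(f"expected events: {expected:,.0f}")
```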
The problem of modelling the interactions of neutrinos with nuclei – essentially the problem of calculating the cross section in the detector – forces researchers to face the thorny statistical challenge of producing distributions that are unadulterated by detector effects. Such “unfolding” corrects kinematic observables for detector acceptance and smearing, but can introduce large uncertainties. To counter this, strong “regularisation” is often applied, biasing the results towards the smooth spectra of Monte Carlo simulations. PHYSTAT-nu attendees agreed on the desirability of publishing unregularised results alongside unfolded measurements. “Response matrices” may also be released, allowing physicists outside an experimental collaboration to smear their own models and compare them to detector-level data, as sketched below. Another major issue in modelling neutrino–nucleus interactions is the “unknown unknowns”. As Kevin McFarland of the University of Rochester reflected in his summary talk, it is important not to estimate your uncertainty from a survey of theory models: “It’s like trying to measure the width of a valley from the variance of the position of sheep grazing on it. That has an obvious failure mode: sheep read each other’s papers.”
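To make the response-matrix idea concrete, here is a toy forward-folding sketch in Python; the matrix and spectra are invented for illustration and do not come from any experiment:

```python
# Toy "forward folding": smear a model's true-energy spectrum with a
# published response matrix and compare to detector-level data, instead
# of unfolding the data. All numbers below are invented.

import numpy as np

# response[i, j] = probability that an event generated in true bin j is
# reconstructed in bin i (column deficits from 1 represent acceptance loss)
response = np.array([[0.70, 0.15, 0.00],
                     [0.20, 0.60, 0.20],
                     [0.00, 0.15, 0.65]])

model_truth = np.array([1000.0, 800.0, 400.0])  # model prediction, true bins
reco_pred = response @ model_truth              # detector-level prediction

observed = np.array([830.0, 740.0, 390.0])      # detector-level counts
# Simple chi-square with Poisson variance approximated by the prediction
chi2 = np.sum((observed - reco_pred) ** 2 / reco_pred)
print(reco_pred, f"chi2 = {chi2:.2f}")
```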
An important step for current and future neutrino experiments could be to set up a statistics committee, as was done at the Tevatron and, more recently, at the LHC experiments. This PHYSTAT-nu workshop could be the first step towards that goal.
The next PHYSTAT workshop will be held at Stockholm University from 31 July to 2 August on the subject of statistical issues in direct-detection dark-matter experiments.
More than 300 experts convened from 18 to 22 February for the 15th Vienna Conference on Instrumentation to discuss ongoing R&D efforts and set future roadmaps for collaboration. “In 1978 we discussed wire chambers as the first electronic detectors, and now we have a large number of very different detector types with performances unimaginable at that time,” said Manfred Krammer, head of CERN’s experimental physics department, recalling the first conference of the triennial series. “In the long history of the field we have seen the importance of cross-fertilisation as developments for one specific experiment can catalyse progress on many fronts.”
Following this strong tradition, the conference covered fundamental and technological issues associated with the most advanced detector technologies, as well as the value of knowledge transfer to other domains. Over five days, participants covered topics ranging from sensor types and fast, efficient readout electronics to cooling technologies and mechanical structures.
Contributors highlighted experiments proposed in laboratories around the world, spanning gravitational-wave detectors, colliders, fixed-target experiments, dark-matter searches, and neutrino and astroparticle experiments. A number of talks covered upgrade activities for the LHC experiments ahead of LHC Run 3 and for the high-luminosity LHC. An overview of LIGO called for serious planning to ensure that future ground-based gravitational-wave detectors can be operational in the 2030s. Drawing a comparison between the observation of gravitational waves and the discovery of the Higgs boson, Christian Joram of CERN noted: “Progress in experimental physics often relies on breakthroughs in instrumentation that lead to substantial gains in measurement accuracy, efficiency and speed, or even open completely new approaches.”
Beyond innovative ideas and cross-disciplinary collaboration, the development of new detector technologies calls for careful planning of resources and timescales. The R&D programme for the current LHC upgrades was set out in 2006, and it is already time to start preparing for the third long shutdown in 2023 and the high-luminosity LHC. Meanwhile, the CLIC and Future Circular Collider studies are developing clear ideas of the future experimental challenges in tackling the next exploration frontier.
Around 50 experts from around the world met at CERN from 26 to 29 March for the second ALEGRO workshop to discuss advanced linear-collider concepts at the energy frontier.
ALEGRO, the Advanced Linear Collider Study Group, was formed as an outcome of an ICFA workshop on advanced accelerators held at CERN in 2017 (CERN Courier December 2017 p31). Its purpose is to unite the accelerator community behind a > 10 TeV electron–positron collider based on advanced and novel accelerators (ANAs), which use wakefields driven by intense laser pulses or relativistic particle bunches in plasma, dielectric or metallic structures to reach gradients as high as 1 GeV/m. The proposed Advanced Linear International Collider – ALIC for short – would be shorter than linear colliders based on more conventional acceleration technologies such as CLIC and ILC, and would reach higher energies.
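A back-of-envelope comparison shows why the gradient matters. The Python sketch below uses round-number gradients (not design values) and counts only the active acceleration length, ignoring staging gaps, drivers, injectors and final-focus systems:

```python
# Active linac length needed to reach a given beam energy at a given
# average gradient. Gradients are illustrative round numbers only.

def active_length_km(beam_energy_gev, gradient_gev_per_m):
    return beam_energy_gev / gradient_gev_per_m / 1000.0

beam_energy = 5000.0  # GeV per beam for a 10 TeV e+e- collider
for label, gradient in [("ANA at ~1 GeV/m", 1.0),
                        ("conventional RF at ~0.1 GeV/m", 0.1)]:
    print(f"{label}: {active_length_km(beam_energy, gradient):.0f} km per linac")
```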
The main research topics ALEGRO identified are the preservation of beam quality, the development of stable and efficient drivers (in particular laser systems), wall-plug-to-beam-power efficiency, operation at high repetition rates, tolerance studies, the staging of two structures and the development of suitable numerical tools to allow for the simulation of the accelerator as a whole.
The next ALEGRO workshop will be held in March 2020 in Germany.