Recently, the ALICE collaboration measured the elliptic flow of J/ψ mesons with unprecedented precision in lead–lead (Pb–Pb) collisions and, for the first time, also in proton–lead (p–Pb) collisions. While the results at low transverse momentum (pT) in Pb–Pb collisions confirm that charm quarks flow with the quark–gluon plasma (QGP), the results at high pT do not agree with model predictions. Furthermore, their similarity to the p–Pb results suggests that additional J/ψ flow-generation mechanisms are still to be identified.
The elliptic flow (v2) is the azimuthal anisotropy of the final-state particles, generated by the collective expansion of the almond-shaped interaction region of the colliding nuclei in non-central nucleus–nucleus collisions. The J/ψ meson is a bound state of charm and anti-charm quarks, which is created at early times in hard-scattering processes. Effects of the QGP on the production of J/ψ mesons are currently understood in terms of two mechanisms: suppression by dissociation due to the large surrounding colour-charge density and regeneration by recombination of de-confined charm quarks. If charm quarks thermalise in the medium, recombined states should inherit their flow.
A clear positive v2 for J/ψ mesons at forward rapidity is observed in Pb–Pb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV for different collision centralities. In semi-central collisions, the J/ψ v2 increases with pT up to 4–6 GeV/c and saturates or decreases thereafter. The J/ψ v2 measurement at mid-rapidity has a larger background and is therefore less precise, but demonstrates potential for future studies at the high-luminosity LHC.
A comparison with available theoretical model calculations shows that the measured values at low pT (below 4 GeV/c) can only be explained through a large contribution from the recombination of thermalised charm quarks. The expected v2 without this contribution (labelled “primordial” v2 in the figure) is much smaller than the measured values. However, the models clearly underestimate the measured azimuthal asymmetry at higher transverse momentum and do not reproduce the overall pT dependence, suggesting that there is another mechanism to produce J/ψ v2. The J/ψ v2 has also been measured in p–Pb collisions at energies of 5.02 and 8.16 TeV at forward (p-travelling) and backward (Pb-travelling) rapidities. Interestingly, the J/ψ v2 in the smaller p–Pb collision system is similar to that in central Pb–Pb collisions at high pT. The possibly missing mechanism could therefore be the same in both collision systems.
The Higgs boson interacts more strongly with more massive particles, so the coupling between the top quark and the Higgs boson (the top-quark Yukawa coupling) is expected to be large. The coupling can be directly probed by measuring the rate of events in which a Higgs boson is produced in association with a pair of top quarks (ttH production). Using the 13 TeV LHC data set collected in 2015 and 2016, several ATLAS analyses targeting different Higgs boson decay modes were performed. The combination of their results, released in late October, provides the strongest single-experiment evidence to date for ttH production.
The H → bb decay channel offers the largest rate of ttH events, but extracting the signal is hard because of the large background of top quarks produced in association with a pair of bottom quarks. The analysis relies on the identification of b-jets and multivariate analysis techniques to reconstruct the events and determine whether candidates are more likely to arise from ttH production or from background processes.
The probability for the Higgs boson to decay to a pair of W bosons or a pair of τ leptons is smaller, but the backgrounds to ttH searches with these decays are also smaller and easier to estimate. These decays are targeted in searches for events with a pair of leptons carrying the same charge or three or more charged leptons (including electrons, muons, or hadronically decaying τ leptons). In total, seven different final states were probed in the latest ATLAS analysis.
Higgs boson decays to a pair of photons or to a pair of Z bosons with subsequent decays to lepton pairs (giving a four-lepton final state) are also considered. These decay channels have very small rates, but provide a high signal-to-background ratio.
In the combination of these ttH analyses, an excess with a significance of 4.2 standard deviations with respect to the “no-ttH-signal” hypothesis is observed, compared to 3.8 standard deviations expected for a Standard Model signal. This constitutes the first direct evidence from ATLAS for the ttH process. A cross-section of 590 +160/−150 fb is measured, in good agreement with the Standard Model prediction of 507 +35/−50 fb. This measurement, when combined with other Higgs boson production and decay studies, will shed more light on the possible presence of physics beyond the Standard Model in the Higgs sector.
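As a rough sanity check of that agreement, one can form a naive pull by symmetrizing the asymmetric uncertainties — taking the measurement's lower error and the prediction's upper error, since the measured value lies above the prediction. This is only an illustrative back-of-the-envelope estimate, not the statistical procedure used in the ATLAS combination:

```python
import math

# Values quoted in the text (fb)
meas, meas_err_down = 590.0, 150.0   # measured ttH cross-section, lower error
pred, pred_err_up = 507.0, 35.0      # SM prediction, upper error

# Naive pull: since the measurement sits above the prediction, combine
# the two errors that point towards each other in quadrature
pull = (meas - pred) / math.hypot(meas_err_down, pred_err_up)
print(round(pull, 2))   # ~0.54 standard deviations
```

A pull of about half a standard deviation is consistent with the quoted "good agreement".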
The CMS experiment has added another piece to the Higgs boson puzzle, reporting evidence that the Higgs decays to a pair of b quarks.
In the Standard Model (SM) the Higgs field couples to fermions, giving them their masses, through a Yukawa interaction. The recent CMS observation of the H → ττ channel provides direct evidence of this interaction. While it is clear that the Higgs boson couples to up-type quarks (based on overall agreement between the gluon–gluon fusion production channel cross-section and the SM prediction), the Higgs boson decay to bottom quark–antiquark pairs provides a unique tool to directly access the down-type quark couplings.
The Higgs boson decays to a pair of b quarks 58% of the time, making it by far the most frequent decay channel. However, at the LHC the signal is overwhelmed by QCD production, which is several orders of magnitude higher. This makes the H → bb process very elusive. The most effective way to observe it is to search for associated production with an electroweak vector boson (VH, with V being a W or a Z boson). Further background reduction is achieved by requiring the Higgs boson candidates to have large transverse momentum and by exploiting the characteristic VH event kinematics.
The latest CMS analysis is based on LHC data collected last year at an energy of 13 TeV. To identify jets originating from b quarks, the collaboration used a novel combined multivariate b-tagging algorithm that exploits the presence of soft leptons together with information such as track impact parameters and secondary vertices. A signal region enriched in VH events was then selected, together with several control regions to test the accuracy of the Monte Carlo simulations, and a simultaneous binned-likelihood fit of the signal and control regions used to extract the Higgs boson signal.
An excess of events is observed compared to the expectation in the absence of a H → bb signal. The significance of the excess is 3.3σ, where the expectation from SM Higgs boson production is 2.8σ. The signal strength corresponding to this excess, relative to the SM expectation, is 1.2±0.4. When combined with the Run 1 measurement at a lower energy, the signal significance is 3.8σ with 3.8σ expected and a signal strength of 1.1.
To validate the analysis procedure, the same methodology was used to extract a signal for the VZ process, with Z → bb, which has a nearly identical final state but with a different invariant mass and a larger production cross-section. The observed excess of events for the combined WZ and ZZ processes has a significance of 5σ from the background-only event-yield expectation, and the corresponding signal strength is 1.0±0.2.
Thanks to the outstanding performance of the LHC, the data set will significantly increase by the end of Run 2, in 2018. This will allow a considerable reduction of the uncertainties, and a 5σ observation of the H → bb decay is expected.
The energy spectrum of cosmic rays continuously bombarding the Earth spans many orders of magnitude, with the highest-energy events topping 10⁸ TeV. Where these extreme particles come from, however, has remained a mystery since their discovery more than 50 years ago. Now the Pierre Auger collaboration has published results showing that the arrival direction of ultra-high-energy cosmic rays (UHECRs) is far from uniform, giving a clue to their origins.
The discovery in 1963 at the Volcano Ranch experiment of cosmic rays with energies exceeding one million times the energy of the protons in the LHC raised many questions. Not only is the charge of these hadronic particles unknown, but the acceleration mechanisms required to produce UHECRs and the environments that can host these mechanisms are still being debated. Proposed origins include sources in the galactic centre, extreme supernova events, mergers of neutron stars, and extragalactic sources such as blazars. Unlike photons or neutrinos, charged cosmic rays do not point directly back towards their origin because, despite their extreme energies, their paths are deflected by magnetic fields both inside and outside our galaxy. Since the deflection decreases as the energy goes up, however, the arrival directions of the highest-energy UHECRs might still retain information about their sources.
At the Pierre Auger Observatory, cosmic rays are detected using a vast array of detectors spread over an area of 3000 km² near the town of Malargüe in western Argentina. Like the first cosmic-ray detectors in the 1960s, the array measures the air showers induced as the cosmic rays interact with the atmosphere. The arrival times of the particles, measured with GPS receivers, are used to determine the direction from which the primary particles came within approximately one degree.
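The timing-based reconstruction can be illustrated with a minimal plane-wave fit. The station layout, the event geometry, and the neglect of shower-front curvature and timing noise are all simplifying assumptions for this sketch, not details of the actual Auger reconstruction:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def fit_plane_wave(positions, times):
    """Least-squares fit of an air-shower arrival direction from
    station hit times, assuming a plane shower front moving at c:
        c * t_i = c * t0 + x_i * u + y_i * v
    with (u, v) the horizontal components of the propagation
    direction. Returns (zenith, azimuth) in radians.
    """
    x, y = positions[:, 0], positions[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, C * times, rcond=None)
    _, u, v = coef
    w = np.sqrt(max(0.0, 1.0 - u * u - v * v))  # vertical component
    return float(np.arccos(w)), float(np.arctan2(v, u))

# Round trip on synthetic timings for a made-up 20-station layout
rng = np.random.default_rng(1)
pos = rng.uniform(-2000.0, 2000.0, size=(20, 2))   # station coordinates, m
zen_true, az_true = np.radians(35.0), np.radians(120.0)
u = np.sin(zen_true) * np.cos(az_true)
v = np.sin(zen_true) * np.sin(az_true)
times = (pos[:, 0] * u + pos[:, 1] * v) / C        # noise-free arrival times
zen, az = fit_plane_wave(pos, times)
print(np.degrees(zen), np.degrees(az))             # recovers 35 and 120
```

With ~10 ns GPS timing over kilometre baselines, a fit of this kind reaches the roughly one-degree pointing quoted above.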
The collaboration studied the arrival direction of particles with energies in the range 4–8 EeV and for particles with energies exceeding 8 EeV. In the former data set, no clear anisotropy was observed, whereas for particles with energies above 8 EeV a dipole structure was observed (see figure), indicating that more particles come from a particular part of the sky. Since the maximum of the dipole is outside the galactic plane, the measured anisotropy is consistent with an extragalactic origin. The collaboration reports that the maximum, when taking into account the deflection caused by magnetic fields, is consistent with a region of the sky known to have a large density of galaxies, supporting the view that UHECRs are produced in other galaxies. The lack of anisotropy at lower energies could be a result of the stronger deflection of these particles in the galactic magnetic field.
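A standard way to quantify such a large-scale anisotropy is a first-harmonic (Rayleigh) analysis of the arrival right ascensions. The sketch below applies it to a toy sky with an injected 6% modulation; the amplitude, phase and event counts here are illustrative choices, not Auger data:

```python
import numpy as np

def first_harmonic(ra):
    """First-harmonic (Rayleigh) analysis in right ascension.

    Returns (amplitude, phase in radians) of the first harmonic of
    the event-count distribution: an isotropic sky gives an amplitude
    consistent with zero, while a dipole gives a non-zero amplitude
    with a well-defined phase.
    """
    n = len(ra)
    a = 2.0 / n * np.sum(np.cos(ra))
    b = 2.0 / n * np.sum(np.sin(ra))
    return float(np.hypot(a, b)), float(np.arctan2(b, a))

# Toy sky: 30 000 events drawn with a 6% dipole modulation at RA = 100 deg
rng = np.random.default_rng(0)
phase_true = np.radians(100.0)
cand = rng.uniform(0.0, 2.0 * np.pi, 300_000)
keep = rng.uniform(0.0, 1.0, cand.size) < 0.5 * (1.0 + 0.06 * np.cos(cand - phase_true))
ra = cand[keep][:30_000]
amp, phase = first_harmonic(ra)
print(amp, np.degrees(phase))   # amplitude near 0.06, phase near 100 deg
```

With 30,000 events the statistical uncertainty on the recovered amplitude is about √(2/N) ≈ 0.008, so a percent-level dipole is comfortably detectable.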
The presented dipole measurement is based on a total of 30,000 cosmic rays measured by the Pierre Auger Observatory, which is currently being upgraded. Although the results indicate an extragalactic origin, the particular source responsible for accelerating these particles remains unknown. The upgraded observatory will enable more data to be acquired and allow a more detailed investigation of the currently studied energy ranges. It will also open the possibility to explore even higher energies where the magnetic-field deflections become even smaller, making it possible to study the origin of UHECRs, their acceleration mechanism and the magnetic fields that deflect them.
On 14 September 2015, the world changed for those of us who had spent years preparing for the day when we would detect gravitational waves. Our overarching goal was to directly detect gravitational radiation, finally confirming a prediction made by Albert Einstein in 1916. A year after he had published his theory of general relativity, Einstein predicted the existence of gravitational waves in analogy to electromagnetic waves (i.e. photons) that propagate through space from accelerating electric charges. Gravitational waves are produced by astrophysical accelerations of massive objects, but travel through space as oscillations of space–time itself.
It took 40 years before the theoretical community agreed that gravitational waves are real and an integral part of general relativity. At that point, proving they exist became an experimental problem, and experiments using large instrumented bars of aluminium were mounted to detect a tiny change in shape from the passage of a gravitational wave. Following a vigorous worldwide R&D programme, a potentially more sensitive technique – suspended-mass interferometry – superseded resonant-bar detectors. There was limited theoretical guidance regarding what sensitivity would be required to achieve detections from known astrophysical sources. But various estimates indicated that a strain sensitivity ΔL/L of approximately 10⁻²¹ caused by the passage of a gravitational wave would be needed to detect known sources such as binary compact objects (binary black-hole mergers, binary neutron-star systems or black-hole–neutron-star systems). That is roughly equivalent to measuring the Earth–Sun separation to a precision of the diameter of an atom.
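The scale of that strain can be checked with two lines of arithmetic, applying ΔL/L ≈ 10⁻²¹ both to the Earth–Sun distance and to a 4 km LIGO arm (rounded constants):

```python
# Back-of-the-envelope scale of a 1e-21 strain
AU = 1.496e11         # Earth–Sun distance, m
ARM = 4.0e3           # LIGO arm length, m
strain = 1e-21        # target strain sensitivity dL/L

dL_au = strain * AU    # ~1.5e-10 m: roughly the diameter of an atom
dL_arm = strain * ARM  # ~4e-18 m: far smaller than a proton
print(dL_au, dL_arm)
```

Over the actual 4 km arms, the displacement to be resolved is thousands of times smaller than a proton, which is why the interferometric technique and its noise suppression are so demanding.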
The US National Science Foundation approved the construction of the Laser Interferometer Gravitational-Wave Observatory (LIGO) in 1994 at two locations: Hanford in Washington state and Livingston in Louisiana, 3000 km away. At that time, there was a network of cryogenic resonant-bar detectors spread around the world, including one at CERN, but suspended-mass interferometers have the advantage of broadband frequency acceptance (basically the audio band, 10–10,000 Hz) and a factor-1000 longer arms, making it feasible to measure a smaller ΔL/L. Earth-based detectors are sensitive to the most violent events in the universe, such as the merger of compact objects, supernovae and gamma-ray bursts. The detailed interferometric concept and innovations had already been demonstrated during the 1980s and 1990s in a 30 m prototype in Garching, Germany, and a 40 m prototype at Caltech in the US. Nevertheless, these prototype interferometers were at least four orders of magnitude away from the target sensitivity.
Strategic planning
We built a flexible technical infrastructure for LIGO such that it could accommodate a future major upgrade (Advanced LIGO) without rebuilding too much infrastructure. Initial LIGO had mostly used demonstrated technologies to assure technical success, despite the large extrapolation from the prototype interferometers. After completing Initial LIGO construction in about 2000, we undertook an ambitious R&D programme for Advanced LIGO. Over a period of about 10 years, we performed six observational runs with Initial LIGO, each time searching for gravitational waves with improved sensitivity. Between each run, we made improvements, ran again, and eventually reached our Initial LIGO design sensitivity. But, unfortunately, we failed to detect gravitational waves.
We then undertook a major upgrade to Advanced LIGO, which had the goal of improving the sensitivity over Initial LIGO by at least a factor of 10 over the entire frequency range. To accomplish this, we developed a more powerful Nd:YAG laser system to reduce shot noise at high frequencies, a multiple suspension system and larger test masses to reduce thermal noise at intermediate frequencies, and introduced active seismic isolation, which reduced seismic noise at frequencies of around 40 Hz by a factor of 100 (CERN Courier January/February 2017 p34). This was the key to our discovery, two years ago, of our first 30 solar-mass binary black-hole mergers, whose signals are concentrated at low frequencies. The increased sensitivity to such events expanded the volume of the universe searched by a factor of up to 10⁶, enabling a binary black-hole-merger detection coincidence within 6 ms between the Livingston and Hanford sites.
We recorded the last 0.2 seconds of this astrophysical collision – the final inspiral, merger and “ring-down” phases – constituting the first direct observation of gravitational waves. The waveform was accurately matched by numerical-relativity calculations with a signal-to-noise ratio of 24:1 and a statistical significance easily exceeding 5σ. Beyond confirming Einstein’s prediction, this event represented the first direct observation of black holes, and established that stellar black holes exist in binary systems and that they merge within the lifetime of the universe (CERN Courier January/February 2017 p16). Surprisingly, the two black holes were each about 30 times the mass of the Sun – much heavier than expectations from astrophysics.
Run 2 surprises
Similar to Initial LIGO, we plan to reach Advanced LIGO design sensitivity in steps. After completion of the four-month-long first data run (called O1) in January 2016, we improved the range of the Livingston interferometer for binary neutron-star mergers from 60 Mpc to 100 Mpc, but fell somewhat short at Hanford due to technical issues, which we decided to fix after LIGO’s second observational run (O2). We have now reported a total of four black-hole-merger events and are beginning to determine characteristics such as mass distributions and spin alignments that will help distinguish between the different possibilities for the origin of such heavy black holes. The leading ideas are that they originate in low-metallicity parts of the universe, were produced in dense clusters, or are primordial. They might even constitute some of the dark matter.
Advanced LIGO’s O2 run ended in August this year. Although it seemed almost impossible that it could be as exciting as O1, several more black-hole binary mergers have been reported, including one after the Virgo interferometer in Italy joined O2 in August and dramatically improved our ability to locate the direction of the source. In addition, the orientation of Virgo relative to the two LIGO interferometers enabled the first information on the polarisation of the gravitational waves. Together with other measurements, this allowed us to constrain polarisation states beyond the two tensor modes of general relativity and showed that the LIGO–Virgo event is consistent with the predicted tensor-polarisation picture.
Then, on 17 August, we really hit the jackpot: our interferometers detected a neutron-star binary merger for the first time. We observed a coincidence signal in both LIGO and Virgo that had strikingly different properties from the black-hole binary mergers we had spotted earlier. Like those, this event entered our detector at low frequencies and propagated to higher frequencies, but lasted much longer (around 100 s) and reached much higher frequencies. This is because the masses in the binary system were much lower and, in fact, are consistent with being neutron stars. A neutron star results from the collapse of a star into a compact object of between 1.1 and 1.6 solar masses. We have identified our event as the merger of two neutron stars, each about the size of Geneva, but having several hundred thousand times the mass of the Earth.
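The statement that lower masses give a longer chirp follows from the leading-order (quadrupole) inspiral formula, in which the frequency sweep depends only on the "chirp mass" of the binary. A sketch, using an assumed chirp mass typical of a neutron-star binary (the numbers are illustrative, not the published GW170817 values):

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg

def chirp_mass(f1, f2, dt):
    """Chirp mass Mc from a leading-order inspiral frequency sweep.

    Integrating df/dt = (96/5) pi^(8/3) (G*Mc/c^3)^(5/3) f^(11/3)
    gives f1^(-8/3) - f2^(-8/3) = (8/3) k dt, from which Mc follows.
    """
    k = 3.0 * (f1**(-8 / 3) - f2**(-8 / 3)) / (8.0 * dt)
    return (C**3 / G) * (5.0 * k / 96.0)**0.6 / np.pi**1.6

# Round trip: assume Mc = 1.2 solar masses (typical of a neutron-star
# binary), compute how long the sweep from 30 Hz to 300 Hz takes,
# then recover Mc from that sweep.
mc_true = 1.2 * MSUN
k_true = (96 / 5) * np.pi**(8 / 3) * (G * mc_true / C**3)**(5 / 3)
dt = 3.0 * (30.0**(-8 / 3) - 300.0**(-8 / 3)) / (8.0 * k_true)
mc_rec = chirp_mass(30.0, 300.0, dt)
print(dt, mc_rec / MSUN)   # sweep lasts about a minute; Mc is recovered
```

Since the sweep time scales as Mc^(−5/3), a ~30 solar-mass black-hole binary crosses the same band in a fraction of a second, whereas the neutron-star system lingers for on the order of a minute — matching the much longer signal described above.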
As we accumulate more events and improve our ability to record their waveforms, we look forward to studying nuclear physics under these extreme conditions. This latest event was the first observed gravitational-wave transient phenomenon also to have electromagnetic counterparts, representing multi-messenger astronomy. Combining the LIGO and Virgo signals, the source of the event was narrowed down to a region of the sky of about 28 square degrees, and it was soon recognised that the Fermi satellite had detected a gamma-ray burst shortly afterwards in the same region. A large and varied number of astronomical observations followed. The combined set of observations has resulted in an impressive array of new science and papers on gamma-ray bursts, kilonovae, gravitational-wave measurements of the Hubble constant, and more. The result even supports the idea that binary neutron-star collisions are responsible for the very heavy elements, such as platinum and gold.
Going deeper
Much has happened since our first detection, and this bodes well for the future of this new field. Both LIGO and Virgo entered a 15-month shutdown at the end of August to further improve noise levels and raise their laser power. At present, Advanced LIGO is about a factor of two below its design goal (corresponding to a factor of eight in event rates). We anticipate reaching design sensitivity by about 2020, after which the KAGRA interferometer in Japan will join us. A third LIGO interferometer (LIGO-India) is also scheduled for operation in around 2025. These observatories will constitute a network offering good global coverage and will accumulate a large sample of binary merger events, achieve improved pointing accuracy for multi-messenger astronomy, and hopefully will observe other sources of gravitational waves. This will not be the end of the story. Beyond the funded programme, we are developing technologies to improve our instruments beyond Advanced LIGO, including improved optical coatings and cryogenic test masses.
In the longer range, concepts and designs already exist for next-generation interferometers, with typically 10 times better sensitivity than will be achieved in Advanced LIGO and Virgo (see panel on previous page). In Europe, a mature concept called the Einstein Telescope is an underground interferometer facility in a triangular configuration, and in the US a very long (approximately 40 km) LIGO-like interferometer is under study. The science case for such next-generation devices is being developed through the Gravitational Wave International Committee (GWIC), which is the gravitational-wave field’s equivalent to the International Committee for Future Accelerators (ICFA) in particle physics. Although the science case appears very strong and technical solutions seem feasible, these are still early days and many questions must be resolved before a new generation of detectors is proposed.
To fully exploit the new field of gravitational-wave science, we must go beyond ground-based detectors and into the pristine seismic environment of space, where different gravitational-wave sources will become accessible. As described earlier, the lowest frequencies accessible by Earth-based observatories are about 10 Hz. The Laser Interferometer Space Antenna (LISA), a European Space Agency project scheduled for launch in the early 2030s, was approved earlier this year and will cover frequencies of around 10⁻¹–10⁻⁴ Hz. LISA will consist of three satellites separated by 2.5 × 10⁶ km in a triangular configuration in a heliocentric orbit, with light travelling continually along each arm to monitor the satellite separations for deviations caused by a passing gravitational wave. A test mission, LISA Pathfinder, was recently flown and demonstrated the key performance requirements for LISA in space (CERN Courier November 2017 p37).
Meanwhile, pulsar-timing arrays are being implemented to monitor signals from millisecond pulsars, with the goal of detecting low-frequency gravitational waves by studying correlations between pulsar arrival times. The sensitivity range of this technique is 10⁻⁶–10⁻⁹ Hz, where gravitational waves from massive black-hole binaries in the centres of merging galaxies with periods of months to years could be studied.
An ultimate goal is to study the Big Bang itself. Gravitational waves are not absorbed as they propagate and could potentially probe back to the very earliest times, whereas photons only take us back to about 300,000 years after the Big Bang. However, we do not yet have detectors sensitive enough to detect early-universe signals. The imprint of primordial gravitational waves on the cosmic microwave background has also been pursued by the BICEP2 experiment, but background issues so far mask a possible signal.
Although gravitational-wave science is clearly in its infancy, we have already learnt an enormous amount and numerous exciting opportunities lie ahead. These vary from testing general relativity in the strong-field limit to carrying out multi-messenger gravitational-wave astronomy over a wide range of frequencies – as demonstrated by the most recent and stunning observation of a neutron-star merger. Since Galileo first looked into a telescope and saw the moons of Jupiter, we have learnt a huge amount about the universe through modern-day electromagnetic astronomy. Now, we are beginning to look at the universe with a new probe and it does not seem to be much of a stretch to anticipate a rich new era of gravitational-wave science.
CERN LIGO–Virgo meeting weighs up 3G gravitational-wave detectors
Similar to particle physicists, gravitational-wave scientists are contemplating major upgrades to present facilities and developing concepts for next-generation observatories. Present-generation (2G) gravitational-wave detectors – LIGO in Hanford, Livingston and India, Virgo in Italy, GEO600 in Germany and KAGRA in Japan – are in different stages of development and have different capabilities (see main text), but all are making technical improvements to better exploit the science potential of gravitational waves over the coming years. As the network develops, more accurate location information will enable the long-held dream of studying the same astrophysical event with gravitational waves together with their electromagnetic and neutrino counterpart signals.
The case for making future, more sensitive next-generation gravitational-wave detectors is becoming very strong, and technological R&D and design efforts for 3G gravitational-wave detectors may have interesting overlaps with both CERN capabilities and future directions. The 3G concepts have many challenging new features, including: making longer arms; going underground; incorporating squeezed quantum states; developing lower thermal-noise coatings; developing low-noise cryogenics; implementing Newtonian noise cancellation; incorporating adaptive controls; new computing capabilities and strategies; and new data-analysis methods.
In late August, coinciding with the end of the second Advanced LIGO observational run, CERN hosted a LIGO–Virgo collaboration meeting. On the final day, a joint meeting between LIGO–Virgo and CERN explored possible synergies between the two fields. It provided strong motivation for next-generation facilities in both particle and gravitational physics and revealed intriguing overlaps between them. On a practical level, the event identified issues facing both communities, such as geology and survey, vacuum and cryogenics, control systems, computing and governance.
The time for R&D, construction and commissioning is expected to be around a decade, with several problems that verge on the intractable. It is planned to use cryogenics to bring the mirrors to a temperature of a few kelvin. The mirrors themselves are coated using ion-beam deposition to obtain a controlled reflectivity that must be uniform over areas 1 m in diameter. These mirrors operate in ultra-high vacuum, and residual gas-density fluctuations must be minimal along vacuum cavities of several tens of kilometres, which will be the approximate footprint of the 3G scientific infrastructure.
Data storage and analysis is another challenge for both gravitational and particle physicists. Unlike the large experiments at the LHC, which count hits or measure energy deposition in millions of channels at the detector level, interferometers continuously sample signals from hundreds of channels, generating a large amount of data consisting of waveforms. Data storage and analysis place major demands on the computing infrastructure, and the analysis of the first gravitational-wave events already called on Grid computing infrastructure.
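This waveform-oriented analysis differs from counting hits: the basic operation is correlating a continuous strain channel against template waveforms. The toy matched filter below, with white noise and an arbitrary chirp-like template, sketches the idea; it is far simpler than the pipelines actually used by LIGO–Virgo:

```python
import numpy as np

def matched_filter_snr(data, template):
    """Sliding-correlation matched filter for a known waveform in
    white noise. Returns (peak SNR, sample offset of the peak)."""
    tpl = template / np.sqrt(np.sum(template**2))  # unit-norm template
    corr = np.correlate(data, tpl, mode="valid")
    sigma = np.std(data)                           # white-noise level estimate
    snr = np.abs(corr) / sigma
    i = int(np.argmax(snr))
    return float(snr[i]), i

# Toy data: white noise with a weak chirp-like burst hidden at a
# known offset (amplitudes and shapes are arbitrary illustrations)
rng = np.random.default_rng(42)
n, m = 4096, 256
t = np.arange(m) / m
template = np.sin(2 * np.pi * (10 * t + 40 * t**2))  # frequency rises along the burst
unit = template / np.sqrt(np.sum(template**2))
data = rng.normal(0.0, 1.0, n)
data[1000:1000 + m] += 10.0 * unit                   # injected signal, SNR ~ 10
snr, offset = matched_filter_snr(data, template)
print(snr, offset)
```

Real searches correlate against large banks of such templates in the frequency domain, weighted by the measured noise spectrum, which is what drives the demand for computing infrastructure.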
Interferometers have to be kept at an accurately controlled working point, with the mirrors used for gravitational-wave detection positioned and oriented by a feedback control system, without introducing additional noise. The sensors and actuators differ from those in particle accelerators, but the control techniques are similar.
Comparisons of the science capabilities, costs and technical feasibility for the next generation of gravitational-wave observatories are under active discussion, as is the question of how many 3G detectors will be needed worldwide and how similar or different they need be. Finally, there were discussions of how to form and structure a worldwide collaboration for the 3G detectors and how to manage such an ambitious project – similar to the challenge of building the next big particle-physics project after the LHC.
•Barry Barish, the author of this feature, shared the 2017 Nobel Prize in Physics with Kip Thorne and Rainer Weiss for the discovery of gravitational waves (CERN Courier November 2017 p37).
Since the discovery of the positron in 1932 and the antiproton in 1955, physicists have striven to compare the properties of leptonic and baryonic matter and antimatter. A major advance in the story took place in 1995 when the first antihydrogen atoms were observed at CERN’s LEAR facility. Then, in 2002, the ATHENA and ATRAP collaborations produced cold (trappable) antihydrogen at CERN’s Antiproton Decelerator (AD), paving the way to the first measurements of antihydrogen’s atomic transitions. An intense research programme at the AD has followed to compare the atomic states of antimatter with the best-known atomic transitions in matter.
The physical properties of antimatter particles are tightly constrained within the Standard Model of particle physics (SM). For all local Lorentz-invariant quantum-field theories of point-like particles like the SM, the combination of the discrete symmetries charge-conjugation, parity and time-reversal (CPT) is conserved. An implication of the CPT theorem is that the properties of matter and antimatter are equal in absolute value. In this respect the lack of observation of primordial antimatter in the universe is tantalising, hinting that the universe has a preference for matter over antimatter despite their perfect symmetry on the microscopic scale as imposed by the SM. Although violations of CP symmetry, from which an imbalance in matter and antimatter can arise, have been observed in several systems, the effect is many orders of magnitude too small to account for the observed cosmological mismatch.
In the quest for a quantitative explanation of the baryon asymmetry in the universe, one could question the validity of our formulation of the laws of physics in terms of quantum-field theory. This is additionally motivated by the notable absence of the gravitational force in the SM, and would suggest that CPT symmetry (or Lorentz invariance) need not be conserved. A framework called the Standard-Model Extension (SME), an effective field theory that contains the SM and general relativity but also possible CPT- and Lorentz-violating terms, allows researchers to interpret the results of experiments designed to search for such effects.
Any measurement with antihydrogen atoms constitutes a model-independent test of CPT invariance. Given the precision with which they have been measured in hydrogen, two atomic transitions in antihydrogen are of particular interest: the 1S–2S transition and the ground-state hyperfine splitting (which corresponds to the 21 cm microwave-emission line between parallel and antiparallel antiproton and positron spins). These were determined over the past few decades in hydrogen with an absolute (relative) precision of 10 Hz (4 × 10⁻¹⁵) and 2 mHz (1.4 × 10⁻¹²), respectively. Reaching similar precision in antihydrogen, hydrogen’s CPT conjugate, would provide one of the most sensitive CPT tests in a hitherto unprobed atomic domain. But this is a daunting challenge.
Status and prospects
Measurements of the hyperfine splitting of hydrogen reached their apogee in the 1970s. It is only recently that interest in such measurements has been revived, motivated by the possibility to further develop methods that can be applied to antihydrogen. Hydrogen’s hyperfine splitting was originally measured using a maser to interrogate atoms held in a Teflon-coated storage bulb, but this technique is not transferable to antihydrogen because unavoidable interactions between the antiatoms and the walls would lead to annihilations.
A precision of a few Hz can, however, be envisioned using the “beam-resonance” method of Rabi. This technique involves a polarised beam, microwave fields to drive spin flips, magnetic-field gradients to select a spin state, and a detector to measure the flux of atoms as a function of the microwave frequency. While less precise than the maser technique, the in-beam method can be directly applied to antihydrogen with a foreseen initial precision of a few kHz (10–6 relative precision). The leading order of the hyperfine splitting can be calculated from the known properties of the antiproton and positron, but a 10–6 level measurement would be sensitive to the antiproton magnetic and electric form factors that are so far unknown.
Earlier this year, the ALPHA experiment at CERN’s AD measured the hyperfine splitting of trapped antihydrogen. Following a long campaign that saw ALPHA determine antihydrogen’s 1S–2S transition in 2016 (CERN Courier January/February 2017 p8), the collaboration achieved a precision of 4 × 10–4 (0.5 MHz) on the hyperfine measurement. Ultimately the precision of in-trap measurements will be limited by the presence of strong magnetic-field gradients, however. The in-beam technique, by contrast, probes the hyperfine transition far away from the strong inhomogeneous magnetic trapping fields. In the 1950s this technique enabled hydrogen’s hyperfine structure to be determined to a precision of 50 Hz. The recent measurement of this transition by the ASACUSA experiment using a similar technique has now improved on this precision by more than an order of magnitude.
The ASACUSA collaboration was formed in 1997 to investigate antiprotonic atoms and collisions involving slow antiprotons. Its antihydrogen programme started in 2005 at the AD and in recent years the collaboration has focused on two topics. One is laser spectroscopy of antiprotonic helium, which allows the determination of the antiproton mass (CERN Courier September 2011 p7) and the antiproton magnetic moment. The latter value was recently measured to higher precision in Penning traps first by the ATRAP experiment (CERN Courier May 2013 p6) and, as announced in October, further improved by more than three orders of magnitude by the BASE experiment, both also located at the AD.
The second focus of ASACUSA, led by the CUSP group, is to measure the hyperfine structure of antihydrogen in a polarised beam. ASACUSA employs a multi-trap set-up to produce an antihydrogen beam (CERN Courier March 2014 p5) for Rabi-type spectroscopy on the hyperfine transition. The spectroscopy apparatus was designed to match the expected properties of an antihydrogen beam and called for a test of the apparatus with a hydrogen beam of similar characteristics.
Hydrogen first
The spectroscopy technique relies on the dependence of the atomic energy levels on a magnetic field, known as the Zeeman effect (figure 1). In the presence of a magnetic field, the degeneracy of the hyperfine triplet states is lifted. Two of the states, called low-field seekers (lfs), rise in energy with increasing magnetic field, while the third triplet state and the singlet state fall in energy with increasing field (these are called high-field seekers, hfs). These distinguishing properties are used to first polarise the beam by means of a magnetic-field gradient (figure 2), which exerts opposite forces on lfs and hfs. As a result, only lfs arrive at the interaction region, where a microwave cavity provides an oscillating magnetic field. This field can induce conversions from lfs to hfs if tuned to the right frequency. Atoms in hfs states are subsequently removed from the beam by a second section of magnetic-field gradients, leading to a reduced count rate at the detector when the transition is induced.
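The level behaviour described here follows from the Breit–Rabi formula for a J = 1/2, I = 1/2 system. The sketch below is illustrative only: it neglects the small proton magnetic moment and uses standard values for the hyperfine frequency, Bohr magneton and electron g-factor, classifying the four ground-state sublevels as low- or high-field seekers from the sign of their energy change with field.

```python
import math

NU_HFS = 1_420_405_751.768   # hydrogen ground-state hyperfine splitting (Hz)
MU_B_HZ_PER_T = 13.996e9     # Bohr magneton divided by h (Hz per tesla)
G_E = 2.002319               # electron g-factor (magnitude)

def breit_rabi_levels(B):
    """Hyperfine Zeeman energies (in Hz, i.e. E/h) of the hydrogen 1S state,
    labelled (F, mF), neglecting the small proton magnetic moment."""
    x = G_E * MU_B_HZ_PER_T * B / NU_HFS   # dimensionless field parameter
    return {
        (1, +1): -NU_HFS / 4 + NU_HFS / 2 * (1 + x),            # linear rise
        (1, -1): -NU_HFS / 4 + NU_HFS / 2 * (1 - x),            # linear fall
        (1, 0):  -NU_HFS / 4 + NU_HFS / 2 * math.sqrt(1 + x*x), # quadratic rise
        (0, 0):  -NU_HFS / 4 - NU_HFS / 2 * math.sqrt(1 + x*x), # singlet, falls
    }

# Low-field seekers rise in energy with B; high-field seekers fall.
lo, hi = breit_rabi_levels(1e-4), breit_rabi_levels(2e-4)
for state in lo:
    slope = hi[state] - lo[state]
    print(state, "lfs" if slope > 0 else "hfs")
```

As expected, the (1, +1) and (1, 0) states come out as low-field seekers, while (1, −1) and the singlet (0, 0) are high-field seekers; the σ1 transition (1, 0) ↔ (0, 0) reduces to the field-free splitting at B = 0.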
In the apparatus design chosen, large geometrical openings compensate for the low antihydrogen flux, and a superconducting magnet is used to generate sufficiently selective magnetic-field gradients over such a large area. The oscillating microwave field needed to drive the hyperfine transition must be homogeneous over the large geometrical opening, which dictated the design of the cavity and leads to a particular resonance spectrum (figure 3). The functionality of the spectroscopy apparatus and other technical developments were tested by coupling a cold, polarised hydrogen source and a quadrupole mass spectrometer (as hydrogen detector) to the spectroscopy apparatus envisioned for the antihydrogen experiment (figure 2).
The measurement led to the determination of hydrogen’s so-called σ1 hyperfine transition (figure 1), the frequency of which was measured as a function of an externally applied magnetic field. From a set of frequency determinations, the zero-field value could be extracted, and such measurements were repeated under 10 distinct conditions to investigate systematic effects. In total, more than 500 resonances (an example is shown in figure 3) were acquired to extract the zero-field hydrogen ground-state hyperfine splitting. Numerical methods developed to assist the analysis of the transition line shape contributed to an improvement of more than an order of magnitude over previous in-beam results, leading to a precision of 3.8 Hz and a value consistent with the more precise maser result.
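The zero-field extrapolation can be illustrated with a toy fit: at low field the σ1 frequency shifts quadratically with B, so a linear least-squares fit in B² recovers the field-free splitting. All numbers below are synthetic stand-ins (the quadratic coefficient is an approximate Breit–Rabi value and the scatter is a fixed deterministic pattern), not ASACUSA data.

```python
NU0 = 1_420_405_751.768   # Hz, assumed "true" value used to fabricate data
K = 2.76e11               # Hz/T^2, approximate quadratic Zeeman coefficient

# Fabricated measurements: nu(B) = NU0 + K*B^2 plus a few Hz of scatter
fields = [3e-5 + 1e-5 * i for i in range(8)]              # applied B in tesla
scatter = [3.0 if i % 2 == 0 else -3.0 for i in range(8)]  # stand-in noise
freqs = [NU0 + K * b * b + s for b, s in zip(fields, scatter)]

def fit_zero_field(bs, nus):
    """Least-squares fit of nu = nu0 + k*B^2; returns the intercept nu0."""
    xs = [b * b for b in bs]
    n = len(xs)
    sx, sy = sum(xs), sum(nus)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, nus))
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - k * sx) / n

nu0_fit = fit_zero_field(fields, freqs)
print(f"extrapolated zero-field splitting: {nu0_fit:.1f} Hz")
```

With this pattern of points, the extrapolated intercept lands within a few Hz of the input value, which is the spirit of the Hz-level determination described in the text.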
A measurement of hydrogen’s hyperfine splitting at the Hz level implies an absolute precision of 10–15 eV. Given the scarcity of antihydrogen and the as-yet-unprobed properties (namely velocity and atomic states) of the antihydrogen beam, a measurement at this level of precision on antihydrogen is not possible in the short term. However, the analysis of ASACUSA data collected with hydrogen enabled the collaboration to assess the number of antiatoms necessary to reach a 10–6 sensitivity, assuming plausible beam properties. The conclusion is that a measurement at the peV level (kHz precision) should be possible if 8000 antiatoms can be detected after the spectrometer. That would require at least an order-of-magnitude increase in the antihydrogen flux.
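The unit conversions quoted here follow directly from E = hν; a quick check, using only standard constants (nothing experiment-specific):

```python
H_EV_S = 4.135667696e-15    # Planck constant in eV*s (CODATA)
NU_HFS = 1.420405751768e9   # hydrogen hyperfine frequency in Hz

splitting_ev = H_EV_S * NU_HFS   # full splitting: ~5.9e-6 eV (a few ueV)
hz_level_ev = H_EV_S * 1.0       # Hz-level precision: ~4e-15 eV, as quoted
khz_level_ev = H_EV_S * 1e3      # kHz-level precision: ~4e-12 eV, i.e. peV
relative_1khz = 1e3 / NU_HFS     # ~7e-7: the quoted 1e-6 sensitivity level
```

This confirms the correspondence used in the article: Hz-level precision on a 1.42 GHz line is a 10–15 eV measurement, while kHz precision corresponds to the peV level and a relative sensitivity of about 10–6.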
The Rabi-type spectroscopy approach chosen by ASACUSA has the capability to test individual transitions in hydrogen and antihydrogen under well-controlled external conditions and, if successful, will immediately result in a precision of 10–6 or better. At this level, the hyperfine transitions would provide yet unknown information on the internal structure of the antiproton. However, much work remains to be done for the ASACUSA experiment to gather the needed number of antihydrogen atoms in a reasonable time.
Until then, more measurements can be performed with the hydrogen set-up. The apparatus has recently been modified to allow for the simultaneous measurement of σ1 and π1 transitions (figure 1). Within the SME, the latter transition could reveal CPT and Lorentz violations, while the σ1 transition is insensitive to these effects and would serve as a monitor of potential systematic errors. This would give access to a number of so-far-unconstrained SME parameters that can be probed by hydrogen alone. While the antihydrogen experiment focuses on increasing the cold, ground-state antihydrogen flux, the hydrogen experiment is about to start a new measurement campaign, with results expected in the next 18–24 months. The hydrogen atom has been a source of profound theoretical developments for some time, and history has shown that it is well worth the effort to study it ever more closely.
Training and education have been among CERN’s core activities since the laboratory was founded. The CERN Convention of 1954 stated that these activities might include “promotion of contacts between, and interchange of, scientists…and the provision of advanced training for research workers”. It was in this spirit that the first residential schools of physics were organised by CERN in the early 1960s. Initially held in Switzerland, with a duration of one week, the schools soon evolved into two-week events that took place annually and rotated among CERN Member States.
Following discussions between the Directors-General of CERN and the Joint Institute for Nuclear Research (JINR) in Russia, it was agreed that CERN should organise the 1970 school in collaboration with JINR. The event was held in Finland, which at that time was not a Member State of either institution, and the CERN–JINR collaboration evolved into today’s annual CERN–JINR European Schools of High-Energy Physics (HEP). The European schools that began in 1993 (CERN Courier June 2013 p27) are held in a CERN Member State three years out of four, and in a JINR Member State one year out of four.
The target audience of the European schools is advanced PhD students in experimental HEP, preparing them for a career as research physicists. Around 100 students attend each event following a rigorous selection process. Those attending the 2017 school – the 25th in the series, held from 6 to 19 September in Évora, Portugal – were selected from more than 230 candidates, taking into account their potential to pursue a research career in experimental particle physics. The 100 successful students represented 33 different nationalities and, reflecting an increasing trend over the past quarter century of the European schools, about a third were women.
The core programme of the schools continues to be particle-physics theory and phenomenology, including general topics such as the Standard Model, quantum chromodynamics and flavour physics, complemented by more specialised aspects such as heavy-ion physics, Higgs physics, neutrino physics and physics beyond the Standard Model. A course on practical statistics reflects the importance of this topic in modern HEP data analysis. The school also includes classes on cosmology, in light of the strong link between particle physics and astrophysical dark-matter research. Students are taught about the latest developments and prospects at CERN’s Large Hadron Collider (LHC). They also hear from the Director-General of CERN and the director of JINR about the programmes and plans of the two organisations, which have links going back more than half a century. Thus, in addition to studying a wide spectrum of physics topics, the students are given a broad overview and outlook on particle-physics facilities and related issues.
The two-week residential programme includes a total of more than 30 plenary lectures of 90 minutes each, complemented by parallel discussion sessions involving six groups of about 17 students. Each group remains with the same discussion leader for the duration of the school, providing an environment where the students are comfortable to ask questions about the lectures and explore topics of interest in greater depth. The students are encouraged to discuss their own research work with each other and with the staff of the school during an after-dinner poster session. The lecturers are highly experienced experts in their fields, coming from many different countries in Europe and beyond, while the discussion leaders are highly active, but sometimes less-senior physicists.
New ingredient
A new ingredient in the school’s programme since 2014 is training in outreach for the general public. Making use of two 90-minute teaching slots, the students learn about communicating science to a general audience from two professional trainers who have a background in journalism with the BBC. The compulsory training sessions are complemented by optional one-on-one exercises that are very popular with the students. The exercises involve acting out a radio interview about a discovery of new physics at the LHC, based on a fictitious scenario.
Building on what they have learnt in the science-communication training, the students from each discussion group collaborate in their “free time” to prepare an eight-minute talk on a particle-physics topic at a level understandable to the public. This is an exercise in teamwork as well as in outreach. The group needs to identify the specific aspects of the topic that they are going to address, develop a plan to make it interesting and relevant to a general audience, share the work of preparing the presentation between the team members, and agree who will give the talk on their behalf. The results of the collaborative group projects are presented in an after-dinner session that is video recorded. A jury made up of experienced science communicators judges the projects and gives feedback to each group. The topics addressed in the projects at the 2017 school in Portugal included the Standard Model, neutrinos, extra dimensions, and cosmology, with the prize for the best team effort going to a presentation on the Higgs boson illustrated with a “cookie-eating grandmother” field.
Equipping young researchers with good science-communication skills is considered important by the management of both CERN and JINR, and outreach training is greatly appreciated by most of the European school’s students. As a follow up, students are encouraged to make contact with the people responsible for outreach in their experimental collaborations or home institutes, with a view to participating in science-communication activities.
In addition to the outreach training, important public events are often held in the host country at the time of the school – benefitting from the presence of the leading scientists who are lecturing. This is well illustrated by the 2017 edition, at which a public event at Évora University coincided with visits to the school by CERN Director-General Fabiola Gianotti, who gave a talk entitled “The Higgs particle and our life”, and JINR director Victor Matveev. The event was attended by numerous high-level representatives of Portuguese scientific institutes and universities, and also by the Portuguese minister of science, technology and higher education, Manuel Heitor. There was an audience of about 300, including high-school teachers, pupils and university students, with more following a live webcast.
Branching out
In addition to the annual schools that take place in Europe, CERN is involved in organising schools of HEP in Latin America (in odd-numbered years since 2001) and in the Asia-Pacific region (in even-numbered years since 2012). These schools have a similar core programme to the European ones, but with more emphasis on instrumentation and experimental techniques. This reflects the fact that there are fewer opportunities in some of the countries concerned for advanced training in these areas.
Although there is so far no specific teaching at the schools in Latin America and the Asia-Pacific region on communicating science to a general audience, education and outreach activities are often arranged in the host country around the time of the schools. For example, an important education and outreach programme was organised to coincide with the 2017 CERN–Latin-American School held from 8 to 21 March in Querétaro, Mexico. Here, several teachers from the CERN school gave short lecture courses or seminars to undergraduate students from Universidad Autónoma de Querétaro and the Juriquilla campus of Universidad Nacional Autónoma de México.
A highlight of the outreach programme in Mexico was a large public event on 8 March, the arrivals day for students at the CERN school and, by coincidence, International Women’s Day. This included introductory talks by Fabiola Gianotti (recorded in advance and subtitled in Spanish) and by Julia Tagüeña Parga (in person), deputy director for scientific development in the Mexican national science and technology agency, CONACyT. These were followed by a lecture entitled “Einstein, black holes and gravitational waves” by Gabriela Gonzalez, spokesperson of the LIGO collaboration, attracting a capacity audience of about 400 people.
As is evident, the European schools of HEP have a long history and continue their primary mission of teaching HEP and related topics to young researchers. However, the programme continues to evolve, and it now includes some training in science communication that is becoming increasingly important in the CERN and JINR Member States. The success of the schools can be judged by an anonymous evaluation questionnaire in which the overall assessment is overwhelmingly positive, with about 60% of students in 2014–2017 giving the highest ranking of “excellent”.
In total, more than 3000 students have attended the schools, including the Latin-American schools since 2001 and the Asia–Europe–Pacific schools since 2012, as well as the European schools since 1993. All these schools are important ingredients in delivering CERN’s mission in education and outreach, and in supporting its policies of international co-operation and being open to geographical enlargement within and beyond Europe. They bring together participants and teachers of many different nationalities, and each school requires close collaboration between CERN, co-organisers such as JINR for the European schools, and colleagues from the host country. The schools may also link in with other aspects of CERN’s international relations. For example, the 2015 Latin-American school in Ecuador helped to pave the way for formal membership of Ecuadorian universities in the CMS experiment. Similarly, the 2011 European school and associated outreach activities in Bucharest marked steps towards Romania becoming a Member State of CERN.
The next European school will be held in Maratea, Italy, from 20 June to 3 July 2018, followed by an Asia–Europe–Pacific school in Quy Nhon, Vietnam, from 12 to 25 September 2018.
Progress in experimental particle physics is driven by advances in accelerators. The conversion of storage rings into colliders in the 1970s is one example, another is the use of superconducting magnets and RF structures that allow higher energies to be reached. CERN’s Large Hadron Collider (LHC) is halfway through its second run at an energy of 13 TeV, and its high-luminosity upgrade is expected to operate until the mid-2030s. Several machines are under consideration for the post-LHC era and many will be weighed up during the European Strategy for Particle Physics beginning in 2019. All are large facilities based on advanced but essentially existing accelerator technologies.
A completely different breed of accelerator based on novel accelerating technologies is also under intense study. Capable of operating with an accelerating gradient larger than 1 GV/m, advanced and novel accelerators (ANAs) could reach energies in the 1–10 TeV range in much more compact and efficient ways. The technological challenge is huge and the timescales are long, but the eventual goal is to have a linear electron–positron or an electron–proton collider at the energy frontier. Such a machine would have a smaller footprint than conventional collider designs and promises energies that otherwise are technologically extremely difficult and expensive to reach.
The first Advanced and Novel Accelerators for High Energy Physics Roadmap (ANAR) workshop took place at CERN in April, focusing on the application of ANAs to high-energy physics (CERN Courier June 2017 p7). The workshop was organised under the umbrella of the International Committee for Future Accelerators as a step towards an international ANA scientific roadmap for an advanced linear collider, with the aim of delivering a technical design report by 2035. The first task towards this goal is to take stock of the scientific landscape by outlining global priorities and identifying necessary facilities and existing programmes.
The ANA landscape
The first idea to accelerate particles in a plasma came as long ago as 1979, with a seminal publication by Tajima and Dawson. It involved the use of wakefields – accelerating longitudinal electric fields generated in a plasma in the wake of a driving laser pulse or a particle bunch – to accelerate and focus a relativistic bunch of particles. In ANAs using plasma as a medium, the wakefields are sustained by a charge separation in the plasma driven by a laser pulse or a particle beam. Large energy gains over short distances can also be reached in ANAs using dielectric material structures that can sustain maximum accelerating fields larger than is possible in metallic structures. These ANAs can accelerate electrons as well as positrons and can also be driven by laser pulses or particle bunches.
Initial experiments took place with electrons at SLAC and elsewhere in the 1990s, demonstrating the principles of the technique, but the advent of high-power lasers as wakefield drivers led to increased activity. After the first demonstration of peaked electron spectra in millimetre-scale plasmas in 2004, GeV electron beams were obtained with 40 TW laser pulses in 2006 and subsequently electron beams with multi-GeV energies have been reported with PW-class laser systems and few-centimetre-long plasmas. Advanced and novel technologies for accelerators have made remarkable progress over the past two decades. They are now capable of bringing electrons to energies of a few GeV over a distance of a few centimetres, compared to 0.1 MeV per centimetre for the Large Electron–Positron (LEP) collider. Reaching such energies with ANAs has therefore sparked interest for high-energy physics applications, in addition to their potential for industry, security or health sectors.
Several challenges must be addressed before proposing a technical design for an advanced linear collider (ALC), requiring the sustained efforts of a diverse community that currently includes more than 62 laboratories in more than 20 countries. The key challenges are either related to fundamental components of ANAs – such as the injectors, accelerating structures, staging of components and their reliability – or to beam dynamics at high energy and the preservation of energy spread, emittance and efficiency.
A major component necessary for the application of an ANA to high-energy physics is a wakefield driver. In practice, this could be an efficient and reliable laser pulse with a peak power topping 100 TW, or a particle bunch with an energy higher than 1 GeV. In both cases, however, the duration of the pulse must be shorter than 100 fs.
The plasma medium, separated into successive stages, is another key component. Assuming accelerating gradients in the region 10–50 GeV/m and energy gains of 10–20 GeV per stage, plasma media 20–200 cm long are required. The main challenges for the plasma medium are the reproducibility, density uniformity, density ramps at their entrance and exit, and the high repetition rate required for collider operation. Tailoring the density ramps is important to mitigate the usually large mismatch between the small transverse size of the accelerated beam inside the plasma and the relatively large beam size that inter-stage optics must handle between plasma modules.
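The quoted stage lengths follow directly from dividing the energy gain per stage by the accelerating gradient; a quick sanity check with the numbers from the text:

```python
gradients_gev_per_m = (10, 50)   # assumed accelerating gradients (GeV/m)
gains_gev = (10, 20)             # assumed energy gain per stage (GeV)

# Stage length = energy gain / gradient, for every combination
lengths_m = [gain / grad for gain in gains_gev for grad in gradients_gev_per_m]
shortest, longest = min(lengths_m), max(lengths_m)
print(f"stage lengths: {shortest * 100:.0f}-{longest * 100:.0f} cm")
```

The extremes (10 GeV at 50 GeV/m, 20 GeV at 10 GeV/m) give 0.2 m and 2 m, i.e. the 20–200 cm range quoted above.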
Staging successive accelerator modules is a further challenge in itself. Staging is necessary because the energy carried by most drivers is much smaller than the final energy desired for the accelerated bunch, e.g. 1.6 kJ for 2 × 1010 electrons or positrons at an energy of 500 GeV. Since state-of-the-art femtosecond laser pulses and relativistic electron bunches carry less than 100 J, multiple drivers and multiple stages are needed. Staging has to achieve, in a compact way, the coupling of the accelerated bunch out of one plasma module into the next while preserving all bunch properties, evacuating the exhausted driver and bringing in a fresh driver before the next stage. Staging has been demonstrated, although with low-energy beams (< 200 MeV), in a number of schemes, the most recent being the one performed at the BELLA Center at LBNL. Injection of electrons from a laser plasma injector into a plasma module providing acceleration to 5–10 GeV is one of the goals of the French APOLLON CILEX laser facility starting operation in 2018, and of the baseline explored in the design study EuPRAXIA (see panel on right). The AWAKE experiment at CERN, meanwhile, aims to use protons to drive a plasma wakefield in a single plasma section, with the long-term goal of accelerating electrons to TeV energies.
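The staging energy budget can be checked with the figures quoted in the text (the 100 J driver energy is the stated state-of-the-art bound, used here as an optimistic upper limit):

```python
EV_TO_J = 1.602176634e-19   # electron-volt in joules

n_particles = 2e10          # electrons or positrons per bunch
beam_energy_ev = 500e9      # 500 GeV per particle

bunch_energy_j = n_particles * beam_energy_ev * EV_TO_J   # ~1.6 kJ
driver_energy_j = 100.0     # upper bound on a single driver's energy

# Even assuming perfect driver-to-bunch energy transfer, many stages are needed
min_stages = bunch_energy_j / driver_energy_j
print(f"bunch energy: {bunch_energy_j:.0f} J, at least {min_stages:.0f} stages")
```

The bunch carries about 1.6 kJ, so even with 100% transfer efficiency more than 16 driver stages would be required; with realistic efficiencies the stage count grows accordingly.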
Stability, reproducibility and reliability are trademarks of accelerators used for particle physics. Results obtained with ANAs often appear of lower stability and reproducibility than those obtained with conventional accelerators. However, it is important to note that these ANAs are run mostly as experiments and research tools, with limited resources put towards feedback and control systems – which are one of the major features of conventional accelerators. A strong effort therefore has to be put into developing proper tools and devices, for instance by exploiting synergies with the RF-accelerator community to develop more reliable technologies.
Testing the components for an eventual ALC requires major facilities, most likely located at national or international laboratories. ANA technology might be more compact than that of conventional accelerators, but the environment for producing even 10–100 GeV range prototypes is beyond the capability of university labs, requiring multiple engineering skills to demonstrate reliable operation in a safe environment. The size and cost of these facilities are better justified in a collaborative environment, in line with the development of accelerators relevant for high-energy physics.
Four-phase roadmap
Co-ordination of the advanced accelerators field is at different levels of advancement around the world. In the US, roadmaps were drawn up in 2016 for plasma- and structure-based ANAs with application to high-energy physics and the construction of a linear collider in the 2040s. One outcome of the ANAR workshop this year was a first attempt at an international scientific roadmap. Arranged into four distinct phases, the roadmap describes the stages deemed scientifically necessary to elaborate a design for a multi-TeV linear collider.
The first is a five-year-long period in which to develop injectors and accelerating structures with controlled parameters, such as an injector–accelerator unit producing GeV-range electron and positron beams with high-quality bunches, low emittance and low relative energy spread. A second five-year phase will lead to improved bunch quality at higher energy, with the staging of two accelerating structures and first proposals of conceptual ALC designs. The third phase, also lasting five years, will focus on the reliability of the acceleration process, while the fourth phase will be dedicated to technical design reports for an ALC by 2035, following selection of the most promising options.
Community effort
Many very important challenges remain, such as improving the quality, stability and efficiency of the accelerated beams with ANAs, but no show-stopper has been identified to date. However, the proposed time frame is achievable only if there is an intensive and co-ordinated R&D effort supported by sufficient funding for ANA technology with particle-physics applications. The preparation of an eventual technical design report for an ALC at the energy frontier should therefore be undertaken by the ANA community with significant contributions from the whole accelerator community.
From the current state of wakefield acceleration in plasmas and dielectrics, it is clear that advanced concepts offer several promising options for energy frontier electron–positron and electron–proton colliders. In view of the significant cost of intense R&D for an ALC, an international programme, with some level of international co-ordination, is more suitable than a regional approach. Following the April ANAR workshop, a study group towards advanced linear colliders, named ALEGRO for Advanced LinEar collider study GROup, has been set up to co-ordinate the preparation of a proposal for an ALC in the multi-TeV energy range. ALEGRO consists of scientists with expertise in advanced accelerator concepts or accelerator physics and technology, drawn from national institutions or universities in Asia, Europe and the US. The group will organise a series of workshops on relevant topics to engage the scientific community. Its first objective is to prepare and deliver, by the end of 2018, a document detailing the international roadmap and strategy of ANAs with clear priorities as input for the European Strategy Group. Another objective for ALEGRO is to provide a framework to amplify international co-ordination on this topic at the scientific level and to foster worldwide collaboration towards an ALC, and possibly broaden the community. After all, ANA technology represents the next generation of colliders and could potentially define particle physics into the 22nd century.
The 3rd European Advanced Accelerator Concept (EAAC) workshop, held every two years, took place from 24 to 30 September on the Island of Elba, Italy. Around 300 scientists attended, with advanced linear colliders at the centre of discussions. Specialists from accelerator physics, RF technology, plasma physics, instrumentation and the laser field discussed ideas and directions towards a new generation of ultra-compact and cost-effective accelerators with novel applications in science, medicine and industry.
Among the many outstanding presentations at EAAC 2017, at which 70 PhD students presented their work, were reports on: laser-driven kHz generation of MeV beams at LOA/TU Vienna; dielectric acceleration results from PSI/DESY/Cockcroft; first results from the AWAKE experiment at CERN; 7 GeV electrons in laser plasma acceleration from LBNL; 0.5 nC electron bunches from HZDR; new R&D directions towards high-power lasers at LLNL; controllable electron beams from Osaka and LLNL; undulator X-ray generation after laser plasma accelerators from DESY/University of Hamburg/SOLEIL/LOA; important progress in hadron beams from plasma accelerators from Belfast/HZDR/GSI; and future collider plans from CERN.
A special session was devoted to the Horizon2020 design study EuPRAXIA (European Plasma Research Accelerator with eXcellence In Applications). EuPRAXIA is a consortium of 38 institutes, co-ordinated by DESY, which aims to design a European plasma accelerator facility. This future research infrastructure will deliver high-brightness electron beams of up to 5 GeV for pilot users interested in free-electron laser applications, tabletop test beams for high-energy physics, medical imaging and other applications. This study, conceived at the EAAC meeting in 2013, is strongly supported by the European laser industry.
The EAAC was founded by the European Network for Novel Accelerators in 2013 and has grown in its third edition into a meeting with worldwide visibility, rapidly catching up with the long tradition of the Advanced Accelerator Concepts workshop (AAC) in the US. The EAAC2017 workshop was supported by the EuroNNAc3 network through the EU project ARIES, INFN as the host organisation, DESY and the Helmholtz association, CERN and the industrial sponsors Amplitude, Vacuum FAB and Laser Optronic.
• Ralph Assmann, DESY; Massimo Ferrario, INFN; and Edda Gschwendtner, CERN.
I returned to the Netherlands as a professor of experimental physics at Radboud University Nijmegen in 1998. After having enjoyed more than 10 years almost exclusively doing research work at CERN and elsewhere, I found (as I had strongly suspected) that I very much enjoyed teaching. Teaching first-year undergraduate physics courses, I came into contact with high-school teachers who were assisting students with the transition between secondary school and university. While successful for a broad group of students, many realised during their first year of university that studying physics was rather different from what they had imagined when they were still in school. As a result, there was a significant drop-out rate.
An opportunity to remedy this situation came when I read about a cosmic-ray high-school project in Canada led by experimental particle-physicist Jim Pinfold. Soon thereafter, and independently, a Nijmegen colleague, Charles Timmermans, came to me with a similar proposal for our university, and in 2000 we initiated the Nijmegen Area High School Array. Two years later, together with others, we launched the Dutch national High-School Project on Astrophysics Research with Cosmics (HiSPARC), which involved placing scintillator detectors on the roofs of high schools to form detector arrays. This is an excellent mixture of real science and educating high-school pupils in research methods. It has been a lot of fun to build the detectors with pupils, to legally walk on school roofs, and to analyse the data that arrive. Of course reality is unruly and it is sometimes hard to keep the objectives in focus: the schools can tend to be rather casual, if not careless, about the proper function of their set-up, whereas for the physics harvest it is essential to have a reliable network.
HiSPARC had an interesting side effect. While working with my group on the DØ experiment at the Tevatron, focusing on finding the Higgs boson, I was, more or less adiabatically, pulled towards the Pierre Auger Observatory (PAO), the international cosmic-ray observatory in Argentina. The highest-energy particles in the universe are very mysterious: we don’t yet know precisely where they come from, although the latest PAO results suggest we’re getting close (Extreme cosmic rays reveal clues to origin). Nor do we know how they are accelerated to energies up to 100 million TeV. My involvement as a university scientist in a high-school project has completely redirected my research career, and for the past five years I have spent all of my research time on the PAO.
Prompted by my teacher network, around 10 years ago I organised a joint effort between six nearby high schools concerning a new exam subject introduced by the Dutch ministry – “nature, life and technology”, which integrates science, technology, engineering and maths (STEM) subjects. Every Friday afternoon, 350 pupils come to our faculty of science, which in itself is an organisational and logistical challenge. The groups are organised during the course of the afternoon depending on the activity: a lecture for all, tutorials, and labs in biology, chemistry, physics, computer science and other subjects. Around 10 different locations in the building (and sometimes outside) are involved, and for every 20–25 pupils there is one teacher available. Following this project, in 2011 I initiated a two-year-long pre-university programme for gifted fifth and sixth graders in high school, which also takes place at the university and involves about 20 teachers and 14 university faculty members. The first cohort of pupils arrived in 2013, and one of the first graduates of the programme recently completed an internship at CERN.
Admittedly it is a lot of work. But it has been worth the effort. By thinking about how to teach particle physics to pupils with different backgrounds and experiences, I have gained more insight into the fundamentals of particle physics. Even the sometimes tedious experience of bringing school managements together and getting them to carry out projects outside of their comfort zones has prepared me well for some aspects of my present duty as president of CERN Council. Working with pupils and teachers has enriched my life, without having to compromise on research or management duties. And if I can combine such things with a research career, there seems little excuse for most scientists not to help educate and inspire the next generation.
By T W Donnelly, J A Formaggio, B R Holstein, R G Milner and B Surrow
Cambridge University Press
This textbook aims to present the foundations of both nuclear and particle physics in a single volume in a balanced way, and to highlight the interconnections between them. The material is organised from a “bottom-up” point of view, moving from the fundamental particles of the Standard Model to hadrons and finally to few- and many-body nuclei built from these hadronic constituents.
The first group of chapters introduces the symmetries of the Standard Model. The structure of the proton, neutron and nuclei in terms of fundamental quarks and gluons is then presented. A lot of space is devoted to the processes used experimentally to unravel the structure of hadrons and to probe quantum chromodynamics, with particular focus on lepton scattering. Following the treatment of two-nucleon systems and few-body nuclei, which have mass numbers below five, the authors discuss the properties of many-body nuclei, and also extend the treatment of lepton scattering to include the weak interactions of leptons with nucleons and nuclei. The last group of chapters is dedicated to relativistic heavy-ion physics and nuclear and particle astrophysics. A brief perspective on physics beyond the Standard Model is also provided.
The volume includes approximately 120 exercises and is completed by two appendices collecting values of important constants, useful equations and a brief summary of quantum theory.