Testing times for space–time symmetry

Throughout history, our notion of space and time has undergone a number of dramatic transformations, thanks to figures ranging from Aristotle, Leibniz and Newton to Gauss, Poincaré and Einstein. In our present understanding of nature, space and time form a single 4D entity called space–time. This entity plays a key role for the entire field of physics: either as a passive spectator by providing the arena in which physical processes take place or, in the case of gravity as understood by Einstein’s general relativity, as an active participant.

Since the birth of special relativity in 1905 and the CPT theorem of Bell, Lüders and Pauli in the 1950s, we have come to appreciate both Lorentz and CPT symmetry as cornerstones of the underlying structure of space–time. The former states that physical laws are unchanged when transforming between two inertial frames, while the latter is the symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity inversion (P) and time reversal (T). These closely entwined symmetries guarantee that space–time provides a level playing field for all physical systems independent of their spatial orientation and velocity, or whether they are composed of matter or antimatter. Both have stood the tests of time, but in the last quarter century these cornerstones have come under renewed scrutiny as to whether they are indeed exact symmetries of nature. Were physicists to find violations, it would lead to profound revisions in our understanding of space and time and force us to correct both general relativity and the Standard Model of particle physics.

Accessing the Planck scale

Several considerations have spurred significant enthusiasm for testing Lorentz and CPT invariance in recent years. One is the observed bias of nature towards matter – an imbalance that is difficult, although perhaps possible, to explain using standard physics. Another stems from the synthesis of two of the most successful physics concepts in history: unification and symmetry breaking. Many theoretical attempts to combine quantum theory with gravity into a theory of quantum gravity allow for tiny departures from Lorentz and CPT invariance. Surprisingly, even deviations that are suppressed by 20 orders of magnitude or more are experimentally accessible with present technology. Few, if any, other experimental approaches to finding new physics can provide such direct access to the Planck scale.

Unfortunately, current models of quantum gravity cannot accurately pinpoint experimental signatures for Lorentz and CPT violation. An essential milestone has therefore been the development of a general theoretical framework that incorporates Lorentz and CPT violation into both the Standard Model and general relativity: the Standard Model Extension (SME), as formulated by Alan Kostelecký of Indiana University in the US and coworkers beginning in the early 1990s. Due to its generality and independence of the underlying models, the SME achieves the ambitious goal of allowing the identification, analysis and interpretation of all feasible Lorentz and CPT tests (see panel below). Any putative quantum-gravity remnants associated with Lorentz breakdown enter the SME as a multitude of preferred directions criss-crossing space–time. As a result, the playing field for physical systems is no longer level: effects may depend slightly on spatial orientation, uniform velocity, or whether matter or antimatter is involved. These preferred directions are the coefficients of the SME framework; they parametrise the type and extent of Lorentz and CPT violation, offering specific experiments the opportunity to try to glimpse them.

The Standard Model Extension

At the core of attempts to detect violations in space–time symmetry is the Standard Model Extension (SME) – an effective field theory that contains not just the SM but also general relativity and all possible operators that break Lorentz symmetry. It can be expressed as a Lagrangian in which each Lorentz-violating term has a coefficient that leads to a testable prediction of the theory.

Lorentz and CPT research is unique in the exceptionally wide range of experiments it offers. The SME makes predictions for symmetry-violating effects in systems involving neutrinos, gravity, meson oscillations, cosmic rays, atomic spectra, antimatter, Penning traps and collider physics, among others. In the case of free particles, Lorentz and CPT violation lead to a dependence of observables on the direction and magnitude of the particles’ momenta, on their spins, and on whether particles or antiparticles are studied. For a bound system such as atomic and nuclear states, the energy spectrum depends on its orientation and velocity and may differ from that of the corresponding antimatter system.

The vast spectrum of experiments and latest results in this field were the subject of the triennial CPT conference held at Indiana University in June this year (see panel below), highlights from which form the basis of this article.

The seventh triennial CPT conference

A host of experimental efforts to probe space–time symmetries were the focus of the week-long Seventh Meeting on CPT and Lorentz Symmetry (CPT’16), held at Indiana University, Bloomington, US, on 20–24 June and summarised in the main text of this article. With around 120 experts from five continents discussing the most recent developments in the subject, it was the largest meeting so far in this one-of-a-kind triennial conference series. Many of the sessions included presentations involving experiments at CERN, and the discussions covered a number of key results from experiments at the Antiproton Decelerator and future improvements expected from the commissioning of ELENA. The common thread weaving through these talks heralds an exciting emergent era of low-energy, Planck-reach fundamental physics with antimatter.

CERN matters

As host to the world’s only cold-antiproton source for precision antimatter physics (the Antiproton Decelerator, AD) and the highest-energy particle accelerator (the Large Hadron Collider, LHC), CERN is in a unique position to investigate the microscopic structure of space–time. The corresponding breadth of measurements at these extreme ends of the energy regime guarantees complementary experimental approaches to Lorentz and CPT symmetry at a single laboratory. Furthermore, the commissioning of the new ELENA facility at CERN is opening brand new tests of Lorentz and CPT symmetry in the antimatter sector (see panel below).

Cold antiprotons offer powerful tests of CPT symmetry

CPT – the combination of charge conjugation (C), parity inversion (P) and time reversal (T) – represents a discrete symmetry between matter and antimatter. As the standard CPT test framework, the Standard Model Extension (SME) possesses a feature that might perhaps seem curious at first: CPT violation always comes with a breakdown of Lorentz invariance. However, an extraordinary insight gleaned from the celebrated CPT theorem of the 1950s is that Lorentz symmetry already contains CPT invariance under “mild smoothness” assumptions: since CPT is essentially a special Lorentz transformation with a complex-valued velocity, the symmetry holds whenever the equations of physics are smooth enough to allow continuation into the complex plane. Unsurprisingly, then, the loss of CPT invariance requires Lorentz breakdown, an argument made rigorous in 2002. Lorentz violation, on the other hand, does not imply CPT breaking.
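
The continuation argument can be made concrete in a minimal 1+1-dimensional sketch (a standard textbook illustration, not the rigorous 2002 result). A boost with rapidity η acts on the coordinates as

```latex
t' = t\cosh\eta - x\sinh\eta, \qquad x' = -t\sinh\eta + x\cosh\eta .
```

Continuing analytically to the complex value η = iπ gives cosh(iπ) = −1 and sinh(iπ) = 0, so t′ = −t and x′ = −x: a combined inversion of time and space is reached as a Lorentz transformation with imaginary rapidity (equivalently, complex-valued velocity), and this route exists only when the equations of physics are smooth enough to admit the continuation.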

That CPT breaking comes with Lorentz violation has the profound experimental implication that CPT tests do not necessarily have to involve both matter and antimatter: hypothetical CPT violation might also be detectable via the concomitant Lorentz breaking in matter alone. But this feature comes at a cost: the corresponding Lorentz tests typically cannot disentangle CPT-even and CPT-odd signals and, worse, they may even be blind to the effect altogether. Antimatter experiments decisively brush aside these concerns, and the availability at CERN of cold antiprotons has thus opened an unparalleled avenue for CPT tests. In fact, all six fundamental-physics experiments that use CERN’s antiprotons have the potential to place independent limits on distinct regions of the SME’s coefficient space. The upcoming Extra Low ENergy Antiproton (ELENA) ring at CERN (see “CERN soups up its antiproton source”) will provide substantially upgraded access to antiprotons for these experiments.

One exciting type of CPT test that will be conducted independently by the ALPHA, ATRAP and ASACUSA experiments is to produce antihydrogen, an atom made up of an antiproton and a positron, and compare its spectrum to that of ordinary hydrogen. While the production of cold antihydrogen has already been achieved by these experiments, present efforts are directed at precision spectroscopy promising clean and competitive constraints on various CPT-breaking SME coefficients for the proton and electron.

At present, the gravitational interaction of antimatter remains virtually untested. The AEgIS and GBAR experiments will tackle this issue by dropping antihydrogen atoms in the Earth’s gravity field. These experiments differ in their detailed set-up, but both are projected to permit initial measurements of the gravitational acceleration, g, for antihydrogen at the per cent level. The results will provide limits on SME coefficients for the couplings between antimatter and gravity that are inaccessible with other experiments.

A third fascinating type of CPT test is based on the equality of the physical properties of a particle and its antiparticle, as guaranteed by CPT invariance. The ATRAP and BASE experiments have been advocating such a comparison between protons and antiprotons confined in a cryogenic Penning trap. Impressive results for the charge-to-mass ratios and g factors have already been obtained at CERN and are poised for substantial future improvements. These measurements permit clean bounds on SME coefficients of the proton with record sensitivities.

Regarding the LHC, the latest Lorentz- and CPT-violation physics comes from the LHCb collaboration, which studies particles made up of b quarks. The experiment’s first measurements of SME coefficients in the Bd and Bs systems, published in June this year, have improved existing results by up to two orders of magnitude. LHCb also has competition from other major neutral-meson experiments. These involve studies of the Bs system at the Tevatron’s DØ experiment, recent searches for Lorentz and CPT violation with entangled kaons at KLOE and the upcoming KLOE-2 at DAΦNE in Italy, as well as results on CPT-symmetry tests in Bd mixing and decays from the BaBar experiment at SLAC. The LHC’s general-purpose ATLAS and CMS experiments, meanwhile, hold promise for heavy-quark studies. Data on single-top production at these experiments would allow the world’s first CPT test for the top quark, while the measurement of top–antitop production can sharpen by a factor of 10 the earlier measurements of CPT-even Lorentz violation at DØ.

Other possibilities for accelerator tests of Lorentz and CPT invariance include deep inelastic scattering and polarised electron–electron scattering. The first ever analysis of the former offers a way to access previously unconstrained SME coefficients in QCD employing data from, for example, the HERA collider at DESY. Polarised electron–electron scattering, on the other hand, allows constraints to be placed on currently unmeasured Lorentz violations in the Z boson, which are also parameterised by the SME and have relevance for SLAC’s E158 data and the proposed MOLLER experiment at JLab. Lorentz-symmetry breaking would also cause the muon spin precession in a storage ring to be thrown out of sync by just a tiny bit, which is an effect accessible to muon g-2 measurements at J-PARC and Fermilab.

Historically, electromagnetism is perhaps most closely associated with Lorentz tests, and this idea continues to exert a sustained influence on the field. Modern versions of the classical Michelson–Morley experiment have been realised with tabletop resonant cavities as well as with the multi-kilometre LIGO interferometer, with upcoming improvements promising unparalleled measurements of the SME’s photon sector. Another approach for testing Lorentz and CPT symmetry is to study the energy- and direction-dependent dispersion of photons as predicted by the SME. Recent observations by the space-based Fermi Large Area Telescope severely constrain this effect, placing tight limits on 25 individual non-minimal SME coefficients for the photon.

AMO techniques

Experiments in atomic, molecular and optical (AMO) physics are also providing powerful probes of Lorentz and CPT invariance and these are complementary to accelerator-based tests. AMO techniques excel at testing Lorentz-violating effects that do not grow with energy, but they are typically confined to normal-matter particles and cannot directly access the SME coefficients of the Higgs or the top quark. Recently, advances in this field have allowed researchers to carry out interferometry using systems other than light, and an intriguing idea is to use entangled wave functions to create a Michelson–Morley interferometer within a single Yb+ ion. The strongly enhanced SME effects in this system, which arise due to the ion’s particular energy-level structure, could improve existing limits by five orders of magnitude.

Other AMO systems, such as atomic clocks, have long been recognised as a backbone of Lorentz tests. The bright SME prospects arising from the latest trend toward optical clocks, which are several orders of magnitude more precise than traditional varieties based on microwave transitions, are being examined by researchers at NIST and elsewhere. Also, measurements on the more exotic muonium atom at J-PARC and PSI can place limits on the SME’s muon coefficients, a topic of significant interest in light of several current puzzles involving the muon.

From neutrinos to gravity

Unknown neutrino properties, such as their mass, and tension between various neutrino measurements have stimulated a wealth of recent research including a number of SME analyses. The breakdown of Lorentz and CPT symmetry would cause the ordinary neutrino–neutrino and antineutrino–antineutrino oscillations to exhibit unusual direction, energy and flavour dependence, and would also induce unconventional neutrino–antineutrino mixing and kinematic effects – the latter leading to modified velocities and dispersion, as measured in time-of-flight experiments. Existing and planned neutrino experiments offer a wealth of opportunities to examine such effects. For example: upcoming results from the Daya Bay experiment should yield improved limits on Lorentz violation from antineutrino–antineutrino mixing; EXO has obtained the first direct experimental bound on a difficult-to-access “counter-shaded” coefficient extracted from the electron spectrum of double beta decay; T2K has announced new constraints on the a and c coefficients, tightened by a factor of two using muon neutrinos; and IceCube promises extreme sensitivities to “non-minimal” effects with kinematical studies of astrophysical neutrinos, such as Cherenkov effects of various kinds.

The feebleness of gravity makes the corresponding Lorentz and CPT tests in this SME sector particularly challenging. This has led researchers from HUST in China and from Indiana University to use an ingenious tabletop experiment to seek Lorentz breaking in the short-range behaviour of the gravitational force. The idea is to bring gravitationally interacting test masses to within submillimetre ranges of one another and observe their mechanical resonance behaviour, which is sensitive to deviations from Lorentz symmetry in the gravitational field. Other groups are carrying out related cutting-edge measurements of SME gravity coefficients with laser ranging of the Moon and other solar-system objects, while analysis of the gravitational-wave data recently obtained by LIGO has already yielded many first constraints on SME coefficients in the gravity sector, with the promise of more to come.

After a quarter century of experimental and theoretical work, the modern approach to Lorentz and CPT tests remains as active as ever. As the theoretical understanding of Lorentz and CPT violation continues to evolve at a rapid pace, it is remarkable that experimental studies continue to follow closely behind and now stretch across most subfields of physics. The range of physical systems involved is truly stunning, and the growing number of different efforts displays the liveliness and exciting prospects for a research field that could help to unlock the deepest mysteries of the universe.

Cosmic rays continue to confound

The International Space Station (ISS) is the largest and most complex engineering project ever built in space. It has also provided a unique platform from which to conduct the physics mission of the Alpha Magnetic Spectrometer (AMS). Over the past five years on board the ISS, AMS has orbited the Earth every 93 minutes at an altitude of 400 km and recorded 85 billion cosmic-ray events with energies reaching the multi-TeV range. AMS has been collecting its unprecedented data set and beaming it down to CERN since 2011, and is expected to continue to do so for the lifetime of the ISS.

AMS is a unique experiment in particle physics. The idea for a space-based detector developed after the cancellation of the Superconducting Super Collider in the US in 1993. The following year, an international group of physicists who had worked together for many years at CERN’s LEP collider had a discussion with Roald Sagdeev, former director of the Soviet Institute of Space Research, about the possibility of performing a precision particle-physics experiment in space. Sagdeev arranged for the team to meet with Daniel Goldin, the administrator of NASA, and in May 1994 the AMS collaboration presented the science case for AMS at NASA’s headquarters. Goldin advised the group that use of the ISS as a platform required strong scientific endorsement from the US Department of Energy (DOE) and, after the completion of a detailed technical review of AMS science, the DOE and NASA formalised responsibilities for AMS deployment on the ISS on 20 September 1995.

A 10-day precursor flight of AMS (AMS-01) was carried out in June 1998, demonstrating for the first time the viability of using a precision, large-acceptance magnetic spectrometer in space for a multi-year mission. The construction of AMS-02 for the ISS started immediately afterwards in collaborating institutes around the world. With the loss of the shuttle Columbia in 2003 and the resulting redirection of space policy, AMS was removed from the space-shuttle programme in October 2005. However, the importance of performing fundamental science on the ISS was widely recognised and supported by the NASA Space Station management under the leadership of William Gerstenmaier. In 2008, the US Congress unanimously agreed that AMS be reinstated, mandating an additional flight for the shuttle Endeavour with AMS as its prime payload. Shortly after installation on the ISS in May 2011, AMS was powered on and began collecting and transmitting data (CERN Courier July/August 2011 p18).

The first five years

Much has been learnt in the first five years of AMS about operating a particle-physics detector in space, especially the challenges presented by the ever changing thermal environment and the need to monitor the detector elements and electronics 24 hours per day, 365 days per year. Communications with NASA’s ISS Mission Control Centers are also essential to ensure that ISS operations – such as sudden, unscheduled power cuts and attitude changes – do not disrupt the operations of AMS or imperil the detector.

Of course, it is the data recorded by AMS from events in the distant universe that are the richest scientifically. AMS is able to detect elementary particles – namely electrons, positrons, protons and antiprotons – as well as nuclei of helium, lithium and heavier elements up to indium. The large acceptance and multiple redundant measurements allow AMS to analyse the data to an accuracy of approximately 1%. Combined with its atmosphere-free window on the cosmos, its long-duration exposure time and its extensive calibration at the CERN test beam, this allows AMS to greatly improve the accuracy of previous charged cosmic-ray observations. This is opening up new avenues through which to investigate the nature of dark matter, the existence of heavy antimatter and the true properties of primordial cosmic rays.

The importance of precision studies of positrons and antiprotons as a means to search for the origin of dark matter was first pointed out by theorists John Ellis and, independently, Michael Turner and Frank Wilczek. They noted that annihilations of the leading dark-matter candidate, the neutralino, would convert pairs of neutralinos into ordinary particles such as positrons and antiprotons. Crucially, the resulting excess of positrons and antiprotons in cosmic rays can be measured. The characteristic signature of dark-matter annihilations is a sharp drop-off of these positron and antiproton excesses at high energies, due to the finite mass of the colliding neutralinos. In addition, since dark matter is ubiquitous, the excesses of the fluxes should be isotropic.

Early low-energy measurements by balloons and satellites indicated that both the positron fraction (that is, the ratio of the positron flux to the flux of electrons and positrons) and the antiproton-to-proton fluxes are larger than predicted by models based on the collisions of cosmic rays. The superior precision of AMS over previous experiments is now allowing researchers to investigate such features, in particular the drop-off in the positron and antiproton excesses, in unprecedented detail.

The first major result from AMS came in 2013 and concerned the positron fraction (CERN Courier October 2013 p22). This highly accurate result showed that, up to a positron energy of 350 GeV, the positron fraction fits well to dark-matter models. This result generated widespread interest in the community and motivated many new interpretations of the positron-fraction excess, for instance whether it is due to astrophysical sources or propagation effects. In 2014, AMS published the positron and electron fluxes, which showed that their behaviours are quite different from each other and that neither can be fitted with the single-power-law assumption underpinning the traditional understanding of cosmic rays.

A deepening mystery

The latest AMS results are based on 17.6 million electrons and positrons and 350,000 antiprotons. In line with previous AMS measurements, the positron flux exhibits a distinct difference from the electron flux, both in its magnitude and energy dependence (figure 1). The positrons show a unique feature: their flux tends to drop off sharply at energies above 300 GeV, as expected from dark-matter collisions or new astrophysical phenomena. The positron fraction decreases with energy and reaches a minimum at 8 GeV. It then increases with energy and rapidly exceeds the predictions from cosmic-ray collisions, reaching a maximum at 265 GeV before beginning to fall off. Whereas neither the electron flux nor the positron flux can be described by a single power law, surprisingly the sum of the electron and positron fluxes can be described very accurately by a single power law above an energy of 30 GeV.
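
The single-power-law statement admits a concrete check. The sketch below (illustrative Python, not AMS analysis code; the spectral index −3.17 and the normalisation are assumed values for a synthetic spectrum, not the published fit) fits log flux against log energy and recovers the index:

```python
import numpy as np

# A power law Phi(E) = C * E**gamma is a straight line in log-log space,
# so a linear least-squares fit of log(flux) vs log(energy) yields gamma.

def fit_power_law(energy, flux):
    """Return (C, gamma) from a log-log linear fit of flux = C * E**gamma."""
    gamma, logC = np.polyfit(np.log(energy), np.log(flux), 1)
    return np.exp(logC), gamma

# Synthetic combined e+ + e- spectrum with an assumed index of -3.17,
# sampled from 30 GeV to 1 TeV (the regime quoted in the text)
energy = np.logspace(np.log10(30), 3, 50)
flux = 1e4 * energy**-3.17

C, gamma = fit_power_law(energy, flux)
print(f"fitted spectral index: {gamma:.2f}")
```

A real analysis would of course fit binned flux measurements with their uncertainties; the point is only that "described by a single power law" means exactly this log-log linearity.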

Since astrophysical sources of cosmic-ray positrons and electrons may induce some degree of anisotropy in their arrival directions, it is also important to measure the anisotropy of cosmic-ray events recorded by AMS. Using the latest data set, a systematic search for anisotropies has been carried out on the electron and positron samples in the energy range 16–350 GeV. The dipole-anisotropy amplitudes measured on 82,000 positrons and 1.1 million electrons are 0.014 for positrons and 0.003 for electrons, which are consistent with the expectations from isotropy.
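
The quoted dipole amplitudes can be estimated with a simple moment method. The following sketch is an assumed estimator for illustration, not the AMS analysis pipeline: for arrival directions drawn from a dipole-modulated distribution f(n̂) ∝ 1 + δ d̂·n̂, the mean unit vector satisfies ⟨n̂⟩ = (δ/3) d̂, so δ = 3|⟨n̂⟩|.

```python
import numpy as np

rng = np.random.default_rng(0)

def dipole_amplitude(directions):
    """directions: (N, 3) array of unit vectors; returns the estimated
    dipole amplitude delta = 3 * |mean direction|."""
    return 3.0 * np.linalg.norm(directions.mean(axis=0))

# Isotropic sample of 100,000 directions: the estimator should return a
# value near zero, up to statistical noise of order 3/sqrt(N)
v = rng.normal(size=(100_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
print(f"delta (isotropic sample): {dipole_amplitude(v):.4f}")
```

Consistency with isotropy, as in the AMS result, means the measured δ is compatible with this statistical floor for the given sample size.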

The latest AMS results on the fluxes and flux ratio of electrons and positrons exhibit unique and previously unobserved features. These include the energy dependence of the positron fraction, the existence of a maximum at 265 GeV in the positron fraction, the exact behaviour of the electron and positron fluxes and, in particular, the sharp drop-off of the positron flux. These features require accurate theoretical interpretation as to their origin, be it from dark-matter collisions or new astrophysical sources.

Concerning the measured antiproton-to-proton flux ratio (figure 2), the new data show that this ratio is independent of rigidity (defined as the momentum per unit charge) in the rigidity range 60–450 GV. This is contrary to traditional cosmic-ray models, which assume that antiprotons are produced only in the collisions of cosmic rays and therefore that the ratio decreases with rigidity. In addition, due to the large mass of antiprotons, the observed excess of the antiproton-to-proton flux ratio cannot come from pulsars. Indeed, the excess is consistent with some of the latest model predictions based on dark-matter collisions as well as those based on new astrophysical sources. Unexpectedly, the antiproton-to-positron flux ratio is also independent of rigidity in the range 60–450 GV (CERN Courier October 2016 p8). This is considered a major result from the five-year summary of AMS data.
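
Rigidity, defined above as momentum per unit charge, can be made concrete in a few lines (the momenta below are hypothetical numbers chosen purely for illustration):

```python
# Rigidity R = pc / (Ze) is momentum per unit charge, measured in volts
# (GV for momenta in GeV/c). Particles with equal rigidity bend
# identically in a magnetic field, which is why AMS compares species as
# a function of rigidity rather than energy or momentum.

def rigidity_GV(momentum_GeV, Z):
    """Rigidity in GV for a particle of momentum p (GeV/c) and charge Z."""
    return momentum_GeV / Z

# A 300 GeV/c proton (Z = 1) and a 600 GeV/c helium nucleus (Z = 2)
# share the same rigidity of 300 GV
assert rigidity_GV(300.0, 1) == rigidity_GV(600.0, 2) == 300.0
```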

The upshot of these new findings in elementary-particle cosmic rays is that the rigidity dependences of the fluxes of positrons, protons and antiprotons are nearly identical, whereas the electron flux has a distinctly different rigidity dependence. This is unexpected because electrons and positrons lose much more energy in the galactic magnetic fields than do protons and antiprotons.

Nuclei in cosmic rays

Most of the cosmic rays flying through the cosmos comprise protons and nuclei, and AMS collects nuclei simultaneously with elementary particles to enable an accurate understanding of both astrophysical phenomena and cosmic-ray propagation. The latest AMS results shed light on the properties of protons, helium, lithium and heavier nuclei in the periodic table. Protons, helium, carbon and oxygen are traditionally assumed to be primary cosmic rays, which means they are produced directly from a source such as supernova remnants.

Protons and helium are the two most abundant charged cosmic rays. They have been measured repeatedly by many experiments over many decades, and their energy dependence has traditionally been assumed to follow a single power law. In the case of lithium, which is assumed to be produced from the collision of primary cosmic rays with the interstellar medium and therefore yields a single power law but with a different spectral index, experimental data have been very limited.

No one has a clue what could be causing these spectacular effects

Sam Ting

The latest AMS data reveal, with approximately 1% accuracy, that the proton, helium and lithium fluxes as a function of rigidity all deviate from the traditional single-power-law dependence at a rigidity of about 300 GV (figure 3). It is completely unexpected that all three deviate from a single power law, that all three deviations occur at about the same rigidity and increase at higher rigidities, and that the three spectra can be fitted with double power laws above a rigidity of 45 GV. In addition, it has long been assumed that since both protons and helium are primary cosmic rays with the same energy dependence at high energies, their flux ratio would be independent of rigidity. The AMS data show that above rigidities of 45 GV, the flux ratio decreases with rigidity and follows a single-power-law behaviour. Despite being a secondary cosmic ray, lithium also exhibits the same rigidity behaviour as protons and helium. It is fair to say that, so far, no one has a clue what could be causing these spectacular effects.
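
The double-power-law fits mentioned above are commonly written as a single power law multiplied by a smooth transition factor. The sketch below shows one such parametrisation; the functional form is a standard choice and the parameter values are assumptions for illustration, not the published AMS fit results.

```python
import numpy as np

def double_power_law(R, C, gamma, R0, dgamma, s):
    """Flux with spectral index ~gamma well below the transition rigidity
    R0 (GV), hardening by dgamma well above it; s sets how smooth the
    transition is (small s = sharper break)."""
    return C * R**gamma * (1.0 + (R / R0)**(dgamma / s))**s

# Illustrative spectrum from 45 GV to 100 TV with an assumed index of
# -2.8 that hardens by 0.13 around an assumed transition at 300 GV
R = np.logspace(np.log10(45), 5, 200)
flux = double_power_law(R, C=1.0, gamma=-2.8, R0=300.0, dgamma=0.13, s=0.05)
```

The "deviation from a single power law" seen in the data corresponds to the local slope drifting from γ below R0 toward γ + Δγ above it.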

The latest AMS measurement of the boron-to-carbon flux ratio (B/C) also contains surprises (figure 4). Boron is assumed to be produced through the interactions of primary cosmic rays such as carbon and oxygen with the interstellar medium, which means that B/C provides information both on cosmic-ray propagation and on the properties of the interstellar medium. The B/C ratio does not show any significant structures, in contrast to many cosmic-ray propagation models that predict such structures at high rigidities (including a class of propagation models that explain the observed AMS positron fraction). Cosmic-ray propagation is commonly modelled as relativistic gas diffusion through a magnetised plasma, and models of the magnetised plasma predict different behaviours of B/C as a function of rigidity. At rigidities above 65 GV, the latest AMS data can be well fitted by a single power law with spectral index Δ in agreement with the Kolmogorov model of turbulence, which predicts Δ = –1/3 asymptotically.

Building a spectrometer in space

AMS is a precision, multipurpose TeV spectrometer measuring 5 × 4 × 3 m and weighing 7.5 tonnes. It consists of a transition radiation detector (TRD) to identify electrons and positrons; a permanent magnet together with nine layers of silicon tracker (labelled 1 to 9) to measure momentum up to the multi-TeV range and to identify different species of particles and nuclei via their energy loss; two banks of time-of-flight (TOF) counters to measure the direction and velocity of cosmic rays and identify species by energy loss; veto counters (ACC) surrounding the inner bore of the magnet to reject cosmic rays from the side; a ring-image Cherenkov counter (RICH) to measure the cosmic-ray energy and identify particle species; and an electromagnetic calorimeter (ECAL) to provide 3D measurements of the energy and direction of electrons and positrons, and distinguish them from antiprotons, protons and other nuclei.

Future directions

Much has been learnt from the unexpected physics results from the first five years of AMS. Measuring many different species of charged cosmic rays at the same time with high accuracy provides unique input for the development of a comprehensive theory of cosmic rays, which have puzzled researchers for a century. AMS data are also providing new information that is essential to our understanding of the origin of dark matter, the existence of heavy antimatter, and the properties of charged cosmic rays in the cosmos.

The physics potential of AMS is the reason why the experiment receives continuous support. AMS is a US DOE and NASA-sponsored international collaboration and was built with European participation from Finland, France, Germany, Italy, Portugal, Spain and Switzerland, together with China, Korea, Mexico, Russia, Taiwan and the US. CERN has provided critical support to AMS, with CERN engineers engaged in all phases of the construction. Of particular importance was the extensive calibration of the AMS detector with different particle test beams at various energies, which provided key reference points for verifying the detector’s operation in space.

AMS will continue to collect data at higher energies and with high precision during the lifetime of the ISS, at least until 2024. To date, AMS is the only long-duration precision magnetic spectrometer in space and, given the challenges involved in such a mission, it is likely that it will remain so for the foreseeable future.

What is AMS telling us?

In the first half of the 20th century, many of the most important discoveries of new particles were made by cosmic-ray experiments. Examples include antimatter, the muon, pion, kaon and other hadrons, which opened up the field of high-energy physics and set in motion our modern understanding of elementary particles. This came about because cosmic-ray interactions with nuclei in the upper atmosphere are among the highest-energy events known, surpassing anything that could be produced in laboratories at the time – and even in collisions at the LHC today.

However, around the middle of the century the balance of power in particle physics shifted to accelerator experiments. By generating high-energy interactions in the laboratory under controlled conditions, accelerators offered new possibilities for precise measurements and thus for the study of rare particles and phenomena. These experiments helped to flesh out the quark model and reveal the fundamental force-carrying bosons, leading to the establishment of the Standard Model (SM) – whose success was crowned by the discovery of the Higgs boson at the LHC in 2012.

Today, thanks to its unique position on the International Space Station, the AMS experiment combines the best of both worlds as a highly sensitive particle detector that is free from the complicated environment of the atmosphere (see “Cosmic rays continue to confound“). Collecting data since 2011, AMS has initiated a new epoch of precision cosmic-ray experiments that help to address basic puzzles in particle physics such as the nature of dark matter. The experiment’s latest round of data continues to throw up surprises. Arriving at the correct interpretation of events caused by particles produced far away in the universe, however, still presents challenges for physicists trying to understand dark matter and the cosmological asymmetry between matter and antimatter.

Best of both worlds

The emphasis in particle physics now is on the search for physics beyond the SM, for which many motivations come from astrophysics and cosmology. Examples include dark matter, which contributes many times more to the overall density of matter in the universe than does the conventional matter described by the SM, and the origin of matter itself. Many physicists think that dark matter may be composed of particles that could be detected at the LHC, or might reveal themselves in astrophysical experiments such as AMS. As for the origin of matter, the big question has been whether it is due to an intrinsic difference between the properties of matter and antimatter particles, or whether the dominance of matter over antimatter in the universe around us is merely a local phenomenon. Although it is unlikely that there exist other regions of the observable universe where antimatter dominates, there is limited direct experimental evidence against it.

The AMS approach to cosmic-ray physics is based on decades of experience in high-statistics, high-precision accelerator experiments. It has a strong focus on measurements of antiparticle spectra that allows it to search indirectly for possible dark-matter particles, which would produce antiparticles if they annihilated with each other, as well as for possible harbingers of astrophysical concentrations of antimatter. In parallel, AMS is able to make measurements of the energy spectra of many different nuclear species, posing challenges for models of the origin of cosmic rays – a mystery that has stood ever since their discovery in 1912.

Unconventional physics?

The latest AMS results on the cosmic-ray electron and positron fluxes provide very accurate measurements of the very different spectra of these particles. Numerous previous experiments had observed an increase in the positron-to-electron ratio with increasing energy, although with considerable scatter. AMS has now confirmed this trend with greater precision, but its data also indicate that the positron-to-electron ratio may decrease again at energies above about 300 GeV. The differences between the electron and positron fluxes mean that different mechanisms must dominate their production. The natural question is whether some exotic mechanism is contributing to positron production.

One possibility is the annihilation of dark-matter particles, but a more conventional possibility is production by electromagnetic processes around one or more nearby pulsars. In both cases, one might expect the positron spectrum to turn down at higher energies, being constrained by either the mass of the dark-matter particle or by the strength of the acceleration mechanism around the pulsar(s). In the latter case, one would also expect the positron flux to be non-isotropic, but no significant effect has been seen so far. It will be interesting to see whether the high-energy decrease in the positron-to-electron ratio is confirmed by future AMS data, and whether this can be used to discriminate between exotic and conventional models for positron production.

A more sensitive probe of unconventional physics could be provided by the AMS measurement of the spectrum of antiprotons. These cannot be produced in the electromagnetic processes around pulsars, but would be produced as “secondaries” in collisions between primary-matter cosmic rays and ordinary-matter particles. It is striking, for instance, that the antiproton-to-proton ratio measured by AMS is almost constant at energies above about 10 GeV. The ratio is significantly higher than some earlier calculations of secondary antiproton production predicted, although recent calculations (which account more completely for the theoretical uncertainties) indicate that the expected ratio may be somewhat higher – possibly even consistent with the AMS measurements. As with positron production, extending the measurements to higher energies will be crucial for distinguishing between exotic and conventional mechanisms for antiproton production.

AMS has also released interesting data concerning the fluxes of protons, helium and lithium nuclei. Intriguingly, all three spectra show strong indications of breaks in the spectra at rigidities of around 200 GV. The higher-energy portions of the spectra lie significantly above simple power-law extrapolations of the lower-energy data. It seems that some additional acceleration mechanism might be playing a role at higher energies, providing food for thought for astrophysical models of cosmic-ray acceleration. In particular, the unexpected shape of the spectrum of primary protons in the cosmic rays may also need to be taken into account when calculating the secondary antiproton spectrum.

The AMS data on the boron-to-carbon ratio also provide interesting information for models of the propagation of cosmic rays. In the most general picture, cosmic rays can be considered as a relativistic gas diffusing through a magnetised plasma. This leads to a boron-to-carbon ratio that decreases as a power, Δ, of the rigidity, with different models yielding values of Δ between –1/2 and –1/3. The latest AMS data constrain this power law with very high precision: Δ = –0.333±0.015, in excellent agreement with the simplest Kolmogorov model of diffusion.
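The role of the index Δ can be illustrated with a short fit. The sketch below uses synthetic data (not AMS data) generated with the Kolmogorov value Δ = –1/3 and an arbitrary normalisation, and shows how the index is recovered as the slope of a straight-line fit in log–log space:

```python
import numpy as np

# Synthetic boron-to-carbon data following a pure power law in rigidity,
# B/C ∝ R^Δ with Δ = -1/3 (the Kolmogorov value). The normalisation 0.3
# and the rigidity range are illustrative choices, not AMS measurements.
delta_true = -1.0 / 3.0
rigidity = np.logspace(1, 3, 50)          # 10 GV to 1000 GV
bc_ratio = 0.3 * rigidity**delta_true

# A power law is a straight line in log-log space, so a linear fit to
# log(B/C) versus log(R) recovers the spectral index Δ as the slope.
slope, intercept = np.polyfit(np.log(rigidity), np.log(bc_ratio), 1)
print(f"fitted Δ = {slope:.3f}")          # → fitted Δ = -0.333
```

In a real analysis the fit would of course include the measurement uncertainties, which is what allows AMS to quote the ±0.015 error on Δ.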

The AMS collaboration has already collected data on the production of many heavier nuclei, and it would be interesting if the team could extract information about unstable nuclear isotopes that might have been produced by a recent nearby supernova explosion. Such events might already have had an effect on Earth: analyses of deep-ocean sediments have recently confirmed previous reports of a layer of iron-60 that was presumably deposited by a supernova explosion within about 100 parsecs about 2.5 million years ago, and there is evidence of iron-60 also in lunar rock samples and cosmic rays. Other unstable isotopes of potential interest include beryllium-10, aluminium-26, chlorine-36, manganese-53 and nickel-59.

Promising prospects

What else may we expect from AMS in the future? The prospective gains from measuring the spectra of positrons and antiprotons to higher energies have already been mentioned. Since these antiparticles can also be produced by other processes, such as pulsars and primary-matter cosmic rays, they may not provide smoking guns for antimatter production via dark-matter annihilation, or for concentrations of antimatter in the universe. However, searches for antinuclei in cosmic rays present interesting prospects in either or both of these directions. The production of antideuterons in dark-matter annihilations may be visible above the background of secondary production by primary-matter cosmic rays, for example. On the other hand, the production of heavier antinuclei in both dark-matter annihilations and cosmic-ray collisions is expected to be very small. The search for such antinuclei has always been one of the main scientific objectives of AMS, and the community looks forward to whatever data the experiment may acquire on their possible (non-)appearance.

As this brief survey has indicated, AMS has already provided much information of great interest for particle physicists studying scenarios for dark matter, for astrophysicists and for the cosmic-ray community. Moreover, there are good prospects for further qualitative advances in future years of data-taking. The success of AMS is another example of the fruitful marriage of particle physics and astrophysics, in this case via the deployment in space of a state-of-the-art particle spectrometer. We look forward to seeing the future progeny of this happy marriage.

A record year for the LHC

LHC proton running for 2016 reached a conclusion on 26 October, after seven months of colliding protons at an energy of 13 TeV. The tally for the year is truly impressive. I could mention the fact that the machine’s design luminosity of 10³⁴ cm⁻² s⁻¹ was regularly achieved and exceeded by 30 to 40%. Or I could say that with an integrated luminosity of 40 fb⁻¹ delivered in 2016, we comfortably exceeded our year target of 25 fb⁻¹ – allowing the LHC experiments to accumulate sizable data samples in time for the biennial ICHEP conference in August.
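For readers who want to relate the peak and integrated figures, a rough back-of-the-envelope conversion is sketched below. This uses the round numbers quoted above; the ~200 days of proton running is an assumed figure, and none of this is official CERN accounting.

```python
# Rough luminosity bookkeeping with the 2016 figures quoted in the text.
# 1 fb^-1 corresponds to 1e39 cm^-2, so a cm^-2 value times 1e-39 is in fb^-1.
design_lumi = 1.0e34                       # design peak [cm^-2 s^-1]

# One ideal day of stable beams at design luminosity:
one_day_fb = design_lumi * 86400 * 1e-39   # → 0.864 fb^-1 per day
print(f"{one_day_fb:.3f} fb^-1 per ideal day")

# 40 fb^-1 delivered over an assumed ~200 days of proton running at 60%
# stable-beam availability implies an average luminosity of roughly:
avg_lumi = 40 * 1e39 / (200 * 86400 * 0.60)
print(f"average luminosity ~ {avg_lumi:.1e} cm^-2 s^-1")
```

The average comes out a few times below the peak, as expected: luminosity decays during each fill as the beams burn off.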

But what impresses me the most, and what really sets a marker for the future, is the availability of the machine. For 60% of its 2016 operational time, the LHC was running with stable beams delivering high-quality data to the experiments. This is unprecedented. Typical availability figures for big energy-frontier machines are around 50%, and that is the target we set ourselves for the LHC this year. Given the scale and complexity of the LHC, even that seemed ambitious. To put it in perspective, CERN’s previous and much simpler flagship facility, the Large Electron Positron (LEP) collider, achieved a figure of 30% over its operational lifetime from 1989 to 2000.

After the LHC hit its design luminosity on 26 June, the peak luminosity was further increased by using smaller beams from the injectors and reducing the angle at which the beams cross inside the ATLAS and CMS experiments. The resulting luminosity topped out at around 1.4 × 10³⁴ cm⁻² s⁻¹, 40% above design. This year’s proton operation also included successful forward-physics runs for the TOTEM/CT-PPS, ALFA and AFP experiments.

The LHC is no ordinary machine. The world’s largest, most complex and highest-energy collider is also the world’s largest cryogenic facility. The difficulties we had when commissioning the machine in 2008 are well documented, and there is more to do: we are still not running at the design energy of 14 TeV, for example. But this does not detract from the fact that the 2016 run has shown what a fundamentally good design the LHC is, what a fantastic team it has running it, and that clearly it is possible to run a large-scale cryogenic facility with metronomic reliability.

This augurs well for the future of the LHC and its high-luminosity upgrade, the HL-LHC, which will take us well into the 2030s. But it is not only a good sign for particle physics. Other upcoming cryogenic facilities such as the ITER fusion experiment under construction in France can also take heart from the LHC’s performance, and who knows where else this technology might take us? If it is possible to run a 27 km-circumference superconducting particle accelerator with high reliability, then a superconducting electrical-power-distribution network, for example, does not seem so unrealistic. With developments in high-temperature superconductors proceeding apace, that possibility looks tantalisingly close.

With the way that the LHC has performed this year, it would be easy to be complacent, but the 2016 run has not been without difficulties. From the unfortunate beech marten that developed a short-lived taste for the connections of an outdoor high-voltage transformer in May to rather more challenging technical issues, the LHC team has had numerous problems to solve, and the upcoming end-of-year technical stop will be a busy one. With a machine as complex as the LHC, its entire operational lifetime is a learning curve for accelerator physicists.

Which brings me back to the question of the LHC’s design energy. With proton running finished for another year, the LHC has now moved into a period of heavy-ion physics. When that is over, we will conclude the year with two weeks dedicated to re-training the magnets in two of the machine’s eight sectors, with a view to 14 TeV running. News from this work will provide valuable input to the LHC performance workshop in January, which will set the scene for the coming years at the energy frontier.

Colour: How We See It and How We Use It

By Michael Mark Woolfson

World Scientific


In this book, the author discusses the scientific nature of light and colours, how we see them and how we use them in a variety of applications. Colours are the way that our vision system and – ultimately – our brain translate the different wavelengths of a part of the light spectrum. Other living things are sensitive in different ways to light and not all of them can see colours.

After presenting the science behind colours and our vision, the book discusses the use that mankind has made of colours. Ever since the time that humans lived in caves, we have used pigments to make graffiti on walls, which evolved into paintings and, lately, graphic art. Here, as is the case when designing decorations and dyes for clothing, the colours are not natural but man-made.

In the chapters that follow, the author reviews three technologies integrated into our everyday life – photography, cinematography and television – each of which emerged in black-and-white and evolved into colour. The final part of the book is dedicated to describing various forms of light displays, mostly used for entertainment purposes, and to the application of colours as a code in many contexts – including road safety, hospital emergencies and industry.

Readers attracted by this mixture of science, art and culture will find the book easily readable.

Learning Scientific Programming With Python

By Christian Hill

Cambridge University Press


Science cannot be accomplished nowadays without the help of computers to produce, analyse, treat and visualise large experimental data sets. Scientists are increasingly called upon to write their own programs in a language such as Python, which in recent times has become very popular among researchers in different scientific domains. It is a high-level language that is relatively easy to learn, rich in functionality and fairly compact. It includes many additional modules, in particular scientific and visualisation tools covering a vast area of numerical computation, which make it very handy for scientists and engineers.

In this book, the author covers basic programming concepts – such as numbers, variables, strings, lists, basic data structures, control flow and functions. The book also deals with advanced concepts and idioms of the Python language and of the tools that are presented, enabling readers to quickly gain proficiency. The most advanced topics and functionalities are clearly marked, so they can be skipped on a first reading.

While discussing Python structures, the author explains the differences with respect to other languages, in particular C, which can be useful for readers migrating from these languages to Python. The book focuses on version 3 of Python but, where needed, highlights the differences from version 2, which is still widely used in the scientific community.
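A few of the version differences in question fit in a couple of lines each. The snippet below picks some common examples; these are illustrative choices, not necessarily the ones the book highlights:

```python
# Some Python 3 behaviours that differ from Python 2.

# print is a function in Python 3, not a statement:
print("hello")

# The / operator always performs true division in Python 3;
# use // for the floor division that Python 2's / applied to ints:
assert 7 / 2 == 3.5
assert 7 // 2 == 3

# range() returns a lazy sequence object in Python 3 rather than a list:
r = range(5)
assert list(r) == [0, 1, 2, 3, 4]
```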

Once the basic concepts of the language are in place, the book moves on to the NumPy, SciPy and Matplotlib libraries for numerical programming and data visualisation. These modules are open source, commonly used by scientists and easy to obtain and install. The functionality of each is well introduced with lots of examples, which is a clear advantage over the terse reference documentation of the modules available on the web. NumPy is the de facto standard for general scientific programming and deals very efficiently with data structures such as multidimensional arrays, while the SciPy library complements NumPy with more specific functionality for scientific computing, including the evaluation of special functions frequently used in science and engineering, minimisation, integration, interpolation and equation solving.

Essential for any scientific work is the plotting of the data. This is achieved with the Matplotlib module, which is probably the most popular one that exists for Python. Many kinds of graphics are nicely introduced in the book, starting from the most basic ones, such as 1D plots, to fairly complex 3D and contour plots. The book also discusses the use of IPython notebooks to build rich-media documents, interleaving text and formulas with code and images into shareable documents for scientific analysis.
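A minimal example in the spirit of those described: a 1D log–log plot rendered off-screen and saved to a file. The power-law data and the file name are illustrative, not drawn from the book.

```python
import matplotlib
matplotlib.use("Agg")               # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

energy = np.logspace(0, 3, 100)     # 1 to 1000 (arbitrary units)
flux = energy**-2.7                 # illustrative spectral index

fig, ax = plt.subplots()
ax.loglog(energy, flux)
ax.set_xlabel("Energy (arb. units)")
ax.set_ylabel("Flux (arb. units)")
ax.set_title("Power-law spectrum")
fig.savefig("spectrum.png")         # writes the figure to disk
```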

The book has many relevant examples, drawn from both science and engineering, whose development is traced step by step. Each chapter concludes with a series of well-selected exercises, the complete step-by-step solutions of which are given at the end of the volume. In addition, a nice collection of problems without solutions is included in each section.

The book is a very complete reference for the major features of the Python language and the most common scientific libraries. It is written in a clear, precise and didactic style that will appeal to those who, even if already familiar with the Python programming language, would like to develop their proficiency in numerical and scientific programming with the standard tools of the Python system.

Reviews of Accelerator Science and Technology: Volume 7

By Alexander W Chao and Weiren Chou (eds)

World Scientific

Also available at the CERN bookshop


Volume 7 of Reviews of Accelerator Science and Technology is dedicated to colliders and provides an in-depth panorama of the different technologies developed since the construction in the 1960s of the first three: AdA in Italy, CBX in the US, and VEP-1 in the then Soviet Union.

Colliders have been crucial for proving the validity of the Standard Model, and they still define the energy frontier in particle physics because at present no machine can surpass the current LHC limit of 13 TeV in the centre of mass.

The book opens with an article by Burton Richter, a pioneer of high-energy colliders, who shares his viewpoint about their future. This is followed by contributions from leading experts worldwide, who discuss the characteristics, advantages and limits of machines that collide different types of particles. Proton–proton and proton–antiproton colliders are reviewed by Walter Scandale, electron–positron circular colliders by Katsunobu Oide, ion colliders by Wolfram Fischer and John M Jowett, and electron–proton and electron–ion colliders by Ilan Ben-Zvi and Vadim Ptitsyn. Akira Yamamoto and Kaoru Yokoya then discuss linear colliders, Robert B Palmer muon colliders, and Jeffrey Gronberg photon colliders.

A section of the book is dedicated to the accelerator physics that forms the basis of the design of these machines. In particular, Frank Zimmermann provides a general overview of collider-beam physics, while Eugene Levichev goes into more detail, discussing the technologies for circular colliders.

The volume concludes with an article by Kwang-Je Kim, Robert J Budnitz and Herman Winick on the life of Andy Sessler, an accelerator physicist considered by his colleagues as an inspiring figure.

Comprehensive and containing contributions by high-profile experts, this book will be a good resource for students, physicists and engineers wishing to learn about colliders and accelerator physics.

Relativistic Quantum Mechanics: An Introduction to Relativistic Quantum Fields

By Luciano Maiani and Omar Benhar

CRC Press

Quantum field theory (QFT) is the mathematical framework that forms the basis of our current understanding of the fundamental laws of nature. Its present formulation is the achievement of almost a century of theoretical efforts, first initiated by the necessity of reconciling quantum mechanics with special relativity. Its success is exemplified by the Standard Model, a specific QFT that spectacularly accounts for all of the observations performed so far in particle-physics experiments over many orders of magnitude in energy. Learning and mastering QFT is therefore essential for anyone who wants to understand how nature works on the smallest scales.

This book gives a concise and self-contained introduction to the basic concepts of QFT. As mentioned in the preface, it is mainly addressed to students with different interests who are approaching the subject for the first time, and is based on a series of lecture courses taught by the authors over the course of a decade at the University of Rome La Sapienza. Topics are selected and presented following their historical development, and constant reference is made to the experiments that marked key advances, and sometimes breakthroughs, on the theoretical front. Some important subjects are not included, but these can be taken up later for more in-depth study.

The book is conceived as the first of a series comprising two other texts on the more advanced topics of gauge theories and electroweak interactions (in collaboration with the late Nicola Cabibbo). The authors do not indulge in technical discussions of more formal aspects but try to derive the main physics results with the minimum of mathematical machinery. Although some concepts, such as the scattering matrix and its definition through asymptotic states, would have benefitted from a more systematic discussion, the goal of giving an essential introduction to QFT and providing the reader with a solid foundation is achieved overall. The experience of the authors, as both proficient teachers of the subject and leading contributors to it, is crucial to finding a good balance in establishing the QFT framework.

The first part of the book (chapters 1–3) is dedicated to a short review of classical dynamics in the relativistic limit. Starting from the principles of relativity and minimal action, the motion of point-like particles and the evolution of fields are described in their Lagrangian and Hamiltonian formulations. Special emphasis is given to symmetries and conservation laws. Quantisation is introduced in chapter 4 through the example of the scalar field by replacing the Poisson brackets with commutators of operators. Equal-time commutation rules are then used to define creation and destruction operators and the Fock space. Chapter 5 deals with the quantisation of the electromagnetic field. The approach is that of canonical formalism in the Coulomb gauge, but no mention is made of the complication due to the presence of constraints on the fields.

Chapters 6 and 7 are dedicated to the Dirac equation and the quantisation of the Dirac field. Besides introducing the usual machinery of spinors and gamma matrices, they include a detailed analysis of the relativistic hydrogen atom, as well as concise though important discussions of Wigner’s method of induced representations as applied to the Lorentz group, micro-causality and the relation between spin and statistics. The propagation of free fields is analysed in chapter 8, while the three chapters that follow introduce the reader to relativistic perturbation theory. Chapter 12 discusses discrete symmetries (C, P and T) in QFT, gives a proof of the CPT theorem and illustrates its consequences.

The last part of the book is dedicated to applications of the QFT formalism to phenomenology. The authors give a detailed account of QED in chapter 14 by discussing a variety of physical processes; here the reader is introduced to the method of Feynman diagrams through explicit examples, following a pragmatic approach. The following chapter deals with Fermi’s theory of weak interactions, again making use of several explicit examples of physical processes. Finally, chapters 13 and 16 are devoted to the theory and phenomenology of neutrinos. In particular, the last section discusses neutrino oscillations (both in vacuum and through matter) and presents a thorough analysis of current experimental results. There is also a useful set of exercises at the end of each chapter.

Both the pragmatic approach and choice of topics make this book particularly suited for readers who want a concise and self-contained introduction to QFT and its physical consequences. Students will find it a valuable companion in their journey into the subject, and expert practitioners will enjoy the various advanced arguments that are scattered throughout the chapters and not commonly found in other textbooks.

All systems go for the High-Luminosity LHC

On 19 September, the European Investment Bank (EIB) signed a 250 million Swiss-franc (€230 million) credit facility with CERN to finance the High-Luminosity Large Hadron Collider (HL-LHC) project. The finance contract follows recent approval from CERN Council, and will allow CERN to carry out the work necessary for the HL-LHC within a constant CERN budget.

The HL-LHC is expected to produce data from 2026 onwards, with the overall goal of increasing the integrated luminosity recorded by the LHC by a factor of 10. Following approval of the HL-LHC as a priority project in the European Strategy Report for Particle Physics, this major upgrade is now gathering speed together with companion upgrade programmes of the LHC injectors and detectors. Engineers are currently putting the finishing touches to a full working model of an HL-LHC quadrupole, which will eventually be installed in the insertion regions close to the ATLAS and CMS experiments in order to focus the HL-LHC beam. Built in partnership with Fermilab, the magnets are based on an innovative niobium-tin superconductor (Nb3Sn) that can produce higher magnetic fields than the niobium-titanium magnets used in the LHC.

The contract signed between CERN and EIB falls under the InnovFin Large Projects facility, which is part of the new generation of financial instruments developed and supported under the European Union’s Horizon 2020 scheme. It’s the second EIB financing for CERN, following a loan of €300 million in 2002 for the LHC. “This loan under Horizon 2020, the EU’s research-funding programme, will help keep CERN and Europe at the forefront of particle-physics research,” says the European commissioner for research, science and innovation, Carlos Moedas. “It’s an example of how EU funding helps extend frontiers of human knowledge.”

First physics at HIE-ISOLDE begins

In early September, the first physics experiment using radioactive beams from the newly upgraded ISOLDE facility got under way: a study of tin, which is a special element because it has two doubly magic isotopes. ISOLDE is CERN’s long-running nuclear research facility, which for the past 50 years has allowed many different studies of the properties of atomic nuclei. The upgrade means the machine can now reach an energy of 5.5 MeV per nucleon, making ISOLDE the only Isotope Separator On-Line (ISOL) facility in the world capable of investigating heavy and super-heavy radioactive nuclei.

HIE-ISOLDE (High Intensity and Energy ISOLDE) is a major upgrade of the ISOLDE facility that will increase the energy, intensity and quality of the beams delivered to scientific users. “Our success is the result of eight years of development and manufacturing,” explains HIE-ISOLDE project-leader Yacine Kadi. “The community around ISOLDE has grown a lot recently, as more scientists are attracted by the possibilities that new higher energies bring. It’s an energy domain that’s not explored much, since no other facility in the world can deliver pure beams at these energies.”

The first run of the facility took place in October last year, but because the machine only had one cryomodule, it operated at an energy of 4.3 MeV per nucleon. Now, with the second cryomodule in place, the machine is capable of reaching up to 5.5 MeV per nucleon and therefore can investigate the structure of heavier isotopes. The rest of 2016 will be a busy time for HIE-ISOLDE, with scheduled experiments studying nuclei over a wide range of mass numbers – from 9Li to 142Xe. When two additional cryomodules are installed in 2017 and 2018, the facility will operate at 10 MeV per nucleon and be capable of investigating nuclei of all masses.

HIE-ISOLDE will run until mid-November, and all but one of the seven different experiments planned during this time will use the Miniball detection station.
