Testing times for space–time symmetry

Throughout history, our notion of space and time has undergone a number of dramatic transformations, thanks to figures ranging from Aristotle, Leibniz and Newton to Gauss, Poincaré and Einstein. In our present understanding of nature, space and time form a single 4D entity called space–time. This entity plays a key role for the entire field of physics: either as a passive spectator by providing the arena in which physical processes take place or, in the case of gravity as understood by Einstein’s general relativity, as an active participant.

Since the birth of special relativity in 1905 and the CPT theorem of Bell, Lüders and Pauli in the 1950s, we have come to appreciate both Lorentz and CPT symmetry as cornerstones of the underlying structure of space–time. The former states that physical laws are unchanged when transforming between two inertial frames, while the latter is the symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity inversion (P) and time reversal (T). These closely entwined symmetries guarantee that space–time provides a level playing field for all physical systems independent of their spatial orientation and velocity, or whether they are composed of matter or antimatter. Both have stood the test of time, but in the last quarter century these cornerstones have come under renewed scrutiny as to whether they are indeed exact symmetries of nature. Were physicists to find violations, it would lead to profound revisions in our understanding of space and time and force us to correct both general relativity and the Standard Model of particle physics.

Accessing the Planck scale

Several considerations have spurred significant enthusiasm for testing Lorentz and CPT invariance in recent years. One is the observed bias of nature towards matter – an imbalance that is difficult, although perhaps possible, to explain using standard physics. Another stems from the synthesis of two of the most successful physics concepts in history: unification and symmetry breaking. Many theoretical attempts to combine quantum theory with gravity into a theory of quantum gravity allow for tiny departures from Lorentz and CPT invariance. Surprisingly, even deviations that are suppressed by 20 orders of magnitude or more are experimentally accessible with present technology. Few, if any, other experimental approaches to finding new physics can provide such direct access to the Planck scale.

Unfortunately, current models of quantum gravity cannot accurately pinpoint experimental signatures for Lorentz and CPT violation. An essential milestone has therefore been the development of a general theoretical framework that incorporates Lorentz and CPT violation into both the Standard Model and general relativity: the Standard Model Extension (SME), as formulated by Alan Kostelecký of Indiana University in the US and coworkers beginning in the early 1990s. Due to its generality and independence of the underlying models, the SME achieves the ambitious goal of allowing the identification, analysis and interpretation of all feasible Lorentz and CPT tests (see panel below). Any putative quantum-gravity remnants associated with Lorentz breakdown enter the SME as a multitude of preferred directions criss-crossing space–time. As a result, the playing field for physical systems is no longer level: effects may depend slightly on spatial orientation, uniform velocity, or whether matter or antimatter is involved. These preferred directions are the coefficients of the SME framework; they parametrise the type and extent of Lorentz and CPT violation, offering specific experiments the opportunity to try to glimpse them.

The Standard Model Extension

At the core of attempts to detect violations in space–time symmetry is the Standard Model Extension (SME) – an effective field theory that contains not just the SM but also general relativity and all possible operators that break Lorentz symmetry. It can be expressed as a Lagrangian in which each Lorentz-violating term has a coefficient that leads to a testable prediction of the theory.
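
To make this concrete, here is a minimal sketch of two representative fermion-sector terms of the minimal SME, following the notation standard in the SME literature (this is illustrative and schematic – Hermitian symmetrisation and the many other terms are omitted):

```latex
% Representative Lorentz-violating terms for a single fermion field \psi:
\mathcal{L} \supset
  \bar{\psi}\,( i\gamma^{\mu}\partial_{\mu} - m )\,\psi   % conventional Dirac part
  \;-\; a_{\mu}\,\bar{\psi}\gamma^{\mu}\psi               % CPT-odd coefficient a_\mu
  \;+\; \tfrac{i}{2}\, c^{\mu\nu}\,\bar{\psi}\gamma_{\mu}\partial_{\nu}\psi % CPT-even c^{\mu\nu}
```

The fixed background coefficients aμ and cμν single out preferred directions in space–time; experiments constrain them by searching for the orientation- and velocity-dependent effects they induce.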

Lorentz and CPT research is unique in the exceptionally wide range of experiments it offers. The SME makes predictions for symmetry-violating effects in systems involving neutrinos, gravity, meson oscillations, cosmic rays, atomic spectra, antimatter, Penning traps and collider physics, among others. In the case of free particles, Lorentz and CPT violation lead to a dependence of observables on the direction and magnitude of the particles’ momenta, on their spins, and on whether particles or antiparticles are studied. For a bound system such as an atomic or nuclear state, the energy spectrum depends on its orientation and velocity and may differ from that of the corresponding antimatter system.

The vast spectrum of experiments and latest results in this field were the subject of the triennial CPT conference held at Indiana University in June this year (see panel below), highlights from which form the basis of this article.

The seventh triennial CPT conference

A host of experimental efforts to probe space–time symmetries were the focus of the week-long Seventh Meeting on CPT and Lorentz Symmetry (CPT’16), held at Indiana University, Bloomington, US, on 20–24 June; these efforts are summarised in the main text of this article. With around 120 experts from five continents discussing the most recent developments in the subject, it was the largest meeting so far in this one-of-a-kind triennial conference series. Many of the sessions included presentations involving experiments at CERN, and the discussions covered a number of key results from experiments at the Antiproton Decelerator and future improvements expected from the commissioning of ELENA. The common thread weaving through these talks was the emergence of an exciting era of low-energy, Planck-reach fundamental physics with antimatter.

CERN matters

As host to the world’s only cold-antiproton source for precision antimatter physics (the Antiproton Decelerator, AD) and the highest-energy particle accelerator (the Large Hadron Collider, LHC), CERN is in a unique position to investigate the microscopic structure of space–time. The corresponding breadth of measurements at these extreme ends of the energy regime guarantees complementary experimental approaches to Lorentz and CPT symmetry at a single laboratory. Furthermore, the commissioning of the new ELENA facility at CERN is opening brand new tests of Lorentz and CPT symmetry in the antimatter sector (see panel below).

Cold antiprotons offer powerful tests of CPT symmetry

CPT – the combination of charge conjugation (C), parity inversion (P) and time reversal (T) – represents a discrete symmetry between matter and antimatter. As the standard CPT test framework, the Standard Model Extension (SME) possesses a feature that might perhaps seem curious at first: CPT violation always comes with a breakdown of Lorentz invariance. However, an extraordinary insight gleaned from the celebrated CPT theorem of the 1950s is that Lorentz symmetry already contains CPT invariance under “mild smoothness” assumptions: since CPT is essentially a special Lorentz transformation with a complex-valued velocity, the symmetry holds whenever the equations of physics are smooth enough to allow continuation into the complex plane. Unsurprisingly, then, the loss of CPT invariance requires Lorentz breakdown, an argument made rigorous in 2002. Lorentz violation, on the other hand, does not imply CPT breaking.

That CPT breaking comes with Lorentz violation has the profound experimental implication that CPT tests do not necessarily have to involve both matter and antimatter: hypothetical CPT violation might also be detectable via the concomitant Lorentz breaking in matter alone. But this feature comes at a cost: the corresponding Lorentz tests typically cannot disentangle CPT-even and CPT-odd signals and, worse, they may even be blind to the effect altogether. Antimatter experiments decisively brush aside these concerns, and the availability at CERN of cold antiprotons has thus opened an unparalleled avenue for CPT tests. In fact, all six fundamental-physics experiments that use CERN’s antiprotons have the potential to place independent limits on distinct regions of the SME’s coefficient space. The upcoming Extra Low ENergy Antiproton (ELENA) ring at CERN (see “CERN soups up its antiproton source”) will provide substantially upgraded access to antiprotons for these experiments.

One exciting type of CPT test that will be conducted independently by the ALPHA, ATRAP and ASACUSA experiments is to produce antihydrogen, an atom made up of an antiproton and a positron, and compare its spectrum to that of ordinary hydrogen. While the production of cold antihydrogen has already been achieved by these experiments, present efforts are directed at precision spectroscopy promising clean and competitive constraints on various CPT-breaking SME coefficients for the proton and electron.

At present, the gravitational interaction of antimatter remains virtually untested. The AEgIS and GBAR experiments will tackle this issue by dropping antihydrogen atoms in the Earth’s gravitational field. These experiments differ in their detailed set-up, but both are projected to permit initial measurements of the gravitational acceleration, g, for antihydrogen at the per-cent level. The results will provide limits on SME coefficients for the couplings between antimatter and gravity that are inaccessible with other experiments.

A third fascinating type of CPT test is based on the equality of the physical properties of a particle and its antiparticle, as guaranteed by CPT invariance. The ATRAP and BASE experiments have been pursuing such comparisons between protons and antiprotons confined in cryogenic Penning traps. Impressive results for the charge-to-mass ratios and g factors have already been obtained at CERN and are poised for substantial future improvements. These measurements permit clean bounds on SME coefficients of the proton with record sensitivities.

Regarding the LHC, the latest Lorentz- and CPT-violation physics comes from the LHCb collaboration, which studies particles made up of b quarks. The experiment’s first measurements of SME coefficients in the Bd and Bs systems, published in June this year, have improved existing results by up to two orders of magnitude. LHCb also has competition from other major neutral-meson experiments. These involve studies of the Bs system at the Tevatron’s DØ experiment, recent searches for Lorentz and CPT violation with entangled kaons at KLOE and the upcoming KLOE-2 at DAΦNE in Italy, as well as results on CPT-symmetry tests in Bd mixing and decays from the BaBar experiment at SLAC. The LHC’s general-purpose ATLAS and CMS experiments, meanwhile, hold promise for heavy-quark studies. Data on single-top production at these experiments would allow the world’s first CPT test for the top quark, while the measurement of top–antitop production can sharpen by a factor of 10 the earlier measurements of CPT-even Lorentz violation at DØ.

Other possibilities for accelerator tests of Lorentz and CPT invariance include deep inelastic scattering and polarised electron–electron scattering. The first-ever analysis of the former offers a way to access previously unconstrained SME coefficients in QCD employing data from, for example, the HERA collider at DESY. Polarised electron–electron scattering, on the other hand, allows constraints to be placed on currently unmeasured Lorentz violations in the Z boson, which are also parameterised by the SME and have relevance for SLAC’s E158 data and the proposed MOLLER experiment at JLab. Lorentz-symmetry breaking would also throw the spin precession of muons in a storage ring slightly out of sync, an effect accessible to muon g-2 measurements at J-PARC and Fermilab.

Historically, electromagnetism is perhaps most closely associated with Lorentz tests, and this idea continues to exert a sustained influence on the field. Modern versions of the classical Michelson–Morley experiment have been realised with tabletop resonant cavities as well as with the multi-kilometre LIGO interferometer, with upcoming improvements promising unparalleled measurements of the SME’s photon sector. Another approach for testing Lorentz and CPT symmetry is to study the energy- and direction-dependent dispersion of photons as predicted by the SME. Recent observations by the space-based Fermi Large Area Telescope severely constrain this effect, placing tight limits on 25 individual non-minimal SME coefficients for the photon.

AMO techniques

Experiments in atomic, molecular and optical (AMO) physics are also providing powerful probes of Lorentz and CPT invariance and these are complementary to accelerator-based tests. AMO techniques excel at testing Lorentz-violating effects that do not grow with energy, but they are typically confined to normal-matter particles and cannot directly access the SME coefficients of the Higgs or the top quark. Recently, advances in this field have allowed researchers to carry out interferometry using systems other than light, and an intriguing idea is to use entangled wave functions to create a Michelson–Morley interferometer within a single Yb+ ion. The strongly enhanced SME effects in this system, which arise due to the ion’s particular energy-level structure, could improve existing limits by five orders of magnitude.

Other AMO systems, such as atomic clocks, have long been recognised as a backbone of Lorentz tests. The bright SME prospects arising from the latest trend toward optical clocks, which are several orders of magnitude more precise than traditional varieties based on microwave transitions, are being examined by researchers at NIST and elsewhere. Also, measurements on the more exotic muonium atom at J-PARC and PSI can place limits on the SME’s muon coefficients, a topic of significant interest in light of several current puzzles involving the muon.

From neutrinos to gravity

Unknown neutrino properties, such as their mass, and tensions between various neutrino measurements have stimulated a wealth of recent research, including a number of SME analyses. The breakdown of Lorentz and CPT symmetry would cause ordinary neutrino–neutrino and antineutrino–antineutrino oscillations to exhibit unusual direction, energy and flavour dependence, and would also induce unconventional neutrino–antineutrino mixing and kinematic effects – the latter leading to modified velocities and dispersion, as measured in time-of-flight experiments. Existing and planned neutrino experiments offer a wealth of opportunities to examine such effects. For example: upcoming results from the Daya Bay experiment should yield improved limits on Lorentz violation from antineutrino–antineutrino mixing; EXO has obtained the first direct experimental bound on a difficult-to-access “counter-shaded” coefficient extracted from the electron spectrum of double beta decay; T2K has announced new constraints on the a and c coefficients, tightened by a factor of two using muon neutrinos; and IceCube promises extreme sensitivities to “non-minimal” effects with kinematical studies of astrophysical neutrinos, such as Cherenkov effects of various kinds.

The feebleness of gravity makes the corresponding Lorentz and CPT tests in this SME sector particularly challenging. This has led researchers from HUST in China and from Indiana University to use an ingenious tabletop experiment to seek Lorentz breaking in the short-range behaviour of the gravitational force. The idea is to bring gravitationally interacting test masses to within submillimetre ranges of one another and observe their mechanical resonance behaviour, which is sensitive to deviations from Lorentz symmetry in the gravitational field. Other groups are carrying out related cutting-edge measurements of SME gravity coefficients with laser ranging of the Moon and other solar-system objects, while analysis of the gravitational-wave data recently obtained by LIGO has already yielded many first constraints on SME coefficients in the gravity sector, with the promise of more to come.

After a quarter century of experimental and theoretical work, the modern approach to Lorentz and CPT tests remains as active as ever. As the theoretical understanding of Lorentz and CPT violation continues to evolve at a rapid pace, it is remarkable that experimental studies continue to follow closely behind and now stretch across most subfields of physics. The range of physical systems involved is truly stunning, and the growing number of different efforts displays the liveliness and exciting prospects for a research field that could help to unlock the deepest mysteries of the universe.

Cosmic rays continue to confound

The International Space Station (ISS) is the largest and most complex engineering project ever built in space. It has also provided a unique platform from which to conduct the physics mission of the Alpha Magnetic Spectrometer (AMS). Over the past five years on board the ISS, AMS has orbited the Earth every 93 minutes at an altitude of 400 km and recorded 85 billion cosmic-ray events with energies reaching the multi-TeV range. AMS has been collecting its unprecedented data set and beaming it down to CERN since 2011, and is expected to continue to do so for the lifetime of the ISS.

AMS is a unique experiment in particle physics. The idea for a space-based detector developed after the cancellation of the Superconducting Super Collider in the US in 1993. The following year, an international group of physicists who had worked together for many years at CERN’s LEP collider had a discussion with Roald Sagdeev, former director of the Soviet Institute of Space Research, about the possibility of performing a precision particle-physics experiment in space. Sagdeev arranged for the team to meet with Daniel Goldin, the administrator of NASA, and in May 1994 the AMS collaboration presented the science case for AMS at NASA’s headquarters. Goldin advised the group that use of the ISS as a platform required strong scientific endorsement from the US Department of Energy (DOE) and, after the completion of a detailed technical review of AMS science, the DOE and NASA formalised responsibilities for AMS deployment on the ISS on 20 September 1995.

A 10-day precursor flight of AMS (AMS-01) was carried out in June 1998, demonstrating for the first time the viability of using a precision, large-acceptance magnetic spectrometer in space for a multi-year mission. The construction of AMS-02 for the ISS started immediately afterwards in collaborating institutes around the world. With the loss of the shuttle Columbia in 2003 and the resulting redirection of space policy, AMS was removed from the space-shuttle programme in October 2005. However, the importance of performing fundamental science on the ISS was widely recognised and supported by the NASA Space Station management under the leadership of William Gerstenmaier. In 2008, the US Congress unanimously agreed that AMS be reinstated, mandating an additional flight for the shuttle Endeavour with AMS as its prime payload. Shortly after installation on the ISS in May 2011, AMS was powered on and began collecting and transmitting data (CERN Courier July/August 2011 p18).

The first five years

Much has been learnt in the first five years of AMS about operating a particle-physics detector in space, especially the challenges presented by the ever-changing thermal environment and the need to monitor the detector elements and electronics 24 hours per day, 365 days per year. Communications with NASA’s ISS Mission Control Centers are also essential to ensure that ISS operations – such as sudden, unscheduled power cuts and attitude changes – do not disrupt the operations of AMS or imperil the detector.

Of course, it is the data recorded by AMS from events in the distant universe that are the richest scientifically. AMS is able to detect elementary particles – electrons, positrons, protons and antiprotons – as well as nuclei of helium, lithium and heavier elements up to indium. The large acceptance and multiple redundant measurements allow AMS to analyse the data to an accuracy of approximately 1%. Combined with its atmosphere-free window on the cosmos, its long-duration exposure time and its extensive calibration at the CERN test beam, this allows AMS to greatly improve the accuracy of previous charged cosmic-ray observations. This is opening up new avenues through which to investigate the nature of dark matter, the existence of heavy antimatter and the true properties of primordial cosmic rays.

The importance of precision studies of positrons and antiprotons as a means to search for the origin of dark matter was first pointed out by theorist John Ellis and, independently, by Michael Turner and Frank Wilczek. They noted that annihilations of the leading dark-matter candidate, the neutralino, would convert its mass into ordinary particles such as positrons and antiprotons. Crucially, the resulting excess of positrons and antiprotons in cosmic rays can be measured. The characteristic signature of dark-matter annihilations is a sharp drop-off of these positron and antiproton excesses at high energies, due to the finite mass of the colliding neutralinos. In addition, since dark matter is ubiquitous, the excesses of the fluxes should be isotropic.

Early low-energy measurements by balloons and satellites indicated that both the positron fraction (that is, the ratio of the positron flux to the flux of electrons and positrons) and the antiproton-to-proton fluxes are larger than predicted by models based on the collisions of cosmic rays. The superior precision of AMS over previous experiments is now allowing researchers to investigate such features, in particular the drop-off in the positron and antiproton excesses, in unprecedented detail.
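
In terms of the measured fluxes Φ, the positron fraction quoted throughout this article is simply:

```latex
\text{positron fraction} \;=\; \frac{\Phi_{e^{+}}}{\Phi_{e^{+}} + \Phi_{e^{-}}}
```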

The first major result from AMS came in 2013 and concerned the positron fraction (CERN Courier October 2013 p22). This highly accurate result showed that, up to a positron energy of 350 GeV, the positron fraction fits well to dark-matter models. This result generated widespread interest in the community and motivated many new interpretations of the positron-fraction excess, for instance whether it is due to astrophysical sources or propagation effects. In 2014, AMS published the positron and electron fluxes, which showed that their behaviours are quite different from each other and that neither can be fitted with the single-power-law assumption underpinning the traditional understanding of cosmic rays.

A deepening mystery

The latest AMS results are based on 17.6 million electrons and positrons and 350,000 antiprotons. In line with previous AMS measurements, the positron flux exhibits a distinct difference from the electron flux, both in its magnitude and energy dependence (figure 1). The positrons show a unique feature: a tendency to drop off sharply at energies above 300 GeV, as expected from dark-matter collisions or new astrophysical phenomena. The positron fraction decreases with energy and reaches a minimum at 8 GeV. It then increases with energy, rapidly exceeding the predictions from cosmic-ray collisions, reaching a maximum at 265 GeV and then beginning to fall off. Whereas neither the electron flux nor the positron flux can be described by a single power law, surprisingly the sum of the electron and positron fluxes can be described very accurately by a single power law above an energy of 30 GeV.

Since astrophysical sources of cosmic-ray positrons and electrons may induce some degree of anisotropy in their arrival directions, it is also important to measure the anisotropy of cosmic-ray events recorded by AMS. Using the latest data set, a systematic search for anisotropies has been carried out on the electron and positron samples in the energy range 16–350 GeV. The dipole-anisotropy amplitudes, measured with 82,000 positrons and 1.1 million electrons, are 0.014 and 0.003 respectively, consistent with the expectations from isotropy.

The latest AMS results on the fluxes and flux ratio of electrons and positrons exhibit unique and previously unobserved features. These include the energy dependence of the positron fraction, the existence of a maximum at 265 GeV in the positron fraction, the exact behaviour of the electron and positron fluxes and, in particular, the sharp drop-off of the positron flux. These features require accurate theoretical interpretation as to their origin, be it from dark-matter collisions or new astrophysical sources.

Concerning the measured antiproton-to-proton flux ratio (figure 2), the new data show that this ratio is independent of rigidity (defined as the momentum per unit charge) in the rigidity range 60–450 GV. This is contrary to traditional cosmic-ray models, which assume that antiprotons are produced only in the collisions of cosmic rays and therefore that the ratio decreases with rigidity. In addition, due to the large mass of antiprotons, the observed excess of the antiproton-to-proton flux ratio cannot come from pulsars. Indeed, the excess is consistent with some of the latest model predictions based on dark-matter collisions as well as those based on new astrophysical sources. Unexpectedly, the antiproton-to-positron flux ratio is also independent of rigidity in the range 60–450 GV (CERN Courier October 2016 p8). This is considered a major result of the five-year summary of AMS data.

The upshot of these new findings in elementary-particle cosmic rays is that the rigidity dependences of the fluxes of positrons, protons and antiprotons are nearly identical, whereas the electron flux has a distinctly different rigidity dependence. This is unexpected because electrons and positrons lose much more energy in the galactic magnetic fields than do protons and antiprotons.

Nuclei in cosmic rays

Most cosmic rays travelling through the cosmos are protons and heavier nuclei, and AMS collects nuclei simultaneously with elementary particles to enable an accurate understanding of both astrophysical phenomena and cosmic-ray propagation. The latest AMS results shed light on the properties of protons, helium, lithium and heavier nuclei in the periodic table. Protons, helium, carbon and oxygen are traditionally assumed to be primary cosmic rays, which means they are produced directly from a source such as supernova remnants.

Protons and helium are the two most abundant charged cosmic rays. They have been measured repeatedly by many experiments over many decades, and their energy dependence has traditionally been assumed to follow a single power law. In the case of lithium, which is assumed to be produced from the collision of primary cosmic rays with the interstellar medium and therefore yields a single power law but with a different spectral index, experimental data have been very limited.

The latest AMS data reveal, with approximately 1% accuracy, that the proton, helium and lithium fluxes as a function of rigidity all deviate from the traditional single-power-law dependence at a rigidity of about 300 GV (figure 3). It is completely unexpected that all three deviate from a single power law, that all three deviations occur at about the same rigidity and increase at higher rigidities, and that the three spectra can be fitted with double power laws above a rigidity of 45 GV. In addition, it has long been assumed that since both protons and helium are primary cosmic rays with the same energy dependence at high energies, their flux ratio would be independent of rigidity. The AMS data show that above rigidities of 45 GV, the flux ratio decreases with rigidity and follows a single-power-law behaviour. Despite being a secondary cosmic ray, lithium also exhibits the same rigidity behaviour as protons and helium. It is fair to say that, so far, no one has a clue what could be causing these spectacular effects.
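
A sketch of the kind of parametrisation involved: a flux that follows one power law at low rigidity and smoothly hardens to another above a transition rigidity R₀ can be written, in a form of the type used in the AMS publications (with C a normalisation, γ the spectral index below the transition, Δγ the change in index and s a smoothness parameter), as

```latex
\Phi(R) \;=\; C \left( \frac{R}{45\,\mathrm{GV}} \right)^{\gamma}
         \left[\, 1 + \left( \frac{R}{R_{0}} \right)^{\Delta\gamma/s} \right]^{s}
```

which reduces to a single power law Φ ∝ R^γ for R ≪ R₀ and hardens to index γ + Δγ for R ≫ R₀.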

The latest AMS measurement of the boron-to-carbon flux ratio (B/C) also contains surprises (figure 4). Boron is assumed to be produced through the interactions of primary cosmic rays such as carbon and oxygen with the interstellar medium, which means that B/C provides information both on cosmic-ray propagation and on the properties of the interstellar medium. The B/C ratio does not show any significant structures, in contrast to many cosmic-ray propagation models that predict such structures at high rigidities (including a class of propagation models that explain the observed AMS positron fraction). Cosmic-ray propagation is commonly modelled as relativistic gas diffusion through a magnetised plasma, and different models of the magnetised plasma predict different behaviours of B/C as a function of rigidity. At rigidities above 65 GV, the latest AMS data are well fitted by a single power law with spectral index Δ, in agreement with the Kolmogorov model of turbulence, which predicts Δ = –1/3 asymptotically.
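
The link between the fitted index and the turbulence model can be summarised in one line, under the standard assumption that at high rigidity the secondary-to-primary ratio scales inversely with the diffusion coefficient D(R) ∝ R^δ:

```latex
\frac{\mathrm{B}}{\mathrm{C}}(R) \;\propto\; \frac{1}{D(R)} \;\propto\; R^{-\delta},
\qquad \delta_{\mathrm{Kolmogorov}} = \tfrac{1}{3}
\;\;\Rightarrow\;\; \Delta = -\tfrac{1}{3}
```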

Building a spectrometer in space

AMS is a precision, multipurpose TeV spectrometer measuring 5 × 4 × 3 m and weighing 7.5 tonnes. It consists of a transition radiation detector (TRD) to identify electrons and positrons; a permanent magnet together with nine layers of silicon tracker (labelled 1 to 9) to measure momentum up to the multi-TeV range and to identify different species of particles and nuclei via their energy loss; two banks of time-of-flight (TOF) counters to measure the direction and velocity of cosmic rays and identify species by energy loss; veto counters (ACC) surrounding the inner bore of the magnet to reject cosmic rays from the side; a ring-image Cherenkov counter (RICH) to measure the cosmic-ray energy and identify particle species; and an electromagnetic calorimeter (ECAL) to provide 3D measurements of the energy and direction of electrons and positrons, and distinguish them from antiprotons, protons and other nuclei.

Future directions

Much has been learnt from the unexpected physics results from the first five years of AMS. Measuring many different species of charged cosmic rays at the same time with high accuracy provides unique input for the development of a comprehensive theory of cosmic rays, which have puzzled researchers for a century. AMS data are also providing new information that is essential to our understanding of the origin of dark matter, the existence of heavy antimatter, and the properties of charged cosmic rays in the cosmos.

The physics potential of AMS is the reason why the experiment receives continuous support. AMS is a US DOE and NASA-sponsored international collaboration and was built with European participation from Finland, France, Germany, Italy, Portugal, Spain and Switzerland, together with China, Korea, Mexico, Russia, Taiwan and the US. CERN has provided critical support to AMS, with CERN engineers engaged in all phases of the construction. Of particular importance was the extensive calibration of the AMS detector with different particle test beams at various energies, which provided key reference points for verifying the detector’s operation in space.

AMS will continue to collect data at higher energies and with high precision during the lifetime of the ISS, at least until 2024. To date, AMS is the only long-duration precision magnetic spectrometer in space and, given the challenges involved in such a mission, it is likely that it will remain so for the foreseeable future.

What is AMS telling us?

In the first half of the 20th century, many of the most important discoveries of new particles were made by cosmic-ray experiments. Examples include antimatter, the muon, pion, kaon and other hadrons, which opened up the field of high-energy physics and set in motion our modern understanding of elementary particles. This came about because cosmic-ray interactions with nuclei in the upper atmosphere are among the highest-energy events known, surpassing anything that could be produced in laboratories at the time – and even in collisions at the LHC today.

However, around the middle of the century the balance of power in particle physics shifted to accelerator experiments. By generating high-energy interactions in the laboratory under controlled conditions, accelerators offered new possibilities for precise measurements and thus for the study of rare particles and phenomena. These experiments helped to flesh out the quark model and also the fundamental force-carrying bosons, leading to the establishment of the Standard Model (SM) – whose success was crowned by the discovery of the Higgs boson at the LHC in 2012.

Today, thanks to its unique position on the International Space Station, the AMS experiment combines the best of both worlds as a highly sensitive particle detector that is free from the complicated environment of the atmosphere (see “Cosmic rays continue to confound“). Collecting data since 2011, AMS has initiated a new epoch of precision cosmic-ray experiments that help to address basic puzzles in particle physics such as the nature of dark matter. The experiment’s latest round of data continues to throw up surprises. Arriving at the correct interpretation of events due to particles produced far away in the universe, however, still presents challenges for physicists trying to understand dark matter and the cosmological asymmetry between matter and antimatter.

Best of both worlds

The emphasis in particle physics now is on the search for physics beyond the SM, for which many motivations come from astrophysics and cosmology. Examples include dark matter, which contributes many times more to the overall density of matter in the universe than does the conventional matter described by the SM, and the origin of matter itself. Many physicists think that dark matter may be composed of particles that could be detected at the LHC, or might reveal themselves in astrophysical experiments such as AMS. As for the origin of matter, the big question has been whether it is due to an intrinsic difference between the properties of matter and antimatter particles, or whether the dominance of matter over antimatter in the universe around us is merely a local phenomenon. Although it is unlikely that there exist other regions of the observable universe where antimatter dominates, there is limited direct experimental evidence against it.

The AMS approach to cosmic-ray physics is based on decades of experience in high-statistics, high-precision accelerator experiments. It has a strong focus on measurements of antiparticle spectra that allows it to search indirectly for possible dark-matter particles, which would produce antiparticles if they annihilated with each other, as well as for possible harbingers of astrophysical concentrations of antimatter. In parallel, AMS is able to make measurements of the energy spectra of many different nuclear species, posing challenges for models of the origin of cosmic rays – a mystery that has stood ever since their discovery in 1912.

Unconventional physics?

The latest AMS results on the cosmic-ray electron and positron fluxes provide very accurate measurements of the very different spectra of these particles. Numerous previous experiments had discovered an increase in the positron-to-electron ratio at increasing energies, although with considerable scatter. AMS has now confirmed this trend with greater precision, but it also indicates that the positron-to-electron ratio may decrease again at energies above about 300 GeV. The differences between the electron and positron fluxes mean that different mechanisms must be dominating their production. The natural question is whether some exotic mechanism is contributing to positron production.

One possibility is the annihilation of dark-matter particles, but a more conventional possibility is production by electromagnetic processes around one or more nearby pulsars. In both cases, one might expect the positron spectrum to turn down at higher energies, being constrained by either the mass of the dark-matter particle or by the strength of the acceleration mechanism around the pulsar(s). In the latter case, one would also expect the positron flux to be non-isotropic, but no significant effect has been seen so far. It will be interesting to see whether the high-energy decrease in the positron-to-electron ratio is confirmed by future AMS data, and whether this can be used to discriminate between exotic and conventional models for positron production.

A more sensitive probe of unconventional physics could be provided by the AMS measurement of the spectrum of antiprotons. These cannot be produced in the electromagnetic processes around pulsars, but would be produced as “secondaries” in the collisions between primary-matter cosmic rays and ordinary-matter particles. It is striking, for instance, that the antiproton-to-proton ratio measured by AMS is almost constant at energies above about 10 GeV. The ratio is significantly higher than some earlier calculations of secondary antiproton production, although recent calculations (which account more completely for the theoretical uncertainties) indicate that the predicted ratio may be somewhat higher – possibly even consistent with the AMS measurements. As with the case for positron production, extending the measurements to higher energies will be crucial for distinguishing between exotic and conventional mechanisms for antiproton production.

AMS has also released interesting data concerning the fluxes of protons, helium and lithium nuclei. Intriguingly, all three spectra show strong indications of breaks in the spectra at rigidities of around 200 GV. The higher-energy portions of the spectra lie significantly above simple power-law extrapolations of the lower-energy data. It seems that some additional acceleration mechanism might be playing a role at higher energies, providing food for thought for astrophysical models of cosmic-ray acceleration. In particular, the unexpected shape of the spectrum of primary protons in the cosmic rays may also need to be taken into account when calculating the secondary antiproton spectrum.

The AMS data on the boron-to-carbon ratio also provide interesting information for models of the propagation of cosmic rays. In the most general picture, cosmic rays can be considered as a relativistic gas diffusing through a magnetised plasma. This leads to a boron-to-carbon ratio that decreases as a power, Δ, of the rigidity, with different models yielding values of Δ between –1/2 and –1/3. The latest AMS data constrain this power law with very high precision: Δ = –0.333±0.015, in excellent agreement with the simplest Kolmogorov model of diffusion.
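
As a concrete illustration of how such a power-law index is extracted, here is a minimal Python sketch that fits a single power law to secondary-to-primary ratio data. The arrays are placeholder values chosen only to illustrate the procedure – they are not AMS data – and the 3% uncertainties are an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder rigidity (GV) and B/C values -- illustrative only, not AMS data
rigidity = np.array([70., 100., 150., 220., 330., 500., 750., 1100.])
bc_ratio = np.array([0.16, 0.14, 0.125, 0.11, 0.096, 0.084, 0.074, 0.065])
bc_err = 0.03 * bc_ratio  # assumed 3% uncertainties

def power_law(R, C, delta):
    """Single power law: B/C = C * (R / 100 GV)**delta."""
    return C * (R / 100.0) ** delta

popt, pcov = curve_fit(power_law, rigidity, bc_ratio, sigma=bc_err,
                       p0=(0.14, -0.33), absolute_sigma=True)
delta_fit, delta_err = popt[1], np.sqrt(pcov[1, 1])
print(f"Fitted spectral index Delta = {delta_fit:.3f} +/- {delta_err:.3f}")
# Kolmogorov turbulence predicts Delta -> -1/3 asymptotically
```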

The AMS collaboration has already collected data on the production of many heavier nuclei, and it would be interesting if the team could extract information about unstable nuclear isotopes that might have been produced by a recent nearby supernova explosion. Such events might already have had an effect on Earth: analyses of deep-ocean sediments have recently confirmed previous reports of a layer of iron-60 that was presumably deposited by a supernova explosion within about 100 parsecs about 2.5 million years ago, and there is evidence of iron-60 also in lunar rock samples and cosmic rays. Other unstable isotopes of potential interest include beryllium-10, aluminium-26, chlorine-36, manganese-53 and nickel-59.

Promising prospects

What else may we expect from AMS in the future? The prospective gains from measuring the spectra of positrons and antiprotons to higher energies have already been mentioned. Since these antiparticles can also be produced by other processes, such as pulsars and primary-matter cosmic rays, they may not provide smoking guns for antimatter production via dark-matter annihilation, or for concentrations of antimatter in the universe. However, searches for antinuclei in cosmic rays present interesting prospects in either or both of these directions. The production of antideuterons in dark-matter annihilations may be visible above the background of secondary production by primary-matter cosmic rays, for example. On the other hand, the production of heavier antinuclei in both dark-matter annihilations and cosmic-ray collisions is expected to be very small. The search for such antinuclei has always been one of the main scientific objectives of AMS, and the community looks forward to hearing whatever data they may acquire on their possible (non-)appearance.

As this brief survey has indicated, AMS has already provided much information of great interest for particle physicists studying scenarios for dark matter, for astrophysicists and for the cosmic-ray community. Moreover, there are good prospects for further qualitative advances in future years of data-taking. The success of AMS is another example of the fruitful marriage of particle physics and astrophysics, in this case via the deployment in space of a state-of-the-art particle spectrometer. We look forward to seeing the future progeny of this happy marriage.

ATLAS homes in on Higgs-quark couplings

Figure: boosted-decision-tree output.

The Higgs boson has been observed via its decays to photons, tau leptons, and Z and W bosons, which has allowed ATLAS to glean much information about the particle’s properties. So far, these properties agree with the predictions of the Standard Model (SM). However, there are several aspects of the Higgs boson that are still largely unexplored, most notably the coupling of the Higgs boson to quarks. The two heaviest quarks, the bottom and top, are particularly interesting because they have the largest couplings to the Higgs boson. If these couplings differ from the SM predictions, it could provide a first hint of new physics.

Observing the coupling of the Higgs boson to these two quark flavours is challenging, however. Despite the Higgs decaying to a pair of bottom quarks around 58% of the time, this decay has not yet been observed because such decays manifest themselves as jets in the detector and this signature is overwhelmed by SM multijet production. As a result, physicists search for this decay by looking for the production of the Higgs in association with a vector boson (W or Z) or a top-quark pair. The additional particles have a more distinctive decay signature, but this comes at the price of a much lower signal-production rate.

Regarding the top quark, the only way to directly measure the coupling of the Higgs to the top quark at the LHC is to study events where a Higgs is produced in association with a top-quark pair. As with the bottom quark, this process has not yet been observed. Indeed, even with the more distinct decays, the background processes that mimic these signals are large, complex and difficult to model. In both the top and bottom production channels, the backgrounds are controlled by using advanced machine-learning techniques to separate signal events from background (see figure).
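
The separation step can be sketched with standard tools. The following minimal Python example trains a gradient-boosted decision tree on a few hypothetical kinematic variables; the feature choices and the synthetic distributions are illustrative assumptions, not the ATLAS inputs, but the output plays the same role as the discriminant in the figure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10000

# Hypothetical kinematic features (illustrative only): a dijet mass,
# a transverse momentum and an angular separation, per event
signal = np.column_stack([rng.normal(125, 15, n),
                          rng.exponential(80, n) + 50,
                          rng.normal(1.0, 0.5, n)])
background = np.column_stack([rng.normal(90, 40, n),
                              rng.exponential(60, n),
                              rng.normal(2.0, 0.8, n)])

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                 learning_rate=0.1)
bdt.fit(X_train, y_train)

# The BDT output (probability of being signal) is the discriminant:
scores = bdt.predict_proba(X_test)[:, 1]
print(f"mean BDT output: signal {scores[y_test == 1].mean():.2f}, "
      f"background {scores[y_test == 0].mean():.2f}")
```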

Both searches have now been carried out by ATLAS with data from LHC Run 2, revealing a sensitivity to the Higgs boson couplings to top and bottom quarks that is competitive with searches at Run 1. However, they are still not precise enough to identify if there are any deviations from SM behaviour. With further improvements to the analyses, better understanding of the backgrounds and the unprecedented performance of the LHC, we should finally observe both of these processes at a high statistical significance later during Run 2. This will tell us if the Higgs boson is indeed responsible for the masses of the quarks as predicted in the SM, or if there is new physics beyond it.

CMS investigates the width of the top quark

Twenty years after its discovery at the Tevatron collider at Fermilab, interest in studying the top quark at the LHC is higher than ever. This was illustrated by the plethora of new results presented by the CMS collaboration at the ICHEP conference in August and at TOP 2016, which took place in the Czech Republic from 19 to 23 September.

The top quark is the only fermion heavier than the W boson, and hence the only quark whose weak decay does not involve a virtual particle. This leads to an unusually short lifetime (5 × 10⁻²⁵ s) for a weak-mediated process, and provides a unique opportunity to probe the properties and couplings of a bare quark. In particular, the width of the top quark (which, as for any quantum resonance, is inversely proportional to its lifetime) may easily be affected by new-physics processes.
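
This lifetime and the width bounds quoted below can be cross-checked with the uncertainty-principle relation between width and lifetime, using ħ ≈ 6.58 × 10⁻²⁵ GeV·s:

```latex
\Gamma_t \;=\; \frac{\hbar}{\tau_t}
\;\approx\; \frac{6.58 \times 10^{-25}\,\mathrm{GeV\,s}}{5 \times 10^{-25}\,\mathrm{s}}
\;\approx\; 1.3\ \mathrm{GeV}
```

comfortably inside the 0.6–2.4 GeV window reported below.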

In a series of recent publications, the CMS collaboration has explored the width of the top quark in a model-independent way and searched for contributions from extremely rare processes mediated by so-called flavour-changing neutral currents (FCNCs).

The top-quark width is too narrow compared with the experimental resolution of the CMS detector to allow a precision measurement directly from the shape of the top’s invariant-mass distribution. CMS therefore considers alternative observables that provide complementary information on the top’s mass and width.

One of those observables is the invariant-mass distribution of the lepton and b-jet systems produced in top-quark pair decays, which has allowed the collaboration to place new bounds on a Standard Model-like top-quark width, 0.6 ≤ Γt ≤ 2.4 GeV, based on the first 13 fb⁻¹ of data collected in 2016 at a collision energy of 13 TeV. In parallel, based on the LHC Run 1 data set recorded at lower energies, a set of dedicated searches for FCNC processes involving top quarks has been carried out. These analyses focus on the couplings of the top quark to other up-type quarks (up, charm) and different neutral bosons: the gluon, the photon, the Z boson and the Higgs boson.

Another approach adopted by CMS was to search for the rare production of a single top quark in association with a photon and a Z boson with the 8 TeV data set. These channels exploit the large up-quark density in the proton, and to a lesser extent the charm-quark density, therefore compensating for the smallness of the FCNC couplings. Finally, events with the conventional signature of t-channel production (resulting in a single top-quark decay and a light-quark jet) were used to set constraints on FCNC and other anomalous couplings by simultaneously considering their effects on the production and the decay of the top quark with both the 7 and 8 TeV data sets.

Although no deviation from the background-only expectations has been observed in any of the analyses so far, the CMS collaboration is fast approaching sensitivity to the FCNC signals expected by some models with just the Run 1 data (see figure). All of the analyses are statistically limited and will therefore benefit from more data to begin probing beyond-the-Standard-Model effects in the top-quark sector effectively.

Gaia compiles largest ever stellar survey

The largest all-sky survey of celestial objects has been compiled by ESA’s Gaia mission. On 13 September, 1000 days after the satellite’s launch, the Gaia team published a preliminary catalogue of more than a billion stars, far exceeding the reach of ESA’s Hipparcos mission completed two decades ago.

Astrometry – the science of charting the sky – has undergone tremendous progress over the centuries, from naked-eye observations in antiquity to Gaia’s sophisticated space instrumentation today. The oldest known comprehensive catalogue of stellar positions was compiled by Hipparchus of Nicaea in the 2nd century BC. His work, which was based on even earlier observations by Assyro-Babylonian astronomers, was handed down 300 years later by Ptolemy in his 2nd century treatise known as the Almagest. Although it listed the positions of 850 stars with a precision of less than one degree, which is about twice the diameter of the Moon, this work was significantly surpassed only in 1627 with the publication of a catalogue of about 1000 stars by the Danish astronomer Tycho Brahe, who achieved a precision of about 1 arcminute by using large quadrants and sextants.

The first stellar catalogue compiled with the aid of a telescope was published in 1725 by English astronomer John Flamsteed, listing the positions of almost 3000 stars with a precision of 10–20 arcseconds. The precision increased significantly during the following centuries, with the use of photographic plates by the Yale Trigonometric Parallax Catalogue reaching 0.01 arcsecond in 1995. ESA’s Hipparcos mission, which operated from 1989 to 1993, was the first space telescope devoted to measuring stellar positions. The Hipparcos catalogue, released in 1997, provides the position, parallax and proper motion of 117,955 stars with a precision of 0.001 arcsecond. The “parallax” is the small displacement of a star’s apparent position after a six-month interval, when the Earth’s annual orbit around the Sun offers a different viewpoint; it allows the star’s distance to be derived.
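
The conversion from parallax to distance is a one-line formula (1 parsec ≈ 3.26 light-years):

```latex
d\,[\mathrm{pc}] \;=\; \frac{1}{p\,[\mathrm{arcsec}]}
```

A star 300 light-years away (about 90 pc) thus has a parallax of roughly 0.01 arcsecond, which a precision of 0.001 arcsecond resolves at about the 10% level – consistent with the Hipparcos reach quoted below.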

While Hipparcos could probe the stars to distances of about 300 light-years, Gaia’s objective is to extend this to a significant fraction of the size of our Galaxy, which spans about 100,000 light-years. To achieve this, Gaia has an astrometric accuracy about 100 times better than Hipparcos. As a comparison, if Hipparcos could measure the angle that corresponds to the height of an astronaut standing on the Moon, Gaia would be able to measure the astronaut’s thumbnail.

Gaia was launched on 19 December 2013 towards the Lagrangian point L2, which is a prime location from which to observe the sky away from disturbances from the Sun, Earth and Moon. Although the first data release already comprises about a billion stars observed during the first 14 months of the mission, that period was not long enough to disentangle the proper motions from the parallaxes. These could be computed with high precision only for about two million stars previously observed by Hipparcos.

The new catalogue gives an impression of the great capabilities of Gaia. More observations are needed to make a dynamic 3D map of the Milky Way and to find and characterise possible brightness variations of all these stars. Gaia will then be able to provide the parallax distance of many periodic stars such as Cepheids, which are crucial in the accurate determination of the cosmic-distance ladder.

AMS reports unexpected result in antiproton data

Researchers working on the AMS (Alpha Magnetic Spectrometer) experiment, which is attached to the International Space Station, have reported precision measurements of antiprotons in primary cosmic rays at energies never before attained. Based on 3.49 × 10⁵ antiproton events and 2.42 × 10⁹ proton events, the AMS data represent new and unexpected observations of the properties of elementary particles in the cosmos.

Assembled at CERN and launched in May 2011, AMS is a 7.5 tonne detector module that measures the type, energy and direction of particles. The goals of AMS are to use its unique position in space to search for dark matter and antimatter, and to study the origin and propagation of charged cosmic rays: electrons, positrons, protons, antiprotons and nuclei. So far, the collaboration has published several key measurements of energetic cosmic-ray electrons, positrons, protons and helium, for example finding an excess in the positron flux (CERN Courier November 2014 p6). This latter measurement placed constraints on existing models and gave rise to new ones, including collisions of dark-matter particles, astrophysical sources and collisions of cosmic rays – some of which make specific predictions about the antiproton flux and the antiproton-to-proton flux ratio in cosmic rays.

With its latest antiproton results, AMS has now simultaneously measured all of the charged-elementary-particle cosmic-ray fluxes and flux ratios. Due to the scarcity of antiprotons in space (they are outnumbered by protons by a factor of 10,000), experimental data on antiprotons are limited. Using the first four years of data, AMS has now measured the antiproton flux and the antiproton-to-proton flux ratio in primary cosmic rays with unprecedented precision. The measurements, which demanded that AMS achieve a separation power of approximately 10⁶, provide precise experimental information over an extended energy range in the study of elementary particles travelling through space.

In the absolute-rigidity (the absolute value of the momentum per unit charge) range 60–500 GV, the antiproton (p̄), proton (p) and positron (e⁺) fluxes are found to have nearly identical rigidity dependence, while the electron (e⁻) flux exhibits a markedly different rigidity dependence. In the absolute-rigidity range below 60 GV, the p̄/p, p̄/e⁺ and p/e⁺ flux ratios each reach a maximum, while in the range 60–500 GV these ratios unexpectedly show no rigidity dependence.

“These are precise and completely unexpected results. It is difficult to imagine why the flux of positrons, protons and antiprotons have exactly the same rigidity dependence and the electron flux is so different,” says AMS-spokesperson Samuel Ting. “AMS will be on the Space Station for its lifetime. With more statistics at higher energies, we will probe further into these mysteries.”

ATLAS observes single top-quarks at 13 TeV

Figure: the neural-network discriminant for the positive-lepton channel.

The ATLAS collaboration is exploiting the window of opportunity opened by the LHC’s 13 TeV run to search directly for unknown particles. Complementary to this approach, the collaboration is also looking for deviations in the cross-sections and kinematic distributions of Standard Model processes, which could be caused by energy-dependent couplings that become accessible at the higher collision energy.

Using data recorded in 2015 corresponding to an integrated luminosity of 3.2 fb⁻¹, ATLAS has recently measured the total cross-sections of single top-quark and top-antiquark production via the t-channel exchange of virtual W bosons. This channel has exciting kinematic features such as polarised top quarks and forward spectator jets. Compared to the dominant top-quark–top-antiquark (tt̄) pair-production process, however, the single-production process is experimentally more challenging due to a higher background level. Because the two major background processes are W+jets and tt̄ pair production, the selection of candidate events requires one charged lepton, missing transverse momentum and two hadronic jets to be present (exactly one of which has to be identified as containing b hadrons).

To measure the cross-sections of top-quark and top-antiquark production separately, the events are split into two channels according to the sign of the lepton charge. ATLAS uses neural networks to exploit the kinematic differences between the signal and background processes as much as possible, thereby optimising the statistical power of the data set. Ten different kinematic variables were combined into a discriminant that is close to zero for background-like events and close to unity for signal-like events (see figure).
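
A minimal sketch of this approach is given below. The ten-variable count matches the text, but the network architecture and the synthetic input distributions are illustrative assumptions, not the ATLAS configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n, n_vars = 10000, 10  # ten kinematic input variables, as in the analysis

# Synthetic stand-ins for the kinematic variables (illustrative only):
# signal and background drawn from overlapping but shifted distributions
X_sig = rng.normal(loc=0.5, scale=1.0, size=(n, n_vars))
X_bkg = rng.normal(loc=-0.5, scale=1.0, size=(n, n_vars))
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background

# A small feed-forward network; its output serves as the discriminant
nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(15,), max_iter=500,
                                 random_state=0))
nn.fit(X, y)

disc = nn.predict_proba(X)[:, 1]  # near 0 for background-like, 1 for signal-like
print(f"signal mean = {disc[y == 1].mean():.2f}, "
      f"background mean = {disc[y == 0].mean():.2f}")
```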

The cross-sections were measured to be 156±28 pb for top-quark production and 91±19 pb for top-antiquark production. These are slightly higher than expected (+15% and +12%, respectively), but still in good agreement with the predictions. The largest uncertainties are related to the Monte Carlo generators used to model the t-channel single top-quark process and the tt pair-production process, the b-jet identification efficiency and the jet energy scale. In future measurements of the single top-quark process, the focus will be on reducing the uncertainties, exploiting improved calibrations and extending studies of the Monte Carlo generators.

Earth-like planet orbits our nearest star

Astronomers have found clear evidence of a planet orbiting the closest star to Earth, Proxima Centauri. The extrasolar planet is only slightly more massive than the Earth and orbits its star within the habitable zone, where the temperature would allow liquid water on its surface. The discovery represents a new milestone in the search for exoplanets that possibly harbour life.

Since the discovery of the first exoplanet in 1995, more than 3000 have been found, most of them detected via radial-velocity or transit techniques. The former relies on spectroscopic measurements of the weak back-and-forth wobbling of the star induced by the gravitational pull of the orbiting planet, while the latter method measures the slight drop in the star’s brightness due to the occultation of part of its surface when the planet passes in front of it.

Exoplanets discovered so far exhibit a diverse range of properties, with masses ranging from Earth-like values to several times the mass of Jupiter. Massive planets close to their parent star are the easiest to find: the first known exoplanet, called 51 Peg b, was a gaseous Jupiter-sized planet (a “hot Jupiter”) with a temperature of the order of 1000 °C due to its proximity to the star. The ultimate goal of exoplanet hunters is to find an Earth twin or at least an Earth-sized planet at the right distance from its parent star to have liquid water on its surface. This condition defines the habitable zone, which is the range of distance around the star that would be suitable for life.

Proxima Centauri b matches this condition and is also a special planet for us because it orbits our nearest star, located just 4.2 light-years away. Near does not necessarily mean bright, however. Proxima Centauri is actually a cool red star that is much too dim to be seen with the naked eye and, with a mass about eight times smaller than that of the Sun, it is also around 600 times less luminous. The habitable zone around this red-dwarf star therefore lies at much shorter distances than the corresponding distances in our solar system – equivalent to a small fraction of the orbit of Mercury. Proxima Centauri b orbits the star in only 11.2 days and has a minimum mass of 1.27 Earth masses. The exact value of the mass cannot be determined by the radial-velocity method because it depends on the unknown inclination of the orbit with respect to the line of sight.
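
The reason only a minimum mass is obtained can be seen from the standard radial-velocity relation; for a circular orbit of period P around a star of mass M⋆, the velocity semi-amplitude K measured by the spectrograph is

```latex
K \;=\; \left( \frac{2\pi G}{P} \right)^{1/3}
        \frac{m_{p}\sin i}{\left( M_{\star} + m_{p} \right)^{2/3}}
```

so the planet’s mass m_p enters only through the product m_p sin i, and the unknown inclination i cannot be separated from it without additional information (such as a transit).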

During the first half of 2016, Proxima Centauri was regularly observed with the HARPS spectrograph on the ESO 3.6 m telescope at La Silla in Chile, and simultaneously monitored by other telescopes around the world. This campaign, which was led by Guillem Anglada-Escudé of Queen Mary University of London and shared publicly online as it happened, was called the Pale Red Dot.

The final results have now been published, concluding with a discussion of the habitability of the planet. Whether there is an atmosphere and liquid water on the surface is the subject of intense debate, because red-dwarf stars can display quite violent behaviour. The main threats identified in the paper are tidal locking (does the planet always present the same face to the star, as our Moon does to the Earth?), strong stellar magnetic fields, and strong flares with high ultraviolet and X-ray fluxes. Although robotic exploration is some time away, the future European Extremely Large Telescope (E-ELT) should be able to see the planet and probe its atmosphere spectroscopically.

LHCb finds tetraquark candidates

The LHCb collaboration has reported the observation of three new exotic hadrons and confirmed the existence of a fourth by analysing the full data sample from LHC Run 1. Although the theoretical interpretation of the new states is still under study, the particles each appear to be formed by two quarks and two antiquarks. They also do not seem to contain the lightest up and down quarks, which means they could be more tightly bound than other exotic particles discovered so far.

Until recently, all observed hadrons were formed either by a quark–antiquark pair (mesons) or by three quarks only (baryons). The underlying reason has remained a mystery, but during the last decade several experiments have found evidence for particles formed by more than three quarks. For example, in 2009 the CDF collaboration at Fermilab in the US observed evidence for a tetraquark candidate dubbed X(4140), which was later confirmed by the CMS and DØ collaborations (the latest LHCb analysis yields a clear observation of this state, although it finds a slightly larger width than the other experiments). Then, in July 2015, LHCb announced the first observation of two pentaquark particles, which are hadrons composed of five quarks.

Each of the four states observed by LHCb – dubbed X(4274), X(4500) and X(4700), in addition to the X(4140) – has a statistical significance above five standard deviations. Sophisticated analysis of the angular distribution of B+ meson decays into J/ψ, φ and K+ mesons also allowed the collaboration to determine the quantum numbers of the exotic states with high precision. Significantly, the data could not be described by a model that contains only ordinary mesons and baryons.

The binding mechanism of the new states could involve tightly bound tetraquarks or strange charmed meson pairs bouncing off each other and rearranging their quark content to emerge as a J/ψφ system. The high statistics of the LHCb data set and the sophisticated techniques exploited in the analysis will help to shed further light on the production mechanisms of these particles.

LHCb has made several other important contributions to the investigation of exotic particles. In February 2013, the quantum numbers of the X(3872) particle discovered in 2003 by the Belle experiment in Japan were determined, and in April 2014 the collaboration showed that the Z(4430) particle (also discovered at Belle) is composed of four quarks: cc̄dū. The latest exotic results from LHCb, which were first presented in June at the Meson 2016 workshop in Cracow, Poland, have been submitted for publication.
