ALICE zeroes in on cold-matter effects


Measuring the production cross-section of charm hadrons in proton–proton collisions provides an important test of perturbative quantum chromodynamics (QCD). In proton–nucleus collisions, “cold-matter” effects related to the presence of nuclei in the colliding system are expected to modify the production cross-section and the transverse-momentum distribution of open-charm hadrons. Assessing such effects is thus crucial for interpreting the results from heavy-ion collisions, where a hot and dense medium of deconfined partons – the quark–gluon plasma (QGP) – is formed.

Previously, ALICE measured D-meson production in proton–lead collisions and found no substantial modification relative to proton–proton interactions within the kinematic range of the measurement (covering transverse momenta, pT, between 1 and 24 GeV/c at mid-rapidity). Most cold-nuclear-matter effects are expected to modify charm production at low pT, but no measurement of D-meson production down to zero transverse momentum had been performed at mid-rapidity at LHC energies.


Recently the ALICE collaboration extended the measurement of the D0-meson cross-section down to zero pT in proton–proton collisions at 7 TeV and in proton–lead collisions at 5.02 TeV. In contrast to previous ALICE publications, the analysis relied on estimating and subtracting the combinatorial background without having to reconstruct the D0 decay vertex. This allowed the first measurement of the D0 signal in the interval 0 < pT < 1 GeV/c and a significant reduction of the uncertainties in the interval 1 < pT < 2 GeV/c compared with previous results.

The current precision of the measurement does not yet confirm the role of the different nuclear effects or the possible presence of additional hot-medium effects. However, applied to larger data sets in the future, the analysis technique will provide insight into the physics-rich region close to pT = 0.

Protons probe quark–gluon plasma at CMS


Proton–nucleus collisions provide a crucial tool to investigate the quark–gluon plasma (QGP), a state of nuclear matter with a high energy density spread over a relatively large volume. Although proton–lead (pPb) collision systems have been considered too small and dilute to form a QGP themselves, they have served as a reference in the search for QGP signatures in collisions of two heavy ions. Nonetheless, in the first-ever pPb collisions at the LHC, collected in 2013, the CMS experiment observed QGP-like features in very high-multiplicity pPb events.

Subsequent studies have supported the hypothesis that a dense, QGP-like medium may be formed in high-multiplicity pPb systems. However, several key signatures of a dense QGP medium, observed in PbPb collisions, remain unestablished for pPb events: the loss of energy from high-energy quarks and gluons (“jet quenching”) and the suppression of quarkonium states (J/ψ and ϒ mesons). A hint of a stronger suppression of ϒ(2S) mesons compared with ϒ(1S) mesons was observed in the 2013 pPb data, but a conclusive comparison with PbPb data at similar high multiplicities has not been possible because of the limited statistical precision of the pPb data.


At the end of 2016, CMS again collected pPb collisions, at a higher energy and with a larger accumulated data sample than in 2013. The experiment is thus poised to relaunch its comprehensive search for QGP signatures in high-multiplicity pPb systems. Compared to 2013, the yields of relevant events (see figure) are enhanced by a factor of 20–30. This will enable many new studies that might provide conclusive results on the formation of QGP in pPb events.

Electron gun shrunk to matchbox size

An interdisciplinary team of researchers from DESY in Germany and MIT in the US has built a new kind of electron gun that is about the size of a matchbox. The new device uses laser-generated terahertz radiation, rather than traditional radio-frequency fields, to accelerate electrons from rest. Since terahertz radiation has a much shorter wavelength than radio waves, the new device measures just 34 × 24.5 × 16.8 mm – compared with the size of a car for traditional state-of-the-art electron guns.

This device reached an accelerating gradient of 350 MV per metre, which the team says is almost twice that of current electron guns. “We achieved an acceleration of a dense packet of 250,000 electrons from rest to 0.5 keV with minimal energy spread,” explains lead author W Ronny Huang of MIT, who carried out the work at the Center for Free-Electron Laser Science in Hamburg. The electron beams emerging from the device could already be used for low-energy electron diffraction experiments, he says, and will also have applications in ultrafast electron diffraction or for injecting electrons into linacs and X-ray light sources.
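
As a rough sanity check of these figures, the quoted 350 MV/m gradient and 0.5 keV final energy together imply an acceleration length of about a micrometre, consistent with a matchbox-sized device. A minimal sketch in Python, using only the numbers quoted above:

    # Back-of-the-envelope check of the terahertz electron-gun figures.
    gradient_V_per_m = 350e6     # accelerating gradient: 350 MV/m
    final_energy_eV = 0.5e3      # final kinetic energy: 0.5 keV

    # For a unit charge, energy gain (eV) = gradient (V/m) x distance (m):
    length_m = final_energy_eV / gradient_V_per_m
    print(f"acceleration length ~ {length_m * 1e6:.1f} micrometres")  # ~1.4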

Compact star hints at vacuum polarisation

By studying an isolated neutron star, astronomers may have found the first observational indication of a strange quantum effect called vacuum birefringence, which was predicted in the 1930s by Werner Heisenberg and Hans Heinrich Euler.

Neutron stars are the very dense remnant cores of massive stars – at least 10 times more massive than our Sun – that have exploded as supernovae at the ends of their lives. In the 1990s, the Germany-led ROSAT space mission for soft X-ray astronomy discovered a new class of seven neutron stars that are known as the Magnificent Seven. The faint isolated objects emit pulses of X-rays every three to 11 seconds or so, but unlike most pulsars they have no detectable radio emission. The ultra-dense stars have an extremely high dipolar magnetic field (of the order of 10^9–10^10 T) and display an almost perfect black-body emission, making them unique laboratories to study neutron-star cooling processes.

A team led by Roberto Mignani from INAF Milan in Italy and the University of Zielona Gora, Poland, used ESO’s Very Large Telescope (VLT) at the Paranal Observatory in Chile to observe the neutron star RX J1856.5-3754. Although it is the brightest of the Magnificent Seven and lies only around 400 light-years from Earth, the star is so dim that measuring its polarisation is at the limit of the VLT’s current capabilities. The aim of the measurement was to detect a quantum effect predicted 80 years ago: since the vacuum is full of virtual particles that appear and vanish, a very strong magnetic field could polarise empty space and hence also light passing through it. Vacuum birefringence is too weak to be observed in laboratory experiments, but the phenomenon should be visible in the very strong magnetic fields around neutron stars.

ESO’s future European Extremely Large Telescope will allow astronomers to study this effect around many more neutron stars.

After careful analysis of the VLT data, Mignani and collaborators detected a significant degree (16%) of linear polarisation, which they say is likely due to vacuum birefringence occurring in the empty space surrounding RX J1856.5-3754. They claim that such a level of polarisation is not easily explained by other sources. For example, the contribution from dust grains in the interstellar medium was estimated to be less than 1%, which was corroborated by the detection of almost zero polarisation in the light from 42 nearby stars. The genuine thermal radiation of the neutron star is also expected to be polarised by its surface magnetic field, but this effect should cancel out if the emission comes from the entire surface of the neutron star, over which the magnetic-field direction changes substantially.

The polarisation measurement in this neutron star constitutes the very first observational support for the predictions of QED vacuum-polarisation effects. ESO’s future European Extremely Large Telescope will allow astronomers to study this effect around many more neutron stars, while the advent of X-ray polarimetric space missions will offer another perspective on this new field of research.

The dawn of a new era

One of the greatest scientific discoveries of the century took place on 14 September 2015. At 09.50 UTC on that day, a train of gravitational waves launched by two colliding black holes 1.4 billion light-years away passed by the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) in Louisiana, US, causing a fractional variation in the distance between the mirrors of about one part in 10^21. Just 7 ms later, the same event – dubbed GW150914 – was picked up by the twin aLIGO detector in Washington 3000 km away (figure 1). A second black-hole coalescence was observed on 26 December 2015 (GW151226) and a third candidate event was also recorded, although its statistical significance was not high enough to claim a detection. A search that had gone on for half a century had finally met with success, ushering in the new era of gravitational-wave astronomy.
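
To get a feel for these numbers: a strain of one part in 10^21 corresponds to a mirror displacement a few hundred times smaller than the diameter of a proton, and the 7 ms delay sits comfortably below the maximum light-travel time between the two sites. A minimal sketch (the 4 km arm length is a published aLIGO parameter, not quoted in the text):

    # Rough numbers behind the GW150914 detection described above.
    c = 299_792_458.0         # speed of light (m/s)

    strain = 1e-21            # fractional length change quoted above
    arm_length_m = 4_000.0    # aLIGO arm length (assumed, published value)
    print(f"mirror displacement ~ {strain * arm_length_m:.0e} m")     # ~4e-18 m

    baseline_m = 3_000e3      # separation of the two aLIGO sites
    print(f"maximum inter-site delay ~ {baseline_m / c * 1e3:.0f} ms")  # ~10 ms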

Black holes are the simplest physical objects in the universe: they are made purely from warped space and time and are fully described by their mass and intrinsic rotation, or spin. The gravitational-wave train emitted by coalescing binary black holes comprises three main stages: a long “inspiral” phase, where gravitational waves slowly and steadily drain the energy and angular momentum from the orbiting black-hole pair; the “plunge and merger”, where black holes move at almost the speed of light and then coalesce into the newly formed black hole; and the “ringdown” stage during which the remnant black hole settles to a stationary configuration (figure 2). Each dynamical stage contains fingerprints of the astrophysical source, which can be identified by first tracking the phase and amplitude of the gravitational-wave train and then by comparing it with highly accurate predictions from general relativity.

aLIGO employs waveform models built by combining analytical and numerical relativity. The long, early inspiral phase, characterised by a weak gravitational field and low velocities, is well described by the post-Newtonian formalism (which expands the Einstein field equation and the gravitational radiation in powers of v/c, but loses accuracy as the two bodies come closer and closer). Numerical relativity provides the most accurate solution for the last stages of inspiral, plunge, merger and ringdown, but such models are time-consuming to produce – the state-of-the-art code of the Simulating eXtreme Spacetimes collaboration took three weeks and 20,000 CPU hours to compute the gravitational waveform for the event GW150914 and three months and 70,000 CPU hours for GW151226.
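
To illustrate the post-Newtonian approach in its simplest form, the leading-order (quadrupole) result gives the gravitational-wave frequency evolution df/dt = (96/5) π^(8/3) (G Mc/c^3)^(5/3) f^(11/3), where Mc is the “chirp mass” of the binary. A minimal sketch, using the GW150914 masses quoted later in the article; this is only the lowest-order term of the expansion, not the full waveform models aLIGO actually employs:

    import math

    # Leading-order (quadrupole) chirp of a compact binary.
    G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
    m1, m2 = 36 * Msun, 29 * Msun
    Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2      # chirp mass, ~28 MSun

    def fdot(f):
        """df/dt (Hz/s) at gravitational-wave frequency f."""
        return (96 / 5) * math.pi ** (8 / 3) * (G * Mc / c**3) ** (5 / 3) * f ** (11 / 3)

    f, t, dt = 30.0, 0.0, 1e-4    # enter aLIGO's band near 30 Hz
    while f < 300.0:              # stop before merger, where the expansion fails
        f += fdot(f) * dt
        t += dt
    print(f"sweep from 30 to 300 Hz takes ~{t:.2f} s")   # ~0.3 s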

A few hundred thousand different waveforms were used as templates by aLIGO during the first observing run, covering compact binaries with total masses 2–100 times that of the Sun and mass ratios up to 1:99. Novel approaches to the two-body problem that extend post-Newtonian theory into the strong-field regime and combine it with numerical relativity had to be developed to provide aLIGO with accurate and efficient waveform models, which were based on several decades of steady work in general relativity (figure 3). Further theoretical work will be needed to deal with more sensitive searches in the future if we want to take full advantage of the discovery potential of gravitational-wave astronomy.

aLIGO’s first black holes

The two gravitational-wave signals observed by aLIGO have different morphologies that reveal quite distinct binary black-hole sources. GW150914 is thought to be composed of two stellar black holes with masses 36 MSun and 29 MSun, which formed a black hole of about 62 MSun rotating at almost 70% of its maximal rotation speed, while GW151226 had lower black-hole masses (of about 14 MSun and 8 MSun) and merged into a 21 MSun black-hole remnant. Although the binary’s individual masses for GW151226 have larger uncertainties compared with GW150914 (since GW151226 merged at a higher frequency, where aLIGO’s sensitivity degrades), the analysis ruled out the possibility that the lower-mass object in GW151226 was a neutron star. A follow-up analysis also revealed that the individual black holes had spins less than 70% of the maximal value, and that at least one of the black holes in GW151226 was rotating at 20% of its maximal value or faster. Finally, the aLIGO data show that the binaries that produced GW150914 and GW151226 were at comparable distances from the Earth and that the peak of the gravitational-wave luminosity was about 3 × 10^56 erg/s, making them by far the most luminous transient events in the universe.
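
The quoted masses already encode the energetics: for GW150914, roughly three solar masses went missing between the initial pair and the final black hole, radiated away as gravitational waves. A quick check of the luminosity scale (a sketch; the few-tens-of-milliseconds emission time is an order-of-magnitude assumption, not a number from the text):

    # Energy radiated by GW150914, from the masses quoted above.
    c, Msun = 2.998e8, 1.989e30
    delta_m = (36.0 + 29.0) - 62.0          # ~3 MSun lost to radiation
    E = delta_m * Msun * c**2               # E = m c^2
    print(f"radiated energy ~ {E:.1e} J ~ {E * 1e7:.1e} erg")   # ~5e47 J

    # Radiated over a few tens of ms (assumed), the average luminosity is
    # already of order 10^56 erg/s, matching the quoted peak value.
    print(f"average luminosity ~ {E * 1e7 / 0.02:.0e} erg/s")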

Owing to the signal’s length and the particular orientation of the binary plane with respect to the aLIGO detectors, no information about the spin precession of the system could be extracted. It has therefore not yet been possible to determine the precise astrophysical production route for these objects. Whereas the predictions for the rate of binary black-hole mergers from astrophysical-formation mechanisms traditionally vary by several orders of magnitude, the aLIGO detections so far have already established the rate to be somewhat on the high side of the range predicted by astrophysical models, at 9–240 per Gpc^3 per year. Larger black-hole masses and higher coalescence rates raise the interesting possibility that a stochastic background of gravitational waves composed of unresolved signals from binary black-hole mergers could be observed when aLIGO reaches its design sensitivity in 2019.

The sky localisation of GW150914 and GW151226, which is mainly determined by recording the time delays of the signals arriving at the interferometers, extended over several hundred square degrees. This can be compared with the 0.2 square degrees covered by the full Moon as seen from the Earth, and makes it very hard to search for an electromagnetic counterpart to black-hole mergers. Nevertheless, the aLIGO results kicked off the first campaign for possible electromagnetic counterparts of gravitational-wave signals, involving almost 20 astronomical facilities spanning the gamma-ray, X-ray, optical, infrared and radio regions of the spectrum. No convincing evidence of electromagnetic signals emitted by GW150914 and GW151226 was found, in line with expectations from standard astrophysical scenarios. Deviations from the standard scenario may arise if one considers dark electromagnetic sectors, spinning black holes with strong magnetic fields that need to be sustained until merger, and black holes surrounded by clouds of axions (see “Linking waves to particles”).
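
The triangulation geometry behind this is simple: a measured delay Δt between two sites separated by a distance D confines the source to a ring on the sky at an angle θ from the inter-site axis, with cos θ = cΔt/D. A sketch with the GW150914 numbers quoted earlier:

    import math

    # Sky localisation from a two-detector time delay.
    c = 299_792_458.0
    dt, D = 7e-3, 3_000e3            # 7 ms delay, ~3000 km baseline
    theta = math.degrees(math.acos(c * dt / D))
    print(f"source ring at ~{theta:.0f} deg from the detector axis")  # ~46 deg

    # For comparison, the full Moon (angular radius ~0.26 deg) covers:
    print(f"full Moon ~ {math.pi * 0.26**2:.2f} square degrees")      # ~0.2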

The new aLIGO observations have put the most stringent limits on higher post-Newtonian terms.

aLIGO’s observations allow us to test general relativity in the so-far-unexplored, highly dynamical and strong-field gravity regime. As the two black holes that emitted GW150914 and GW151226 started to merge, the binary’s orbital period varied considerably and the phase of the gravitational-wave signal changed accordingly. It is possible to obtain an analytical representation of the phase evolution in post-Newtonian theory, in which the coefficients describe a plethora of dynamical and radiative physical effects, and long-term timing observations of binary pulsars have placed precise bounds on the leading-order post-Newtonian coefficients. However, the new aLIGO observations have put the most stringent limits on higher post-Newtonian terms – setting upper bounds as low as 10% for some coefficients (figure 4). It was even possible to investigate potential deviations during the non-perturbative coalescence phase, and again general relativity passed the test.

The first aLIGO observations could test neither the second law of black-hole mechanics, which states that the black-hole entropy cannot decrease, nor the “no-hair” theorem, which says that a black hole is described by mass and spin alone; both tests require extracting the mass and spin of the final black hole from the data. But we expect that future, multiple gravitational-wave detections with higher signal-to-noise ratios will shed light on these important theoretical questions. Despite those limitations, aLIGO has provided the most convincing evidence to date that stellar-mass compact objects in our universe with masses larger than roughly five solar masses are described by black holes: that is, by the solutions to the Einstein field equations (see “General relativity at 100”).

From binaries to cosmology

During its first observation run, lasting from mid-September 2015 to mid-January 2016, aLIGO did not detect gravitational waves from binaries composed of either two neutron stars, or a black hole and a neutron star. Nevertheless, it set the most stringent upper limits on the rates of such processes: 12.6 × 10^3 and 3.6 × 10^3 per Gpc^3 per year, respectively. The aLIGO rates imply that we expect to detect those binary systems a few years after aLIGO and the French–Italian experiment Virgo reach their design sensitivity. Observing gravitational waves from binaries containing matter is exciting because it allows us to infer the neutron-star equation of state and also to unveil the possible origin of short-hard gamma-ray bursts (GRBs) – enormous bursts of electromagnetic radiation observed in distant galaxies.

Neutron stars are extremely dense objects that form when massive stars run out of nuclear fuel and collapse. The density in the core is expected to be more than 10^14 times the density of the Sun, at which the standard structure of nuclear matter breaks down and new phases of matter such as superfluidity and superconductivity may appear. All mass and spin parameters being equal, the gravitational-wave train emitted by a binary containing a neutron star differs from the one emitted by two black holes only in the late inspiral phase, when the neutron star is tidally deformed or disrupted. By tracking the gravitational-wave phase it will be possible to measure the tidal deformability parameter, which contains information about the neutron-star interior, and ultimately to discriminate between some equations of state. The merger of double neutron stars and/or black-hole–neutron-star binaries is currently considered the most likely source of short-hard GRBs, and we expect a plethora of electromagnetic signals from the coalescence of such compact objects that will test the short-hard GRB/binary-merger paradigm.

Bursts of gravitational waves lasting for tenths of milliseconds are also produced during the catastrophic final moments of all stars, when the stellar core undergoes a sudden collapse (or supernova explosion) to a neutron star or a black hole. At design sensitivity, aLIGO and Virgo could detect bursts from the core’s “bounce”, provided that the supernova took place in the Milky Way or neighbouring galaxies, with more extreme emission scenarios observable to much greater distances. Highly magnetised rotating neutron stars called pulsars are also promising astrophysical sources of gravitational waves. Mountains just a few centimetres in height on the crust of a pulsar cause its quadrupole moment to vary in time, producing a continuous gravitational-wave train at twice the rotation frequency of the pulsar. The most recent LIGO all-sky searches and targeted observations of known pulsars have already started to probe the parameter space of astrophysical interest, setting new upper limits on the source’s ellipticity, which depends on the neutron star’s equation of state.

Lastly, several physical mechanisms in the early universe could have produced gravitational waves, such as cosmic inflation, first-order phase transitions and vibrations of fundamental and/or cosmic strings. Because gravitational waves are almost unaffected by matter, they provide us with a pristine snapshot of the source at the time they were produced. Thus, gravitational waves may unveil a period in the history of the universe around its birth that we cannot otherwise access. The first observation run of aLIGO has set the most stringent constraint on the stochastic gravitational-wave background, generally expressed as the dimensionless energy density of gravitational waves, of < 1.7 × 10^-7. Digging deeper, at design sensitivity aLIGO is expected to reach a value of 10^-9, while next-generation detectors such as the Einstein Telescope and the Cosmic Explorer may achieve values as low as 10^-13 – just two orders of magnitude above the background predicted by the standard “slow-roll” inflationary scenario.

Grand view

The sensitivity of existing interferometer experiments on Earth will be improved in the next 5–10 years by employing a quantum-optics phenomenon called squeezed light. This will reduce the sky-localisation errors of coalescing binaries, provide a better measurement of tidal effects and the neutron-star equation of state in binary mergers, and enhance our chances of observing gravitational waves from pulsars and supernovae. The ability to identify the source of gravitational waves will also improve over time, as upgraded and new gravitational-wave observatories come online.

Furthermore, pulsar signals offer an alternative detection scheme that is already in operation: the pulsar timing array (PTA). Gravitational waves passing through pulsars and the Earth would modify the arrival times of the pulses, so PTA projects search for correlated signatures in the times of arrival of pulses from the most stable known pulsars; this could reveal the stochastic gravitational-wave background from unresolved supermassive binary black-hole inspirals in the 10^-9–10^-7 Hz frequency region. Results from the North-American NANOGrav, European EPTA and Australian PPTA collaborations have already set interesting upper limits on the astrophysical background, and could achieve a detection in the next five years.

The past year has been a milestone for gravitational-wave research in space, with the results of the LISA Pathfinder mission published in June 2016 exceeding all expectations and proving that LISA, planned for 2034, will work successfully (see “Catching a gravitational wave”). LISA would be sensitive to gravitational waves between 10^-4 and 10^-2 Hz, thus detecting sources different from the ones observed on the Earth, such as supermassive binary black holes, extreme mass-ratio inspirals, and the astrophysical stochastic background from white-dwarf binaries in our galaxy. In the meantime, new ground facilities to be built in 10–15 years – such as the Einstein Telescope in Europe and the Cosmic Explorer in the US – will be required to maximise the scientific potential of gravitational-wave physics and astrophysics. These future detectors will offer such high sensitivity to binary coalescences that we can probe binary black holes throughout our universe, enabling the most exquisite tests of general relativity in the highly dynamical, strong-field regime. That will challenge our current knowledge of gravity and of fundamental and nuclear physics, unveiling the nature of the most extreme objects in our universe.

General relativity at 100

Einstein’s long path towards general relativity (GR) began in 1907, just two years after he created special relativity (SR), when the following apparently trivial idea occurred to him: “If a person falls freely, he will not feel his own weight.” Although it was long known that all bodies fall in the same way in a gravitational field, Einstein raised this thought to the level of a postulate: the equivalence principle, which states that there is complete physical equivalence between a homogeneous gravitational field and an accelerated reference frame. After eight years of hard work and deep thinking, in November 1915 he succeeded in extracting from this postulate a revolutionary theory of space, time and gravity. In GR, our best description of gravity, space–time ceases to be an absolute, non-dynamical framework as envisaged by the Newtonian view, and instead becomes a dynamical structure that is deformed by the presence of mass-energy.

GR has led to profound new predictions and insights that underpin modern astrophysics and cosmology, and which also play a central role in attempts to unify gravity with other interactions. By contrast to GR, our current description of the fundamental constituents of matter and of their non-gravitational interactions – the Standard Model (SM) – is given by a quantum theory of interacting particles of spins 0, ½ and 1 that evolve within the fixed, non-dynamical Minkowski space–time of SR. The contrast between the homogeneous, rigid and matter-independent space–time of SR and the inhomogeneous, matter-deformed space–time of GR is illustrated in figure 1.

The universality of the coupling of gravity to matter (which is the most general form of the equivalence principle) has many observable consequences such as: constancy of the physical constants; local isotropy of space; local Lorentz invariance; universality of free fall and universality of gravitational redshift. Many of these have been verified to high accuracy. For instance, the universality of the acceleration of free fall has been verified on Earth at the 10^-13 level, while the local isotropy of space has been verified at the 10^-22 level. Einstein’s field equations (see panel below) also predict many specific deviations from Newtonian gravity that can be tested in the weak-field, quasi-stationary regime appropriate to experiments performed in the solar system. Two of these tests – Mercury’s perihelion advance, and light deflection by the Sun – were successfully performed, although with limited precision, soon after the discovery of GR. Since then, many high-precision tests of such post-Newtonian gravity have been performed in the solar system, and GR has passed each of them with flying colours.

Precision tests

Similar to what is done in precision electroweak experiments, it is useful to quantify the significance of precision gravitational experiments by parameterising plausible deviations from GR. The simplest, and most conservative, deviation from Einstein’s pure spin-2 theory is defined by adding a long-range (massless) spin-0 field, φ, coupled to the trace of the energy-momentum tensor. The most general such theory respecting the universality of gravitational coupling contains an arbitrary function of the scalar field defining the “observable metric” to which the SM matter is minimally and universally coupled.

In the weak-field slow-motion limit, appropriate to describing gravitational experiments in the solar system, the addition of φ modifies Einstein’s predictions only through the appearance of two dimensionless parameters, γ and β. The best current limits on these “post-Einstein” parameters are, respectively, (2.1 ± 2.3) × 10^-5 (deduced from the additional Doppler shift experienced by radio-wave beams connecting the Earth to the Cassini spacecraft when they passed near the Sun) and < 7 × 10^-5, from a study of the global sensitivity of planetary ephemerides to post-Einstein parameters.

In the regime of radiative and/or strong gravitational fields, by contrast, pulsars (rotating neutron stars emitting a beam of radio waves) in gravitationally bound orbits have provided crucial tests of GR. In particular, measurements of the decay in the orbital period of binary pulsars have provided direct experimental confirmation of the propagation properties of the gravitational field. Theoretical studies of binaries in GR have shown that the finite velocity of propagation of the gravitational interaction between the pulsar and its companion generates damping-like terms at order (v/c)^5 in the equations of motion that lead to a small orbital period decay. This has been observed in more than four different systems since the discovery of binary pulsars in 1974, providing direct proof of the reality of gravitational radiation. Measurements of the arrival times of pulsar signals have also allowed precision tests of the quasi-stationary strong-field regime of GR, since their values may depend both on the unknown masses of the binary system and on the theory of gravity used to describe the strong self-gravity of the pulsar and its companion (figure 2).
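
To make the (v/c)^5 damping concrete, the leading-order GR prediction for the orbital-period decay is Peters’ classic formula. A minimal sketch for the Hulse–Taylor pulsar PSR B1913+16, whose parameters are textbook values rather than numbers from this article:

    import math

    # Leading-order GR prediction for binary orbital-period decay
    # (Peters 1964), evaluated for PSR B1913+16.
    G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
    Pb, e = 27906.98, 0.6171                  # orbital period (s), eccentricity
    mp, mc = 1.4398 * Msun, 1.3886 * Msun     # pulsar and companion masses

    enh = 1 + (73 / 24) * e**2 + (37 / 96) * e**4   # eccentricity enhancement
    Pb_dot = (-(192 * math.pi / 5) * G ** (5 / 3) / c**5
              * (Pb / (2 * math.pi)) ** (-5 / 3) * (1 - e**2) ** (-3.5)
              * enh * mp * mc / (mp + mc) ** (1 / 3))
    print(f"predicted dPb/dt ~ {Pb_dot:.2e}")   # ~ -2.4e-12, as observed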

The radiation revelation

Einstein realised that his field equations admit wave-like solutions in two papers, published in June 1916 and January 1918 (see panel below). For many years, however, the emission of gravitational waves (GWs) by known sources was viewed as being too weak to be of physical significance. In addition, several authors – including Einstein himself – had voiced doubts about the existence of GWs in fully nonlinear GR.

The situation changed in the early 1960s when Joseph Weber understood that GWs arriving on Earth would have observable effects and developed sensitive resonant detectors (“Weber bars”) to search for them. Then, prompted by Weber’s experimental effort, Freeman Dyson realised that, when applying the quadrupolar energy-loss formula derived by Einstein to binary systems made of neutron stars, “the loss of energy by gravitational radiation will bring the two stars closer with ever-increasing speed, until in the last second of their lives they plunge together and release a gravitational flash at a frequency of about 200 cycles and of unimaginable intensity.” Dyson’s vision has recently been realised thanks, on the one hand, to the experimental development of drastically more sensitive non-resonant kilometre-scale interferometric detectors and, on the other, to theoretical advances that allowed the accurate shape of the GW signals emitted by coalescing systems of neutron stars and black holes (BHs) to be predicted in advance.

The recent observations of the LIGO interferometers have provided the first detection of GWs in the wave zone. They also provide the first direct evidence of the existence of BHs via the observation of their merger, followed by an abrupt shut-off of the GW signal, in complete accord with the GR predictions.

BHs are perhaps the most extraordinary consequence of GR, because of the extreme distortion of space and time that they exhibit. In January 1916, Karl Schwarzschild published the first exact solution of the (vacuum) Einstein equations, supposedly describing the gravitational field of a “mass point” in GR. It took about 50 years to fully grasp the meaning and astrophysical plausibility of these Schwarzschild BHs. Two of the key contributions that led to our current understanding of BHs came from Oppenheimer and Snyder, who in 1939 suggested that a neutron star exceeding its maximum possible mass will undergo gravitational collapse and thereby form a BH, and from Kerr 25 years later, who discovered a generalisation of the Schwarzschild solution describing a BH endowed both with mass and spin.

The Friedmann models still constitute the background models of the current, inhomogeneous cosmologies.

Another remarkable consequence of GR is theoretical cosmology, namely the possibility of describing the kinematics and the dynamics of the whole material universe. The field of relativistic cosmology was ushered in by a 1917 paper by Einstein. Another key contribution was the 1924 paper of Friedmann that described general families of spatially curved, expanding or contracting homogeneous cosmological models. The Friedmann models still constitute the background models of the current, inhomogeneous cosmologies. Quantitative confirmations of GR on cosmological scales have also been obtained, notably through the observation of a variety of gravitational lensing systems.

Dark clouds ahead

In conclusion, all present experimental gravitational data (universality of free fall, post-Newtonian gravity, radiative and strong-field effects in binary pulsars, GW emission by coalescing BHs and gravitational lensing) have been found to be compatible with the predictions of Einstein’s theory. There are also strong constraints on sub-millimetre modifications of Newtonian gravity from torsion-balance tests of the inverse square law.

One might, however, wish to keep in mind the presence of two dark clouds in our current cosmology, namely the need to assume that most of the stress-energy tensor that has to be put on the right-hand side of the GR field equations to account for the current observations is made of yet unseen types of matter: dark matter and a “cosmological constant”. It has been suggested that these signal a breakdown of Einstein’s gravitation at large scales, although no convincing theoretical modification of GR at large distances has yet been put forward.

GWs, BHs and dynamical cosmological models have become essential elements of our description of the macroscopic universe. The recent and bright beginning of GW astronomy suggests that GR will be an essential tool for discovering new aspects of the universe (see “The dawn of a new era”). A century after its inception, GR has established itself as the standard theoretical description of gravity, with applications ranging from the Global Positioning System and the dynamics of the solar system, to the realm of galaxies and the primordial universe.

However, in addition to the “dark clouds” of dark matter and energy, GR also poses some theoretical challenges. There are both classical challenges (notably the formation of space-like singularities inside BHs), and quantum ones (namely the non-renormalisability of quantum gravity – see “Gravity’s quantum side”). It is probable that a full resolution of these challenges will be reached only through a suitable extension of GR, and possibly through its unification with the current “spin ≤ 1” description of particle physics, as suggested both by supergravity and by superstring theory.

It is therefore vital that we continue to submit GR to experimental tests of increasing precision. The foundational stone of GR, the equivalence principle, is currently being probed in space at the 10^-15 level by the MICROSCOPE satellite mission of ONERA and CNES. The observation of a deviation from the universality of free fall would imply that Einstein’s purely geometrical description of gravity needs to be completed by including new long-range fields coupled to bulk matter. Such an experimental clue would be most valuable to indicate the road towards a more encompassing physical theory.

General relativity makes waves

There are two equivalent ways of characterising general relativity (GR). One describes gravity as a universal deformation of the Minkowski metric, which defines a local squared interval between two infinitesimally close space–time points and, consequently, the infinitesimal light cones describing the local propagation of massless particles. The metric field gμν is assumed in GR to be universally and minimally coupled to all the particles of the Standard Model (SM), and to satisfy Einstein’s field equations:

R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}   (1)

Here, Rμν denotes the Ricci curvature (a nonlinear combination of gμν and of its first and second derivatives), Tμν is the stress-energy tensor of the SM particles (and fields), and G denotes Newton’s gravitational constant.

The second way of defining GR, as proven by Richard Feynman, Steven Weinberg, Stanley Deser and others, states that it is the unique, consistent, local, special-relativistic theory of a massless spin-2 field. It is then found that the couplings of the spin-2 field to the SM matter are necessarily equivalent to a universal coupling to a “deformed” space–time metric, and that the propagation and self-couplings of the spin-2 field are necessarily described by Einstein’s equations.

Following the example of Maxwell, who had found that the electromagnetic-field equations admit propagating waves as solutions, Einstein found that the GR field equations admit propagating gravitational waves (GWs). He did so by considering the weak-field limit (gμν  = ημν + hμν) of his equations, namely,

\Box \bar{h}_{\mu\nu} + \eta_{\mu\nu}\, \partial^\alpha \partial^\beta \bar{h}_{\alpha\beta} - \partial^\alpha \partial_\mu \bar{h}_{\nu\alpha} - \partial^\alpha \partial_\nu \bar{h}_{\mu\alpha} = -\frac{16\pi G}{c^4}\, T_{\mu\nu}   (2)

where h̄μν = hμν − ½h ημν (h being the trace of hμν). When choosing the co-ordinate system so as to satisfy the gravitational analogue of the Lorenz gauge condition, so that

\partial^\nu \bar{h}_{\mu\nu} = 0   (3)

the linearised field equations simplify to the diagonal inhomogeneous wave equation □h̄μν = −(16πG/c^4) Tμν, which can be solved by retarded potentials.

There are two main results that derive from this wave equation: first, a GW is locally described by a plane wave with two transverse tensorial polarisations (corresponding to the two helicity states of the massless spin-2 graviton) and travelling at the velocity of light; second, a slowly moving, non-self-gravitating source predominantly emits a quadrupolar GW.

Gravity’s quantum side

There is little doubt that, in spite of their overwhelming success in describing phenomena over a vast range of distances, general relativity (GR) and the Standard Model (SM) of particle physics are incomplete theories. Concerning the SM, the problem is often cast in terms of the remaining open issues in particle physics, such as its failure to account for the origin of the matter–antimatter asymmetry or the nature of dark matter. But the real problem with the SM is theoretical: it is not clear whether it makes sense at all as a theory beyond perturbation theory, and these doubts extend to the whole framework of quantum field theory (QFT) (with perturbation theory as the main tool to extract quantitative predictions). The occurrence of “ultraviolet” (UV) divergences in Feynman diagrams, and the need for an elaborate mathematical procedure called renormalisation to remove these infinities and make testable predictions order-by-order in perturbation theory, strongly point to the necessity of some other and more complete theory of elementary particles.

On the GR side, we are faced with a similar dilemma. Like the SM, GR works extremely well in its domain of applicability and has so far passed all experimental tests with flying colours, most recently and impressively with the direct detection of gravitational waves (see “General relativity at 100”). Nevertheless, the need for a theory beyond Einstein is plainly evident from the existence of space–time singularities such as those occurring inside black holes or at the moment of the Big Bang. Such singularities are an unavoidable consequence of Einstein’s equations, and the failure of GR to provide an answer calls into question the very conceptual foundations of the theory.

Unlike quantum theory, which is rooted in probability and uncertainty, GR is based on notions of smoothness and geometry and is therefore subject to classical determinism. Near a space–time singularity, however, the description of space–time as a continuum is expected to break down. Likewise, the assumption that elementary particles are point-like, a cornerstone of QFT and the reason for the occurrence of ultraviolet infinities in the SM, is expected to fail in such extreme circumstances. Applying conventional particle-physics wisdom to Einstein’s theory by quantising small fluctuations of the metric field (corresponding to gravitational waves) cannot help either, since it produces non-renormalisable infinities that undermine the predictive power of perturbatively quantised GR.

In the face of these problems, there is a wide consensus that the outstanding problems of both the SM and GR can only be overcome by a more complete and deeper theory: a theory of quantum gravity (QG) that possibly unifies gravity with the other fundamental interactions in nature. But how are we to approach this challenge?

Planck-scale physics

Unlike with quantum mechanics, whose development was driven by the need to explain observed phenomena such as the existence of spectral lines in atomic physics, nature gives us very few hints of where to look for QG effects. One main obstacle is the sheer smallness of the Planck length, of the order of 10^-33 cm, which is the scale at which QG effects are expected to become visible (conversely, in terms of energy, the relevant scale is 10^19 GeV, which is 15 orders of magnitude greater than the energy range accessible to the LHC). There is no hope of ever directly measuring genuine QG effects in the laboratory: with zillions of gravitons in even the weakest burst of gravitational waves, realising the gravitational analogue of the photoelectric effect will forever remain a dream.
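
Both numbers follow from the standard definitions of the Planck units; a quick check:

    import math

    # Planck length l_P = sqrt(hbar*G/c^3) and energy E_P = sqrt(hbar*c^5/G).
    hbar, G, c = 1.0546e-34, 6.674e-11, 2.998e8   # SI units
    eV = 1.602e-19                                # J per eV

    l_P = math.sqrt(hbar * G / c**3)
    E_P = math.sqrt(hbar * c**5 / G)
    print(f"Planck length ~ {l_P:.1e} m")                 # ~1.6e-35 m = 1.6e-33 cm
    print(f"Planck energy ~ {E_P / eV / 1e9:.1e} GeV")    # ~1.2e19 GeV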

One can nevertheless speculate that QG might manifest itself indirectly, for instance via measurable features in the cosmic microwave background, or cumulative effects originating from a more granular or “foamy” space–time. Alternatively, perhaps a framework will emerge that provides a compelling explanation for inflation, dark energy and the origin of the universe. Although not completely hopeless, available proposals typically do not allow one to unambiguously discriminate between very different approaches, for instance when contrarian schemes like string theory and loop quantum gravity vie to explain features of the early universe. And even if evidence for new effects was found in, say, cosmic-ray physics, these might very well admit conventional explanations.

In the search for a consistent theory of QG, it therefore seems that we have no other choice but to try to emulate Einstein’s epochal feat of creating a new theory out of purely theoretical considerations.

Emulating Einstein

Yet, after more than 40 years of unprecedented collective intellectual effort, different points of view have given rise to a growing diversification of approaches to QG – with no convergence in sight. It seems that theoretical physics has arrived at a crossroads, with nature remaining tight-lipped about what comes after Einstein and the SM. There is currently no evidence whatsoever for any of the numerous QG schemes that have been proposed – no signs of low-energy supersymmetry, large extra dimensions or “stringy” excitations have been seen at the LHC so far. The situation is no better for approaches that do not even attempt to make predictions that could be tested at the LHC.

Existing approaches to QG fall roughly into two categories, reflecting a basic schism that has developed in the community. One is based on the assumption that Einstein’s theory can stand on its own feet, even when confronted with quantum mechanics. This would imply that QG is nothing more than the non-perturbative quantisation of Einstein’s theory and that GR, suitably treated and eventually complemented by the SM, correctly describes the physical degrees of freedom also at the very smallest distances. The earliest incarnation of this approach goes back to the pioneering work of John Wheeler and Bryce DeWitt in the early 1960s, who derived a GR analogue of the Schrödinger equation in which the “wave function of the universe” encodes the entire information about the universe as a quantum system. Alas, the non-renormalisable infinities resurface in a different guise: the Wheeler–DeWitt equation is so ill-defined mathematically that no one until now has been able to make sense of it beyond mere heuristics. More recent variants of this approach in the framework of loop quantum gravity (LQG), spin foams and group field theory replace the space–time metric by new variables (Ashtekar variables, or holonomies and fluxes) in a renewed attempt to overcome the mathematical difficulties.

The opposite attitude is that GR is only an effective low-energy theory arising from a more fundamental Planck-scale theory, whose basic degrees of freedom are very different from GR or quantum field theory. In this view, GR and space–time itself are assumed to be emergent, much like macroscopic physics emerges from the quantum world of atoms and molecules. The perceived need to replace Einstein’s theory by some other and more fundamental theory, having led to the development of supersymmetry and supergravity, is the basic hypothesis underlying superstring theory (see “The many lives of supergravity”). Superstring theory is the leading contender for a perturbatively finite theory of QG, and widely considered the most promising possible pathway from QG to SM physics. This approach has spawned a hugely varied set of activities and produced many important ideas. Most notable among these, the AdS/CFT correspondence posits that the physics that takes place in some volume can be fully encoded in the surface bounding that volume, as for a hologram, and consequently that QG in the bulk should be equivalent to a pure quantum field theory on its boundary.

Apart from numerous technical and conceptual issues, there remain major questions for all approaches to QG. For LQG-like or “canonical” approaches, the main unsolved problems concern the emergence of classical space–time and the Einstein field equations in the semiclassical limit, and their inability to recover standard QFT results such as anomalies. On the other side, a main shortcoming is the “background dependence” of the quantisation procedure, for which both supergravity and string theory have to rely on perturbative expansions about some given space–time background geometry. In fact, in its presently known form, string theory cannot even be formulated without reference to a specific space–time background.

These fundamentally different viewpoints also offer different perspectives on how to address the non-renormalisability of Einstein’s theory, and consequently on the need (or not) for unification. Supergravity and superstring theory try to eliminate the infinities of the perturbatively quantised theory, in particular by including fermionic matter in Einstein’s theory, thus providing a raison d’être for the existence of matter in the world. They therefore automatically arrive at some kind of unification of gravity, space–time and matter. By contrast, canonical approaches attribute the ultraviolet infinities to basic deficiencies of the perturbative treatment. However, to reconcile this view with semiclassical gravity, they will have to invoke some mechanism – a version of Weinberg’s asymptotic safety – to save the theory from the abyss of non-renormalisability.    

Conceptual challenges

Beyond the mathematical difficulties to formulating QG, there are a host of issues of a more conceptual nature that are shared by all approaches. Perhaps the most important concerns the very ground rules of quantum mechanics: even if we could properly define and solve the Wheeler–DeWitt equation, how are we to interpret the resulting wave function of the universe? After all, the latter pretends to describe the universe in its entirety, but in the absence of outside classical observers, the Copenhagen interpretation of quantum mechanics clearly becomes untenable. On a slightly less grand scale, there are also unresolved issues related to the possible loss of information in connection with the Hawking evaporation of black holes.

A further question that any theory of QG must eventually answer concerns the texture of space–time at the Planck scale: do there exist “space–time atoms” or, more specifically, web-like structures like spin networks and spin foams, as claimed by LQG-like approaches? (see diagram) Or does the space–time continuum get dissolved into a gas of strings and branes, as suggested by some variants of string theory, or emerge from holographic entanglement, as advocated by AdS/CFT aficionados? There is certainly no lack of enticing ideas, but without a firm guiding principle and the prospect of making a falsifiable prediction, such speculations may well end up in the nirvana of undecidable propositions and untestable expectations.

Why then consider unification? Perhaps the strongest argument in favour of unification is that the underlying principle of symmetry has so far guided the development of modern physics from Maxwell’s theory to GR all the way to Yang–Mills theories and the SM (see diagram). It is therefore reasonable to suppose that unification and symmetry may also point the way to a consistent theory of QG. This point of view is reinforced by the fact that the SM, although only a partially unified theory, does already afford glimpses of trans-Planckian physics, independently of whether new physics shows up at the LHC or not. This is because the requirements of renormalisability and vanishing gauge anomalies put very strong constraints on the particle content of the SM, which are indeed in perfect agreement with what we see in detectors. There would be no more convincing vindication of a theory of QG than its ability to predict the matter content of the world (see panel below).

In search of SUSY

Among the promising ideas that have emerged over the past decades, arguably the most beautiful and far reaching is supersymmetry. It represents a new type of symmetry that relates bosons and fermions, thus unifying forces (mediated by vector bosons) with matter (quarks and leptons), and which endows space–time with extra fermionic dimensions. Supersymmetry is very natural from the point of view of cancelling divergences because bosons and fermions generally contribute with opposite signs to loop diagrams. This aspect means that low-energy (N = 1) supersymmetry can stabilise the electroweak scale with regard to the Planck scale, thereby alleviating the so-called hierarchy problem via the cancellation of quadratic divergences. These models predict the existence of a mirror world of superpartners that differ from the SM particles only by their opposite statistics (and their mass), but otherwise have identical internal quantum numbers.

To the great disappointment of many, experimental searches at the LHC so far have found no evidence for the superpartners predicted by N = 1 supersymmetry. However, there is no reason to give up on the idea of supersymmetry as such, since the refutation of low-energy supersymmetry would only mean that the most simple-minded way of implementing this idea does not work. Indeed, the initial excitement about supersymmetry in the 1970s had nothing to do with the hierarchy problem, but rather because it offered a way to circumvent the so-called Coleman–Mandula no-go theorem – a beautiful possibility that is precisely not realised by the models currently being tested at the LHC.

In fact, the reduplication of internal quantum numbers predicted by N = 1 supersymmetry is avoided in theories with extended (N > 1) supersymmetry. Among all supersymmetric theories, maximal N = 8 supergravity stands out as the most symmetric. Its status with regard to perturbative finiteness is still unclear, although recent work has revealed amazing and unexpected cancellations. However, there is one very strange agreement between this theory and observation, first emphasised by Gell-Mann: the number of spin-1/2 fermions remaining after complete breaking of supersymmetry is 48 = 3 × 16, equal to the number of quarks and leptons (including right-handed neutrinos) in three generations (see “The many lives of supergravity”). To go beyond the partial matching of quantum numbers achieved so far will, however, require some completely new insights, especially concerning the emergence of chiral gauge interactions.

Then again, perhaps supersymmetry is not the end of the story. There is plenty of evidence that another type of symmetry may be equally important, namely duality symmetry. The first example of such a symmetry, electromagnetic duality, was discovered by Dirac in 1931. He realised that Maxwell’s equations in vacuum are invariant under rotations of the electric and magnetic fields into one another – an insight that led him to predict the existence of magnetic monopoles. While magnetic monopoles have not been seen, duality symmetries have turned out to be ubiquitous in supergravity and string theory, and they also reveal a fascinating and unsuspected link with the so-called exceptional Lie groups.

More recently, hints of an enormous symmetry enhancement have also appeared in a completely different place, namely the study of cosmological solutions of Einstein’s equations near a space-like singularity. This mathematical analysis has revealed tantalising evidence of a truly exceptional infinite-dimensional duality symmetry, which goes by the name of E10, and which “opens up” as one gets close to the cosmological (Big Bang) singularity (see image at top). Could it be that the near-singularity limit can tell us about the underlying symmetries of QG in a similar way as the high-energy limit of gauge theories informs us about the symmetries of the SM? One can validly argue that this huge and monstrously complex symmetry knows everything about maximal supersymmetry and the finite-dimensional dualities identified so far. Equally important, and unlike conventional supersymmetry, E10 may continue to make sense in the Planck regime where conventional notions of space and time are expected to break down. For this reason, duality symmetry could even supersede supersymmetry as a unifying principle.

Outstanding questions

Our summary, then, is very simple: all of the important questions in QG remain wide open, despite a great deal of effort and numerous promising ideas. In the light of this conclusion, the LHC will continue to play a crucial role in advancing our understanding of how everything fits together, no matter what the final outcome of the experiments will be. This is especially true if nature chooses not to abide by current theoretical preferences and expectations.

Over the past decades, we have learnt that the SM is a most economical and tightly knit structure, and there is now mounting evidence that minor modifications may suffice for it to survive to the highest energies. To look for such subtle deviations will therefore be a main task for the LHC in the years ahead. If our view of the Planck scale remains unobstructed by intermediate scales, the popular model-builders’ strategy of adding ever more unseen particles and couplings may come to an end. In that case, the challenge of explaining the structure of the low-energy world from a Planck-scale theory of quantum gravity looms larger than ever.

Einstein on unification

It is well known that Albert Einstein spent much of the latter part of his life vainly searching for unification, although disregarding the nuclear forces and certainly with no intention of reconciling quantum mechanics and GR. Already in 1929, he published a paper on the unified theory (pictured above right). In this paper, he states with wonderful and characteristic lucidity what the criteria should be of a “good” unified theory: to describe as far as possible all phenomena and their inherent links, and to do so on the basis of a minimal number of assumptions and logically independent basic concepts. The second of these goals (also known as the principle of Occam’s razor) refers to “logical unity”, and goes on to say: “Roughly but truthfully, one might say: we not only want to understand how nature works, but we are also after the perhaps utopian and presumptuous goal of understanding why nature is the way it is and not otherwise.”

The LHC’s extra dimension

At 10.00 a.m. on 9 August 2016, physicists gathered at the Sheraton hotel in Chicago for the “Beyond the Standard Model” session at the ICHEP conference. The mood was one of slight disappointment. An excess of “diphoton” events at a mass of 750 GeV reported by the LHC’s ATLAS and CMS experiments in 2015 had not shown up in the 2016 data, ending a burst of activity that saw some 540 phenomenology papers uploaded to the arXiv preprint server in a period of just eight months. Among the proposed explanations for the putative new high-mass resonance were extra space–time dimensions, an idea that has been around since Theodor Kaluza and Oskar Klein attempted to unify the electromagnetic and gravitational forces almost a century ago.

In the modern language of string theory, extra dimensions are required to ensure the mathematical consistency of the theory. They are typically thought to be very small, close to the Planck length (10^-35 m). In the 1990s, however, theorists trying to solve problems with supersymmetry suggested that some of these extra dimensions could be as large as 10^-19 m, corresponding to an energy scale in the TeV range. In 1998, Arkani-Hamed and co-workers proposed theories with even larger extra dimensions, which predicted detectable effects in contemporary collider experiments. In such large extra-dimension (LED) scenarios, gravity can become stronger than we perceive in 3D due to the increased space available. In addition to showing us an entirely different view of the universe, extra dimensions offer an elegant solution to the so-called hierarchy problem, which arises because the Planck scale (where gravity becomes as strong as the other three forces) is 17 orders of magnitude larger than the electroweak scale.
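
The connection between the size of the extra dimensions and the TeV scale can be sketched with the standard LED (ADD) relation M_Pl^2 ~ M*^(n+2) R^n, which ties the observed Planck scale to a fundamental scale M* and n extra dimensions of common size R. A rough sketch (conventions differ by factors of order unity, and the 1 TeV fundamental scale is an assumption for illustration):

    # Size R of n equal extra dimensions for a fundamental scale M*,
    # from M_Pl^2 ~ M*^(n+2) R^n, i.e. R ~ (M_Pl/M*)^(2/n) / M*.
    hbar_c = 1.973e-16            # GeV*m, converts 1/GeV to metres
    M_Pl, M_star = 1.22e19, 1e3   # Planck scale and assumed M* (GeV)

    for n in (2, 4, 6):
        R = (M_Pl / M_star) ** (2 / n) / M_star * hbar_c
        print(f"n = {n}: R ~ {R:.0e} m")
    # n = 2 gives ~mm sizes, probed by the tabletop gravity tests
    # mentioned below; n = 6 gives ~1e-14 m.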

Particle physicists normally ignore gravity because it is feeble compared with the other three forces. In theories where gravity gets stronger at small distances due to the opening of extra dimensions, however, it can catch up and lead to phenomena at colliders with rates high enough to be measured in experiments. The possibility of having extra space dimensions at the TeV scale was a game changer. Scientists from experiments at the LEP, Tevatron and HERA colliders quickly produced tailored searches for signals of this new beyond-the-Standard-Model (SM) physics scenario. No evidence was found in their accumulated data, setting lower limits on the scale of extra dimensions of around 1 TeV.

By the turn of the century, a number of possible new experimental signatures had been identified for extra-dimension searches, many of which were studied in detail while assessing the physics performance of the LHC experiments. For the case of LEDs, where gravity is the only force that can spread into these dimensions, high-energy collider experiments were just one approach. Smaller “tabletop” experiments aiming to measure the strength of gravity at sub-millimetre distances were also in pursuit of extra dimensions, but no deviation from the Newtonian law has been observed to date. In addition, astrophysical processes placed significant constraints on the possible number and size of these dimensions.

Enter the LHC

Analysis strategies to search for extra dimensions have been deployed from the beginning of high-energy LHC operations in 2010, and the recent increase in the LHC’s collision energy to 13 TeV has extended the search window considerably. Although no positive signal of the presence of extra dimensions has been observed so far, a big leap forward has been taken in excluding large portions of the TeV-scale parameter space where extra dimensions could live.

A particular feature of LED-type searches is the production of a single, very energetic “mono-object” whose transverse momentum is not balanced by anything else visible emerging from the collision, as momentum and energy conservation would require. Examples of such objects are particle jets, very energetic photons, or heavy W and Z vector bosons. Such collisions only appear to be imbalanced, however: the emerging jet or boson recoils against a graviton that escapes detection. SM processes such as the production of a jet plus a Z boson that decays into neutrinos can therefore mimic a graviton-production signal. The absence of any excess in the mono-jet or mono-photon channels at the LHC has put stringent limits on LEDs (figure 1), with the 2010 data already surpassing previous collider search limits. LEDs can also manifest themselves as a new contribution to the continuum of the invariant-mass spectrum of two energetic photons (figure 2) or two fermions (dileptons or dijets). Here too, though, no signals have been observed, and the LHC has now excluded such contributions for extra-dimension scales up to several TeV.

In 1999, another extra-dimension scenario was proposed by Randall and Sundrum (RS), leading to a quite different phenomenology from that expected for LEDs. In its simplest form, the RS idea contains two fundamental 3D branes: one on which most if not all SM particles live, and one on which gravity lives. Gravity is assumed to be intrinsically strong, but the warped space between the two branes makes it appear weak on the brane where we live. The experimental signature of such scenarios is the production of so-called Kaluza–Klein resonances (spin-2 gravitons) that can be observed in the invariant-mass spectra of difermions or dibosons. The spectra most accessible to the LHC experiments include the diphoton and dilepton spectra, in which no new resonance signal has been found; at present the limits on putative Kaluza–Klein gravitons are about 4 TeV, depending on the RS-model parameters. Analyses of dijet final states provide even more stringent limits of up to 7 TeV. Further extensions of the RS model, in particular the production of top quark–antiquark resonances, offer a more sensitive signature, but despite intense searches no signal has been detected.
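
The power of the warping can be seen with a one-line estimate (an illustration using the generic RS warp factor, with assumed values; not a calculation from the article): mass scales on our brane are suppressed relative to the Planck scale by exp(–kπr_c), where k is the curvature scale of the warped space and r_c the separation of the branes.

```python
import math

# Illustrative estimate (assumed values): in the RS model, scales on our
# brane are redshifted by the warp factor exp(-k*pi*r_c). Ask what k*r_c
# is needed to warp the Planck scale down to a TeV.
M_PLANCK = 1.22e19  # GeV
M_TEV = 1.0e3       # GeV

k_rc = math.log(M_PLANCK / M_TEV) / math.pi
print(f"k*r_c ~ {k_rc:.1f}")  # ~11.8
```

A modest product k r_c of about 12 thus spans the 16 orders of magnitude between the electroweak and Planck scales, which is what makes the scenario so attractive.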

Searching in the dark

In the early 2000s, it was realised that large or warped extra dimensions could lead to a new type of signature at the LHC: microscopic black holes. These can form when two colliding partons approach each other to within the Schwarzschild radius, or black-hole event horizon, which can be as large as a femtometre in the presence of TeV-scale extra dimensions. Such microscopic black holes would evaporate via Hawking radiation on time scales of around 10⁻²⁷ s, long before they could swallow any surrounding matter, and would provide an ideal opportunity to study quantum gravity in the laboratory.
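
The quoted lifetime is simply the natural quantum timescale of a TeV-scale object, of order ħ/(Mc²); a quick check (with an assumed black-hole mass, for illustration only):

```python
# Order-of-magnitude check (assumed mass, for illustration): the natural
# lifetime of a quantum-gravity object of mass-energy E is of order hbar/E.
HBAR_GEV_S = 6.582e-25  # hbar in GeV*s
E_BH = 5.0e3            # GeV: an assumed black-hole mass-energy of 5 TeV

print(f"tau ~ {HBAR_GEV_S / E_BH:.0e} s")  # ~1e-28 s, i.e. the 1e-27 s scale
```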

Black holes produced with a mass significantly above the formation threshold are expected to evaporate into high-multiplicity final states containing plenty of particle jets, leptons, photons and even Higgs particles. Searches for such energetic multi-object final states in excess of the SM expectation have been performed since the first 7 TeV collisions at the LHC, but no excess has been found. Black holes produced closer to the formation threshold would be expected to decay into final states with fewer objects, for instance dijets. The CMS and ATLAS experiments have searched for all of these final states, most recently in the 13 TeV data (figure 3), but no signal has been observed so far for black-hole masses up to about 9 TeV.

Several other possible incarnations of extra-dimension theories have been proposed and searched for at the LHC. So-called TeV-type extra dimensions allow more SM particles, for example partners of the heavy W and Z bosons, to enter the bulk; these would show up as high-mass resonances in dilepton and other invariant-mass spectra. Such resonances have spin 1, making them harder to detect because they can interfere with the SM Drell–Yan background. Nevertheless, no such resonances have been discovered so far.

In so-called universal extra-dimension (UED) scenarios, all particles have states that can enter the bulk. If this scenario is correct, a completely new spectrum of partners of the SM particles should show up at the LHC at high masses. Although this looks very much like what would be expected from supersymmetry, where all known SM particles also have partners, the Kaluza–Klein partners would have exactly the same spin as their SM counterparts, whereas supersymmetry transforms bosons into fermions and vice versa. Alas, no new particles, either Kaluza–Klein partners or supersymmetric candidates, have been observed, pushing the lower mass limits beyond 1 TeV for certain particle types.

Final hope

Collider data have so far given no sign of the existence of extra dimensions, or for that matter any sign that gravity becomes strong at the TeV scale. Even if extra dimensions exist, they could be as small as predicted by string theory, in which case they would not be able to solve the hierarchy problem. The idea is still very much alive, however, and searches will continue as more data are recorded at the LHC.

Even excellent and attractive ideas need confirmation from data, and inevitably the initial high enthusiasm for extra-dimension theories may have waned somewhat in recent years. Such confirmation could come from a next generation of higher-energy colliders, but there is unfortunately no guarantee. It could be that we have to turn to even more outlandish ideas to progress further.

Catching a gravitational wave

Gravitational waves alternatively compress and stretch space–time as they propagate, exerting tidal forces on all objects in their path. Detectors such as Advanced LIGO (aLIGO) search for this subtle distortion of space–time by measuring the relative separation of mirrors at the ends of long perpendicular arms, which form a simple Michelson interferometer with Fabry–Perot cavities in the arms: a beam splitter directs laser light to mirrors at the ends of the arms and the reflected light is recombined to produce an interference pattern. When a gravitational wave passes through the detector, the strain it exerts changes the relative lengths of the arms and causes the interference pattern to change.

The arms of the aLIGO detectors are each 4 km long to help maximise the measured length change. Even on this scale, however, the induced length changes are tiny: the first detected gravitational waves, from the merger of two black holes, changed the arm length of the aLIGO detectors by just 4 × 10⁻¹⁸ m, which is approximately 200 times smaller than the proton radius. Achieving the fantastically high sensitivity required to detect this event was the culmination of decades of research and development.
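
These figures can be verified in two lines (taking the proton charge radius as roughly 0.84 fm):

```python
# Check of the numbers quoted above for the first detection.
ARM_LENGTH = 4.0e3        # m: aLIGO arm length
DELTA_L = 4.0e-18         # m: measured change in arm length
PROTON_RADIUS = 0.84e-15  # m: approximate proton charge radius

print(f"strain h = dL/L ~ {DELTA_L / ARM_LENGTH:.0e}")        # ~1e-21
print(f"proton radius / dL ~ {PROTON_RADIUS / DELTA_L:.0f}")  # ~200
```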

Battling noise

The idea of using an interferometer to detect gravitational waves was first concretely proposed in the 1970s, and full-scale detectors began to be constructed in the mid-1990s, including GEO600 in Germany, Virgo in Italy and the LIGO project in the US. LIGO consists of detectors at two sites separated by about 3000 km – Hanford (in Washington state) and Livingston (in Louisiana) – and undertook its first science runs in 2002–2008. Following a major upgrade, the observatory restarted in September 2015 as aLIGO with an initial sensitivity four times greater than its predecessor. Since the detectors measure strain, a fourfold gain in sensitivity means sources can be seen four times further away, so the volume surveyed – and hence the expected event rate – increases by a factor of 4³ ≈ 64.

A major issue facing the aLIGO designers is isolating the detectors from various noise sources. At a frequency of around 10 Hz, the motion of the Earth’s surface – seismic noise – is about 10 orders of magnitude larger than can be tolerated, although it falls off at higher frequencies. A powerful solution is to suspend the mirrors as pendulums: a pendulum acts as a low-pass filter, strongly attenuating motion at frequencies above its resonance. In aLIGO, a chain of four suspended masses provides a factor of 10⁷ reduction in seismic motion, and the entire suspension is attached to an advanced seismic-isolation system using a variety of active and passive techniques, which contributes a further factor of 1000. At 10 Hz, and in the absence of other noise sources, these systems alone would bring the sensitivity of the detectors to roughly 10⁻¹⁹ m/√Hz. At even lower frequencies (around 10 μHz), the daily tides stretch and shrink the Earth by of order 0.4 mm over the 4 km arm length.
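
A sketch of why the pendulum chain is so effective (the resonance frequency below is an assumed, typical value, not an aLIGO specification): above its resonance f₀, each stage attenuates motion roughly as (f₀/f)², so N cascaded stages attenuate as (f₀/f)^2N.

```python
# Sketch of multi-stage pendulum isolation (assumed resonance frequency).
F0 = 1.0      # Hz: assumed pendulum resonance, typical for such suspensions
F = 10.0      # Hz: frequency at which the isolation is evaluated
N_STAGES = 4  # aLIGO suspends its mirrors from a chain of four masses

attenuation = (F0 / F) ** (2 * N_STAGES)
print(f"attenuation at {F:.0f} Hz ~ {attenuation:.0e}")  # ~1e-8
```

For these inputs the result is of the same order as the factor-10⁷ suppression quoted above for the aLIGO suspension chain.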

Another source of low-frequency noise arises from moving mass interacting with the detector mirrors via the Newtonian inverse-square law. The dominant contribution comes from surface seismic waves, which produce density fluctuations in the ground close to the interferometer mirrors and hence a fluctuating gravitational force on them. While methods of monitoring and subtracting this noise are being investigated, it is likely always to limit the performance of Earth-based detectors at frequencies below 1 Hz.

Thermal noise associated with the thermal energy of the mirrors and their suspensions can also cause the mirrors to move, providing a significant noise source at low-to-mid-range frequencies. The magnitude of thermal noise is related to the mechanical loss of the materials: like a high-quality wine glass, a low-loss material will ring for a long time with a pure note, because most of the thermal motion is confined to frequencies close to the resonance. For this reason, aLIGO uses fibres fabricated from fused silica – a type of very pure glass with very low mechanical loss – for the final stage of the mirror suspension. Pioneered in the GEO600 detector near Hanover in Germany, the use of silica fibres in place of the steel wires of the initial LIGO detectors significantly reduces suspension thermal noise.

Low-loss fused silica is also used for the 40 kg interferometer mirrors, which carry multi-layer optical coatings to achieve the required high reflectivity. For aLIGO, a new optical coating was developed comprising a stack of alternating layers of silica and titania-doped “tantala”, reducing the coating thermal noise by about 20%. However, at the aLIGO design sensitivity (roughly 10 times better than that of the initial LIGO detectors), coating thermal noise will be the limiting noise source at frequencies of around 60 Hz – close to the frequency at which the detectors are most sensitive.

aLIGO also has much reduced quantum noise compared with the original LIGO. This noise has two components: radiation-pressure noise and shot noise. The former results from fluctuations in the number of photons hitting the detector mirrors; it is most significant at lower frequencies and has been reduced by using mirrors four times heavier than the initial LIGO mirrors. Photon shot noise, resulting from statistical fluctuations in the number of photons at the output of the detector, limits the sensitivity at higher frequencies. Since shot noise is inversely proportional to the square root of the circulating power, it can be reduced by using higher laser power: optical cavities store the light in the arms and build up the power, and in aLIGO’s first observing run 100 kW of laser power was circulating in the detector arms, with the potential to increase this to 750 kW in future runs.
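
The pay-off from the planned power increase follows directly from the square-root scaling:

```python
# Shot noise scales as 1/sqrt(circulating power), so higher power means
# better high-frequency sensitivity.
P_FIRST_RUN = 100e3  # W: circulating power in aLIGO's first observing run
P_FUTURE = 750e3     # W: potential circulating power in future runs

print(f"shot-noise reduction ~ {(P_FUTURE / P_FIRST_RUN) ** 0.5:.1f}x")  # ~2.7x
```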

In addition to reductions in these fundamental noise sources, many other technological improvements were required to tackle technical noise sources. Improvements over the initial LIGO detectors included a thermal-compensation system to reduce thermal lensing in the optics, reduced electronic noise in the control circuits, and finer polishing of the mirror substrates to reduce the amount of scattered light in the detectors.

Upgrades on the ground

Having detected their first gravitational wave almost as soon as they switched on in September 2015, followed by a further event a few months later, the aLIGO detectors began their second observing run, dubbed “O2”, on 30 November 2016; it is scheduled to last for six months. More observing runs are envisaged, with further sensitivity upgrades between them.

The next major upgrade, expected in around 2018, will see the injection of “squeezed light” to further reduce quantum noise. However, to gain the maximum sensitivity improvement from squeezing, a reduction in coating thermal noise is also likely to be required. With these and other relatively short-term upgrades, it is expected that a factor-two improvement over the aLIGO design sensitivity could be achieved. This would allow events such as the first detection to be observed with a signal-to-noise ratio almost 10 times better than the initial result. Further improvements in sensitivity will almost certainly require more extensive upgrades or new facilities, possibly involving longer detectors or cryogenic cooling of the mirrors.

aLIGO is expected to soon be joined in observing runs by Advanced Virgo, giving a network of three geographically separated detectors and thus improving our ability to locate the position of gravitational-wave sources on the sky. Discussions are also under way for an aLIGO site in India. In Japan, the KAGRA detector is under construction: this detector will use cryogenic cooling to reduce thermal noise and is located underground to reduce seismic and gravity gradient effects. When complete, KAGRA is expected to have similar sensitivity to aLIGO.

Longer term, a European detector known as the Einstein Telescope (ET) has been proposed to provide a factor of 10 more sensitivity than aLIGO. ET would not only have arms 10 km long but would also take a new approach to noise reduction, using two very different detectors: a high-power room-temperature interferometer optimised for sensitivity at high frequencies, where shot noise limits performance, and a low-power cryogenic interferometer optimised for sensitivity at low frequencies, where thermal noise limits performance. ET would require significant changes in detector technology and would also be constructed underground to reduce the effect of seismic and gravity-gradient noise on the low-frequency sensitivity.

The final frontier

Obtaining significantly improved sensitivity at lower frequencies is difficult on Earth, where low-frequency signals are swamped by local mass motion. Gaining sensitivity at very low frequencies – which is where we must look for signals from massive black-hole collisions and other sources that will provide exquisite science results – is only likely to be achieved in space. This concept has been on the table since the 1970s and has evolved into the Laser Interferometer Space Antenna (LISA) project, led by the European Space Agency (ESA) with contributions from 14 European countries and the US.

A technology-demonstration mission called LISA Pathfinder was launched on 3 December 2015 from French Guiana. It is currently located 1.5 million km away at the first Earth–Sun Lagrange point and will take data until the end of May 2017. The aim of LISA Pathfinder is to demonstrate technologies for a space-borne gravitational-wave detector based on the same measurement philosophy as that used by ground-based detectors. The mission has clearly demonstrated that test masses (gold–platinum cubes with 46 mm sides, separated by 38 cm) can be placed into free fall such that the only varying force acting on them is gravity. It has also validated a host of complementary techniques, including operating a drag-free spacecraft using cold-gas thrusters, electrostatic control of the free-floating test masses, short-arm interferometry and test-mass charge control. Combined, these novel features allow differential accelerometry at the 10⁻¹⁵ g level, which is the sensitivity needed for a space-borne gravitational-wave detector. Indeed, if the Pathfinder test-mass technology were used to build a full-scale LISA detector, it would recover almost all of the science originally anticipated for LISA without any further improvements.

The success of Pathfinder, coming hot on the heels of the detection of gravitational waves, is a major boost for the international gravitational-wave community. It comes at an exceptional time for the field, with ESA currently inviting proposals for the third of its Cosmic Vision “large missions” programme. Developments are needed to move from LISA Pathfinder to LISA proper, but these are well understood, and technology-development programmes are planned and under way. The timeline for the mission leads to a launch in the early 2030s, and the success of Pathfinder means we can look forward with excitement to the fantastic science that will result.

Does antimatter fall up?

Measuring the effect of gravity on antimatter is a long-standing endeavour. It started with a project at Stanford in 1968 that attempted to measure the free fall of positrons, but a trial experiment with electrons showed that environmental effects swamped the effect of gravity, and the final experiment was never performed. In the 1990s, the PS200 experiment at CERN’s LEAR facility attempted the same feat with antiprotons, but the project ended with the termination of LEAR before any robust measurement could be made. To date, indirect measurements have set limits on deviations from standard gravity at the level of 10⁻⁶.

Thanks to advances in cooling and trapping technology, and the construction of a new synchrotron at CERN called ELENA, three collaborations are now preparing experiments at CERN’s Antiproton Decelerator (AD) facility to measure the behaviour of antihydrogen (a positron orbiting an antiproton) under gravity. The ALPHA experiment has already analysed its data on the trapping of antihydrogen atoms to set upper limits on differences in the free-fall rate of matter and antimatter, and is now designing a new set-up. AEgIS is currently putting its apparatus through its paces, while GBAR will start installation in 2017.

Given that most of the mass of antinuclei comes from massless gluons, it is extremely unlikely that antimatter experiences an opposite gravitational force to matter and therefore “falls” up. Nevertheless, precise measurements of the free fall of antiatoms could reveal subtle differences that point to a crack in our current understanding.

Violating equivalence

To date, most efforts at the AD have focused on looking for CPT violation by comparing the spectroscopy of antihydrogen to its well-known matter counterpart, hydrogen. Now we are in a position to test Einstein’s equivalence principle with antimatter by directly measuring the free fall of antiatoms on Earth. The equivalence principle is the keystone of general relativity and states that all particles with the same initial position and velocity should follow the same trajectories in a given gravitational field. On the other hand, quantum theories such as supersymmetry or superstrings do not necessarily lead to an equivalent force on matter and antimatter (technically, the terms related to gravity in the Lagrangians are not bound to be the same for matter and antimatter). This is also the case when Lorentz-symmetry violating terms are included in the Standard Model of particle physics.

Any difference seen in the behaviour of antimatter and matter with respect to gravity would mean that the equivalence principle is not exact, and would force us to confront quantum effects in the gravitational arena. Experiments performed with free-falling matter atoms have so far found no difference from the behaviour of macroscopic objects. Such tests have set limits at the level of one part in 10¹³, but have not yet been able to test the equivalence principle at the level where supersymmetric or other quantum effects would appear. Since the amplitude of these effects could be different for antimatter, the AD experiments may have a better opportunity to probe them. Any difference would probably not change anything in the observable universe, but it would point to the necessity of a quantum theory of gravity.

AEgIS plans to measure the vertical deviation of a pulsed horizontal beam of cold antihydrogen atoms, generated by bringing laser-excited positronium moving at several km/s into contact with cold antiprotons travelling at a few hundred m/s. The resulting highly excited antihydrogen atoms are then accelerated horizontally, and a moiré deflectometer is used to measure the vertical deviation, which is expected to be a few microns over the approximately 1 m-long flight tube of AEgIS. Reaching the lowest possible antiproton temperature minimises the divergence of the beam and therefore maximises the flux of antihydrogen atoms reaching the downstream detector.
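
The size of the effect AEgIS must resolve follows from simple kinematics (the beam speeds below are assumed values for illustration, not AEgIS specifications): over a horizontal flight path L at speed v, gravity deflects the beam vertically by Δy = g(L/v)²/2.

```python
# Illustrative gravitational deflection of a horizontal antihydrogen beam
# (assumed beam speeds; not AEgIS specifications).
G = 9.81  # m/s^2: standard gravitational acceleration
L = 1.0   # m: approximate length of the AEgIS flight tube

for v in (500.0, 1000.0):  # m/s: assumed horizontal beam speeds
    dy = 0.5 * G * (L / v) ** 2
    print(f"v = {v:.0f} m/s: deflection ~ {dy * 1e6:.0f} um")
# ~20 um at 500 m/s and ~5 um at 1000 m/s: micron-scale deflections that
# the moire deflectometer must resolve.
```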

In GBAR, which takes advantage of advances in ion-cooling techniques, antihydrogen ions (H̄⁺, an antiproton bound to two positrons) are produced with velocities of the order of 0.5 m/s. In a second step, the anti-ions are stripped of one positron to give an ultra-slow neutral antiatom that is allowed to enter free fall. The time of free fall over a height of 20 cm is about 200 ms, which is easily measurable. These numbers assume the gravitational acceleration known for matter atoms; the expected sensitivity to small deviations from it is 1% in the first phase of operation.
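
The quoted drop time is easy to check under standard gravity:

```python
import math

# Free-fall time from rest over the GBAR drop height, assuming the
# gravitational acceleration known for matter.
g = 9.81  # m/s^2
h = 0.20  # m: drop height

print(f"free-fall time ~ {math.sqrt(2 * h / g) * 1e3:.0f} ms")  # ~200 ms
```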

The ALPHA-g experiment will release antihydrogen atoms from a vertical magnetic atom trap and record their positions when they annihilate on the walls of the experiment. In a proof-of-principle experiment using the original ALPHA atom trap, the acceleration of antihydrogen atoms by gravity was constrained to lie anywhere between –110 g and 65 g. ALPHA-g improves on this original demonstration by orienting the trap vertically, thereby enabling better control of the antiatom release and improving sensitivity to the vertical annihilation position. In the new arrangement, antihydrogen gravitation can be measured at the 10% level, which would already settle the question of whether antimatter falls up or down, but improvements in cooling techniques will allow measurements at the 1% level. A long-term aspiration of the ALPHA-g project is to use techniques that cause antihydrogen atoms to interact with a beam of photons, promising a sensitivity in the 10⁻⁶ range.

Cooling matter

In the case of AEgIS, the deflectometer principle that underpins the measurement has already been demonstrated with matter atoms and with antiprotons, while the time-of-flight measurement is straightforward in the case of GBAR. The difficulty for the experiments lies in preparing sufficient numbers of antiatoms at the required low velocities. ALPHA has already demonstrated the trapping of several hundred antiatoms at a temperature below 0.5 K, corresponding to random velocities of the order of 10 m/s. The antiatoms are formed by letting the antiprotons traverse a plasma of positrons located within the same Penning trap.

A different scheme is used in AEgIS and GBAR to form and possibly cool the antiatoms and anti-ions. In AEgIS, antiprotons are cooled within a Penning trap and receive a shower of positronium atoms (bound e⁺e⁻ pairs) to form the antiatoms. These are then slightly accelerated by electric fields (which act on the atoms’ induced electric-dipole moments) so that they exit the charged particle trap axially in the form of a neutral beam. For GBAR, the antiproton beam traverses a cloud of positronium to form the anti-ions, which are then cooled to a few μK by forcing them to interact with laser-cooled beryllium ions.

In this race towards low energies, ALPHA and AEgIS are located on the beam at the AD, which delivers 5 MeV antiprotons. While AEgIS is already commissioning its dedicated gravity experiment, ALPHA will move from spectroscopy to gravity in the coming months. GBAR, which will be the first experiment to make use of the beam delivered by ELENA, is now beginning installation and expects first attempts at anti-ion production in 2018. ELENA will decelerate antiprotons coming from the AD from 5 MeV to just 100 keV, making it more efficient to trap and store antimatter. Following commissioning first with protons and then with hydrogen ions, ELENA should receive its first antiprotons in the middle of 2017 (CERN Courier December 2016 p16). Along with precision tests of CPT invariance, this facility will help to ensure that any differences in the gravitational antics of antimatter are not missed.
