The discovery in 2012 of the Higgs boson, a fundamentally new type of scalar particle, has provided the particle-physics community with a new tool with which to search for new physics beyond the Standard Model (SM). Originally discovered via its decays into two photons or four leptons, the SM Higgs boson is also predicted to interact with fermions with coupling strengths proportional to the fermion masses. The top quark, being the heaviest elementary fermion known, has the largest coupling to the Higgs boson, and precise measurements of processes involving this coupling therefore provide a sensitive means to search for new physics.
The top-Higgs coupling is crucial for the production of Higgs bosons at the LHC, since the process with the largest production cross-section (gluon–gluon fusion) proceeds via a virtual top-quark loop. In this sense, Higgs production itself provides indirect evidence for the top-Higgs coupling. Direct experimental access to the top-Higgs coupling, on the other hand, comes from the study of the associated production of a Higgs boson and a top-quark pair. This production mode, while proceeding at a rate about 100 times smaller than gluon fusion, provides a highly distinctive signature in the detector, which includes leptons and/or jets from the decay of the two top quarks.
Combined ATLAS and CMS results on ttH production based on the LHC’s Run 1 data set showed an intriguing excess: the measured rate was above the SM prediction with a statistical significance corresponding to 2.3σ. With the increase of the LHC energy from 8 to 13 TeV for Run 2, the ttH production cross-section is expected to increase by a factor four – putting the ttH analyses in the crosshairs of the CMS collaboration in its search for new physics.
Compared with the channels that provided the first evidence for the Higgs boson in 2012, namely decays into clean final states containing two photons or four leptons, the ttH process is much rarer, and the expected signal yields in these modes are just a few events. For this reason, searches for ttH production have been driven by the higher sensitivity achieved in Higgs decay modes with larger branching fractions, such as H → bb, H → WW and H → ττ. The search in the H → bb final state is challenging because of the large background from the production of top-quark pairs in association with jets, and the results are currently limited by systematic and theoretical uncertainties.
A compromise between expected signal yield and background uncertainty can be obtained from final states containing leptons. Such analyses target Higgs decays to WW*, ZZ* and ττ pairs, and make use of events with two same-sign leptons or more than three light leptons produced in association with b-quark jets from top-quark decays. Multivariate techniques allow the background due to jets misidentified as leptons to be reduced, while similar algorithms provide discrimination against irreducible background from tt + W and tt + Z production. Events with reconstructed hadronic τ-lepton decays are studied separately.
The latest results of ttH searches at CMS (see figure) show that we are on the verge of measuring this crucial process with sufficient precision to confirm or disprove the previously observed excess. With a larger data set it should be possible to obtain clear evidence for ttH production by the end of Run 2.
New observations using ESO’s Very Large Telescope (VLT) in Chile indicate that massive, star-forming galaxies in the early universe were dominated by normal, baryonic matter. This is in stark contrast to present-day galaxies, where the effects of dark matter on the rotational velocity of spiral galaxies seem to be much greater. The surprising result, published in Nature by an international team of astronomers led by Reinhard Genzel at the Max Planck Institute for Extraterrestrial Physics in Germany, suggests that dark matter was less influential in the early universe than it is today.
Whereas normal matter in the cosmos is visible as brightly shining stars, glowing gas and clouds of dust, dark matter does not emit, absorb or reflect light. This elusive, transparent matter can only be observed via its gravitational effects, one of which is a higher speed of rotation in the outer parts of spiral galaxies. The disc of a spiral galaxy rotates with a velocity of hundreds of kilometres per second, making a full revolution in a period of hundreds of millions of years. If a galaxy’s mass consisted entirely of normal matter, the sparser outer regions would rotate more slowly than the dense regions at the centre. But observations of nearby spiral galaxies show that their inner and outer parts actually rotate at approximately the same speed.
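The reasoning can be made explicit with the textbook relation for a circular orbit, balancing gravity against centripetal acceleration; this is a generic illustration of why a flat rotation curve implies unseen mass, not part of the VLT analysis itself:

```latex
% Circular-orbit speed set by the mass M(<r) enclosed within radius r
v(r) = \sqrt{\frac{G\,M(<r)}{r}}
% If essentially all the mass sits near the centre, M(<r) is roughly constant at large r
% and v falls off as r^{-1/2} (a Keplerian decline); a flat curve, v(r) ~ const,
% instead requires M(<r) to grow in proportion to r, i.e. unseen mass in the outskirts.
```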
It is widely accepted that the observed “flat rotation curves” indicate that spiral galaxies contain large amounts of non-luminous matter in a halo surrounding the galactic disc. This traditional view is based on observations of numerous galaxies in the local universe, but is now challenged by the latest observations of galaxies in the distant universe. The rotation curves of six massive, star-forming galaxies at the peak of galaxy formation, 10 billion years ago, were measured with the KMOS and SINFONI instruments on the VLT, and the results are intriguing. Unlike local spiral galaxies, the outer regions of these distant galaxies seem to be rotating more slowly than regions closer to the core – suggesting they contain less dark matter than expected. The same decreasing velocity trend away from the centres of the galaxies is also found in a composite rotation curve that combines data from around 100 other distant galaxies, which have too weak a signal for an individual analysis.
Genzel and collaborators identify two probable causes for the unexpected result. Besides a stronger dominance of normal matter with the dark matter playing a much smaller role, they also suggest that early disc galaxies were much more turbulent than the spiral galaxies we see in our cosmic neighbourhood. Both effects seem to become more marked as astronomers look further back in time into the early universe. This suggests that three to four billion years after the Big Bang, the gas in galaxies had already efficiently condensed into flat, rotating discs, while the dark-matter halos surrounding them were much larger and more spread out. Apparently it took billions of years longer for dark matter to condense as well, so its dominating effect is only seen on the rotation velocities of galaxy discs today.
This explanation is consistent with observations showing that early galaxies were much more gas-rich and compact than today’s galaxies. Embedded in a wider dark-matter halo, their rotation curves would be only weakly influenced by its gravity. It would therefore be interesting to explore whether the suggestion of a slow condensation of dark-matter halos could help shed light on this mysterious component of the universe.
The accelerating expansion of the universe, first recognised 20 years ago, has been confirmed by numerous observations. Remarkably, whatever the source of the acceleration, it is the primary driver of the dynamical evolution of the universe in the present epoch. The fact that we do not know the nature of this so-called dark energy is one of the most important puzzles in modern fundamental physics. Whether due to a cosmological constant, a new dynamical field, a deviation from general relativity on cosmological scales, or something else, dark energy has triggered numerous theoretical models and experimental programmes. Physicists and astronomers are convinced that pinning down the nature of this mysterious component of the universe will lead to a revolution in physics.
Based on the current lambda-cold-dark-matter (ΛCDM) model of cosmology – which has only two ingredients: general relativity with a nonzero cosmological constant and cold dark matter – we identify at this time three dominant components of the universe: normal baryonic matter, which makes up only 5% of the total energy density; dark matter (27%); and dark energy (68%). This model is extremely successful in fitting observations, such as the Planck mission’s measurements of the cosmic microwave background, but it gives no clues about the nature of the dark-matter or dark-energy components. It should also be noted that the assumption of a nonzero cosmological constant, implying a nonzero vacuum energy density, leads to what has been called the worst prediction ever made in physics: its value as measured by astronomers falls short of what is predicted by the Standard Model of particle physics by well over 100 orders of magnitude.
Depending on what form it takes, dark energy changes the dynamical evolution during the expansion history of the universe as predicted by cosmological models. Specifically, dark energy modifies the expansion rate as well as the processes by which cosmic structures form. Whether the acceleration is produced by a new scalar field or by modified laws of gravity will impact differently on these observables, and the two effects can be decoupled using several complementary cosmological probes. Type Ia supernovae and baryon acoustic oscillations (BAO) are very good probes of the expansion rate, for instance, while gravitational lensing and peculiar velocities of galaxies (as revealed by their redshift) are very good probes of gravity and the growth rate of structures (see panel “The geometry of the universe” below). It is only by combining several complementary probes that the source of the acceleration of the universe can be understood. The changes are extremely small and are currently undetectable at the level of individual galaxies, but by observing many galaxies and treating them statistically it is possible to accurately track the evolution and therefore get a handle on what dark energy physically is. This demands new observing facilities capable of both measuring individual galaxies with high precision and surveying large regions of the sky to cover all cosmological scales.
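How the expansion rate responds to dark energy can be summarised in the standard parametrisation for a flat universe with a constant equation-of-state parameter w; this is a generic textbook relation given for orientation, not a description of any particular survey’s analysis:

```latex
% Expansion rate as a function of redshift z in a flat universe
H^{2}(z) = H_{0}^{2}\left[\,\Omega_{m}(1+z)^{3} + \Omega_{\mathrm{DE}}(1+z)^{3(1+w)}\,\right]
% w = -1 corresponds to a cosmological constant; any departure from -1 (or a time-varying w)
% alters both the distance-redshift relation and the growth rate of cosmic structures,
% which is what the probes listed above are designed to measure.
```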
Euclid science parameters
Euclid is a new space-borne telescope under development by the European Space Agency (ESA). It is a medium-class mission of ESA’s Cosmic Vision programme and was selected in October 2011 as the first-priority cosmology mission of the next decade. Euclid will be launched at the end of 2020 and will measure the accelerating expansion of our universe from the time it kicked in around 10 billion years ago to our present epoch, using four cosmological probes that can explore both dark-energy and modified-gravity models. It will capture a 3D picture of the distribution of the dark and baryonic matter from which the acceleration will be measured to per-cent-level accuracy, and measure possible variations in the acceleration to 10% accuracy, improving our present knowledge of these parameters by a factor 20–60. Euclid will observe the dynamical evolution of the universe and the formation of its cosmic structures over a sky area covering more than 30% of the celestial sphere, corresponding to about five per cent of the volume of the observable universe.
The dark-matter distribution will be probed via weak gravitational-lensing effects on galaxies. Gravitational lensing by foreground objects slightly modifies the shape of distant background galaxies, producing a distortion that directly reveals the distribution of dark matter (see panel “Tracking cosmic structure” below). The way such lensing changes as a function of look-back time, due to the continuing growth of cosmic structure from dark matter, strongly depends on the accelerating expansion of the universe and turns out to be a clear signature of the amount and nature of dark energy. Spectroscopic measurements, meanwhile, will enable us to determine tiny local deviations of the redshift of galaxies from their expected value derived from the general cosmic expansion alone (see image below). These deviations are signatures of peculiar velocities of galaxies produced by the local gravitational fields of surrounding massive structures, and therefore represent a unique test of gravity. Spectroscopy will also reveal the 3D clustering properties of galaxies, in particular baryon acoustic oscillations.
Together, weak-lensing and spectroscopy data will reveal signatures of the physical processes responsible for the expansion and the hierarchical formation of structures and galaxies in the presence of dark energy. A cosmological constant, a new dark-energy component or deviations from general relativity will produce different signatures. Since these differences are expected to be very small, however, the Euclid mission is extremely demanding scientifically and also represents considerable technical, observational and data-processing challenges.
By further analysing the Euclid data in terms of power spectra of galaxies and dark matter and a description of massive nonlinear structures like clusters of galaxies, Euclid can address cosmological questions beyond the accelerating expansion. Indeed, we will be able to address any topic related to power spectra or non-Gaussian properties of galaxies and dark-matter distributions. The relationship between the light- and dark-matter distributions of galaxies, for instance, can be derived by comparing the galaxy power spectrum as derived from spectroscopy with the dark-matter power spectrum as derived from gravitational lensing. The physics of inflation can then be explored by combining the non-Gaussian features observed in the dark-matter distribution in Euclid data with the Planck data. Likewise, since Euclid will map the dark-matter distribution with unprecedented accuracy, it will be sensitive to subtle features produced by neutrinos and thereby help to constrain the sum of the neutrino masses. On these and other topics, Euclid will provide important information to constrain models.
The definition of Euclid’s science cases, the development of the scientific instruments and the processing and exploitation of the data are under the responsibility of the Euclid Consortium (EC) and carried out in collaboration with ESA. The EC brings together about 1500 scientists and engineers in theoretical physics, particle physics, astrophysics and space astronomy from around 200 laboratories in 14 European countries, Canada and the US. Euclid’s science objectives translate into stringent performance requirements. Mathematical models and detailed complete simulations of the mission were used to derive the full set of requirements for the spacecraft pointing and stability, the telescope, scientific instruments, data-processing algorithms, the sky survey and the system calibrations. Euclid’s performance requirements can be broadly grouped into three categories: image quality, radiometric and spectroscopic performance. The spectroscopic performance in particular puts stringent demands on the ground-processing algorithms and demands a high level of control over cleanliness during assembly and launch.
Dark-energy payload
The Euclid satellite consists of a service module (SVM) and a payload module (PLM), developed by ESA’s industrial contractors Thales Alenia Space of Turin and Airbus Defence and Space of Toulouse, respectively. The two modules are substantially thermally and structurally decoupled to ensure that the extremely rigid and cold (around 130 K) optical bench located in the PLM is not disturbed by the warmer (290 ± 20 K) and more flexible SVM. The SVM comprises all the conventional spacecraft subsystems and also hosts the instruments’ warm electronics units. The Euclid image-quality requirements demand very precise pointing and minimal “jitter”, while the survey requirements call for fast and accurate movements of the satellite from one field to another. The attitude and orbit control system consists of several sensors to provide sub-arcsecond stability during an exposure, and cold-gas thrusters with micronewton resolution are used to actuate the fine pointing. Three star trackers provide the absolute inertial attitude accuracy. Since the trackers are mounted on the SVM, which is separate from the telescope structure and thus subject to thermo-elastic deformation, the fine guidance system is located in the focal plane of the telescope itself and endowed with absolute pointing capabilities based on a reference star catalogue.
The PLM is designed to provide an extremely stable detection system enabling the sharpest possible images of the sky. The point spread function (PSF), which is the image of a point source such as an unresolved star, closely resembles the Airy disc, the theoretical limit of the optical system. The PSF of Euclid images is comparable to that of the Hubble Space Telescope, considering Euclid’s smaller primary mirror, and is more than three times smaller than what can be achieved by the best ground-based survey telescopes under optimum viewing conditions. The telescope is composed of a 1.2 m-diameter three-mirror “anastigmatic Korsch” arrangement that feeds two instruments: a wide-field visible imager (VIS) for the shape measurement of galaxies, and a near-infrared spectrometer and photometer (NISP) for their spectroscopic and photometric redshift measurements. An important PLM design driver is to maintain a high and stable image quality over a large field of view. Building on the heritage of previous European high-stability telescopes such as Gaia, which is mapping the stars of the Milky Way with high precision, all mirrors, the telescope truss and the optical bench are made of silicon carbide, a ceramic material that combines extreme stiffness with very good thermal conduction. The PLM structure is passively cooled to a stable temperature of around 130 K, and a secondary-mirror mechanism will be employed to refocus the telescope image on the VIS detector plane after launch and cool-down.
The VIS instrument receives light in one broad visible band covering the wavelength range 0.55–0.90 μm. To avoid additional image distortions, it has no imaging optics of its own and is equipped with a camera made up of 36 CCDs of 4k × 4k pixels each, with a pixel scale of 0.1 arcsec, that must be aligned to a precision better than 15 μm over a distance of 30 cm. In terms of pixel count, the VIS camera is the second-largest camera to be flown in space, after Gaia’s, and it will produce the largest images ever generated in space. Unlike Gaia, VIS will compress and transmit all raw scientific images to Earth for further data processing. The instrument is capable of measuring the shapes of about 55,000 galaxies per image field of 0.5 square degrees. The NISP instrument, on the other hand, provides near-infrared photometry in the wavelength range 0.92–2.0 μm and has a slit-less spectroscopy mode equipped with three identical grisms (grating prisms) covering the wavelength range 1.25–1.85 μm. The grisms are mounted in different orientations to separate overlapping spectra of neighbouring objects, and the NISP device is capable of delivering redshifts for more than 900 galaxies per image field. The NISP focal plane is equipped with 16 near-infrared HgCdTe detector arrays of 2k × 2k pixels with a 0.3 arcsec pixel scale, making it the largest near-infrared focal plane ever built for a space mission.
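A quick back-of-the-envelope check of these numbers, assuming exactly 4096 × 4096 pixels per CCD and ignoring the gaps between detectors (which is why the estimate comes out slightly below the quoted field of view):

```python
# Rough estimate of the VIS focal-plane pixel count and sky coverage.
# Assumes exactly 4096 x 4096 pixels per CCD and no gaps between detectors.
n_ccd = 36
pixels_per_ccd = 4096 * 4096
pixel_scale_arcsec = 0.1

total_pixels = n_ccd * pixels_per_ccd                 # ~600 Mpixel
area_arcsec2 = total_pixels * pixel_scale_arcsec ** 2
area_deg2 = area_arcsec2 / 3600 ** 2                  # ~0.47 deg^2, of order the quoted ~0.5 deg^2

print(f"{total_pixels / 1e6:.0f} Mpixel, ~{area_deg2:.2f} square degrees")
```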
The exquisite accuracy and stability of Euclid’s instruments will provide certainty that any observed galaxy-shape distortions are caused by gravitational lensing and are not a result of artefacts in the optics. The telescope will deliver a field of view of more than 0.5 square degrees, which is an area comparable to two full Moons, and the flat focal plane of the Korsch configuration places no extra requirements on the surface shape of the sensors in the instruments. As the VIS and NISP instruments share the same field of view, Euclid observations can be carried out through both channels in parallel. Besides the Euclid satellite data, the Euclid mission will combine the photometry of the VIS and NISP instruments with complementary ground-based observations from several existing and new telescopes equipped with wide-field imaging or spectroscopic instruments (such as CFHT, ESO/VLT, Keck, Blanco, JST and LSST). These combined data will be used to derive an estimate of redshift for the two billion galaxies used for weak lensing, and to decouple coherent weak gravitational-lensing patterns from intrinsic alignments of galaxies. Organising the ground-based observations over both hemispheres and making these data compatible with the Euclid data turns out to be a very complex operation that involves a huge data volume, even bigger than the Euclid satellite data volume.
Ground control
The VIS and NISP instruments will generate about 520 Gb and 240 Gb of compressed data per day, respectively, and one Euclid field of 0.5 square degrees is observed in a period lasting about 1 hour and 15 minutes. All raw science data are transmitted to the ground via a high-rate data link. Even though the nominal mission will last for six years, mapping out 36% of the sky at the required sensitivity and accuracy within this time means large amounts of data must be transmitted, at a rate of around 850 Gb/day during just four hours of contact with the ground station. The complete processing pipeline from Euclid’s raw data to the final data products is a large IT project involving a few hundred software engineers and scientists, and has been broken down into functions handled by almost a dozen separate expert groups. A highly varied collection of data sets must be homogenised for subsequent combination: data from different ground- and space-based telescopes, visible and near-infrared data, and slit-less spectroscopy. Very precise and accurate shapes of galaxies must be measured, representing a two-orders-of-magnitude improvement with respect to current analyses.
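These figures can be translated into the sustained downlink rate they imply; the short calculation below uses only the numbers quoted above and is purely illustrative:

```python
# Average downlink rate implied by the quoted daily data volume and ground-contact time.
science_volume_gb = 520 + 240        # Gb/day of compressed VIS + NISP data
downlink_budget_gb = 850             # Gb/day quoted for transmission to the ground station
contact_hours = 4

rate_mb_per_s = downlink_budget_gb * 1e3 / (contact_hours * 3600)   # Gb -> Mb, per second
print(f"{science_volume_gb} Gb/day of science data; "
      f"~{rate_mb_per_s:.0f} Mb/s sustained during the daily contact")
```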
Based on current knowledge of the Euclid mission and the present state of ground-segment development, no showstoppers have been identified. Euclid should meet its performance requirements at all levels, including the design of the mission (a survey of 15,000 square degrees in less than six years) and the space and ground segments. This is very encouraging and most promising, given the multiplicity of challenges that Euclid presents.
On the scientific side, the Euclid mission meets the precision and accuracy required to characterise the source of the accelerating expansion of the universe and decisively reveal its nature. On the technical side, there are difficult challenges to be met in achieving the required precision and accuracy of galaxy-shape, photometric and spectroscopic redshift measurements. Our current knowledge of the mission provides a high degree of confidence that we can overcome all of these challenges in time for launch.
The evolution of structure is seeded by quantum fluctuations in the very early universe, which were amplified by inflation. These seeds grew to create the cosmic microwave background (CMB) anisotropies after approximately 380,000 years and eventually the dark-matter distribution of today. In the same way that supernovae provide a standard candle for astronomical observations, periodic fluctuations in the density of the visible matter called baryon acoustic oscillations (BAO) provide a standard cosmological length scale that can be used to understand the impact of dark energy. By comparing the distance of a supernova or structure with its measured redshift, the geometry of the universe can be obtained.
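Schematically, the standard-ruler idea can be written as follows (a generic textbook relation for a flat universe, shown only to make the distance–redshift comparison explicit):

```latex
% The BAO sound-horizon scale r_s subtends an angle that depends on the distance to redshift z
\theta_{\mathrm{BAO}}(z) \simeq \frac{r_{s}}{D_{A}(z)},\qquad
D_{A}(z) = \frac{1}{1+z}\int_{0}^{z}\frac{c\,\mathrm{d}z'}{H(z')}
% Measuring the BAO angular scale at several redshifts therefore maps out H(z),
% and hence the geometry and expansion history of the universe.
```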
Hydrodynamical cosmological simulations of a ΛCDM universe at three different epochs (left-to-right, image left), corresponding to redshift z = 6, z = 2 and our present epoch. Each white point represents a concentration of dark matter, gas and stars, the brightest regions being the densest. The simulation shows the growth rate of structure and the formation of galaxies, clusters of galaxies, filaments and large-scale structures over cosmic time. Euclid uses the large-scale structures made of ordinary and dark matter as a standard yardstick: starting from the CMB, we assume that the typical scale of structures (or the peak in the spatial power spectrum) increases proportionally with the expansion of the universe. Euclid will determine the typical scale as a function of redshift by analysing power spectra at several redshifts, obtained from the statistical analysis of the dark-matter structures (using the weak-lensing probe) or of the ordinary-matter structures based on the spectroscopic redshifts from the BAO probe. The structures also evolve with redshift because of the properties of gravity. Information on the growth of structure at different scales, in addition to different redshifts, is therefore needed to discriminate between models of dark energy and modified gravity.
Gravitational-lensing effects produced by cosmic structures on distant galaxies (right). Numerical simulations (below) show the distribution of dark matter (filaments and clumps with brightness proportional to their mass density) over a line of sight of one billion light-years. The yellow lines show how light beams emitted by distant galaxies are deflected by mass concentrations located along the line of sight. Each deflection slightly modifies the original shape of the lensed galaxies, increasing their original intrinsic ellipticity by a small amount.
Since all distant galaxies are lensed, all galaxies eventually show a coherent ellipticity pattern projected on the sky that directly reveals the projected distribution of dark matter and its power spectrum. The 3D distribution of dark matter can then be reconstructed by slicing the universe into redshift bins and recovering the ellipticity pattern at each redshift. The growth rate of cosmic structures derived from this inversion process strongly depends on the nature of dark energy and gravity, and will be accessible thanks to the outstanding image quality of Euclid’s VIS instrument.
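The statistical principle behind the measurement can be summarised in one relation (a simplified, generic form of the weak-lensing estimator, ignoring intrinsic alignments and measurement noise):

```latex
% Each galaxy's observed ellipticity is its randomly oriented intrinsic ellipticity
% plus the small, coherent lensing shear \gamma at its sky position
\epsilon_{\mathrm{obs}} \simeq \epsilon_{\mathrm{int}} + \gamma,\qquad
\langle \epsilon_{\mathrm{obs}} \rangle \simeq \gamma
\quad (\text{since } \langle \epsilon_{\mathrm{int}} \rangle \approx 0)
% Averaging over many galaxies per patch of sky and per redshift bin recovers the shear
% field, from which the projected dark-matter distribution is reconstructed.
```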
Fifty years have passed since Dick Dalitz presented his explicit constituent-quark model at the 1966 International Conference on High Energy Physics in Berkeley, US. Murray Gell-Mann and George Zweig independently introduced the quark concept in 1964, and the idea had also been anticipated by André Petermann in a little-known paper received by Nuclear Physics in 1963. But it was Dalitz who developed the model and considered excitations of quarks by analogy with the behaviour of nucleons in atomic nuclei. His primary focus was on the spectroscopy of baryons, which were interpreted as bound states of three quarks. Dalitz realised that the restrictions enforced by the Pauli exclusion principle led to a distinct pattern of supermultiplets. Today, this simple model remains in excellent agreement with experiments, in particular for mesons that comprise a quark–antiquark pair.
Despite its success in matching empirical data, the theoretical underpinning of this non-relativistic model for light hadrons has always been unclear. One of the remarkable features of hadron spectroscopy is that, half a century after the invention of the constituent-quark model, the particle data tables are filled with states that fit with a non-relativistic spectrum almost to the exclusion of anything else. Quarks are but a few MeV in mass, and are therefore surely relativistic when confined within the 1 fm radius of a proton, yet the constituent-quark model treats them as if relativity plays no role.
In the case of mesons, which fit the quark model arguably even better than baryons, this incongruity is especially significant. When Dalitz spoke in 1966, it made sense to emphasise baryons because they outnumbered the known mesons at that time. Following the discovery of charm and heavier flavours in the 1970s, however, the spectroscopy of mesons flourished and the correlations among a meson’s spin (J), parity (P) and charge conjugation (C) were also found to be in accord with those of a non-relativistic system.
Following Dalitz’s description of the baryon spectrum, Greenberg, Nambu, Lipkin and others noted that the model’s ad-hoc correlation of baryon spins with the constraints of the Pauli principle required some novel degree of freedom, which we call “colour”. The advent of quantum chromodynamics (QCD) in the 1970s provided the rationale for this concept, explaining the existence of quark–antiquark or three-quark combinations in terms of colour-singlet clusters. But QCD did not explain the non-relativistic pattern of states. Feynman, who in his final years devoted his attention to this issue, asserted: “The [non-relativistic] quark model is correct as it explains so much data. It is for theorists to explain why.” Today, physicists still await this explanation. Yet the empirical guide of the quark model is so well established that hadrons outside of this straitjacket are deemed “exotic”.
Although the restriction to colour singlets within QCD explains the existence of qq and qqq hadrons, it raises the question of why the observed spectrum is so meagre. Colour singlets also allow combinations of two quarks and two antiquarks (“tetraquark” mesons) and of four quarks and an antiquark (“pentaquark” baryons), in addition to states composed solely of gluons (“glueballs”). Furthermore, combinations called “hybrids”, in which the gluonic fields entrapping the quark and antiquark are themselves excited, are also theoretically possible within QCD (figure 1). Glueballs, tetraquarks and hybrid mesons, predicted in the late 1970s, can form correlations among a meson’s J, P and C quantum numbers that are forbidden by the non-relativistic model. Indeed, it is the lack of any empirical evidence for such exotic states in the meson spectrum that helped to establish the constituent-quark model in the first place. It is therefore ironic that searches for such states at modern experiments are now being used to establish the dynamic role of gluonic excitations in hadron spectroscopy.
Although QCD is well tested to high precision in the perturbative regime, where it is now an essential tool in the planning and interpretation of experiments, its implications in the strongly coupled regime are far less understood. Forty years after its discovery, and notwithstanding the advent of lattice QCD, hadron physics is still led by empirical data, from which clues to novel properties of the strong interaction may emerge. The search for exotic hadrons is an essential part of this strategy, and in recent years several new hadrons have been discovered that do not fit well within the traditional quark model.
Strange sightings
With hindsight, one of the first clues to the existence of quarks came in the 1950s from measurements of cosmic-ray interactions in the atmosphere, which revealed hadrons with unusual production and decay properties. These “strange” hadrons, we now know, contain one or more strange quarks or strange antiquarks, yet history has left us with a perverse convention whereby strange quarks are deemed to carry negative strangeness, and strange antiquarks are positive. Thus mesons can have one unit of strangeness, in either positive or negative amounts, while baryons can have strangeness –1, –2 or –3 (antibaryons, in turn, can have positive strangeness).
A baryon with positive strangeness (or an antibaryon with negative strangeness) is therefore classed as exotic. The minimal configuration for such a baryon would involve four quarks together with the strange antiquark, giving a total of five and the technically incorrect name of “pentaquark”. A claim to have found such a state – the θ(1540) – made headlines nearly two decades ago but is now widely disregarded. The scepticism was not that a pentaquark could exist, since QCD can accommodate such a state, but that it appeared to be anomalously stable. More recently, the LHCb experiment at CERN’s Large Hadron Collider (LHC) reported decays of the Λb baryon that revealed pentaquark-like structures with a mass of around 4.4 GeV (CERN Courier September 2015 p5). These have normal strong-interaction lifetimes and have been interpreted as clusters of three quarks plus a charm–anticharm pair. Whether these are genuinely compact pentaquarks, or instead bound states of a charmed baryon and a meson or some other dynamic artefact, they do appear to qualify as “exotic” in that they do not fit easily into a traditional three-constituent picture.
There have also been interesting meson sightings at lepton colliders in recent decades. Electron–positron annihilation above energies of 4 GeV in numerous experiments reveals a series of peaks in the total cross-section that are consistent with radial excitations of the fundamental cc J/ψ meson: the ψ(2S), ψ(4040), ψ(4160) and ψ(4415), which are non-exotic and fit within the non-relativistic spectrum. Evidence for exotic mesons has come from data on specific final states, notably those containing a J/ψ with one or more pions, which have revealed several novel states. Historically, the first clue for an exotic charmonium meson of this type above a mass of 4 GeV came around a decade ago from the BaBar experiment at SLAC in the US. Analysing the process e+e–→ J/ψππ, researchers there found a clear resonant-like structure dubbed Y(4260), which has no place in the qq spectrum because its mass lies between the ψ(4160) and ψ(4415) cc states. More remarkably, this state decays into charmonium and pions with a standard strong-interaction width of the order of 100 MeV rather than 100 keV, which is more typical for such a channel.
The clue to the nature of this meson appears to be that the mass of the Y meson (4260 MeV) is near the threshold for the production of DD1 – the combination of pseudoscalar (D) and axial (D1) charmed mesons (figure 2). This is the first channel in e+e– annihilation where charmed meson pairs can be produced with no orbital angular momentum (i.e. via S-wave processes). Thus at threshold there is no angular-momentum barrier against a DD1 pair being created effectively at rest and rearranging their constituents into the form of J/ψ and light flavours (the latter then seeding pions). The structure could therefore simply be a threshold effect rather than a true resonance, or an exotic “molecule” made of D and D1 charmed mesons.
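The numerical coincidence is easy to check with approximate, PDG-like meson masses (quoted here only for illustration):

```python
# Illustrative threshold arithmetic with approximate meson masses in GeV.
m_D  = 1.87   # pseudoscalar D meson
m_D1 = 2.42   # axial-vector D1(2420) meson

threshold = m_D + m_D1    # ~4.29 GeV
print(f"DD1 S-wave threshold ~ {threshold:.2f} GeV, versus the Y(4260) at 4.26 GeV")
# The Y(4260) sits just below this threshold, consistent with either a threshold
# effect or a loosely bound D-D1 "molecule" interpretation.
```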
The decay of the Y(4260) into J/ψππ reveals a manifestly exotic structure: the electrically charged J/ψπ± channel shows a pronounced peak called the Z(3900), as reported by both the BESIII experiment in China and Belle in Japan in 2013. Another sharp peak observed by BESIII – the Z(4020) – appears in the flavour-exotic channel containing a pion and a charmonium meson. Since it can carry electric charge, this state must contain ud (or du) in addition to its cc content, and therefore cannot be explained as a bound state of a single quark and antiquark. In principle, these states should be accessible in decays of B mesons, but there is no sign of them so far.
Nonetheless, B decays are a source of further exotic structures. For example, B → Kπ±ψ(2S) decays contain a structure called the Z(4430), observed by Belle and LHCb in the ψ(2S)π± invariant-mass spectrum, which carries both hidden charm and isospin and hence must contain (at least) two quarks and two antiquarks. These features first need to be established as genuine and not artefacts associated with some specific production process. Their appearance and decay in other channels would help in this regard, while the observation of analogous signals for other combinations of flavour may also signpost the underlying dynamics. If real, these states are the product of charmonium (cc) and light-quark basis states (a summary of charmonium candidates can be seen in figure 3).
Proceed with caution
It is clear that peaks are being found that cannot be interpreted as qqq or qq clusters. But one should not leap to the conclusion that we have discovered some fundamentally novel state built from, say, diquarks and antidiquarks or, for baryons, a pentaquark. A diquark–antidiquark “tetraquark”, for example, looks less exotic when trivially rewritten as a pair of quark–antiquark combinations, which is suggestive of two bound conventional mesons. Indeed, these could be the very two mesons in whose invariant-mass spectrum the peak was seen. Unless a peak is seen in different channels, and ideally in different production mechanisms, one should be cautious.
For example, when three or more hadrons are produced in a single decay it is common to discover peaks in invariant-mass spectra just above the two-body thresholds. These are not resonances, although papers on the arXiv preprint server are full of models built on the assumption that they are. Instead, the peaks likely arise due to competition between two effects. First, phase space opens up for the production of the two-body channel, but as the invariant mass increases, the chance of this exclusive two-body mode dies off because the probability for the wavefunctions of the two hadrons to overlap decreases. Any peak seen within a few hundred MeV of such a threshold is most likely to be the accidental result of this phenomenon. Such “cusps” have been proposed as explanations of several recent exotic candidates, such as the Z(3900) and Z(10610) spotted at BESIII and Belle, among others. Whether the tetraquark candidates X(4274), X(4500) and X(4700) recently observed at LHCb, in addition to the X(4140) found by the CDF experiment at Fermilab in 2009, herald the birth of a new QCD spectroscopy or are examples of more mundane dynamics such as cusps, is also the subject of considerable debate. In short, if a peak occurs above a two-body threshold in a single channel: beware.
Enter the deuson
More interesting for exotic-hadron studies are peaks that lie just below threshold. Such states are well known in the baryon sector, the deuteron being a good example. The nuclear force driven by pion exchange that binds neutrons and protons inside the atomic nucleus should also occur between pairs of mesons, at least for those that are stable on the timescale of the strong interaction. Thus on purely phenomenological and conservative grounds, we should anticipate meson molecules (or, by analogy with the deuteron, “deusons”), which would take us beyond the simple quark-model spectroscopy. The Y(4260) could be an example of such a state, since both DD1 and D*D0 S-wave thresholds lie in this region and pion exchange may play a role in linking the two channels (figure 4). If these states are indeed deusons then there should also be partners with isospin. Establishing whether these structures are singletons or have siblings is therefore another important step in identifying their dynamical origins.
The first sign of deusons may be expected in the axial-vector channel formed from a pseudoscalar and a vector charmed (or bottom) meson. This is because pion exchange can occur between a pair of vector mesons or as an exchange force between a pseudoscalar–vector combination, but not within a state of two pseudoscalars, as this would violate parity conservation. The enigmatic state X(3872), which was first observed in B decays by Belle in 2003 and occurs right at the D0D*0 (plus charge-conjugate) threshold, has long been a prime candidate for a deuson. If so, there should be analogous states in the BB* system as well as charm–bottom flavour mixtures, and perhaps siblings with two units of charm or bottom. Whether these states have charged partners is one of many model-dependent details. That some of these states should occur seems unavoidable, however, and if doubly charmed states exist they should be produced at the LHC.
Whereas for baryons the attractive forces arise in the exchange or “t channel”, for pairs of mesons there can also be contributions due to qq annihilation in the direct s-channel. In QCD this can also mask the search for glueballs: for example, the scalar glueball of lattice QCD predicted at a mass of around 1.5 GeV mixes with the nonet of scalar qq states in this very region. The pattern of these scalars empirically is consistent with such dynamics.
Scalar mesons are interesting not least because the theoretical interest in multiquark or molecular states originated in such particles 40 years ago, after Robert Jaffe noticed that the chromo-magnetic QCD forces are powerfully attractive in the nonet of light-flavoured scalar mesons. Intriguingly, this idea has remained consistent with the observed nonet of scalars below 1 GeV ever since. The main question that remains unresolved is to what extent these states are dominantly formed from coloured diquarks and their antidiquarks, or are better described as molecular states formed from colour-singlet π and K mesons.
LHCb in particular has shown that it is possible to identify light scalars among the decay debris of heavy-flavoured mesons, offering a new opportunity to investigate their nature and dynamics. Indeed, the kinematic reach of the LHC potentially enables a multitude of information to be obtained about heavy-flavoured mesons in both conventional and exotic combinations. We might therefore hope that information about exotic mesons will be extended into different flavour sectors to help identify the source of the binding.
Remarkably robust
In general, the simple qq picture of mesons appears to remain remarkably robust so long as there are no nearby prominent channels for pair production of hadrons in the S-wave channel. “Exotic” mesons and baryons seem to correlate with some S-wave channel sharing quantum numbers with a nominal qq state and causing the appearance of a state near the corresponding S-wave threshold. In some of these cases, but not all, the familiar forces of conventional nuclear physics play a role, and the multi-particle events at the LHC have the kinematic reach to include all combinations of non-strange, strange, charm and bottom mesons. How many of these can in practice be identified is the challenge, but identifying the dynamics of states “beyond qq” may depend on it.
In conclusion, these exotic states need to be studied in different production mechanisms and in a variety of decay channels. A genuine resonant state should appear in different modes, whereas a structure that appears in a single production mechanism and a unique decay channel is suggestive of some dynamical feature that is not truly resonant. While interesting in its own right, such a state is not “exotic” in the sense of hadron spectroscopy.
As for truly exotic states, there are different levels of exoticity. For flavoured hadrons: the least exotic are meson analogues of nuclei – “deusons” driven by pion exchange between pairs of mesons. Next are “hybrids”: states anticipated in QCD where the gluonic degrees of freedom are excited in the presence of quarks and/or antiquarks. Finally, the most exotic of all would be colour-singlet combinations of compact diquarks, which are allowed in principle by QCD and would lead to a rich spectroscopy. At present their status is like the search for extraterrestrial life: while one feels that in the richness of nature such entities must exist, they seem reluctant to reveal themselves.
Recently, the CMS collaboration performed an updated search for a neutral Higgs boson decaying into two τ leptons using 13 fb⁻¹ of data recorded during 2016. Although the existence of the Higgs boson has been established beyond doubt since its debut in the CMS and ATLAS detectors in 2012, the vast majority of Higgs-boson decays recorded so far have been into pairs of bosons. Observing the Higgs via its decays into pairs of fermions further tests the predictions of the Standard Model (SM). In particular, τ leptons play a major role in measuring the Yukawa couplings between the Higgs boson and fermions, and are thus an important tool in the search for new physics at the LHC.
CMS first reported evidence for Higgs to ττ decays in 2014. With a lifetime of around 10⁻¹³ seconds and a mass of 1.776 GeV, τ leptons present a unique but challenging experimental signature at hadron colliders. Their very short lifetime means that τ leptons decay inside the LHC beam pipe, before reaching the inner layers of the CMS detector. Approximately 35% of the time the τ decays into two neutrinos plus a lighter lepton, while 65% of the time it decays into a single neutrino and hadrons. τ decays yield low charged- and neutral-particle multiplicities: more than 95% of the hadronic decays contain just one or three charged hadrons and fewer than two neutral pions. The primary difficulty when dealing with the τ is distinguishing genuine τ leptons from the copiously produced quark and gluon jets that can be misidentified as taus.
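A rough decay-length estimate makes the point; the 45 GeV energy used below is an arbitrary, purely illustrative choice:

```python
# Rough tau decay-length estimate (illustrative numbers only).
tau_lifetime_s = 2.9e-13      # of order 1e-13 s, as quoted above
c_m_per_s = 3.0e8
m_tau_gev = 1.776

ctau_m = c_m_per_s * tau_lifetime_s         # ~87 micrometres
gamma = 45.0 / m_tau_gev                    # Lorentz boost for a 45 GeV tau
decay_length_mm = gamma * ctau_m * 1e3      # a few millimetres

print(f"c*tau ~ {ctau_m * 1e6:.0f} um; boosted decay length ~ {decay_length_mm:.1f} mm")
# Millimetre-scale flight distances mean the tau decays inside the beam pipe,
# so only its decay products reach the CMS tracker.
```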
To identify the dominant τ decay modes, CMS has developed a powerful τ reconstruction algorithm, which makes use of the single-particle reconstruction procedure (called particle flow). Charged hadrons are combined with photons from neutral pion decays to reconstruct τ decay modes with one or three charged hadrons and neutral pions (figure 1). The algorithm also pays particular attention to the effects of detector materials in converting photons into electron–positron pairs. The large magnetic field of CMS causes secondary electrons to bend, resulting in broad signatures in the phi (azimuthal) co-ordinate, and “strips” are created by clustering photons and electrons via an iterative process. In a new development for LHC Run 2, the strip size is allowed to vary based on the momentum of the clustered candidates.
Applying the latest τ algorithm, along with numerous other analysis techniques, CMS finds no excess of events in which a Higgs decays into two τ leptons compared to the expectation from the SM. Instead, upper limits were determined for the product of the production cross-section and branching fraction for masses in the region 90–3200 GeV, and the results were also interpreted in the context of the Minimal Supersymmetric SM (MSSM) (figure 2). The LHC is now operating at its highest energy and an increase in instantaneous luminosity is planned. The next few years of operations will therefore be vital for further testing the SM and MSSM using the τ lepton as a tool.
Although the night sky appears dark between the stars and galaxies that we can see, a strong background emission is present in other regions of the electromagnetic spectrum. At millimetre wavelengths, the cosmic microwave background (CMB) dominates this emission, while a strong X-ray background peaks at sub-nanometre wavelengths. For the past 50 years it has also been known that a diffuse gamma-ray background at picometre wavelengths illuminates the sky away from the strong emission of the Milky Way and known extra-galactic sources.
This so-called isotropic gamma-ray background (IGRB) is expected to be uniform on large scales, but can still contain anisotropies on smaller scales. The study of these anisotropies is important for identifying the nature of the unresolved IGRB sources. The best candidates are star-forming galaxies and active galaxies, in particular blazars, which have a relativistic jet pointing towards the Earth. Another possibility to be investigated is whether there is a detectable contribution from the decay or the annihilation of dark-matter particles, as predicted by models of weakly interacting massive particles (WIMPs).
Using NASA’s Fermi Gamma-ray Space Telescope, a team led by Mattia Fornasa from the University of Amsterdam in the Netherlands studied the anisotropies of the IGRB in observations acquired over more than six years. This follows earlier results published in 2012 by the Fermi collaboration and shows that there are contributions from two different classes of gamma-ray sources. A specific type of blazar appears to dominate at the highest energies, while at lower energies star-forming galaxies or another class of blazar is thought to imprint a steeper spectral slope on the IGRB. A possible additional contribution from WIMP annihilation could not be identified by Fornasa and collaborators.
The first step in such an analysis is to exclude the sky area most contaminated by the Milky Way and extra-galactic sources, and then to subtract remaining galactic contributions and the uniform emission of the IGRB. The resulting images include only the IGRB anisotropies, which can be characterised by computing the associated angular power spectrum (APS) similarly to what is done for the CMB anisotropies. The authors do this both for a single image (“auto-APS”) and between images recorded in two different energy regions (“cross-APS”).
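In practice, for maps stored in the HEALPix pixelisation such spectra can be estimated with a few lines of code; the sketch below shows the general technique only and is not the collaboration’s pipeline (the file names and energy bins are placeholders, and real analyses also correct for the point-spread function and pixel window):

```python
# Minimal sketch of auto- and cross-angular-power-spectrum estimation with healpy.
import healpy as hp

mask = hp.read_map("mask.fits")                                  # 1 outside the masked regions, 0 inside
map_lo = hp.read_map("igrb_residuals_low_energy.fits") * mask    # residual map, lower energy bin
map_hi = hp.read_map("igrb_residuals_high_energy.fits") * mask   # residual map, higher energy bin

cl_auto = hp.anafast(map_lo)              # auto-APS of a single energy bin
cl_cross = hp.anafast(map_lo, map_hi)     # cross-APS between two energy bins

f_sky = mask.mean()                       # crude correction for the masked sky fraction
cl_auto /= f_sky
cl_cross /= f_sky
```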
The derived auto-APS and cross-APS are found to be consistent with a Poisson distribution, which means they are constant on all angular scales. This absence of scale dependence in gamma-ray anisotropies suggests that the main contribution comes from distant active galactic nuclei. On the other hand, the emission by star-forming galaxies and dark-matter structures would be dominated by their local distribution that is less uniform on the sky and thus would lead to enhanced power at characteristic angular scales. This allowed Fornasa and co-workers to derive exclusion limits on the dark-matter parameter space. Although less stringent than the best limits achieved from the average intensity of the IGRB or from the observation of dwarf spheroidal galaxies, they independently confirm the absence, so far, of a gamma-ray signal from dark matter.
The constraints on dark matter will improve with new data continuously collected by Fermi, but a potentially more promising approach is to complement them at higher gamma-ray energies with data from the future Cherenkov Telescope Array and possibly also with high-energy neutrinos detected by IceCube.
One of the greatest scientific discoveries of the century took place on 14 September 2015. At 09.50 UTC on that day, a train of gravitational waves launched by two colliding black holes 1.4 billion light-years away passed by the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) in Louisiana, US, causing a fractional variation in the distance between the mirrors of about one part in 10²¹. Just 7 ms later, the same event – dubbed GW150914 – was picked up by the twin aLIGO detector in Washington 3000 km away (figure 1). A second black-hole coalescence was observed on 26 December 2015 (GW151226) and a third candidate event was also recorded, although its statistical significance was not high enough to claim a detection. A search that had gone on for half a century had finally met with success, ushering in the new era of gravitational-wave astronomy.
Black holes are the simplest physical objects in the universe: they are made purely from warped space and time and are fully described by their mass and intrinsic rotation, or spin. The gravitational-wave train emitted by coalescing binary black holes comprises three main stages: a long “inspiral” phase, where gravitational waves slowly and steadily drain the energy and angular momentum from the orbiting black-hole pair; the “plunge and merger”, where black holes move at almost the speed of light and then coalesce into the newly formed black hole; and the “ringdown” stage during which the remnant black hole settles to a stationary configuration (figure 2). Each dynamical stage contains fingerprints of the astrophysical source, which can be identified by first tracking the phase and amplitude of the gravitational-wave train and then by comparing it with highly accurate predictions from general relativity.
aLIGO employs waveform models built by combining analytical and numerical relativity. The long, early inspiral phase, characterised by a weak gravitational field and low velocities, is well described by the post-Newtonian formalism (which expands the Einstein field equation and the gravitational radiation in powers of v/c, but loses accuracy as the two bodies come closer and closer). Numerical relativity provides the most accurate solution for the last stages of inspiral, plunge, merger and ringdown, but such models are time-consuming to produce – the state-of-the-art code of the Simulating eXtreme Spacetimes collaboration took three weeks and 20,000 CPU hours to compute the gravitational waveform for the event GW150914 and three months and 70,000 CPU hours for GW151226.
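To illustrate how the waveform encodes the source parameters, the leading (quadrupole) order of the post-Newtonian expansion already ties the inspiral frequency evolution to a single combination of the two masses, the “chirp mass”; the standard textbook relation is:

```latex
% Leading-order (quadrupole) frequency evolution of an inspiralling compact binary
\frac{\mathrm{d}f}{\mathrm{d}t} = \frac{96}{5}\,\pi^{8/3}
\left(\frac{G\,\mathcal{M}}{c^{3}}\right)^{5/3} f^{11/3},
\qquad
\mathcal{M} = \frac{(m_{1} m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}}
% Tracking the phase (and hence f(t)) across the inspiral measures the chirp mass;
% higher post-Newtonian orders bring in the mass ratio and the black-hole spins.
```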
A few hundred thousand different waveforms were used as templates by aLIGO during the first observing run, covering compact binaries with total masses 2–100 times that of the Sun and mass ratios up to 1:99. Novel approaches to the two-body problem that extend post-Newtonian theory into the strong-field regime and combine it with numerical relativity had to be developed to provide aLIGO with accurate and efficient waveform models, which were based on several decades of steady work in general relativity (figure 3). Further theoretical work will be needed to deal with more sensitive searches in the future if we want to take full advantage of the discovery potential of gravitational-wave astronomy.
aLIGO’s first black holes
The two gravitational-wave signals observed by aLIGO have different morphologies that reveal quite distinct binary black-hole sources. GW150914 is thought to have been produced by two stellar black holes of 36 and 29 solar masses, which formed a black hole of about 62 solar masses rotating at almost 70% of its maximal rotation speed, while GW151226 involved lower black-hole masses (of about 14 and 8 solar masses) and merged into a 21-solar-mass black-hole remnant. Although the binary’s individual masses for GW151226 have larger uncertainties compared with GW150914 (since the former happened at a higher frequency where aLIGO sensitivity degrades), the analysis ruled out the possibility that the lower-mass object in GW151226 was a neutron star. A follow-up analysis also revealed that the individual black holes had spins less than 70% of the maximal value, and that at least one of the black holes in GW151226 was rotating at 20% of its maximal value or faster. Finally, the aLIGO data show that the binaries that produced GW150914 and GW151226 were at comparable distances from the Earth and that the peak of the gravitational-wave luminosity was about 3 × 10⁵⁶ erg/s, making them by far the most luminous transient events in the universe.
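The quoted masses already show how much energy was radiated: roughly three solar masses of mass-energy were converted into gravitational waves. A rough conversion, ignoring the uncertainties on the individual masses:

```python
# Energy radiated by GW150914, estimated from the quoted mass deficit.
m_sun_kg = 1.99e30
c = 3.0e8                                   # speed of light, m/s

delta_m_kg = (36 + 29 - 62) * m_sun_kg      # ~3 solar masses of mass-energy radiated
energy_j = delta_m_kg * c ** 2              # ~5e47 J
energy_erg = energy_j * 1e7                 # ~5e54 erg

print(f"~{energy_erg:.1e} erg radiated, most of it within a few tens of milliseconds")
```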
Owing to the signal’s length and the particular orientation of the binary plane with respect to the aLIGO detectors, no information about the spin precession of the system could be extracted. It has therefore not yet been possible to determine the precise astrophysical production route for these objects. Whereas the predictions for the rate of binary black-hole mergers from astrophysical-formation mechanisms traditionally vary by several orders of magnitude, the aLIGO detections so far have already established the rate to be somewhat on the high side of the range predicted by astrophysical models, at 9–240 per Gpc³ per year. Larger black-hole masses and higher coalescence rates raise the interesting possibility that a stochastic background of gravitational waves composed of unresolved signals from binary black-hole mergers could be observed when aLIGO reaches its design sensitivity in 2019.
The sky localisation of GW150914 and GW151226, which is mainly determined by recording the time delays of the signals arriving at the interferometers, extended over several hundred square degrees. This can be compared with the 0.2 square degrees covered by the full Moon as seen from the Earth, and makes it very hard to search for an electromagnetic counterpart to black-hole mergers. Nevertheless, the aLIGO results kicked off the first campaign for possible electromagnetic counterparts of gravitational-wave signals, involving almost 20 astronomical facilities spanning the gamma-ray, X-ray, optical, infrared and radio regions of the spectrum. No convincing evidence of electromagnetic signals emitted by GW150914 and GW151226 was found, in line with expectations from standard astrophysical scenarios. Deviations from the standard scenario may arise if one considers dark electromagnetic sectors, spinning black holes with strong magnetic fields that need to be sustained until merger, and black holes surrounded by clouds of axions (see “Linking waves to particles”).
aLIGO’s observations allow us to test general relativity in the so-far-unexplored, highly dynamical and strong-field gravity regime. As the two black holes that emitted GW150914 and GW151226 started to merge, the binary’s orbital period varied considerably and the phase of the gravitational-wave signal changed accordingly. It is possible to obtain an analytical representation of the phase evolution in post-Newtonian theory, in which the coefficients describe a plethora of dynamical and radiative physical effects, and long-term timing observations of binary pulsars have placed precise bounds on the leading-order post-Newtonian coefficients. However, the new aLIGO observations have put the most stringent limits on higher post-Newtonian terms – setting upper bounds as low as 10% for some coefficients (figure 4). It was even possible to investigate potential deviations during the non-perturbative coalescence phase, and again general relativity passed this test without doubt.
The first aLIGO observations could not yet test the second law of black-hole mechanics, which states that the black-hole entropy cannot decrease, nor the “no-hair” theorem, which says that a black hole is completely described by its mass and spin; both tests require extracting the mass and spin of the final black hole from the data. We expect that future gravitational-wave detections with higher signal-to-noise ratios will shed light on these important theoretical questions. Despite those limitations, aLIGO has provided the most convincing evidence to date that stellar-mass compact objects in our universe with masses larger than roughly five solar masses are black holes: that is, that they are described by the solutions of the Einstein field equations (see “General relativity at 100”).
From binaries to cosmology
During its first observation run, lasting from mid-September 2015 to mid-January 2016, aLIGO did not detect gravitational waves from binaries composed of either two neutron stars, or a black hole and a neutron star. Nevertheless, it set the most stringent upper limits to date on the rates of such mergers: 12.6 × 10³ and 3.6 × 10³ per Gpc³ per year, respectively. These rates imply that we can expect to detect such binary systems within a few years of aLIGO and the French–Italian experiment Virgo reaching their design sensitivities. Observing gravitational waves from binaries containing matter is exciting because it allows us to infer the neutron-star equation of state and also to unveil the possible origin of short-hard gamma-ray bursts (GRBs) – enormous bursts of electromagnetic radiation observed in distant galaxies.
Neutron stars are extremely dense objects that form when massive stars run out of nuclear fuel and collapse. The density in the core is expected to be more than 10¹⁴ times the density of the Sun, at which point the standard structure of nuclear matter breaks down and new phases of matter, such as superfluid and superconducting states, may appear. All mass and spin parameters being equal, the gravitational-wave train emitted by a binary containing a neutron star differs from the one emitted by two black holes only in the late inspiral phase, when the neutron star is tidally deformed or disrupted. By tracking the gravitational-wave phase it will be possible to measure the tidal deformability parameter, which encodes information about the neutron-star interior, and ultimately to discriminate between candidate equations of state. The merger of double neutron stars and/or black-hole–neutron-star binaries is currently considered the most likely source of short-hard GRBs, and we expect a plethora of electromagnetic signals from the coalescence of such compact objects that will test the short-hard GRB/binary-merger paradigm.
Bursts of gravitational waves lasting tens of milliseconds are also produced during the catastrophic final moments of massive stars, when the stellar core suddenly collapses to a neutron star or a black hole, triggering a supernova explosion. At design sensitivity, aLIGO and Virgo could detect bursts from the core's “bounce”, provided that the supernova took place in the Milky Way or neighbouring galaxies, with more extreme emission scenarios observable to much greater distances. Highly magnetised rotating neutron stars called pulsars are also promising astrophysical sources of gravitational waves. “Mountains” just a few centimetres high on the crust of a pulsar cause its quadrupole moment to vary in time, producing a continuous gravitational-wave train at twice the rotation frequency of the pulsar. The most recent LIGO all-sky searches and targeted observations of known pulsars have already started to invade the parameter space of astrophysical interest, setting new upper limits on the source's ellipticity, which depends on the neutron-star equation of state.
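To make the scale concrete, the short sketch below evaluates the standard continuous-wave strain amplitude h₀ = 4π²G I ε f_gw²/(c⁴d) for an assumed ellipticity and Crab-like spin frequency and distance; all numbers are illustrative assumptions, not results from the LIGO searches.

# Rough order-of-magnitude sketch of the continuous-wave strain from a
# non-axisymmetric pulsar, h0 = 4*pi^2*G*I*eps*f_gw^2 / (c^4 * d).
# Crab-like parameters are assumed purely for illustration.
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s
I = 1e38              # kg m^2, canonical neutron-star moment of inertia
eps = 1e-5            # assumed equatorial ellipticity (a cm-scale "mountain")
f_rot = 29.7          # Hz, Crab-like rotation frequency
f_gw = 2 * f_rot      # emission is at twice the spin frequency
d = 2.0e3 * 3.086e16  # 2 kpc in metres

h0 = 4 * math.pi**2 * G * I * eps * f_gw**2 / (c**4 * d)
print(f"h0 ~ {h0:.1e}")  # ~2e-26 for these assumed values

Such amplitudes sit several orders of magnitude below the strains of the binary merger signals, which is why continuous-wave searches rely on integrating the signal over months of data.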
Lastly, several physical mechanisms in the early universe could have produced gravitational waves, such as cosmic inflation, first-order phase transitions and vibrations of fundamental and/or cosmic strings. Because gravitational waves are almost unaffected by matter, they provide us with a pristine snapshot of the source at the time they were produced. Thus, gravitational waves may unveil a period in the history of the universe, around its birth, that we cannot otherwise access. The first observation run of aLIGO has set the most stringent constraint on the stochastic gravitational-wave background, generally expressed as the dimensionless energy density of gravitational waves, bounding it below 1.7 × 10−7. Digging deeper, at design sensitivity aLIGO is expected to reach a value of 10−9, while next-generation detectors such as the Einstein Telescope and the Cosmic Explorer may achieve values as low as 10−13 – just two orders of magnitude above the background predicted by the standard “slow-roll” inflationary scenario.
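For reference, the quantity being bounded is the gravitational-wave energy density per logarithmic frequency interval, normalised to the critical density of the universe; a standard definition (in notation assumed here) is:

\Omega_{\rm GW}(f) = \frac{1}{\rho_c}\,\frac{d\rho_{\rm GW}}{d\ln f}, \qquad \rho_c = \frac{3 H_0^2 c^2}{8\pi G},

so the quoted limits state that, in the aLIGO band, less than roughly one part in 10⁷ of the closure energy density of the universe is in gravitational waves.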
Grand view
The sensitivity of existing interferometer experiments on Earth will be improved in the next 5–10 years by employing a quantum-optics phenomenon called squeezed light. This will reduce the sky-localisation errors of coalescing binaries, provide a better measurement of tidal effects and the neutron-star equation of state in binary mergers, and enhance our chances of observing gravitational waves from pulsars and supernovae. The ability to identify the source of gravitational waves will also improve over time, as upgraded and new gravitational-wave observatories come online.
Furthermore, pulsar signals offer an alternative detection scheme, the pulsar timing array (PTA), which is already in operation. Gravitational waves passing over the pulsars and the Earth modify the arrival times of the pulses, and searches for correlated signatures in the arrival times of the most stable known pulsars could detect the stochastic gravitational-wave background from unresolved supermassive binary black-hole inspirals in the 10−9–10−7 Hz frequency region. Results from the North American NANOGrav, European EPTA and Australian PPTA collaborations have already set interesting upper limits on this astrophysical background, and a detection could come within the next five years.
The past year has been a milestone for gravitational-wave research in space, with the results of the LISA Pathfinder mission published in June 2016 exceeding all expectations and demonstrating that LISA, planned for 2034, can work successfully (see “Catching a gravitational wave”). LISA would be sensitive to gravitational waves between 10−4 and 10−2 Hz, thus detecting sources different from those observed on the Earth, such as supermassive binary black holes, extreme mass-ratio inspirals, and the astrophysical stochastic background from white-dwarf binaries in our galaxy. In the meantime, new ground facilities to be built in 10–15 years – such as the Einstein Telescope in Europe and the Cosmic Explorer in the US – will be required to maximise the scientific potential of gravitational-wave physics and astrophysics. These future detectors will be so sensitive to binary coalescences that we could probe binary black holes throughout the observable universe, enabling the most exquisite tests of general relativity in the highly dynamical, strong-field regime. That will challenge our current knowledge of gravity, fundamental and nuclear physics, and unveil the nature of the most extreme objects in our universe.
Einstein’s long path towards general relativity (GR) began in 1907, just two years after he created special relativity (SR), when the following apparently trivial idea occurred to him: “If a person falls freely, he will not feel his own weight.” Although it was long known that all bodies fall in the same way in a gravitational field, Einstein raised this thought to the level of a postulate: the equivalence principle, which states that there is complete physical equivalence between a homogeneous gravitational field and an accelerated reference frame. After eight years of hard work and deep thinking, in November 1915 he succeeded in extracting from this postulate a revolutionary theory of space, time and gravity. In GR, our best description of gravity, space–time ceases to be an absolute, non-dynamical framework as envisaged by the Newtonian view, and instead becomes a dynamical structure that is deformed by the presence of mass-energy.
GR has led to profound new predictions and insights that underpin modern astrophysics and cosmology, and which also play a central role in attempts to unify gravity with other interactions. By contrast to GR, our current description of the fundamental constituents of matter and of their non-gravitational interactions – the Standard Model (SM) – is given by a quantum theory of interacting particles of spins 0, ½ and 1 that evolve within the fixed, non-dynamical Minkowski space–time of SR. The contrast between the homogeneous, rigid and matter-independent space–time of SR and the inhomogeneous, matter-deformed space–time of GR is illustrated in figure 1.
The universality of the coupling of gravity to matter (which is the most general form of the equivalence principle) has many observable consequences such as: constancy of the physical constants; local isotropy of space; local Lorentz invariance; universality of free fall and universality of gravitational redshift. Many of these have been verified to high accuracy. For instance, the universality of the acceleration of free fall has been verified on Earth at the 10–13 level, while the local isotropy of space has been verified at the 10–22 level. Einstein’s field equations (see panel below) also predict many specific deviations from Newtonian gravity that can be tested in the weak-field, quasi-stationary regime appropriate to experiments performed in the solar system. Two of these tests – Mercury’s perihelion advance, and light deflection by the Sun – were successfully performed, although with limited precision, soon after the discovery of GR. Since then, many high-precision tests of such post-Newtonian gravity have been performed in the solar system, and GR has passed each of them with flying colours.
Precision tests
Similar to what is done in precision electroweak experiments, it is useful to quantify the significance of precision gravitational experiments by parameterising plausible deviations from GR. The simplest, and most conservative, deviation from Einstein’s pure spin-2 theory is defined by adding a long-range (massless) spin-0 field, φ, coupled to the trace of the energy-momentum tensor. The most general such theory respecting the universality of gravitational coupling contains an arbitrary function of the scalar field defining the “observable metric” to which the SM matter is minimally and universally coupled.
In the weak-field slow-motion limit, appropriate to describing gravitational experiments in the solar system, the addition of φ modifies Einstein’s predictions only through the appearance of two dimensionless parameters, γ and β. The best current limits on these “post-Einstein” parameters are, respectively, (2.1±2.3) × 10–5 (deduced from the additional Doppler shift experienced by radio-wave beams connecting the Earth to the Cassini spacecraft when they passed near the Sun) and < 7 × 10–5, from a study of the global sensitivity of planetary ephemerides to post-Einstein parameters.
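To see where these parameters enter, the weak-field metric around a mass M can be written in the standard parameterised post-Newtonian form (a textbook sketch in assumed notation):

g_{00} = -1 + \frac{2GM}{rc^2} - 2\beta\left(\frac{GM}{rc^2}\right)^{2} + \ldots, \qquad g_{ij} = \left(1 + 2\gamma\,\frac{GM}{rc^2}\right)\delta_{ij} + \ldots

In this parameterisation GR corresponds to γ = β = 1, so the “post-Einstein” parameters bounded above are the deviations γ − 1 and β − 1: γ controls how much spatial curvature is produced by unit rest mass (probed by light bending and the Shapiro delay, as in the Cassini measurement), while β measures the nonlinearity in the superposition law of gravity (probed by perihelion advances and planetary ephemerides).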
In the regime of radiative and/or strong gravitational fields, by contrast, pulsars (rotating neutron stars emitting a beam of radio waves) in gravitationally bound orbits have provided crucial tests of GR. In particular, measurements of the decay in the orbital period of binary pulsars have provided direct experimental confirmation of the propagation properties of the gravitational field. Theoretical studies of binaries in GR have shown that the finite velocity of propagation of the gravitational interaction between the pulsar and its companion generates damping-like terms at order (v/c)⁵ in the equations of motion that lead to a small orbital period decay. This has been observed in more than four different systems since the discovery of binary pulsars in 1974, providing direct proof of the reality of gravitational radiation. Measurements of the arrival times of pulsar signals have also allowed precision tests of the quasi-stationary strong-field regime of GR, since their values may depend both on the unknown masses of the binary system and on the theory of gravity used to describe the strong self-gravity of the pulsar and its companion (figure 2).
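For a bound orbit, this (v/c)⁵ damping translates into a slow decay of the orbital period; the standard quadrupole-formula result (the Peters–Mathews expression, quoted here as a sketch in assumed notation, including the eccentricity enhancement factor) is:

\dot P_b = -\frac{192\pi}{5}\left(\frac{P_b}{2\pi}\right)^{-5/3}\frac{G^{5/3}}{c^{5}}\,\frac{m_p m_c}{(m_p + m_c)^{1/3}}\,f(e), \qquad f(e) = \frac{1 + \tfrac{73}{24}e^{2} + \tfrac{37}{96}e^{4}}{(1 - e^{2})^{7/2}}.

For the Hulse–Taylor binary pulsar this corresponds to the orbit shrinking by a few metres per year, in agreement with the timing data.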
The radiation revelation
In two papers, published in June 1916 and January 1918, Einstein showed that his field equations have wave-like solutions (see panel below). For many years, however, the emission of gravitational waves (GWs) by known sources was viewed as being too weak to be of physical significance. In addition, several authors – including Einstein himself – had voiced doubts about the existence of GWs in fully nonlinear GR.
The situation changed in the early 1960s when Joseph Weber understood that GWs arriving on Earth would have observable effects and developed sensitive resonant detectors (“Weber bars”) to search for them. Then, prompted by Weber’s experimental effort, Freeman Dyson realised that, when applying the quadrupolar energy-loss formula derived by Einstein to binary systems made of neutron stars, “the loss of energy by gravitational radiation will bring the two stars closer with ever-increasing speed, until in the last second of their lives they plunge together and release a gravitational flash at a frequency of about 200 cycles and of unimaginable intensity.” The vision of Dyson has recently been realised thanks, on the one hand, to the experimental development of drastically more sensitive non-resonant kilometre-scale interferometric detectors and, on the other hand, to theoretical advances that made it possible to predict in advance the accurate shape of the GW signals emitted by coalescing systems of neutron stars and black holes (BHs).
The recent observations of the LIGO interferometers have provided the first detection of GWs in the wave zone. They also provide the first direct evidence of the existence of BHs via the observation of their merger, followed by an abrupt shut-off of the GW signal, in complete accord with the GR predictions.
BHs are perhaps the most extraordinary consequence of GR, because of the extreme distortion of space and time that they exhibit. In January 1916, Karl Schwarzschild published the first exact solution of the (vacuum) Einstein equations, supposedly describing the gravitational field of a “mass point” in GR. It took about 50 years to fully grasp the meaning and astrophysical plausibility of these Schwarzschild BHs. Two of the key contributions that led to our current understanding of BHs came from Oppenheimer and Snyder, who in 1939 suggested that a neutron star exceeding its maximum possible mass will undergo gravitational collapse and thereby form a BH, and from Kerr 25 years later, who discovered a generalisation of the Schwarzschild solution describing a BH endowed both with mass and spin.
Another remarkable consequence of GR is theoretical cosmology, namely the possibility of describing the kinematics and the dynamics of the whole material universe. The field of relativistic cosmology was ushered in by a 1917 paper by Einstein. Another key contribution was the 1924 paper of Friedmann that described general families of spatially curved, expanding or contracting homogeneous cosmological models. The Friedmann models still constitute the background models of the current, inhomogeneous cosmologies. Quantitative confirmations of GR on cosmological scales have also been obtained, notably through the observation of a variety of gravitational lensing systems.
Dark clouds ahead
In conclusion, all present experimental gravitational data (universality of free fall, post-Newtonian gravity, radiative and strong-field effects in binary pulsars, GW emission by coalescing BHs and gravitational lensing) have been found to be compatible with the predictions of Einstein’s theory. There are also strong constraints on sub-millimetre modifications of Newtonian gravity from torsion-balance tests of the inverse square law.
One might, however, wish to keep in mind the presence of two dark clouds in our current cosmology, namely the need to assume that most of the stress-energy tensor that has to be put on the right-hand side of the GR field equations to account for the current observations is made of yet unseen types of matter: dark matter and a “cosmological constant”. It has been suggested that these signal a breakdown of Einstein’s gravitation at large scales, although no convincing theoretical modification of GR at large distances has yet been put forward.
GWs, BHs and dynamical cosmological models have become essential elements of our description of the macroscopic universe. The recent and bright beginning of GW astronomy suggests that GR will be an essential tool for discovering new aspects of the universe (see “The dawn of a new era”). A century after its inception, GR has established itself as the standard theoretical description of gravity, with applications ranging from the Global Positioning System and the dynamics of the solar system, to the realm of galaxies and the primordial universe.
However, in addition to the “dark clouds” of dark matter and energy, GR also poses some theoretical challenges. There are both classical challenges (notably the formation of space-like singularities inside BHs), and quantum ones (namely the non-renormalisability of quantum gravity – see “Gravity’s quantum side”). It is probable that a full resolution of these challenges will be reached only through a suitable extension of GR, and possibly through its unification with the current “spin ≤ 1” description of particle physics, as suggested both by supergravity and by superstring theory.
It is therefore vital that we continue to submit GR to experimental tests of increasing precision. The foundational stone of GR, the equivalence principle, is currently being probed in space at the 10–15 level by the MICROSCOPE satellite mission of ONERA and CNES. The observation of a deviation of the universality of free fall would imply that Einstein’s purely geometrical description of gravity needs to be completed by including new long-range fields coupled to bulk matter. Such an experimental clue would be most valuable to indicate the road towards a more encompassing physical theory.
General relativity makes waves
There are two equivalent ways of characterising general relativity (GR). One describes gravity as a universal deformation of the Minkowski metric, which defines a local squared interval between two infinitesimally close space–time points and, consequently, the infinitesimal light cones describing the local propagation of massless particles. The metric field gμν is assumed in GR to be universally and minimally coupled to all the particles of the Standard Model (SM), and to satisfy Einstein’s field equations:

Rμν − ½ R gμν = (8πG/c⁴) Tμν.

Here, Rμν denotes the Ricci curvature (a nonlinear combination of gμν and of its first and second derivatives), R its trace, Tμν is the stress-energy tensor of the SM particles (and fields), and G denotes Newton’s gravitational constant.
The second way of defining GR, as proven by Richard Feynman, Steven Weinberg, Stanley Deser and others, states that it is the unique, consistent, local, special-relativistic theory of a massless spin-2 field. It is then found that the couplings of the spin-2 field to the SM matter are necessarily equivalent to a universal coupling to a “deformed” space–time metric, and that the propagation and self-couplings of the spin-2 field are necessarily described by Einstein’s equations.
Following the example of Maxwell, who had found that the electromagnetic-field equations admit propagating waves as solutions, Einstein found that the GR field equations admit propagating gravitational waves (GWs). He did so by considering the weak-field limit (gμν = ημν + hμν) of his equations, written in terms of the trace-reversed perturbation

h̄μν = hμν − ½ h ημν.

When the co-ordinate system is chosen so as to satisfy the gravitational analogue of the Lorenz gauge condition, ∂μ h̄μν = 0, the linearised field equations simplify to the diagonal inhomogeneous wave equation

□ h̄μν = −(16πG/c⁴) Tμν,

which can be solved by retarded potentials.
There are two main results that derive from this wave equation: first, a GW is locally described by a plane wave with two transverse tensorial polarisations (corresponding to the two helicity states of the massless spin-2 graviton), travelling at the velocity of light; second, a slowly moving, non-self-gravitating source predominantly emits quadrupolar GWs.
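The quadrupolar emission is usually summarised by Einstein's quadrupole formula. In the notation assumed here (Q_jk the traceless mass quadrupole moment of the source, r its distance, with the transverse-traceless projection and time averaging left implicit), the wave amplitude and radiated power read, schematically:

h_{jk} \simeq \frac{2G}{c^{4} r}\,\ddot Q_{jk}\!\left(t - r/c\right), \qquad L_{\rm GW} = \frac{G}{5 c^{5}}\,\langle \dddot Q_{jk}\,\dddot Q_{jk} \rangle.

The tiny prefactor G/c⁴ is what makes the strains reaching the Earth so small, even for sources as violent as merging black holes.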
Gravitational waves alternatively compress and stretch space–time as they propagate, exerting tidal forces on all objects in their path. Detectors such as Advanced LIGO (aLIGO) search for this subtle distortion of space–time by measuring the relative separation of mirrors at the ends of long perpendicular arms, which form a simple Michelson interferometer with Fabry–Perot cavities in the arms: a beam splitter directs laser light to mirrors at the ends of the arms and the reflected light is recombined to produce an interference pattern. When a gravitational wave passes through the detector, the strain it exerts changes the relative lengths of the arms and causes the interference pattern to change.
The arms of the aLIGO detectors are each 4 km long to help maximise the measured length change. Even on this scale, however, the induced length changes are tiny: the first detected gravitational waves, from the merger of two black holes, changed the arm length of the aLIGO detectors by just 4 × 10–18 m, which is approximately 200 times smaller than the proton radius. Achieving the fantastically high sensitivity required to detect this event was the culmination of decades of research and development.
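As a back-of-the-envelope check of those numbers (an illustrative order-of-magnitude estimate, not the detector calibration), the arm-length change is roughly the strain amplitude times the arm length:

# Order-of-magnitude check: arm-length change from a GW150914-like strain.
# delta_L ~ h * L (optimal orientation assumed; factors of order one ignored).
h = 1e-21            # peak strain of roughly GW150914 magnitude (assumed round number)
L = 4e3              # aLIGO arm length in metres
r_proton = 0.84e-15  # proton charge radius in metres

delta_L = h * L
print(f"delta_L ~ {delta_L:.0e} m")                                  # ~4e-18 m
print(f"about {r_proton / delta_L:.0f} times smaller than a proton")  # ~200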
Battling noise
The idea of using an interferometer to detect gravitational waves was first concretely proposed in the 1970s, and full-scale detectors began to be constructed in the mid-1990s, including GEO600 in Germany, Virgo in Italy and the LIGO project in the US. LIGO consists of detectors at two sites separated by about 3000 km – Hanford (in Washington state) and Livingston in Louisiana – and undertook its first science runs in 2002–2008. Following a major upgrade, the observatory restarted in September 2015 as aLIGO with an initial sensitivity four times greater than its predecessor. Since the detectors measure strain in space–time, the corresponding increase in the volume surveyed – and hence in the expected event rate – is a factor 4³ ≈ 64.
A major issue facing aLIGO designers is to isolate the detectors from various noise sources. At a frequency of around 10 Hz, the motion of the Earth’s surface – seismic noise – is about 10 orders of magnitude larger than can be tolerated, although it falls off at higher frequencies. A powerful solution is to suspend the mirrors as pendulums: a pendulum acts as a low-pass filter, providing significant reductions in motion at frequencies above its resonant frequency. In aLIGO, a chain of four suspended masses provides a factor 10⁷ reduction in seismic motion. In addition, the entire suspension is attached to an advanced seismic-isolation system using a variety of active and passive techniques, which provides a further factor 1000 of isolation. At 10 Hz, and in the absence of other noise sources, these systems alone could already bring the displacement sensitivity of the detectors to roughly 10–19 m/√Hz. At even lower frequencies (10 μHz), the daily tides stretch and shrink the Earth by of order 0.4 mm over 4 km.
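A rough sketch of why a chain of pendulums is so effective: well above its resonant frequency each stage suppresses transmitted motion roughly as (f₀/f)², so the stages multiply. The numbers below assume a ~1 Hz resonance purely for illustration and are not the actual aLIGO suspension parameters.

# Toy model of seismic isolation from a chain of pendulum stages.
# Above resonance each stage attenuates ground motion by roughly (f0/f)^2.
f0 = 1.0      # Hz, assumed pendulum resonant frequency (illustrative)
f = 10.0      # Hz, measurement frequency of interest
n_stages = 4  # aLIGO uses a quadruple suspension

attenuation_per_stage = (f0 / f) ** 2
total_attenuation = attenuation_per_stage ** n_stages
print(f"transmission at {f} Hz ~ {total_attenuation:.0e}")  # ~1e-8 for these assumptions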
Another source of low-frequency noise arises from moving mass interacting with the detector mirrors through the Newtonian inverse-square law – so-called gravity-gradient, or Newtonian, noise. The dominant contribution comes from surface seismic waves, which produce density fluctuations in the ground close to the interferometer mirrors and hence a fluctuating gravitational force on them. While methods of monitoring and subtracting this noise are being investigated, the performance of Earth-based detectors is likely to always be limited at frequencies below about 1 Hz by this noise source.
Thermal noise associated with the thermal energy of the mirrors and their suspensions can also cause the mirrors to move, providing a significant noise source at low-to-mid-range frequencies. The magnitude of thermal noise is related to the mechanical loss of the materials: similar to a high-quality wine glass, a material with a low loss will ring for a long time with a pure note because most of the thermal motion is confined to frequencies close to the resonance. For this reason, aLIGO uses fibres fabricated from fused silica – a type of very pure glass with very low mechanical loss – for the final stage of the mirror suspension. Pioneered in the GEO600 detector near Hanover in Germany, the use of silica fibres in place of the steel wires used in the initial LIGO detectors significantly reduces thermal noise from suspension.
Low-loss fused silica is also used for the 40 kg interferometer mirrors, which use multi-layered optical coatings to achieve the required high reflectivity. For aLIGO, a new optical coating was developed comprising a stack of alternating layers of silica and titania-doped “tantala”, reducing the coating thermal noise by about 20%. However, at the aLIGO design sensitivity (roughly 10 times better than that of the initial LIGO detectors), coating thermal noise will be the limiting noise source at frequencies of around 60 Hz – close to the frequency at which the detectors are most sensitive.
aLIGO also has much-reduced quantum noise compared with the original LIGO. This noise has two components: radiation-pressure noise and shot noise. The former results from fluctuations in the number of photons hitting the detector mirrors, is most significant at lower frequencies, and has been reduced by using mirrors four times heavier than the initial LIGO mirrors. Photon shot noise, resulting from statistical fluctuations in the number of photons at the output of the detector, limits sensitivity at higher frequencies. Since shot noise is inversely proportional to the square root of the circulating laser power, it can be reduced by operating at higher power, and optical cavities are used to store light in the arms and build up that power. In the first observing run of aLIGO, 100 kW of laser power was circulating in the detector arms, with the potential to increase this to 750 kW in future runs.
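Because shot-noise-limited sensitivity scales as the inverse square root of the circulating power, the planned power increase translates into a modest but worthwhile gain at high frequencies; a two-line check using the powers quoted above:

# Shot-noise improvement from raising the circulating arm power.
# Shot-noise amplitude scales as 1/sqrt(P), so sensitivity improves as sqrt(P2/P1).
P1 = 100e3   # W, circulating power in the first observing run
P2 = 750e3   # W, potential future circulating power
improvement = (P2 / P1) ** 0.5
print(f"high-frequency shot-noise improvement ~ {improvement:.1f}x")  # ~2.7x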
In addition to reductions in these fundamental noise sources, many other technological improvements were required to reduce more technical noise sources. Improvements over the initial LIGO detector included a thermal compensation system to reduce thermal lensing effects in the optics, reduced electronic noise in control circuits and finer polishing of the mirror substrates to reduce the amount of scattered light in the detectors.
Upgrades on the ground
Having detected their first gravitational wave almost as soon as they switched on in September 2015, followed by a further event a few months later, the aLIGO detectors began their second observation run on 30 November 2016. Dubbed “O2”, it is scheduled to last for six months. More observation runs are envisaged, with further upgrades in sensitivity taking place between them.
The next major upgrade, expected in around 2018, will see the injection of “squeezed light” to further reduce quantum noise. However, to gain the maximum sensitivity improvement from squeezing, a reduction in coating thermal noise is also likely to be required. With these and other relatively short-term upgrades, it is expected that a factor-two improvement over the aLIGO design sensitivity could be achieved. This would allow events such as the first detection to be observed with a signal-to-noise ratio almost 10 times better than the initial result. Further improvements in sensitivity will almost certainly require more extensive upgrades or new facilities, possibly involving longer detectors or cryogenic cooling of the mirrors.
aLIGO is expected to soon be joined in observing runs by Advanced Virgo, giving a network of three geographically separated detectors and thus improving our ability to locate the position of gravitational-wave sources on the sky. Discussions are also under way for an aLIGO site in India. In Japan, the KAGRA detector is under construction: this detector will use cryogenic cooling to reduce thermal noise and is located underground to reduce seismic and gravity gradient effects. When complete, KAGRA is expected to have similar sensitivity to aLIGO.
Longer term, a detector known as the Einstein Telescope (ET) has been proposed in Europe to provide a factor 10 more sensitivity than aLIGO. ET would not only have arms 10 km long but would also take a new approach to noise reduction, using two very different interferometers: a high-power, room-temperature interferometer optimised for sensitivity at high frequencies, where shot noise limits performance, and a low-power, cryogenic interferometer optimised for sensitivity at low frequencies, where performance is limited by thermal noise. ET would require significant changes in detector technology and would also be constructed underground to reduce the effect of seismic and gravity-gradient noise on its low-frequency sensitivity.
The final frontier
Obtaining significantly improved sensitivity at lower frequencies is difficult on Earth, because low-frequency signals are swamped by local mass motion. Gaining sensitivity at very low frequencies – where we must look for signals from massive black-hole collisions and other sources that promise exquisite science – is therefore only likely to be achieved in space. This concept has been on the table since the 1970s and has evolved into the Laser Interferometer Space Antenna (LISA) project, which is led by the European Space Agency (ESA) with contributions from 14 European countries and the US.
A technology-demonstration mission called LISA Pathfinder was launched on 3 December 2015 from French Guiana. It is currently located 1.5 million km away at the first Earth–Sun Lagrange point, and will take data until the end of May 2017. The aim of LISA Pathfinder was to demonstrate technologies for a space-borne gravitational-wave detector based on the same measurement philosophy as that used by ground-based detectors. The mission has clearly demonstrated that we can place test masses (gold–platinum cubes with 46 mm sides, separated by 38 cm) into free fall, such that the only varying force acting on them is gravity. It has also validated a host of complementary techniques, including operating a drag-free spacecraft using cold-gas thrusters, electrostatic control of free-floating test masses, short-arm interferometry and test-mass charge control. Combined, these novel features allow differential accelerometry at the 10–15 g level, which is the sensitivity needed for a space-borne gravitational-wave detector. Indeed, if Pathfinder test-mass technology were used to build a full-scale LISA detector, it would recover almost all of the science originally anticipated for LISA without any further improvements.
The success of Pathfinder, coming hot on the heels of the detection of gravitational waves, is a major boost for the international gravitational-wave community. It comes at an exceptional time for the field, with ESA currently inviting proposals for the third of its Cosmic Vision “large missions” programme. Developments are now needed to move from LISA Pathfinder to LISA proper, but these are now well understood and technology development programmes are planned and under way. The timeline for this mission leads to a launch in the early 2030s and the success of Pathfinder means we can look forward with excitement to the fantastic science that will result.
A team of astronomers has estimated that the number of galaxies in the observable universe is around two trillion (2 × 10¹²), which is 10 times more than could be observed by the Hubble Space Telescope in a hypothetical all-sky survey. Although the finding does not affect the matter content of the universe, it shows that small galaxies unobservable by Hubble were much more numerous in the distant, early universe.
Asking how many stars and galaxies there are in the universe might seem a simple enough question, but it has no simple answer. For instance, it is only possible to probe the observable universe, which is limited to the region from where light could reach us in less time than the age of the universe. The Hubble Deep Field images captured in the mid-1990s gave us the first real insight into this fundamental question: myriad faint galaxies were revealed, and extrapolating from the tiny area on the sky suggested that the observable universe contains about 100 billion galaxies.
Now, an international team led by Christopher Conselice of the University of Nottingham in the UK has shown that this number is at least 10 times too low. The conclusion is based on a compilation of many published deep-space observations from Hubble and other telescopes. Conselice and co-workers derived the distances and masses of the galaxies to deduce how the number of galaxies in a given mass interval evolves over the history of the universe. The team extrapolated its results to infer the existence of faint galaxies that the current generation of telescopes cannot observe, and found that galaxies were smaller and more numerous in the distant universe than in local regions. Since less-massive galaxies are also the dimmest and therefore the most difficult to observe at great distances, the researchers conclude that the Hubble ultra-deep-field observations are missing about 90% of all galaxies in any observed area of the sky. The total number of galaxies in the observable universe, they suggest, is more like two trillion.
This intriguing result must, however, be put in context. Critically, the galaxy count depends heavily on the lower limit that one chooses for the galaxy mass: since there are more low-mass than high-mass galaxies, any change in this value has huge effects. Conselice and his team took a stellar-mass limit of one million solar masses, which is a very small value corresponding to a galaxy 1000 times smaller than the Large Magellanic Cloud (which is itself about 20–30 times less massive than the Milky Way). The authors explain that were they to take into account even smaller galaxies of 100,000 solar masses, the estimated total number of galaxies would be seven times greater.
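The sensitivity to the adopted mass cut can be illustrated with a toy power-law galaxy mass function; the slope used below is an assumed, typical low-mass-end value, not the one fitted by Conselice and co-workers.

# Toy illustration of how the galaxy count depends on the lower mass limit.
# Assume dN/dM ~ M^alpha at the low-mass end, with alpha ~ -1.9 (assumed typical slope).
alpha = -1.9

def count_above(m_min, m_max=1e12):
    # Number of galaxies with mass in [m_min, m_max] for a pure power law
    # (arbitrary normalisation, solar-mass units).
    return (m_max ** (alpha + 1) - m_min ** (alpha + 1)) / (alpha + 1)

ratio = count_above(1e5) / count_above(1e6)
print(f"lowering the cut from 1e6 to 1e5 solar masses multiplies the count by ~{ratio:.1f}")

For this assumed slope the count grows by roughly a factor of eight, in line with the factor of seven the authors quote for their actual mass function.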
The result also does not mean that the universe contains more visible matter than previously thought. Rather, it shows that the bigger galaxies we see in the local universe have been assembled via multiple mergers of smaller galaxies, which were much more numerous in the early, distant universe. While the vast majority of these small, faint and remote galaxies are not yet visible with current technology, they offer great opportunities for future observatories, in particular the James Webb Space Telescope (Hubble’s successor), which is planned for launch in 2018.