AWAKE makes waves

In early December, the AWAKE collaboration took an important step towards a pioneering accelerator technology that would reduce the size and cost of particle accelerators. Having commissioned the facility with first beam in November, the team has now installed a plasma cell and observed a strong modulation of high-energy proton bunches as they pass through it. This signals the generation of very strong electric fields that could be used to accelerate electrons to high energies over short distances.

AWAKE (Advanced Proton Driven Plasma Wakefield Acceleration Experiment) is the first facility to investigate the use of plasma wakefields driven by proton beams. The experiment involves injecting a “drive” bunch of protons from CERN’s Super Proton Synchrotron (SPS) into a 10 m-long tube containing a plasma. The bunch then splits into a series of smaller bunches via a process called self-modulation, generating a strong wakefield as they move through the plasma. “Although plasma-wakefield technology has been explored for many years, AWAKE is the first experiment to use protons as a driver – which, given the high energy of the SPS, can drive wakefields over much longer distances compared with electron- or laser-based schemes,” says AWAKE spokesperson Allen Caldwell of the Max Planck Institute for Physics in Munich.
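
The field strengths at stake can be estimated from the cold-plasma wave-breaking limit E0 = me c ωp/e. The short Python sketch below evaluates it for an illustrative plasma density of 7 × 10¹⁴ cm⁻³ – an assumed value for illustration, not a figure quoted here – giving fields of a few GV/m, orders of magnitude beyond what conventional RF cavities sustain.

```python
# Back-of-the-envelope scale of the plasma wakefield: the cold-plasma
# wave-breaking field E0 = m_e * c * omega_p / e (roughly 96 V/m * sqrt(n_e[cm^-3])).
# The density below is an illustrative assumption, not a number from the article.
import math

def wave_breaking_field(n_e_per_cm3):
    """Cold-plasma wave-breaking field in V/m for an electron density in cm^-3."""
    eps0, m_e, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8  # SI constants
    n_e = n_e_per_cm3 * 1e6                                     # cm^-3 -> m^-3
    omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))              # plasma frequency (rad/s)
    return m_e * c * omega_p / e

print(f"E0 ~ {wave_breaking_field(7e14) / 1e9:.1f} GV/m")       # ~2.5 GV/m
```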

While it has long been known that plasmas may provide an alternative to traditional accelerating methods based on RF cavities, turning this concept into a practical device is a major challenge. The next step for the AWAKE collaboration is to inject a second beam of electrons, the “witness” beam, which is accelerated by the wakefield just as a surfer accelerates by riding a wave. “To have observed indications for the first time of proton-bunch self-modulation, after just a few days of tests, is an excellent achievement. It’s down to a very motivated and dedicated team,” says Edda Gschwendtner, CERN AWAKE project leader.

Quark–gluon plasma insights

Powerful supercomputer simulations of colliding atomic nuclei have provided new insights about quark–gluon plasma (QGP), a superhot fluid of deconfined partons produced in heavy-ion collisions at the LHC and at RHIC at Brookhaven National Laboratory. Shown in the image are the transverse (arrows) and longitudinal (contours) vorticity distributions of a strongly coupled quark–gluon plasma in the transverse plane at forward spatial rapidity. The coupling between spin and local vorticity shifts the energy levels of fermions, leading to different phase-space distributions for fermions with different spin states and therefore to a spin polarisation along the direction of the local vorticity.

The international team responsible for the work, which involved weeks of processing on a GPU cluster, suggests that longitudinal spin correlations can be used to study the vortex structure of the expanding QGP in high-energy heavy-ion collisions. Unlike the global transverse polarisation, the longitudinal spin correlation neither decreases with beam energy nor vanishes when averaged over events. This provides a unique opportunity to study the local fluid vorticity of the QGP at LHC energies, concludes the team. “We can think about this as opening a completely new window of looking at quark–gluon plasmas, and how to study them,” says team member Xin-Nian Wang at the Central China Normal University and Lawrence Berkeley National Laboratory.

Proton–lead run tops record year of LHC operations

On 26 October, the LHC completed its 2016 proton–proton operations at a collision energy of 13 TeV, during which it exceeded the design value of the luminosity and broke many other records (CERN Courier December 2016 p5). As in most years, the machine was then reconfigured for a month-long heavy-ion run, devoted this year to colliding beams of protons (p) and lead nuclei (Pb). Following a feasibility test in 2012 and an initial month-long run in 2013, pPb collisions remain a novel mode of operation at the LHC. Despite this novelty, the LHC team was able to deliver enormous data sets to the experiments for the investigation of extreme nuclear matter during the 2016 run.

Asymmetric proton–nucleus collisions were originally seen as a means to disentangle cold from hot nuclear-matter effects studied in lead–lead collisions. Surprisingly, a more complex picture emerged following the pPb results of 2012 and 2013. For 2016, the LHC experiments requested a variety of apparently incompatible operating conditions, according to their diverse capabilities and physics programmes. Careful analysis of the beam physics and operational requirements led to an ambitious schedule comprising three different beam modes that could potentially fulfil all requests.

Following a technical stop, the first set-up for pPb collisions at a centre-of-mass energy of 5.02 TeV per colliding nucleon pair started on 5 November, and physics data-taking started on 10 November. This run was mainly dedicated to the LHC’s ALICE experiment, to enlarge an earlier collected sample of minimum-bias events. The other experiments also participated, with LHCb studying collisions between protons and a target of helium gas. As foreseen, the beam lifetimes were extremely long, allowing seven days of nearly uninterrupted running at a constant levelled luminosity of 0.8 × 10²⁸ cm⁻² s⁻¹. A total of 660 million minimum-bias events were collected, increasing the 2013 data set by a factor of six. One of the first fills also turned out to be the longest LHC fill ever, lasting almost 38 hours.

Just one day after the 5.02 TeV run ended, the second set-up, involving new high-luminosity beam optics, was complete and the LHC delivered pPb collisions at an energy of 8.16 TeV, the highest ever reached by a collider for such an asymmetric system. The period also included a short run for the LHCf experiment and a third configuration in which the directions of the Pb and p beams were reversed. Thanks to the superb performance of the injectors and numerous improvements in the LHC, the luminosity soared to 9 × 10²⁹ cm⁻² s⁻¹, 7.8 times the design value set some years ago. The luminosity could have been pushed even further had the intense flux of lead beam fragments from the collisions not risked quenching nearby magnets. On 4 December, the LHC was switched back to 5.02 TeV for a final 20 hours of pPb data-taking, delivering a further 120 million minimum-bias events for ALICE.

That such a complex run could be implemented in such a short time was a triumph for the LHC and all those concerned with its design, construction and operation. Every one of the high-priority goals for ATLAS, CMS, ALICE and LHCb, plus some subsidiary ones, was comfortably exceeded. CMS recorded an integrated luminosity of nearly 200 nb⁻¹ at 8.16 TeV, a six-fold increase over the sample collected in the first pPb run in 2013 at 5.02 TeV, allowing the collaboration to investigate the behaviour of hard probes in high-multiplicity pPb collisions (see article on p11). ATLAS recorded a similar data set, while ALICE and LHCb each received totals well over 30 nb⁻¹ at 8.16 TeV in the two beam directions.
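
For orientation, the instantaneous and integrated luminosities quoted above can be related by a simple unit conversion (1 nb⁻¹ = 10³³ cm⁻²). The sketch below assumes uninterrupted running at constant luminosity, which real fills, with turnaround time and luminosity decay, only approximate, so it gives a rough upper bound per day.

```python
# Convert an instantaneous luminosity (cm^-2 s^-1) into integrated luminosity per day,
# in nb^-1 (1 nb^-1 = 1e33 cm^-2). A full day at constant luminosity is an idealisation
# of real LHC fills.
def nb_inv_per_day(lumi_cm2_s):
    return lumi_cm2_s * 86400.0 / 1e33

print(f"{nb_inv_per_day(0.8e28):.1f} nb^-1/day at the levelled 5.02 TeV luminosity")  # ~0.7
print(f"{nb_inv_per_day(9e29):.0f} nb^-1/day at the 8.16 TeV peak")                   # ~78
```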

ATLAS makes precision measurement of W mass

A precise measurement of the mass of the W boson, which was discovered at CERN in 1983, is vital because within the Standard Model (SM) the W mass can be predicted from other precisely measured quantities, notably the masses of the top quark and the Higgs boson. Measuring the W mass therefore tests this prediction and thus the self-consistency of the SM, since any deviation from theory would be a sign of new physics. The W mass was measured previously at CERN’s Large Electron–Positron (LEP) collider and at Fermilab’s proton–antiproton collider, the Tevatron, yielding a world average of 80.385±0.015 GeV, which is consistent with the indirect SM constraint of 80.358±0.008 GeV.

The ATLAS collaboration has now reported the first measurement of the W mass at the LHC, based on proton–proton collisions at a centre-of-mass energy of 7 TeV (corresponding to an integrated luminosity of 4.6 fb⁻¹). The measured value, 80.370±0.019 GeV, matches the precision of the best single-experiment measurement of the W mass, performed by the Tevatron’s CDF experiment, and is consistent with both the SM prediction and combined measurements (see figure).
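
As a rough illustration of what “consistent” means here, one can compare the ATLAS value with the previous world average and with the indirect SM constraint, assuming (purely for simplicity) uncorrelated uncertainties; both differences come out well below one standard deviation.

```python
# Naive compatibility check between the quoted values, treating the uncertainties as
# uncorrelated (a simplification; a proper combination accounts for correlations).
def tension_sigma(v1, e1, v2, e2):
    return abs(v1 - v2) / (e1**2 + e2**2) ** 0.5

atlas  = (80.370, 0.019)  # GeV, ATLAS measurement
world  = (80.385, 0.015)  # GeV, previous world average
sm_fit = (80.358, 0.008)  # GeV, indirect SM constraint

print(f"ATLAS vs world average: {tension_sigma(*atlas, *world):.1f} sigma")   # ~0.6
print(f"ATLAS vs SM constraint: {tension_sigma(*atlas, *sm_fit):.1f} sigma")  # ~0.6
```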

Measuring the W mass is more challenging at the LHC compared with LEP and the Tevatron because there are a large number of interactions per beam crossing and significant contributions to W production from second-generation quarks (strange and charm). ATLAS measured the W mass by reconstructing the kinematic properties of leptonic decays, in which a W produces an electron or muon and a neutrino in the final state.

The analysis required a highly accurate calibration of the detector response, which was achieved via the large sample of Z-boson events and the precise knowledge of the Z mass. Accurate predictions of the W-boson production and decay properties are also crucial at a proton–proton collider. The enhanced amount of heavy-quark-initiated production and the ratio of valence and sea quarks in the proton affect the W boson’s transverse-momentum distribution and its polarisation, which makes the measurement sensitive to the parton distribution functions of the proton. To address these issues, ATLAS combined the most advanced theoretical predictions with experimental constraints from precise measurements of Z- and W-boson differential cross-sections and of Z-boson transverse momentum and polarisation.

Future analysis of larger data samples at the LHC would allow the reduction of the statistical uncertainty and of several experimental systematic uncertainties. Finally, a better knowledge of the parton distribution functions and improved QCD and electroweak predictions of W- and Z-boson production are crucial to further reduce the theoretical uncertainties.

Run 2 promises a harvest of beauty for LHCb

The first b-physics analysis using data from LHC Run 2, which began in 2015 with proton–proton collisions at an energy of 13 TeV, shows great promise for the physics programme of LHCb. During 2015 and 2016, the experiment collected a data sample corresponding to an integrated luminosity of about 2 fb⁻¹. Although this is smaller than the total integrated luminosity collected in the three years of Run 1 (3 fb⁻¹), the significant increase of the LHC energy in Run 2 has almost doubled the production cross-section of beauty particles, and the experiment has improved the performance of its trigger system and its particle-identification capabilities. Taking these gains together, LHCb has already more than doubled the number of beauty particles on tape with respect to Run 1.

The new analysis is based on 1 fb⁻¹ of the available data and aims to measure the angle γ of the CKM unitarity triangle using B → D0K*– decays. While B → D0K decays have been extensively studied in the past, this is the first time the B → D0K*– mode has been investigated. The analysis, first presented at CKM2016 (see “Triangulating in Mumbai” in Faces & Places), allows the LHCb collaboration to cross-check expectations for the increase of signal yields in Run 2 using real data. A significant increase, roughly a factor of three per unit of integrated luminosity, is observed. This demonstrates that the experiment has benefited not only from the increase in the b-production cross-section but also from a trigger that performs better than in Run 1. Although the statistical uncertainty on γ from this measurement alone is still large, the sensitivity will be improved by the addition of more data, as well as by the use of other D-meson decay modes. This bodes well for future measurements of γ in this and other decay modes with the full Run 2 data set.
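
The numbers above can be tied together with a little arithmetic: a factor of about three in yield per unit integrated luminosity, applied to 2 fb⁻¹ of Run 2 data versus 3 fb⁻¹ in Run 1, indeed gives roughly twice the Run 1 beauty sample. Treating the factor observed in this single channel as representative of all beauty channels is an assumption made only for illustration.

```python
# Combine the quoted integrated luminosities with the factor ~3 yield gain per fb^-1
# seen in the B -> D0K*- analysis. Extrapolating that factor to all beauty channels
# is an illustrative assumption, not a statement from the collaboration.
run1_lumi_fb = 3.0            # Run 1 integrated luminosity
run2_lumi_fb = 2.0            # 2015-2016 integrated luminosity
gain_per_unit_lumi = 3.0      # observed increase in signal yield per fb^-1

total_gain = (run2_lumi_fb / run1_lumi_fb) * gain_per_unit_lumi
print(f"Run 2 / Run 1 beauty yield: about x{total_gain:.0f}")   # ~2, i.e. more than doubled
```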

Measurements of the angle γ are of great importance because it is the least well-known angle of the unitarity triangle. The latest combination from direct measurements with charged and neutral B-meson decays and a variety of D-meson final states, all performed with Run 1 data, yielded a central value of 72±7 degrees. LHCb’s ultimate aim, following detector upgrades relevant for LHC Run 3, is to determine γ with a precision below 1°, providing a powerful test of the Standard Model.

ALICE zeroes in on cold-matter effects


Measuring the production cross-section of charm hadrons in proton–proton collisions provides an important test of perturbative quantum chromodynamics (QCD). In proton–nucleus collisions, “cold-matter” effects related to the presence of nuclei in the colliding system are expected to modify the production cross-section and the transverse-momentum distribution of open-charm hadrons. Assessing such effects is thus crucial for interpreting the results from heavy-ion collisions, where a hot and dense medium of deconfined partons – the quark–gluon plasma (QGP) – is formed.

Previously, ALICE measured D-meson production in proton–lead collisions and found no substantial modification relative to proton–proton interactions within the kinematic range of the measurement (covering a transverse momentum, pT, between 1 and 24 GeV/c at mid-rapidity). Most cold-nuclear-matter effects are expected to modify charm production at low pT, but no measurement of D-meson production down to zero transverse momentum had been performed at mid-rapidity at LHC energies.

Recently the ALICE collaboration extended the measurement of the D0-meson cross-section down to zero pT in proton–proton collisions at 7 TeV and in proton–lead collisions at 5.02 TeV. In contrast to previous ALICE publications, the analysis relied on estimating and subtracting the combinatorial background without having to reconstruct the D0 decay vertex. This allowed the first measurement of the D0 signal in the interval 0 < pT < 1 GeV/c and a significant reduction of the uncertainties in the interval 1 < pT < 2 GeV/c compared with previous results.
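
The key ingredient is that the combinatorial background under the D0 invariant-mass peak (e.g. in D0 → K–π+ candidates) is estimated and subtracted statistically, rather than suppressed with decay-vertex cuts. The toy sketch below shows the generic idea using a simple sideband subtraction on synthetic data; it is not the ALICE analysis code, and all numbers in it are invented for illustration.

```python
# Generic sketch of extracting a D0 yield from an invariant-mass distribution by
# estimating and subtracting the combinatorial background, without any vertex cuts.
# All numbers are synthetic; this illustrates the technique only.
import numpy as np

rng = np.random.default_rng(0)
m_d0 = 1.865                                    # GeV/c^2, nominal D0 mass
signal = rng.normal(m_d0, 0.010, 5_000)         # toy signal peak
background = rng.uniform(1.70, 2.05, 200_000)   # toy combinatorial background

masses = np.concatenate([signal, background])
counts, edges = np.histogram(masses, bins=70, range=(1.70, 2.05))
centres = 0.5 * (edges[:-1] + edges[1:])

peak = (centres > m_d0 - 0.03) & (centres < m_d0 + 0.03)
side = ((centres > 1.72) & (centres < 1.80)) | ((centres > 1.93) & (centres < 2.01))

# Estimate the background level per bin from the sidebands and subtract it in the peak window
bkg_per_bin = counts[side].mean()
raw_yield = counts[peak].sum() - bkg_per_bin * peak.sum()
print(f"extracted D0 yield: {raw_yield:.0f} (5000 injected)")
```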

The current precision of the measurement does not yet allow the roles of the different cold-nuclear-matter effects, or the possible presence of additional hot-medium effects, to be established. However, applied to larger data sets in the future, the analysis technique will provide insight into the physics-rich region close to pT = 0.

Protons probe quark–gluon plasma at CMS


Proton–nucleus collisions provide a crucial tool to investigate the quark–gluon plasma (QGP), a state of nuclear matter with a high energy density spread over a relatively large volume. Although proton–lead (pPb) collision systems have been considered to be too small and dilute to themselves form a QGP, they have served as a reference in the search for QGP signatures in the collisions of two heavy ions. Nonetheless, in the first-ever pPb collisions at the LHC, collected in 2013, the CMS experiment observed QGP-like features in very high multiplicity pPb events.

Subsequent studies have supported the hypothesis that a dense, QGP-like medium may be formed in high multiplicity pPb systems. However, several key signatures of a dense QGP medium, observed in PbPb collisions, remain unestablished for pPb events. These unestablished signatures include the loss of energy from high-energy quarks and gluons (“jet quenching”) and the suppression of quarkonium states (J/ψ and ϒ mesons). A hint of stronger suppression of ϒ(2S) mesons relative to ϒ(1S) mesons was observed in the 2013 pPb data, but a conclusive comparison with PbPb data at similar high multiplicities has not been possible because of the limited statistical precision of the pPb sample.

At the end of 2016, CMS again collected pPb collisions, at a higher energy and with a larger accumulated data sample than in 2013. The experiment is thus poised to relaunch its comprehensive search for QGP signatures in high multiplicity pPb systems. Compared to 2013, the yields of relevant events (see figure) are enhanced by a factor of 20–30. This will enable many new studies that might provide conclusive results on the formation of QGP in pPb events.

Electron gun shrunk to matchbox size

An interdisciplinary team of researchers from DESY in Germany and MIT in the US has built a new kind of electron gun that is about the size of a matchbox. The new device uses laser-generated terahertz radiation, rather than traditional radio-frequency fields, to accelerate electrons from rest. Since terahertz radiation has a much shorter wavelength than radio waves, the new device measures just 34 × 24.5 × 16.8 mm – compared with the size of a car for traditional state-of-the-art electron guns.

This device reached an accelerating gradient of 350 MV per metre, which the team says is almost twice that of current electron guns. “We achieved an acceleration of a dense packet of 250,000 electrons from rest to 0.5 keV with minimal energy spread,” explains lead author W Ronny Huang of MIT, who carried out the work at the Center for Free-Electron Laser Science in Hamburg. The electron beams emerging from the device could already be used for low-energy electron diffraction experiments, he says, and will also have applications in ultrafast electron diffraction or for injecting electrons into linacs and X-ray light sources.

Compact star hints at vacuum polarisation

By studying an isolated neutron star, astronomers may have found the first observational indication of a strange quantum effect called vacuum birefringence, which was predicted in the 1930s by Werner Heisenberg and Hans Heinrich Euler.

Neutron stars are the very dense remnant cores of massive stars – at least 10 times more massive than our Sun – that have exploded as supernovae at the ends of their lives. In the 1990s, the Germany-led ROSAT space mission for soft X-ray astronomy discovered a new class of seven neutron stars that are known as the Magnificent Seven. The faint isolated objects emit pulses of X-rays every three to 11 seconds or so, but unlike most pulsars they have no detectable radio emission. The ultra-dense stars have an extremely high dipolar magnetic field (of the order 10⁹–10¹⁰ T) and display an almost perfect black-body emission, making them unique laboratories to study neutron-star cooling processes.

A team led by Roberto Mignani from INAF Milan in Italy and the University of Zielona Gora, Poland, used ESO’s Very Large Telescope (VLT) at the Paranal Observatory in Chile to observe the neutron star RX J1856.5-3754. Although it is the brightest of the Magnificent Seven and lies only around 400 light-years from Earth, the star is so dim that measuring the polarisation of its light is at the limit of the VLT’s current capabilities. The aim of the measurement was to detect a quantum effect predicted 80 years ago: since the vacuum is full of virtual particles that appear and vanish, a very strong magnetic field could polarise empty space and hence also the light passing through it. Vacuum birefringence is too weak to be observed in laboratory experiments, but the phenomenon should be visible in the very strong magnetic fields around neutron stars.
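
In the weak-field limit of the Euler–Heisenberg effective Lagrangian, the two polarisation states of light crossing a magnetic field B see slightly different refractive indices; this difference is the birefringence being sought. The commonly quoted expressions are sketched below, with the caveat that the expansion loses validity as B approaches the critical field, so for the strongest neutron-star fields they are only indicative.

```latex
n_\parallel \simeq 1 + \frac{7\alpha}{90\pi}\left(\frac{B}{B_{\rm cr}}\right)^{2},
\qquad
n_\perp \simeq 1 + \frac{4\alpha}{90\pi}\left(\frac{B}{B_{\rm cr}}\right)^{2},
\qquad
B_{\rm cr} = \frac{m_e^{2}c^{3}}{e\hbar} \approx 4.4\times10^{9}~\mathrm{T},
```

where α is the fine-structure constant and the indices refer to light polarised parallel or perpendicular to the magnetic field.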

After careful analysis of the VLT data, Mignani and collaborators detected a significant degree (16%) of linear polarisation, which they say is likely due to vacuum birefringence occurring in the empty space surrounding RX J1856.5-3754. They claim that such a level of polarisation is not easily explained by other sources. For example, the contribution from dust grains in the interstellar medium was estimated to be less than 1%, which was corroborated by the detection of almost zero polarisation in the light from 42 nearby stars. The genuine thermal radiation of the neutron star is also expected to be polarised by its surface magnetic field, but this effect should cancel out if the emission comes from the entire surface of the neutron star, over which the magnetic-field direction changes substantially.

The polarisation measurement in this neutron star constitutes the very first observational support for the predictions of QED vacuum-polarisation effects. ESO’s future European Extremely Large Telescope will allow astronomers to study this effect around many more neutron stars, while the advent of X-ray polarimetric space missions offers another perspective on this new field of research.

The dawn of a new era

One of the greatest scientific discoveries of the century took place on 14 September 2015. At 09.50 UTC on that day, a train of gravitational waves launched by two colliding black holes 1.4 billion light-years away passed by the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) in Louisiana, US, causing a fractional variation in the distance between the mirrors of about one part in 10²¹. Just 7 ms later, the same event – dubbed GW150914 – was picked up by the twin aLIGO detector in Washington 3000 km away (figure 1). A second black-hole coalescence was observed on 26 December 2015 (GW151226) and a third candidate event was also recorded, although its statistical significance was not high enough to claim a detection. A search that had gone on for half a century had finally met with success, ushering in the new era of gravitational-wave astronomy.
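
To get a feel for the quoted strain, the fractional change can be converted into an absolute mirror displacement for a LIGO-style 4 km arm (the arm length is public information about the detectors, not a figure from this article):

```python
# Convert the fractional strain quoted above into an absolute arm-length change
# for a 4 km LIGO arm.
arm_length_m = 4_000.0
strain = 1e-21

delta_L = strain * arm_length_m
print(f"mirror displacement ~ {delta_L:.0e} m")  # ~4e-18 m, far smaller than a proton
```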

Black holes are the simplest physical objects in the universe: they are made purely from warped space and time and are fully described by their mass and intrinsic rotation, or spin. The gravitational-wave train emitted by coalescing binary black holes comprises three main stages: a long “inspiral” phase, where gravitational waves slowly and steadily drain the energy and angular momentum from the orbiting black-hole pair; the “plunge and merger”, where black holes move at almost the speed of light and then coalesce into the newly formed black hole; and the “ringdown” stage during which the remnant black hole settles to a stationary configuration (figure 2). Each dynamical stage contains fingerprints of the astrophysical source, which can be identified by first tracking the phase and amplitude of the gravitational-wave train and then by comparing it with highly accurate predictions from general relativity.
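
At leading (quadrupole) order the inspiral stage is governed by a single combination of the two masses, the chirp mass, through the standard frequency-evolution relation (a textbook result, quoted here only to illustrate the phase-tracking idea):

```latex
\dot{f} \;=\; \frac{96}{5}\,\pi^{8/3}\left(\frac{G\mathcal{M}}{c^{3}}\right)^{5/3} f^{11/3},
\qquad
\mathcal{M} \;=\; \frac{(m_{1}m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}},
```

where f is the gravitational-wave frequency; higher post-Newtonian corrections and the merger and ringdown stages require the models described below.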

aLIGO employs waveform models built by combining analytical and numerical relativity. The long, early inspiral phase, characterised by a weak gravitational field and low velocities, is well described by the post-Newtonian formalism (which expands the Einstein field equation and the gravitational radiation in powers of v/c, but loses accuracy as the two bodies come closer and closer). Numerical relativity provides the most accurate solution for the last stages of inspiral, plunge, merger and ringdown, but such models are time-consuming to produce – the state-of-the-art code of the Simulating eXtreme Spacetimes collaboration took three weeks and 20,000 CPU hours to compute the gravitational waveform for the event GW150914 and three months and 70,000 CPU hours for GW151226.

A few hundred thousand different waveforms were used as templates by aLIGO during the first observing run, covering compact binaries with total masses 2–100 times that of the Sun and mass ratios up to 1:99. Novel approaches to the two-body problem that extend post-Newtonian theory into the strong-field regime and combine it with numerical relativity had to be developed to provide aLIGO with accurate and efficient waveform models, which were based on several decades of steady work in general relativity (figure 3). Further theoretical work will be needed to deal with more sensitive searches in the future if we want to take full advantage of the discovery potential of gravitational-wave astronomy.

aLIGO’s first black holes

The two gravitational-wave signals observed by aLIGO have different morphologies that reveal quite distinct binary black-hole sources. GW150914 is thought to have been produced by two stellar black holes with masses of 36 M☉ and 29 M☉, which formed a black hole of about 62 M☉ rotating at almost 70% of its maximal rotation speed, while GW151226 had lower black-hole masses (of about 14 M☉ and 8 M☉) and merged into a 21 M☉ black-hole remnant. Although the binary’s individual masses for GW151226 have larger uncertainties compared with GW150914 (since GW151226 occurred at higher frequencies, where aLIGO’s sensitivity degrades), the analysis ruled out the possibility that the lower-mass object in GW151226 was a neutron star. A follow-up analysis also revealed that the individual black holes had spins less than 70% of the maximal value, and that at least one of the black holes in GW151226 was rotating at 20% of its maximal value or faster. Finally, the aLIGO data show that the binaries that produced GW150914 and GW151226 were at comparable distances from the Earth and that the peak gravitational-wave luminosity was about 3 × 10⁵⁶ erg/s, making them by far the most luminous transient events in the universe.
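
A minimal sketch, using the component masses quoted above, evaluates the chirp mass (the combination appearing in the inspiral relation given earlier), giving roughly 28 and 9 solar masses for the two events.

```python
# Chirp mass from the component masses quoted in the text: the leading-order
# combination that drives the inspiral phase evolution.
def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print(f"GW150914: {chirp_mass(36, 29):.0f} solar masses")  # ~28
print(f"GW151226: {chirp_mass(14, 8):.0f} solar masses")   # ~9
```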

Owing to the signal’s length and the particular orientation of the binary plane with respect to the aLIGO detectors, no information about the spin precession of the system could be extracted. It has therefore not yet been possible to determine the precise astrophysical production route for these objects. Whereas the predictions for the rate of binary black-hole mergers from astrophysical-formation mechanisms traditionally vary by several orders of magnitude, the aLIGO detections so far have already established the rate to be somewhat on the high side of the range predicted by astrophysical models, at 9–240 per Gpc³ per year. Larger black-hole masses and higher coalescence rates raise the interesting possibility that a stochastic background of gravitational waves composed of unresolved signals from binary black-hole mergers could be observed when aLIGO reaches its design sensitivity in 2019.

The sky localisation of GW150914 and GW151226, which is mainly determined by recording the time delays of the signals arriving at the interferometers, extended over several hundred square degrees. This can be compared with the 0.2 square degrees covered by the full Moon as seen from the Earth, and makes it very hard to search for an electromagnetic counterpart to black-hole mergers. Nevertheless, the aLIGO results kicked off the first campaign for possible electromagnetic counterparts of gravitational-wave signals, involving almost 20 astronomical facilities spanning the gamma-ray, X-ray, optical, infrared and radio regions of the spectrum. No convincing evidence of electromagnetic signals emitted by GW150914 and GW151226 was found, in line with expectations from standard astrophysical scenarios. Deviations from the standard scenario may arise if one considers dark electromagnetic sectors, spinning black holes with strong magnetic fields that need to be sustained until merger, and black holes surrounded by clouds of axions (see “Linking waves to particles”).

aLIGO’s observations allow us to test general relativity in the so-far-unexplored regime of highly dynamical, strong-field gravity. As the two black holes that emitted GW150914 and GW151226 spiralled towards merger, the binary’s orbital period varied considerably and the phase of the gravitational-wave signal changed accordingly. It is possible to obtain an analytical representation of the phase evolution in post-Newtonian theory, in which the coefficients describe a plethora of dynamical and radiative physical effects, and long-term timing observations of binary pulsars have placed precise bounds on the leading-order post-Newtonian coefficients. However, the new aLIGO observations have put the most stringent limits on higher post-Newtonian terms – setting upper bounds as low as 10% for some coefficients (figure 4). It was even possible to investigate potential deviations during the non-perturbative coalescence phase, and again general relativity passed the test convincingly.

The first aLIGO observations could test neither the second law of black-hole mechanics, which states that the black-hole entropy cannot decrease, nor the “no-hair” theorem, which says that a black hole is fully described by its mass and spin; both tests require extracting the mass and spin of the final black hole from the data. We expect, however, that future gravitational-wave detections with higher signal-to-noise ratios will shed light on these important theoretical questions. Despite those limitations, aLIGO has provided the most convincing evidence to date that stellar-mass compact objects in our universe with masses larger than roughly five solar masses are black holes: that is, solutions of the Einstein field equations (see “General relativity at 100”).

From binaries to cosmology

During its first observation run, lasting from mid-September 2015 to mid-January 2016, aLIGO did not detect gravitational waves from binaries composed of either two neutron stars, or a black hole and a neutron star. Nevertheless, it set the most stringent upper limits on the rates of such processes: 12.6 × 10³ and 3.6 × 10³ per Gpc³ per year, respectively. These rates imply that we can expect to detect such binary systems within a few years of aLIGO and the French–Italian experiment Virgo reaching their design sensitivity. Observing gravitational waves from binaries containing matter is exciting because it allows us to infer the neutron-star equation of state and also to unveil the possible origin of short-hard gamma-ray bursts (GRBs) – enormous bursts of electromagnetic radiation observed in distant galaxies.

Neutron stars are extremely dense objects that form when massive stars run out of nuclear fuel and collapse. The density in the core is expected to be more than 10¹⁴ times the density of the Sun, at which the standard structure of nuclear matter breaks down and new phases of matter such as superfluidity and superconductivity may appear. All mass and spin parameters being equal, the gravitational-wave train emitted by a binary containing a neutron star differs from the one emitted by two black holes only in the late inspiral phase, when the neutron star is tidally deformed or disrupted. By tracking the gravitational-wave phase it will be possible to measure the tidal deformability parameter, which contains information about the neutron-star interior, and ultimately to discriminate between some equations of state. The merger of double neutron stars and/or black-hole–neutron-star binaries is currently considered the most likely source of short-hard GRBs, and we expect a plethora of electromagnetic signals from the coalescence of such compact objects that will test the short-hard GRB/binary-merger paradigm.

Bursts of gravitational waves lasting for tenths of milliseconds are also produced during the catastrophic final moments of massive stars, when the stellar core suddenly collapses to a neutron star or a black hole in a supernova explosion. At design sensitivity, aLIGO and Virgo could detect bursts from the core’s “bounce”, provided that the supernova took place in the Milky Way or neighbouring galaxies, with more extreme emission scenarios observable to much larger distances. Highly magnetised rotating neutron stars called pulsars are also promising astrophysical sources of gravitational waves. Mountains just a few centimetres high on the crust of a pulsar give it a time-varying quadrupole moment, producing a continuous gravitational-wave train at twice the star’s rotation frequency. The most recent LIGO all-sky searches and targeted observations of known pulsars have already started to probe the parameter space of astrophysical interest, setting new upper limits on the source’s ellipticity, which depends on the neutron star’s equation of state.

Lastly, several physical mechanisms in the early universe could have produced gravitational waves, such as cosmic inflation, first-order phase transitions and vibrations of fundamental and/or cosmic strings. Because gravitational waves are almost unaffected by matter, they provide us with a pristine snapshot of the source at the time they were produced; they may therefore unveil a period in the history of the universe, around its birth, that we cannot otherwise access. The first observation run of aLIGO set the most stringent constraint yet on the stochastic gravitational-wave background, which is generally expressed as the dimensionless energy density of gravitational waves: an upper limit of 1.7 × 10⁻⁷. Digging deeper, at design sensitivity aLIGO is expected to reach a value of 10⁻⁹, while next-generation detectors such as the Einstein Telescope and the Cosmic Explorer may achieve values as low as 10⁻¹³ – just two orders of magnitude above the background predicted by the standard “slow-roll” inflationary scenario.
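
For reference, the quantity constrained here is the gravitational-wave energy density per logarithmic frequency interval, normalised to the critical density of the universe:

```latex
\Omega_{\rm GW}(f) \;=\; \frac{1}{\rho_{c}}\,\frac{\mathrm{d}\rho_{\rm GW}}{\mathrm{d}\ln f},
\qquad
\rho_{c} = \frac{3H_{0}^{2}c^{2}}{8\pi G},
```

so the limits quoted above (1.7 × 10⁻⁷ now, 10⁻⁹ at design sensitivity, 10⁻¹³ for next-generation detectors) are upper bounds on this dimensionless quantity in the detectors’ frequency band.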

Grand view

The sensitivity of existing interferometer experiments on Earth will be improved in the next 5–10 years by employing a quantum-optics phenomenon called squeezed light. This will reduce the sky-localisation errors of coalescing binaries, provide a better measurement of tidal effects and the neutron-star equation of state in binary mergers, and enhance our chances of observing gravitational waves from pulsars and supernovae. The ability to identify the source of gravitational waves will also improve over time, as upgraded and new gravitational-wave observatories come online.

Pulsar signals also offer an alternative detection scheme, the Pulsar Timing Array (PTA), which is already in operation. Gravitational waves passing by the pulsars and the Earth would modify the arrival times of the pulses, and searches by PTA projects for correlated signatures in the arrival times of the most stable known pulsars could detect the stochastic gravitational-wave background from unresolved supermassive binary black-hole inspirals in the 10⁻⁹–10⁻⁷ Hz frequency region. Results from the North-American NANOGrav, European EPTA and Australian PPTA collaborations have already set interesting upper limits on this astrophysical background, and a detection could come within the next five years.

The past year has also been a milestone for gravitational-wave research in space: the results of the LISA Pathfinder mission, published in June 2016, exceeded all expectations, demonstrating that the key technology for LISA, planned for 2034, works as required (see “Catching a gravitational wave”). LISA would be sensitive to gravitational waves between 10⁻⁴ and 10⁻² Hz, thereby detecting sources different from those observed on Earth, such as supermassive binary black holes, extreme mass-ratio inspirals, and the astrophysical stochastic background from white-dwarf binaries in our galaxy. In the meantime, new ground-based facilities to be built in the next 10–15 years – such as the Einstein Telescope in Europe and the Cosmic Explorer in the US – will be required to maximise the scientific potential of gravitational-wave physics and astrophysics. These future detectors will be so sensitive to binary coalescences that binary black holes could be probed throughout the universe, enabling the most exquisite tests of general relativity in the highly dynamical, strong-field regime. That will challenge our current knowledge of gravity and of fundamental and nuclear physics, unveiling the nature of the most extreme objects in our universe.
