Researchers at the Cryogenic Underground Observatory for Rare Events (CUORE), located at the Gran Sasso National Laboratories (LNGS) in Italy, have reported the latest results in their search for neutrinoless double beta-decay, based on CUORE’s first full data set. This exceedingly rare process, predicted to occur less than once every 10²⁶ years in a given nucleus, if it occurs at all, involves two neutrons in an atomic nucleus simultaneously decaying into two protons with the emission of two electrons and no neutrinos. This is only possible if neutrinos and antineutrinos are identical, or “Majorana”, particles, as posited by Ettore Majorana 80 years ago, such that the two neutrinos from the decay cancel each other out.
The discovery of neutrinoless double beta-decay (NDBD) would demonstrate that lepton number is not a symmetry of nature, perhaps playing a role in the observed matter–antimatter asymmetry in the universe, and constitute firm evidence for physics beyond the Standard Model. Following the discovery two decades ago that neutrinos have mass (a necessary condition for them to be Majorana particles), several experiments worldwide are competing to spot this exotic decay using a variety of techniques and different NDBD candidate nuclei.
CUORE is a tonne-scale cryogenic bolometer comprising 19 copper-framed towers, each housing a matrix of 52 cube-shaped crystals of highly purified tellurium dioxide made from natural tellurium (which contains more than 34% tellurium-130). The detector array, which is cooled below a temperature of 10 mK and shielded from cosmic rays by 1.4 km of rock and thick lead sheets, was designed and assembled over a 10-year period. Following initial results in 2015 from a CUORE prototype containing just one tower, the full 19-tower detector was cooled down in the CUORE cryostat one year ago, and the collaboration has now released its first publication, submitted to Physical Review Letters, with much higher statistics. The large volume of detector crystals greatly increases the likelihood of recording an NDBD event during the lifetime of the experiment.
Based on around seven weeks of data-taking between May and September 2017, alternated with an intense programme of detector commissioning and corresponding to a total tellurium exposure of 86.3 kg·yr, CUORE finds no sign of NDBD, placing a lower limit on the NDBD half-life of tellurium-130 of 1.5 × 10²⁵ years (90% C.L.). This is the most stringent limit to date on this decay, says the team, and suggests that the effective Majorana neutrino mass is less than 140–400 meV, where the large range reflects the different nuclear matrix-element estimates employed. “This is the first preview of what an instrument this size is able to do,” says CUORE spokesperson Oliviero Cremonesi of INFN. “Already, the full detector array’s sensitivity has exceeded the precision of the measurements reported in April 2015 after a successful two-year test run that enlisted one detector tower.”
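For orientation, the half-life limit is converted into an effective Majorana neutrino mass using the standard rate formula for the decay (a textbook relation quoted here for context, not taken from the CUORE paper):

$$\big(T^{0\nu}_{1/2}\big)^{-1} = G^{0\nu}\,\big|M^{0\nu}\big|^{2}\,\frac{\langle m_{\beta\beta}\rangle^{2}}{m_{e}^{2}},\qquad \langle m_{\beta\beta}\rangle=\Big|\sum_{i}U_{ei}^{2}\,m_{i}\Big|,$$

where $G^{0\nu}$ is a calculable phase-space factor, $m_e$ is the electron mass and $U_{ei}$ are elements of the neutrino mixing matrix. The spread of calculated values of the nuclear matrix element $M^{0\nu}$ is what turns the single half-life limit into the 140–400 meV range quoted above.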
Over the next five years CUORE will collect around 100 times more data. Combined with search results in other isotopes, the possible hiding places of Majorana neutrinos will shrink much further.
Advanced linear-accelerator (linac) technology developed at CERN and elsewhere will be used to develop a new generation of compact X-ray free-electron lasers (XFELs), thanks to a €3 million project funded by the European Commission’s Horizon 2020 programme. Beginning in January 2018, “CompactLight” aims to design the first hard XFEL based on 12 GHz X-band technology, which originated from research for a high-energy linear collider. A consortium of 21 leading European institutions – including Elettra, CERN, PSI, KIT and INFN, along with seven universities and two industrial partners (Kyma and VDL) – will work to achieve this ambitious goal within the three-year duration of the recently awarded grant.
X-band technology, which provides accelerating gradients of 100 MV/m and above in a highly compact device, is now a reality. This is the result of many years of intense R&D carried out at SLAC (US) and KEK (Japan) for the former NLC and JLC projects, and at CERN in the context of the Compact Linear Collider (CLIC). The technology has also been validated at the Elettra and PSI laboratories.
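A rough, illustrative calculation shows what such gradients buy. For a hypothetical 6 GeV electron linac (an energy chosen here purely as an example of a hard-X-ray FEL driver), the active accelerating length scales inversely with the gradient:

$$L_{\text{active}}\approx\frac{E_{\text{beam}}}{G}=\frac{6\ \text{GeV}}{100\ \text{MV/m}}\approx 60\ \text{m}\qquad\text{versus}\qquad\frac{6\ \text{GeV}}{20\ \text{MV/m}}\approx 300\ \text{m}$$

at a conventional S-band gradient of around 20 MV/m – a factor-of-five saving in accelerator length before any other design choices are made.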
XFELs, the latest generation of light sources based on linacs, are a particularly suitable application for high-gradient X-band technology. Following decades of growth in the use of synchrotron X-ray facilities to study materials across a wide spectrum of sciences, technologies and applications, XFELs (as opposed to circular light sources) are capable of delivering high-intensity photon beams of unprecedented brilliance and quality. This provides novel ways to probe matter and allows researchers to make “movies” of ultrafast biological processes. Currently, three XFELs are up and running in Europe – FERMI@Elettra in Italy and FLASH and FLASH II in Germany, which operate in the soft X-ray range – while two more are being commissioned: SwissFEL at PSI and the European XFEL in Germany (CERN Courier July/August 2017 p18), both of which operate in the hard X-ray region. Yet the demand for such high-quality X-rays is large, as the field still has great and largely unexplored potential for science and innovation – potential that can be unlocked if the linacs that drive the X-ray generation can be made smaller and cheaper.
This is where CompactLight steps in. While most of the existing XFELs worldwide use conventional 3 GHz S-band technology (e.g. LCLS in the US and PAL in South Korea) or superconducting 1.3 GHz structures (e.g. the European XFEL and LCLS-II), others use newer designs based on 6 GHz C-band technology (e.g. SACLA in Japan), which increases the accelerating gradient while reducing the linac’s length and cost. CompactLight gathers leading experts to design a hard-X-ray facility beyond today’s state of the art, using the latest concepts for bright electron photo-injectors, very-high-gradient X-band structures operating at frequencies of 12 GHz, and innovative compact short-period undulators (devices that produce an alternating magnetic field along which relativistic electrons are deflected to produce X-rays). Compared with existing XFELs, the proposed facility will benefit from a lower electron-beam energy (due to the enhanced undulator performance), be significantly more compact (as a consequence both of the lower energy and of the high-gradient X-band structures), and have a lower electrical power demand and a smaller footprint.
Success for CompactLight will have a much wider impact: not just affirming X-band technology as a new standard for accelerator-based facilities, but advancing undulators to the next generation of compact photon sources. This will facilitate the widespread distribution of a new generation of compact X-band-based accelerators and light sources, with a large range of applications including medical use, and enable the development of compact cost-effective X-ray facilities at national or even university level across and beyond Europe.
The CALorimetric Electron Telescope (CALET), a space mission led by the Japan Aerospace Exploration Agency with participation from the Italian Space Agency (ASI) and NASA, has released its first results concerning the nature of high-energy cosmic rays.
Having docked with the International Space Station (ISS) on 25 August 2015, CALET is carrying out a full science programme with long-duration observations of high-energy charged particles and photons coming from space. It is the second high-energy experiment operating on the ISS, following the deployment of AMS-02 in 2011. During the summer of 2017 a third experiment, ISS-CREAM, joined these two. Unlike AMS-02, CALET and ISS-CREAM have no magnetic spectrometer and therefore measure the inclusive electron and positron spectrum. CALET’s homogeneous calorimeter is optimised to measure electrons, and one of its main science goals is to measure the detailed shape of the electron spectrum.
Due to the large radiative losses during their journey through space, high-energy cosmic electrons are expected to originate from regions relatively close to Earth (of the order of a few thousand light-years). Yet their origin is still unknown. The shape of the spectrum and the anisotropy in the arrival direction might contain crucial information as to where and how electrons are accelerated. They could also provide a clue to possible signatures of dark matter – for example, a peak in the spectrum might point to dark-matter decay or annihilation with electrons or positrons in the final state – and shed light on the intriguing electron and positron spectra reported by AMS-02 (CERN Courier December 2016 p26).
To pinpoint possible spectral features on top of the overall power-law energy dependence of the spectrum, CALET was designed to measure the energy of the incident particle with very high resolution and a large proton-rejection power, well into the TeV region. This is provided by a thick homogeneous calorimeter preceded by a high-granularity imaging pre-shower detector, giving a total thickness of 30 radiation lengths at normal incidence. Calibration of the two instruments is the key to controlling the energy scale, which is why CALET – a CERN-recognised experiment – performed several calibration tests at CERN.
The first data from CALET concern a measurement of the inclusive electron and positron spectrum in the energy range from 10 GeV to 3 TeV, based on about 0.7 million candidates (1.3 million in full acceptance). Above an energy of 30 GeV the spectrum can be fitted with a single power law with a spectral index of –3.152±0.016. A possible structure observed above 100 GeV requires further investigation with increased statistics and refined data analysis. Beyond 1 TeV, where a roll-off of the spectrum is expected and low statistics is an issue, electron data are now being carefully analysed to extend the measurement. CALET has been designed to measure electrons up to around 20 TeV and hadrons up to an energy of 1 PeV.
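In other words, above 30 GeV the measured differential flux follows

$$\frac{\mathrm{d}N}{\mathrm{d}E}\propto E^{-\gamma},\qquad \gamma = 3.152\pm0.016,$$

and any statistically significant departure from this single power law – a break, bump or cut-off – is exactly the kind of spectral feature CALET was designed to resolve.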
CALET is a powerful space observatory with the ability to identify cosmic nuclei from hydrogen to elements heavier than iron. It also has a dedicated gamma-ray-burst instrument (CGBM) that so far has detected bursts at an average rate of one every 10 days in the energy range 7 keV–20 MeV. The search for electromagnetic counterparts of gravitational waves (GWs) detected by the LIGO and Virgo observatories proceeds around the clock thanks to a special collaboration agreement with the two observatories. Upper limits on X-ray and gamma-ray counterparts of the GW151226 event have been published, and further research on GW follow-ups is being carried out. Space-weather studies relating to relativistic electron precipitation (REP) from the Van Allen belts have also been released.
With more than 500 million triggers collected so far and an expected extension of the observation time on the ISS to five years, CALET is likely to produce a wealth of interesting results in the near future.
On 11 December, the Large Hadron Collider (LHC) is scheduled to complete its 2017 proton-physics run and go into standby for its winter shutdown and maintenance programme. With the LHC having surpassed this year’s integrated-luminosity target of 45 fb⁻¹ delivered to each of the ATLAS and CMS experiments 19 days before the end of the run, 2017 marks another successful year for the machine. September 2017 also saw the LHC’s total integrated luminosity since 2010 pass the milestone of 100 fb⁻¹ per high-luminosity experiment (see panel). But the year has not been without its challenges, demonstrating once again the quirks and unprecedented complexities involved in operating the world’s highest-energy collider. The story of the LHC’s 2017 run unfolded in three main parts.
Following a longer than usual technical stop that began at the end of 2016, the LHC was cooled to its operating temperature in April and took first beam towards the end of the month, with first stable beams declared about four weeks later. Physics got off to a great start, with an impressively efficient ramp-up reaching 2556 bunches per beam and a peak luminosity of 1.6 × 10³⁴ cm⁻² s⁻¹ in very good time.
Careful examination
However, from the start of the run, for some unknown reason the beams were occasionally dumped with a particular signature of localised beam loss and the onset of a fast-beam instability. The cause of the premature dumps was traced to a region called 16L2, referring to the sixteenth LHC half-cell to the left of point 2 (each half-cell comprises three dipoles, one quadrupole and associated corrector magnets). The hypothesis was that the problems were caused by the presence of frozen gas in the beam pipes in this region; air had perhaps entered during the cool down and had become trapped on and around the beam screen. All available diagnostics were deployed and careful examination of the beam losses in the region revealed steady-state losses, which occasionally increased rapidly followed by a very fast beam instability. The issue appeared to respond positively to a non-zero field in a local orbit corrector, and this allowed the LHC teams to establish more-or-less steady operation by careful control of the corrector in question.
To ameliorate and understand the situation better, an attempt was made to flush the gas supposedly condensed on the beam screen onto the cold mass of the magnets. To this end the beam screen around 16L2 was warmed up to around 80 K with careful monitoring of the vacuum conditions. Unfortunately, the manoeuvre was not a success: the 16L2 dumps became more frequent and many subsequent fills were lost to the problem. By this stage, electron-cloud effects had been identified as a possible co-factor in driving the instability, prompting the teams to change the bunch configuration to the so-called 8b4e scheme in which gaps are introduced into the bunch configuration. This significantly reduced the rate of 16L2 losses and allowed steady and productive running to be established by late summer.
New heights
Performance was further improved by a reduction in the “beta-star” parameter following a technical stop in the middle of September. This move exploited the excellent aperture, collimation-system performance, stability and optics understanding of the LHC, and benefited from many years of experience operating the machine. Working with an optimised 8b4e scheme and a beta-star of 30 cm resulted in CMS and ATLAS reaching their event pile-up limit, forcing the deployment of luminosity levelling, as is already routine in LHCb and ALICE. The peak levelled luminosity under these running conditions is around 1.5 × 10³⁴ cm⁻² s⁻¹, compared to more than 2 × 10³⁴ cm⁻² s⁻¹ without levelling. Beam availability in the latter part of the year has been truly excellent and integrated-luminosity delivery reached new heights. One day in October was also dedicated to operation with xenon beams, taking advantage of their presence in the SPS for the North Area fixed-target programme (CERN Courier November 2017 p7).
Following a period of machine development and some special physics runs, the winter maintenance break is due to begin on 11 December. The year-end technical stop will see the usual extensive programme of maintenance and consolidation for both the machine and experiments. It will also see sector 1-2 warmed up to room temperature to fully resolve the 16L2 issue. Then, in the spring of 2018, the LHC will begin a final 13 TeV run before a long shutdown of two years to make key preparations for its high-luminosity upgrade.
A century of femtobarns
On 28 September, the LHC passed a high-energy proton–proton collision milestone: the accumulation of 100 fb⁻¹ since its inception, equivalent to around 10¹⁵ collisions in each of the ATLAS and CMS experiments. The LHC started physics operations in late 2009, and by the middle of 2012 had delivered enough integrated luminosity to enable physicists to discover the Higgs boson. After the first LHC long shutdown in 2013 and 2014, the LHC was restarted in 2015 at higher energy, paving the way for 2016, another record production year that notched up 40 fb⁻¹. Following this success, the target for 2017 and 2018 combined was raised to 90 fb⁻¹, which, despite some challenges this year, looks to be well within reach.
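As an order-of-magnitude check (assuming an inelastic proton–proton cross-section of roughly 80 mb, a value not quoted above), the number of collisions follows from the integrated luminosity:

$$N_{\text{inel}}\approx \sigma_{\text{inel}}\int\!\mathcal{L}\,\mathrm{d}t\approx\big(8\times10^{-26}\ \text{cm}^{2}\big)\times\big(100\ \text{fb}^{-1}=10^{41}\ \text{cm}^{-2}\big)\approx 8\times10^{15},$$

consistent with the order of magnitude quoted above.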
The LHCb collaboration has published precision mass and width measurements of the χc1 and χc2 charmonium states, performed for the first time using the newly discovered decays χc1 → J/ψμ⁺μ⁻ and χc2 → J/ψμ⁺μ⁻. Previously it had not been possible to make precision measurements of these states at a particle collider, owing to the absence of a fully charged final state with a large enough decay rate; the new decay modes now allow powerful comparisons with results from earlier fixed-target experiments.
The dominant decay mode of such charmonium states is χc1,2 → J/ψγ. However, precisely measuring the energy of the final-state photon, γ, is experimentally very challenging, particularly in the harsh environment of a hadron collider such as the LHC. For this reason, such measurements were previously only possible at dedicated experiments that exploited antiproton beams annihilating on fixed hydrogen targets to form χc states directly. By modulating the energy of the impinging antiprotons, it was possible to scan the invariant mass of the states with high precision. But the obvious difficulties in building such dedicated facilities have meant that precision mass measurements were only performed by two experiments: E760 and E835 at Fermilab, the latter being an upgrade of the former.
In these new Dalitz decays, χc1,2 → J/ψμ⁺μ⁻, where the J/ψ meson subsequently decays to another μ⁺μ⁻ pair, the final state is composed of four charged muons. Thus these modes can be triggered and reconstructed very efficiently by the LHCb experiment. The high precision of the LHCb spectrometer has already enabled several world-best mass measurements of heavy-flavour mesons and baryons, and now it has allowed the two narrow χc1 and χc2 peaks to be observed in the invariant J/ψμ⁺μ⁻ mass distribution with excellent resolution (see figure). The values of the masses of the two states, along with the natural width of the χc2, have been determined with a similar precision to, and in good agreement with, those obtained by E760 and E835.
This new measurement opens an avenue to precision studies of the properties of χc mesons at the LHC, more than 40 years after the discovery of the first charmonium state, the J/ψ meson. It will allow precise tests of the production mechanisms of charmonium states down to zero transverse momentum, providing information hardly accessible using other experimental techniques. In addition to the charmonium system, these observations are expected to have important consequences for the wider field of hadron spectroscopy at the LHC. With larger data samples, studies of the Dalitz decays of other heavy-flavour states, such as the exotic X(3872) and bottomonium states, will become possible. In particular, measurements of the properties of the X(3872) via a Dalitz decay may help to elucidate the nature of this enigmatic particle.
Recently, the ALICE collaboration measured the elliptic flow of J/ψ mesons with unprecedented precision in lead–lead (Pb–Pb) collisions and, for the first time, also in proton–lead (p–Pb) collisions. While the results at low transverse momentum (pT) in Pb–Pb collisions confirm that charm quarks flow with the quark–gluon plasma (QGP), the results at high pT do not agree with model predictions. Furthermore, their similarity to the p–Pb results suggests that additional J/ψ flow-generation mechanisms are still to be identified.
The elliptic flow (v2) is the azimuthal anisotropy of the final-state particles, generated by the collective expansion of the almond-shaped interaction region of the colliding nuclei in non-central nucleus–nucleus collisions. The J/ψ meson is a bound state of charm and anti-charm quarks, which is created at early times in hard-scattering processes. Effects of the QGP on the production of J/ψ mesons are currently understood in terms of two mechanisms: suppression by dissociation due to the large surrounding colour-charge density and regeneration by recombination of de-confined charm quarks. If charm quarks thermalise in the medium, recombined states should inherit their flow.
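For reference, v2 is the second coefficient in the standard Fourier expansion of the azimuthal distribution of produced particles,

$$\frac{\mathrm{d}N}{\mathrm{d}\varphi}\propto 1+2\sum_{n\geq1} v_{n}\cos\!\big[n(\varphi-\Psi_{n})\big],$$

where φ is the particle’s azimuthal angle and Ψn the corresponding symmetry-plane angle; a non-zero v2 is the elliptic modulation referred to above.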
A clear positive v2 for J/ψ mesons at forward rapidity is observed in Pb–Pb collisions at a nucleon–nucleon energy of 5.02 TeV for different collision centralities. In semi-central collisions, the J/ψ v2 increases with pT up to 4–6 GeV/c and saturates or decreases thereafter. The J/ψ v2 measurement at mid-rapidity has a larger background and is therefore less precise, but demonstrates potential for future studies at the high-luminosity LHC.
A comparison with available theoretical model calculations shows that the measured values at low pT (below 4 GeV/c) can only be explained through a large contribution from the recombination of thermalised charm quarks. The expected v2 without this contribution (labelled “primordial” v2 in the figure) is much smaller than the measured values. However, the models clearly underestimate the measured azimuthal asymmetry at higher transverse momentum and do not reproduce the overall pT dependence, suggesting that there is another mechanism to produce J/ψ v2. The J/ψ v2 has also been measured in p–Pb collisions at energies of 5.02 and 8.16 TeV at forward (p-travelling) and backward (Pb-travelling) rapidities. Interestingly, the J/ψ v2 in the smaller p–Pb collision system is similar to that in central Pb–Pb collisions at high pT. The possibly missing mechanism could therefore be the same in both collision systems.
The Higgs boson interacts more strongly with more massive particles, so the coupling between the top quark and the Higgs boson (the top-quark Yukawa coupling) is expected to be large. The coupling can be directly probed by measuring the rate of events in which a Higgs boson is produced in association with a pair of top quarks (ttH production). Using the 13 TeV LHC data set collected in 2015 and 2016, several ATLAS analyses targeting different Higgs boson decay modes were performed. The combination of their results, released in late October, provides the strongest single-experiment evidence to date for ttH production.
The H → bb decay channel offers the largest rate of ttH events, but extracting the signal is hard because of the large background of top quarks produced in association with a pair of bottom quarks. The analysis relies on the identification of b-jets and multivariate analysis techniques to reconstruct the events and determine whether candidates are more likely to arise from ttH production or from background processes.
The probability for the Higgs boson to decay to a pair of W bosons or a pair of τ leptons is smaller, but the backgrounds to ttH searches with these decays are also smaller and easier to estimate. These decays are targeted in searches for events with a pair of leptons carrying the same charge or three or more charged leptons (including electrons, muons, or hadronically decaying τ leptons). In total, seven different final states were probed in the latest ATLAS analysis.
Higgs boson decays to a pair of photons or to a pair of Z bosons with subsequent decays to lepton pairs (giving a four-lepton final state) are also considered. These decay channels have very small rates, but provide a high signal-to-background ratio.
In the combination of these ttH analyses, an excess with a significance of 4.2 standard deviations with respect to the “no-ttH-signal” hypothesis is observed, compared to 3.8 standard deviations expected for a Standard Model signal. This constitutes the first direct evidence from ATLAS alone for the ttH process. A cross-section of 590 +160/−150 fb is measured, in good agreement with the Standard Model prediction of 507 +35/−50 fb. This measurement, when combined with other Higgs boson production and decay studies, will shed more light on the possible presence of physics beyond the Standard Model in the Higgs sector.
The CMS experiment has added another piece to the Higgs boson puzzle, reporting evidence that the Higgs decays to a pair of b quarks.
In the Standard Model (SM) the Higgs field couples to fermions through a Yukawa interaction, giving them their masses. The recent CMS observation of the H → ττ channel provides direct evidence of this interaction for leptons. While it is clear that the Higgs boson couples to up-type quarks (based on the overall agreement between the measured gluon–gluon fusion production cross-section, a process that proceeds mainly through a top-quark loop, and the SM prediction), the Higgs boson decay to bottom quark–antiquark pairs provides a unique tool to directly access its coupling to down-type quarks.
The Higgs boson decays to a pair of b quarks 58% of the time, making this by far its most frequent decay channel. At the LHC, however, the signal is overwhelmed by QCD production of b quarks, which is several orders of magnitude larger, making the H → bb process very elusive. The most effective way to observe it is to search for associated production with an electroweak vector boson (VH, with V being a W or a Z boson). Further background reduction is achieved by requiring the Higgs boson candidates to have large transverse momentum and by exploiting the distinctive kinematic properties of VH events.
The latest CMS analysis is based on LHC data collected last year at an energy of 13 TeV. To identify jets originating from b quarks, the collaboration used a novel combined multivariate b-tagging algorithm that exploits the presence of soft leptons together with information such as track impact parameters and secondary vertices. A signal region enriched in VH events was then selected, together with several control regions to test the accuracy of the Monte Carlo simulations, and a simultaneous binned-likelihood fit of the signal and control regions used to extract the Higgs boson signal.
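The statistical core of such an analysis can be sketched with a simple simultaneous binned Poisson-likelihood fit. The snippet below is purely illustrative – the yields, regions and two-parameter model (a signal strength and a single background normalisation) are hypothetical and much simpler than the CMS fit, which involves many categories and systematic nuisance parameters – but the logic of extracting a signal strength is the same.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Hypothetical per-bin templates: a signal region enriched in VH, H -> bb events
# and a background-dominated control region that constrains the background scale.
sig_region_s = np.array([2.0, 5.0, 8.0, 4.0])      # expected SM signal
sig_region_b = np.array([40.0, 35.0, 30.0, 25.0])  # expected background
ctl_region_b = np.array([200.0, 180.0, 160.0])     # background-only control region

# Pseudo-data generated for illustration with mu = 1 and background scale = 1.
rng = np.random.default_rng(42)
data_sig = rng.poisson(sig_region_s + sig_region_b)
data_ctl = rng.poisson(ctl_region_b)

def nll(params):
    """Negative log-likelihood: product of Poisson terms over all bins."""
    mu, b_scale = params
    exp_sig = mu * sig_region_s + b_scale * sig_region_b
    exp_ctl = b_scale * ctl_region_b
    return -(poisson.logpmf(data_sig, exp_sig).sum()
             + poisson.logpmf(data_ctl, exp_ctl).sum())

# Fit the signal strength mu and the background normalisation simultaneously.
result = minimize(nll, x0=[1.0, 1.0], method="L-BFGS-B",
                  bounds=[(0.0, 5.0), (0.1, 5.0)])
mu_hat, b_hat = result.x
print(f"fitted signal strength mu = {mu_hat:.2f}, background scale = {b_hat:.2f}")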
An excess of events is observed compared to the expectation in the absence of a H → bb signal. The significance of the excess is 3.3σ, where the expectation from SM Higgs boson production is 2.8σ. The signal strength corresponding to this excess, relative to the SM expectation, is 1.2±0.4. When combined with the Run 1 measurement at a lower energy, the signal significance is 3.8σ with 3.8σ expected and a signal strength of 1.1.
To validate the analysis procedure, the same methodology was used to extract a signal for the VZ process, with Z → bb, which has a nearly identical final state but with a different invariant mass and a larger production cross-section. The observed excess of events for the combined WZ and ZZ processes has a significance of 5σ from the background-only event-yield expectation, and the corresponding signal strength is 1.0±0.2.
Thanks to the outstanding performance of the LHC, the data set will increase significantly by the end of Run 2 in 2018. This will allow a considerable reduction of the uncertainties, and a 5σ observation of the H → bb decay is expected.
The energy spectrum of cosmic rays continuously bombarding the Earth spans many orders of magnitude, with the highest-energy events topping 10⁸ TeV. Where these extreme particles come from, however, has remained a mystery since their discovery more than 50 years ago. Now the Pierre Auger collaboration has published results showing that the arrival directions of ultra-high-energy cosmic rays (UHECRs) are far from uniform, giving a clue to their origins.
The discovery in 1963 at the Volcano Ranch experiment of cosmic rays with energies exceeding one million times the energy of the protons in the LHC raised many questions. Not only is the charge of these hadronic particles unknown, but the acceleration mechanisms required to produce UHECRs, and the environments that can host those mechanisms, are still being debated. Proposed origins include sources in the galactic centre, extreme supernova events, mergers of neutron stars, and extragalactic sources such as blazars. Unlike photons or neutrinos, charged cosmic rays do not point directly back to their origin because, despite their extreme energies, their paths are deflected by magnetic fields both inside and outside our galaxy. Since the deflection decreases as the energy rises, however, the highest-energy UHECRs might still retain information about their original direction.
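A convenient rule of thumb (an approximate textbook expression, not taken from the Auger paper) is the Larmor radius of a cosmic ray of energy E and charge Z in a magnetic field B,

$$r_{\mathrm L}\approx 1.1\ \text{kpc}\times\frac{E/\text{EeV}}{Z\,(B/\mu\text{G})},$$

so a 10 EeV proton in a microgauss galactic field gyrates on a scale of order 10 kpc – comparable to the size of the Galaxy – while lower-energy or higher-charge particles are scrambled far more strongly.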
At the Pierre Auger Observatory, cosmic rays are detected using a vast array of detectors spread over an area of 3000 km2 near the town of Malargüe in western Argentina. Like the first cosmic-ray detectors in the 1960s, the array measures the air showers induced as the cosmic rays interact with the atmosphere. The arrival times of the particles, measured with GPS receivers, are used to determine the direction from which the primary particles came within approximately one degree.
The collaboration studied the arrival directions of particles with energies in the range 4–8 EeV and of particles with energies exceeding 8 EeV. In the former data set no clear anisotropy was observed, whereas above 8 EeV a dipole structure was seen (see figure), indicating that more particles come from a particular part of the sky. Since the maximum of the dipole lies outside the galactic plane, the measured anisotropy is consistent with an extragalactic origin. The collaboration reports that the maximum, once deflection by magnetic fields is taken into account, is consistent with a region of the sky known to have a large density of galaxies, supporting the view that UHECRs are produced in other galaxies. The lack of anisotropy at lower energies could be a result of the stronger deflection of these particles in the galactic magnetic field.
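In its simplest form, such an anisotropy can be pictured as a first-harmonic modulation of the event rate in right ascension α (a deliberately simplified form of the analysis, shown only for illustration):

$$\Phi(\alpha)\propto 1+ r\cos(\alpha-\alpha_{0}),$$

where r is the dipole amplitude and α₀ the phase pointing towards the excess.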
The presented dipole measurement is based on a total of 30,000 cosmic rays measured by the Pierre Auger Observatory, which is currently being upgraded. Although the results indicate an extragalactic origin, the particular source responsible for accelerating these particles remains unknown. The upgraded observatory will enable more data to be acquired and allow a more detailed investigation of the currently studied energy ranges. It will also open the possibility to explore even higher energies where the magnetic-field deflections become even smaller, making it possible to study the origin of UHECRs, their acceleration mechanism and the magnetic fields that deflect them.
On 14 September 2015, the world changed for those of us who had spent years preparing for the day when we would detect gravitational waves. Our overarching goal was to directly detect gravitational radiation, finally confirming a prediction made by Albert Einstein in 1916. A year after he had published his theory of general relativity, Einstein predicted the existence of gravitational waves in analogy to electromagnetic waves (i.e. photons) that propagate through space from accelerating electric charges. Gravitational waves are produced by astrophysical accelerations of massive objects, but travel through space as oscillations of space–time itself.
It took 40 years before the theoretical community agreed that gravitational waves are real and an integral part of general relativity. At that point, proving they exist became an experimental problem, and experiments using large instrumented bars of aluminium were built to detect a tiny change in shape from the passage of a gravitational wave. Following a vigorous worldwide R&D programme, a potentially more sensitive technique – suspended-mass interferometry – superseded resonant-bar detectors. There was limited theoretical guidance regarding what sensitivity would be required to achieve detections from known astrophysical sources, but various estimates indicated that a sensitivity to strains ΔL/L of approximately 10⁻²¹, as caused by a passing gravitational wave, would be needed to detect known sources such as binary compact objects (binary black-hole mergers, binary neutron-star systems or black-hole–neutron-star binaries). That is roughly equivalent to measuring the Earth–Sun separation to the precision of the size of an atom.
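LIGO’s arms are 4 km long (a figure not given above), so the target strain corresponds to a displacement of

$$\Delta L \approx h\,L_{\text{arm}} \approx 10^{-21}\times 4\ \text{km} = 4\times10^{-18}\ \text{m},$$

a few thousandths of a proton radius.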
The US National Science Foundation approved the construction of the Laser Interferometer Gravitational-Wave Observatory (LIGO) in 1994 at two locations: Hanford in Washington state and Livingston in Louisiana, 3000 km away. At that time there was a network of cryogenic resonant-bar detectors spread around the world, including one at CERN, but suspended-mass interferometers have the advantage of broadband frequency acceptance (basically the audio band, 10–10,000 Hz) and arms a factor of 1000 longer, making it feasible to measure a smaller ΔL/L. Earth-based detectors are sensitive to the most violent events in the universe, such as the mergers of compact objects, supernovae and gamma-ray bursts. The detailed interferometric concept and innovations had already been demonstrated during the 1980s and 1990s in a 30 m prototype in Garching, Germany, and a 40 m prototype at Caltech in the US. Nevertheless, these prototype interferometers were at least four orders of magnitude away from the target sensitivity.
Strategic planning
We built a flexible technical infrastructure for LIGO such that it could accommodate a future major upgrade (Advanced LIGO) without rebuilding too much infrastructure. Initial LIGO had mostly used demonstrated technologies to assure technical success, despite the large extrapolation from the prototype interferometers. After completing Initial LIGO construction in about 2000, we undertook an ambitious R&D programme for Advanced LIGO. Over a period of about 10 years, we performed six observational runs with Initial LIGO, each time searching for gravitational waves with improved sensitivity. Between each run, we made improvements, ran again, and eventually reached our Initial LIGO design sensitivity. But, unfortunately, we failed to detect gravitational waves.
We then undertook a major upgrade to Advanced LIGO, with the goal of improving the sensitivity over Initial LIGO by at least a factor of 10 across the entire frequency range. To accomplish this, we developed a more powerful Nd:YAG laser system to reduce shot noise at high frequencies, a multiple-suspension system and larger test masses to reduce thermal noise at intermediate frequencies, and active seismic isolation, which reduced seismic noise at frequencies around 40 Hz by a factor of 100 (CERN Courier January/February 2017 p34). This was the key to our discovery, two years ago, of our first binary black-hole mergers, involving roughly 30-solar-mass black holes, whose signals are concentrated at low frequencies. The increased sensitivity to such events expanded the volume of the universe searched by a factor of up to 10⁶, enabling a binary black-hole-merger detection in coincidence, within 6 ms, between the Livingston and Hanford sites.
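The quoted gain in search volume follows from simple scaling: the distance reach R for a given source grows inversely with the strain noise, and the volume surveyed grows as its cube,

$$V\propto R^{3},\qquad R\propto h_{\text{noise}}^{-1},$$

so improvements of up to a factor of 100 in the relevant frequency band translate into up to 100³ = 10⁶ in accessible volume.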
We recorded the last 0.2 seconds of this astrophysical collision – the final inspiral, merger and “ring-down” phases – constituting the first direct observation of gravitational waves. The waveform was accurately matched by numerical-relativity calculations, with a signal-to-noise ratio of 24:1 and a statistical significance easily exceeding 5σ. Beyond confirming Einstein’s prediction, this event represented the first direct observation of black holes, and established that stellar black holes exist in binary systems and that they merge within the lifetime of the universe (CERN Courier January/February 2017 p16). Surprisingly, the two black holes were each about 30 times the mass of the Sun – much heavier than astrophysical expectations.
Run 2 surprises
Similar to Initial LIGO, we plan to reach Advanced LIGO’s design sensitivity in steps. After completion of the four-month first data run (called O1) in January 2016, we improved the range of the Livingston interferometer for binary neutron-star mergers from 60 Mpc to 100 Mpc, but fell somewhat short at Hanford owing to technical issues, which we decided to fix after LIGO’s second observational run (O2). We have now reported a total of four black-hole-merger events and are beginning to determine characteristics such as mass distributions and spin alignments that will help distinguish between the different possibilities for the origin of such heavy black holes. The leading ideas are that they originate in low-metallicity parts of the universe, were produced in dense clusters, or are primordial. They might even constitute some of the dark matter.
Advanced LIGO’s O2 run ended in August this year. Although it seemed almost impossible that it could be as exciting as O1, several more black-hole binary mergers have been reported, including one after the Virgo interferometer in Italy joined O2 in August and dramatically improved our ability to locate the direction of the source. In addition, the orientation of Virgo relative to the two LIGO interferometers provided the first information on the polarisation of the gravitational waves. Together with other measurements, this allowed us to constrain polarisation states beyond those predicted by general relativity and showed that the LIGO–Virgo event is consistent with the predicted two-state (tensor) polarisation picture.
Then, on 17 August, we really hit the jackpot: our interferometers detected a neutron-star binary merger for the first time. We observed a coincident signal in both LIGO and Virgo with strikingly different properties from the black-hole binary mergers we had spotted earlier. Like those, this event entered our detectors at low frequencies and swept up to higher frequencies, but it lasted much longer (around 100 s) and reached much higher frequencies. This is because the masses in the binary system were much lower and, in fact, are consistent with being neutron stars. A neutron star results from the collapse of a star into a compact object of between 1.1 and 1.6 solar masses. We have identified our event as the merger of two neutron stars, each about the size of Geneva but with several hundred thousand times the mass of the Earth.
As we accumulate more events and improve our ability to record their waveforms, we look forward to studying nuclear physics under these extreme conditions. This latest event was the first observed gravitational-wave transient phenomenon also to have electromagnetic counterparts, representing multi-messenger astronomy. Combining the LIGO and Virgo signals, the source of the event was narrowed down to a location in the sky of about 28 square degrees, and it was soon recognised that the Fermi satellite had detected a gamma-ray burst shortly afterwards in the same region. A large and varied number of astronomical observations followed. The combined set of observations has resulted in an impressive array of new science and papers on gamma-ray bursts, kilonovae, gravitational-wave measurements of the Hubble constant, and more. The result even supports the idea that binary neutron-star collisions are responsible for the very heavy elements, such as platinum and gold.
Going deeper
Much has happened since our first detection, and this bodes well for the future of this new field. Both LIGO and Virgo entered a 15-month shutdown at the end of August to further reduce noise levels and raise their laser power. At present, Advanced LIGO is about a factor of two below its design goal (corresponding to a factor of eight in event rates). We anticipate reaching design sensitivity by about 2020, after which the KAGRA interferometer in Japan will join us. A third LIGO interferometer (LIGO-India) is also scheduled for operation in around 2025. These observatories will constitute a network offering good global coverage; they will accumulate a large sample of binary-merger events, achieve improved pointing accuracy for multi-messenger astronomy, and hopefully observe other sources of gravitational waves. This will not be the end of the story. Beyond the funded programme, we are developing technologies to improve our instruments beyond Advanced LIGO, including improved optical coatings and cryogenic test masses.
In the longer term, concepts and designs already exist for next-generation interferometers with typically 10 times better sensitivity than will be achieved by Advanced LIGO and Virgo (see panel). In Europe, a mature concept called the Einstein Telescope envisages an underground interferometer facility in a triangular configuration, and in the US a very long (approximately 40 km) LIGO-like interferometer is under study. The science case for such next-generation devices is being developed through the Gravitational Wave International Committee (GWIC), the gravitational-wave field’s equivalent of the International Committee for Future Accelerators (ICFA) in particle physics. Although the science case appears very strong and technical solutions seem feasible, these are still early days and many questions must be resolved before a new generation of detectors is proposed.
To fully exploit the new field of gravitational-wave science, we must go beyond ground-based detectors and into the pristine seismic environment of space, where different gravitational-wave sources become accessible. As described earlier, the lowest frequencies accessible to Earth-based observatories are about 10 Hz. The Laser Interferometer Space Antenna (LISA), a European Space Agency project scheduled for launch in the early 2030s, was approved earlier this year and will cover frequencies of around 10⁻¹–10⁻⁴ Hz. LISA will consist of three satellites separated by 2.5 × 10⁶ km in a triangular configuration in a heliocentric orbit, with light travelling continually along each arm to monitor the satellite separations for deviations caused by a passing gravitational wave. A test mission, LISA Pathfinder, was recently flown and demonstrated the key performance requirements for LISA in space (CERN Courier November 2017 p37).
Meanwhile, pulsar-timing arrays are being implemented to monitor signals from millisecond pulsars, with the goal of detecting low-frequency gravitational waves by studying correlations between pulsar arrival times. The sensitivity range of this technique is 10⁻⁶–10⁻⁹ Hz, where gravitational waves from massive black-hole binaries in the centres of merging galaxies, with periods of months to years, could be studied.
An ultimate goal is to study the Big Bang itself. Gravitational waves are not absorbed as they propagate and could potentially probe back to the very earliest times, whereas photons only take us to within 380,000 or so years of the Big Bang. However, we do not yet have detectors sensitive enough to detect early-universe signals. The imprint of primordial gravitational waves on the cosmic microwave background has also been pursued, by the BICEP2 experiment, but background issues so far mask a possible signal.
Although gravitational-wave science is clearly in its infancy, we have already learnt an enormous amount and numerous exciting opportunities lie ahead. These vary from testing general relativity in the strong-field limit to carrying out multi-messenger gravitational-wave astronomy over a wide range of frequencies – as demonstrated by the most recent and stunning observation of a neutron-star merger. Since Galileo first looked into a telescope and saw the moons of Jupiter, we have learnt a huge amount about the universe through modern-day electromagnetic astronomy. Now, we are beginning to look at the universe with a new probe and it does not seem to be much of a stretch to anticipate a rich new era of gravitational-wave science.
CERN LIGO–Virgo meeting weighs up 3G gravitational-wave detectors
Similar to particle physicists, gravitational-wave scientists are contemplating major upgrades to present facilities and developing concepts for next-generation observatories. Present-generation (G2) gravitational-wave detectors – LIGO in Hanford, Livingston and India, Virgo in Italy, GEO600 in Germany and KAGRA in Japan – are in different stages of development and have different capabilities (see main text), but all are making technical improvements to better exploit the science potential from gravitational waves over the coming years. As the network develops, the more accurate location information will enable the long-time dream of studying the same astrophysical event with gravitational waves and their electromagnetic and neutrino counterpart signals.
The case for building future, more sensitive next-generation gravitational-wave detectors is becoming very strong, and technological R&D and design efforts for 3G gravitational-wave detectors may have interesting overlaps with both CERN capabilities and future directions. The 3G concepts have many challenging new features, including: longer arms; going underground; squeezed quantum states of light; lower thermal-noise coatings; low-noise cryogenics; Newtonian-noise cancellation; adaptive controls; new computing capabilities and strategies; and new data-analysis methods.
In late August, coinciding with the end of the second Advanced LIGO observational run, CERN hosted a LIGO–Virgo collaboration meeting. On the final day, a joint meeting between LIGO–Virgo and CERN explored possible synergies between the two fields. It provided strong motivation for next-generation facilities in both particle and gravitational physics and revealed intriguing overlaps between them. On a practical level, the event identified issues facing both communities, such as geology and survey, vacuum and cryogenics, control systems, computing and governance.
The time for R&D, construction and commissioning is expected to be around a decade, and some of the problems to be solved are close to intractable today. It is planned to use cryogenics to bring the mirrors to a temperature of a few kelvin. The mirrors themselves are coated using ion-beam deposition to obtain a controlled reflectivity that must be uniform over areas 1 m in diameter. These mirrors operate in ultra-high vacuum, and residual gas-density fluctuations must be kept minimal along vacuum cavities several tens of kilometres long – the approximate footprint of the 3G scientific infrastructure.
Data storage and analysis is another challenge for both gravitational and particle physicists. Unlike the large experiments at the LHC, which count or measure energy depositions in millions of pixels at the detector level, interferometers continuously sample signals from hundreds of channels, generating a large amount of data consisting of waveforms. Data storage and analysis place major demands on the computing infrastructure, and analysis of the first gravitational-wave events already called on the Grid infrastructure.
Interferometers have to be kept on an accurately controlled working point, with mirrors used for gravitational-wave detection positioned and oriented using a feedback control system, without introducing additional noise. Sensors and actuators are different in particle accelerators but the control techniques are similar.
Comparisons of the science capabilities, costs and technical feasibility for the next generation of gravitational-wave observatories are under active discussion, as is the question of how many 3G detectors will be needed worldwide and how similar or different they need be. Finally, there were discussions of how to form and structure a worldwide collaboration for the 3G detectors and how to manage such an ambitious project – similar to the challenge of building the next big particle-physics project after the LHC.
• Barry Barish, the author of this feature, shared the 2017 Nobel Prize in Physics with Kip Thorne and Rainer Weiss for the discovery of gravitational waves (CERN Courier November 2017 p37).