
Beams back in LHC for final phase of Run 2

Shortly after midday on 30 March, protons circulated in the Large Hadron Collider (LHC) for the first time in 2018. Following its annual winter shutdown for maintenance and upgrades, the machine now enters its seventh year of data taking and its fourth year operating at a centre-of-mass energy of 13 TeV.

The LHC restart, which involves numerous other links in the CERN accelerator chain, went smoothly. At the beginning of March, the first protons were injected into Linac2, and then into the Proton Synchrotron (PS) Booster. On 8 March the PS received beams, followed by the Super Proton Synchrotron (SPS) one week later. In parallel, the teams had been checking all the LHC hardware and safety installations. No fewer than 1560 electrical circuits had to be powered and about 10,000 tests performed before the LHC was deemed ready to accept protons.

The first circulating beams contain just a single bunch of protons, with around 20 times fewer protons than in normal operation; the beam energy is also limited to the SPS injection energy of 450 GeV. Further adjustments and tests were undertaken in early April to allow the energy and density of the bunches to be increased.

Bunching up

As the Courier went to press, a few bunches had been injected and accelerated at full energy for optics and collimator commissioning. The first stable beams with only a few bunches are scheduled for 23 April, but could take place earlier thanks to the good progress made so far. This will be followed by a period of gradual intensity ramp-up, during which the number of bunches will be increased stepwise. Between each step, a formal check and validation will take place. The target is to fill each ring with 2556 bunches, and the experiments will be able to undertake serious data collection as soon as the number rises above 1200 bunches – which is expected in early May.

Since early December 2017, when the CERN accelerator complex entered its end-of-year technical stop, numerous important activities have been completed on the LHC and the other accelerators. Alongside standard maintenance, the LHC injectors underwent significant preparatory work for the LHC Injector Upgrade (LIU) project foreseen for 2019 and 2020 (CERN Courier October 2017 p32). In the LHC, an important activity was the partial warm-up of sector 1-2 to solve the so-called 16L2 issue, in which frozen air from an accidental ingress caused beam instabilities and losses during last year’s run: a total of 7 litres of frozen air was removed from each beam vacuum chamber during the warm-up.

The objective for the 2018 run is to accumulate more data than was collected last year, targeting an integrated luminosity of 60 fb⁻¹ (compared with the 50 fb⁻¹ recorded in 2017). While the collision intensity is being ramped up in the LHC, data taking is already under way at various fixed-target experiments at CERN that are served by beams from the PS Booster, PS and SPS. The first beams for physics at the n_TOF experiment and the PS East Area started on 30 March. The nuclear-physics programme at ISOLDE restarted on 9 April, followed closely by that of the SPS North Area and, later, the Antiproton Decelerator.

2018 is an important year for the main LHC experiments (ALICE, ATLAS, CMS and LHCb) because it marks the last year of Run 2. In December, the accelerator complex will be shut down for a period of two years to allow significant upgrade work for the High-Luminosity LHC (HL-LHC), with the deployment of the LIU project and the start of civil-engineering work. Operation of the HL-LHC will begin in earnest in the mid-2020s, promising an integrated luminosity of 3000 fb⁻¹ by around 2035.

DESY sets out vision for the future

On 20 March, the DESY laboratory in Germany presented its strategy for the coming decade, outlining the areas of science and innovation it intends to focus on. DESY is a member of the Helmholtz Association, a union of 18 scientific-technical and medical-biological research centres in Germany with a workforce of 39,000 and an annual budget of €4.5 billion. The laboratory’s plans for the 2020s include building the world’s most powerful X-ray microscope (PETRA IV), expanding the European X-ray free-electron laser (XFEL), and constructing a new centre for data and computing science.

Founded in 1959, DESY became a leading high-energy-physics laboratory and today remains among the world’s top accelerator centres. Since the closure of the HERA collider in 2007, the lab’s main accelerators have been used to generate synchrotron radiation for research into the structure of matter, while DESY’s particle-physics division carries out experiments at other labs such as those at CERN’s Large Hadron Collider.

Together with other facilities on the Hamburg campus, DESY aims to strengthen its role as a leading international centre for research into the structure, dynamics and function of matter using X-rays. PETRA IV is a major upgrade to the existing light source at DESY that will allow users to study materials and other samples in 100 times more detail than currently achievable, approaching the limit of what is physically possible with X-rays. A technical design report will be submitted in 2021 and first experiments could be carried out in 2026.

Together with the international partners and operating company of the European XFEL, DESY is planning to comprehensively expand this advanced X-ray facility (which starts at the DESY campus and extends 3.4 km northwest). This includes developing the technology to increase the number of X-ray pulses from 27,000 to one million per second (CERN Courier July/August 2017 p18).

As Germany’s most important centre for particle physics, DESY will continue to be a key partner in international projects and to set up an attractive research and development programme. DESY’s Zeuthen site, located near Berlin, is being expanded to become an international centre for astroparticle physics, focusing on gamma-ray and neutrino astronomy as well as on theoretical astroparticle physics. A key contribution to this effort is a new science data-management centre for the planned Cherenkov Telescope Array (CTA), the next-generation gamma-ray observatory. DESY is also responsible for building CTA’s medium-sized telescopes and, as Europe’s biggest partner in the IceCube neutrino telescope at the South Pole, is playing an important role in upgrades to that facility.

The centre for data and computing science will be established at the Hamburg campus to meet the increasing demands of data-intensive research. It will start working as a virtual centre this year and there are plans to accommodate up to six scientific groups by 2025. The centre is being planned together with universities to integrate computer science and applied mathematics.

Finally, the DESY 2030 report lists plans to substantially increase technology transfer to allow further start-ups in the Hamburg and Brandenburg regions. DESY will also continue to develop and test new concepts for building compact accelerators in the future, and is developing a new generation of high-resolution detector systems.

“We are developing the campus in Hamburg together with partners at all levels to become an international port for science. This could involve investments worth billions over the next 15 years, to set up new research centres and facilities,” said Helmut Dosch, chairman of DESY’s board of directors, at the launch event. “The Zeuthen site, which we are expanding to become an international centre for astroparticle physics, is undergoing a similarly spectacular development.”

Flooded LHC data centre back in business

Following severe damage caused by flooding on 9 November 2017, the INFN-CNAF Tier-1 data centre of the Worldwide LHC Computing Grid (WLCG) in Bologna, Italy, has been fully repaired and is back in business crunching LHC data. The incident was caused by the burst of a large high-pressure water pipe in a nearby street, which rapidly flooded the area where the data centre is located. Although the centre was designed to be waterproof against natural events, the volume of water was overwhelming: some 500 m³ of water and mud entered the various rooms, seriously damaging electronic appliances, computing servers, and network and storage equipment. A room hosting four 1.4 MW electrical-power panels was flooded first, leaving the centre without electricity.

The Bologna centre, which is one of 14 Tier-1 WLCG centres located around the world, hosts a good fraction of LHC data and associated computing resources. It is equipped with around 20,000 CPU cores, 25 PB of disk storage, and a tape library presently filled with about 50 PB of data. Offline computing activities for the LHC experiments were immediately affected. About 10% of the servers, disks, tape cartridges and computing nodes were reached by floodwater, and the mechanics of the tape library were also affected.

Despite the scale of the damage, INFN-CNAF personnel were not discouraged, quickly defining a roadmap to recovery and then tackling the affected subsystems one by one. First, the rooms at the centre had to be dried and then meticulously cleaned to remove residual mud. Within a few weeks, new electrical panels were installed, allowing subsystems to be turned back on.

Although all LHC disk-storage systems were reached by the water, the INFN-CNAF personnel were able to recover the data in their entirety, without losing a single bit. This was thanks in part to the available level of redundancy of the disk arrays and to their vertical layout. Wet tape cartridges hosting critical LHC data had to be sent to a specialised laboratory for data recovery.

A dedicated computing farm was quickly set up at the nearby Cineca computing centre and connected to INFN-CNAF via a high-speed 400 Gbps link to enable the centre to reach the required LHC capacity for 2018. During March, a few months after the incident, all LHC experiments were progressively brought back online. Following the successful recovery, INFN is planning to move the centre to a new site in the coming years.

First hints of ultra-rare kaon decay

The NA62 collaboration at CERN has found a candidate event for the ultra-rare decay K⁺ → π⁺νν̄, demonstrating the experiment’s potential to test heavily suppressed corners of the Standard Model (SM).

The SM prediction for the K⁺ → π⁺νν̄ branching fraction is (0.84 ± 0.03) × 10⁻¹⁰. The very small value arises from the underlying coupling between s and d quarks, which occurs only in loops and is suppressed by the couplings of the CKM quark-mixing matrix. The SM prediction for this process is theoretically very clean, so finding even a small deviation would be a strong indicator of new physics.

NA62 was approved a decade ago and builds on a long tradition of kaon experiments at CERN (CERN Courier June 2016 p24). The experiment acts as a kaon factory, producing kaon-rich beams by firing high-energy protons from the Super Proton Synchrotron into a beryllium target and then using advanced Cherenkov detectors and straw trackers to identify and measure the particles (see figure). Following pilot and commissioning runs in 2014 and 2015, the full NA62 detector was installed in 2016, enabling the first analysis of the K⁺ → π⁺νν̄ channel.

Finding one candidate event in a sample of around 1.2 × 10¹¹ events allowed the NA62 team to put an upper limit on the branching fraction of 14 × 10⁻¹⁰ at a confidence level of 95%. The result, first presented at Moriond in March, is thus compatible with the SM prediction, although the statistical errors are too large to probe beyond-SM physics.
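
For illustration, the way a single observed event translates into such a limit can be sketched with a simple Poisson calculation. The code below is a minimal sketch, not the collaboration’s actual statistical treatment (which accounts for expected background and systematic uncertainties), and the single-event sensitivity is a hypothetical input chosen only to show the mechanics.

    # Minimal sketch: 95% CL Poisson upper limit on a signal mean s,
    # given n_obs observed events and (for simplicity) zero background.
    # Solve P(k <= n_obs; s) = 1 - CL for s by bisection.
    import math

    def poisson_cdf(n, mu):
        # P(K <= n) for K ~ Poisson(mu)
        return math.exp(-mu) * sum(mu**k / math.factorial(k) for k in range(n + 1))

    def upper_limit(n_obs, cl=0.95, lo=0.0, hi=50.0):
        target = 1.0 - cl
        for _ in range(100):                 # bisection on a decreasing function
            mid = 0.5 * (lo + hi)
            if poisson_cdf(n_obs, mid) > target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    s_up = upper_limit(n_obs=1)                # ~4.74 events at 95% CL
    ses = 3.0e-10                              # hypothetical single-event sensitivity
    print(f"BF < {s_up * ses:.1e} at 95% CL")  # ~1.4e-9, i.e. 14 x 10^-10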

Several candidate K⁺ → π⁺νν̄ events have previously been reported by the E949 and E787 experiments at Brookhaven National Laboratory in the US, implying a branching fraction of (1.73 ± 1.1) × 10⁻¹⁰ – again consistent, within large errors, with the SM prediction. Whereas the Brookhaven experiments observed kaon decays at rest in a target, NA62 observes them in flight as they travel through a large vacuum tank, creating a cleaner environment with fewer background events.

The NA62 collaboration expects to identify more events in the ongoing analysis of a 20-fold-larger dataset recorded in 2017. In mid-April the experiment began its 2018 operations with the aim of running for a record 218 days. If the SM prediction is correct, the experiment is expected to see about 20 events in the data collected before the end of this year.

“The K⁺ → π⁺νν̄ decay is special because, within the SM, it allows one to extract the CKM element |Vtd| with a small theoretical uncertainty,” explains NA62 spokesperson Augusto Ceccucci. “Developing the necessary experimental sensitivity to be able to observe this decay in-flight has involved a long R&D programme over a period of five years, and this effort is now starting to pay off.”

Antihydrogen spectroscopy enters precision era

The ALPHA collaboration at CERN’s Antiproton Decelerator (AD) has reported the most precise direct measurement of antimatter ever made. The team has determined the spectral structure of the antihydrogen 1S–2S transition with a precision of 2 × 10⁻¹², heralding a new era of high-precision tests between matter and antimatter and marking a milestone in the AD’s scientific programme (CERN Courier March 2018 p30).

Measurements of the hydrogen atom’s spectral structure agree with theoretical predictions at the level of a few parts in 10¹⁵. Researchers have long sought to match this stunning level of precision for antihydrogen, offering unprecedented tests of CPT invariance and searches for physics beyond the Standard Model. Until recently, the difficulty in producing and trapping sufficient numbers of delicate antihydrogen atoms, and in acquiring the necessary optical-laser technology to interrogate their spectral characteristics, kept serious antihydrogen spectroscopy out of reach. Following a major programme by the low-energy-antimatter community at CERN over more than two decades, these obstacles have now been overcome.

“This is real laser spectroscopy with antimatter, and the matter community will take notice,” says ALPHA spokesperson Jeffrey Hangst. “We are realising the whole promise of CERN’s AD facility; it’s a paradigm change.”

ALPHA confines antihydrogen atoms in a magnetic trap and then measures their response to a laser with a frequency corresponding to a specific spectral transition. In late 2016, the collaboration used this approach to measure the frequency of the 1S–2S transition (between the lowest-energy state and the first excited state) of antihydrogen with a precision of 2 × 10⁻¹⁰, finding good agreement with the equivalent transition in hydrogen (CERN Courier January/February 2017 p8).

The latest result from ALPHA takes antihydrogen spectroscopy to the next level, using several laser frequencies detuned slightly below and above the 1S–2S transition frequency in hydrogen. This allowed the team to measure the spectral shape, or spread in colours, of the 1S–2S antihydrogen transition and obtain a more precise measurement of its frequency (see figure). The shape of the spectral line agrees very well with that expected for hydrogen, while the 1S–2S resonance frequency agrees at the level of 5 kHz out of 2.5 × 10¹⁵ Hz. This is consistent with CPT invariance at a relative precision of 2 × 10⁻¹² and corresponds to an absolute energy sensitivity of 2 × 10⁻²⁰ GeV.
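
Those two headline figures follow directly from the 5 kHz agreement. As a back-of-envelope check (a sketch using the known hydrogen 1S–2S frequency and Planck’s constant):

    # Back-of-envelope check of the quoted ALPHA precision figures.
    h_planck = 6.62607015e-34          # Planck constant, J s
    ev_per_joule = 1 / 1.602176634e-19

    delta_nu = 5e3                     # frequency agreement, Hz
    nu_1s2s = 2.466e15                 # hydrogen 1S-2S frequency, Hz (~2.5e15)

    rel_precision = delta_nu / nu_1s2s                       # ~2e-12
    delta_e_gev = h_planck * delta_nu * ev_per_joule / 1e9   # ~2e-20 GeV

    print(f"relative precision ~ {rel_precision:.1e}")
    print(f"energy sensitivity ~ {delta_e_gev:.1e} GeV")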

Although the precision still falls short of that for ordinary hydrogen, the rapid progress made by ALPHA suggests hydrogen-like precision in antihydrogen is now within reach. The collaboration has also used its unique setup at the AD to tackle the hyperfine and other key transitions in the antihydrogen spectrum, with further seminal results expected this year. “When you look at the lineshape, you feel you have to pinch yourself – we are doing real spectroscopy with antimatter!” says Hangst.

Charm oscillations precisely measured by LHCb

Direct searches for particles beyond the Standard Model (SM) have so far come up empty-handed, but perhaps physicists can get luckier with indirect searches. Quantum mechanics allows neutral flavoured mesons to transform (or oscillate) into their anti-meson counterparts and back via weak interactions. Novel particles may contribute to the amplitude that governs such oscillations, thus altering their rate or introducing charge–parity (CP) violating rate differences between mesons and anti-mesons. Depending on the flavour structure of what lies beyond the SM, precision studies of such effects can probe energies up to 10⁵ TeV – far beyond the reach of direct searches at the maximum energy currently achievable at colliders.

Oscillations, first posited in 1954 by Gell-Mann and Pais, have been measured precisely for kaons and beauty mesons. But there is room for improvement for D mesons, which contain a charm quark: neither a nonzero value for the mass difference between the mass eigenstates of neutral D mesons, nor a departure from CP symmetry, has yet been established. Charm oscillations are especially attractive because the D-meson flavour is carried by an up-type quark (i.e. one with an electric charge of +2/3). Charm-meson oscillations therefore probe phenomena complementary to those probed by strange- and beauty-meson oscillations.

LHCb recently determined charm-oscillation parameters using 5 fb⁻¹ of proton–proton collision data collected at the LHC in 2011–2016. About 5–10% of LHC collisions produce charm mesons; approximately 10,000 per second are reconstructable. Oscillations are studied by comparing production and decay flavour (i.e. whether a charm or an anti-charm quark is present) as a function of decay time. The charge of the pion from the strong-interaction decay D*⁺ → D⁰π⁺ determines the flavour at production. The decay flavour is inferred by restricting to K±π∓ final states, because charm (anti-charm) neutral mesons predominantly decay into so-called right-sign K⁻π⁺ (K⁺π⁻) pairs. Hence, a decay-time modulation of the wrong-sign yields of D⁰ → K⁺π⁻ and D̄⁰ → K⁻π⁺ decays indicates oscillations. In addition, differing modulations between charm and anti-charm mesons indicate CP violation. Backgrounds and instrumental effects that induce a decay-time dependence in the wrong-sign yield, or a difference between charm and anti-charm rates, may introduce harmful biases.
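
In the standard small-mixing approximation used in such analyses, the wrong-sign to right-sign ratio grows with decay time (measured in units of the D⁰ lifetime) according to a simple quadratic form. The sketch below encodes that textbook expression; the parameter values are illustrative placeholders, not LHCb’s measured results.

    # Wrong-sign/right-sign ratio for D0 -> K pi versus decay time, in the
    # standard small-mixing approximation:
    #   R(t) ~ R_D + sqrt(R_D) * y' * (t/tau) + (x'^2 + y'^2) / 4 * (t/tau)^2
    # R_D is the doubly-Cabibbo-suppressed to Cabibbo-favoured ratio;
    # x', y' are the rotated mixing parameters.
    import math

    def ws_rs_ratio(t_over_tau, r_d, x_prime, y_prime):
        linear = math.sqrt(r_d) * y_prime * t_over_tau
        quadratic = (x_prime**2 + y_prime**2) / 4 * t_over_tau**2
        return r_d + linear + quadratic

    # Illustrative parameter values (order of magnitude only):
    for t in (0.5, 1.0, 2.0, 4.0):
        r = ws_rs_ratio(t, r_d=3.5e-3, x_prime=5e-3, y_prime=5e-3)
        print(f"t/tau = {t:3.1f}: R = {r:.5f}")

A CP-violation search then amounts to fitting this form separately for mesons produced as charm and as anti-charm, and comparing the fitted parameters.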

LHCb used track-quality, particle-identification, and D⁰ and D*⁺ invariant-mass requirements to isolate a prominent signal of 0.7 million wrong-sign decays overlapping a smooth background. Decays of mesons produced as charm or anti-charm were analysed independently. The wrong-sign yield as a function of decay time was fitted to determine the oscillation parameters. Statistical uncertainties dominate the precision. Systematic effects include biases from signal candidates originating from beauty hadrons, residual peaking backgrounds, and instrumental asymmetries associated with differing K⁺π⁻ and K⁻π⁺ reconstruction efficiencies. With about 10⁻⁴–10⁻⁵ absolute (10% fractional) precision, the results are twice as precise as the previous best results (also by LHCb) and show no evidence of CP violation in charm oscillations.

ATLAS focuses on Higgs-boson decays to vector bosons

Decays of the Higgs boson to vector bosons (WW, ZZ, γγ) provide precise measurements of the boson’s coupling strength to other Standard Model (SM) particles. In new analyses, ATLAS has measured these decays for different production modes using the full 2015 and 2016 LHC datasets recorded at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 36.1 fb⁻¹.

With a predicted branching fraction of 21%, the Higgs-boson decay to two W bosons (H → WW) is the second most common decay mode after the decay to two b quarks. The new analysis follows a similar strategy to the earlier ones carried out using the LHC datasets recorded at 7 and 8 TeV. It focuses on the gluon–gluon fusion (ggF) and vector-boson fusion (VBF) production modes, with the subsequent decay to an electron, a muon and two neutrinos (H → WW → eνμν). The main backgrounds come from SM production of W-boson pairs and top-quark pairs; other backgrounds involve Z → ττ with leptonic τ decays, and single-W production with misidentified leptons from associated jets.

Events are classified according to the number of jets they contain: events with zero or one jet are used to probe ggF production, while events with two or more jets are used to target VBF production. Due to the spin-zero nature of the Higgs boson, the electron and muon are preferentially emitted in the same direction. The ggF analysis exploits this and other kinematic information via a sequence of selection requirements, while the VBF analysis combines lepton and jet variables in a boosted decision tree to separate the Higgs-boson signal from background processes.

The transverse mass of the selected events from the zero- and one-jet signal regions is shown in the left figure, with red denoting the expectation from the Higgs boson and other colours representing background processes. These events are combined with those from the two-jet signal region to derive cross sections times branching fractions for ggF and VBF production of 12.3 +2.3/−2.1 pb and 0.50 +0.30/−0.29 pb, respectively, to be compared with the SM predictions of 10.4 ± 0.6 pb and 0.81 ± 0.02 pb.
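
A convenient way to compare such measurements with theory is the signal strength μ, the ratio of measured to predicted cross section. A rough calculation – symmetrising the asymmetric errors and adding measurement and prediction uncertainties in quadrature, which is only an approximation – gives values consistent with the SM:

    # Rough signal strengths mu = sigma_measured / sigma_SM for H -> WW,
    # symmetrising asymmetric errors and combining in quadrature (approximate).
    import math

    def signal_strength(meas, err_up, err_dn, pred, pred_err):
        meas_err = 0.5 * (err_up + err_dn)    # crude symmetrisation
        mu = meas / pred
        rel = math.hypot(meas_err / meas, pred_err / pred)
        return mu, mu * rel

    for name, args in [("ggF", (12.3, 2.3, 2.1, 10.4, 0.6)),
                       ("VBF", (0.50, 0.30, 0.29, 0.81, 0.02))]:
        mu, err = signal_strength(*args)
        print(f"{name}: mu = {mu:.2f} +/- {err:.2f}")   # ggF ~1.2, VBF ~0.6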

ATLAS also performed a combination of inclusive and differential cross-section measurements using Higgs-boson decays to two photons and two Z bosons, where each Z decays to a pair of oppositely charged electrons or muons. The combination of the two channels allows the study of Higgs-boson production rates versus event properties with unprecedented precision. For example, the measurement of the Higgs-boson rapidity distribution can provide information about the underlying parton density functions. The transverse momentum distribution (figure) is sensitive to the coupling between the Higgs boson and light quarks at low transverse momentum, and to possible couplings to non-SM particles at high values. The measured cross sections are found to be consistent with SM predictions.

ALICE closes in on parton energy loss

In a new publication submitted to the Journal of High Energy Physics, the ALICE collaboration has reported transverse momentum (pT) spectra of charged hadrons in proton–proton (pp), proton–lead (pPb) and lead–lead (PbPb) collisions at an energy of 5.02 TeV per nucleon pair. The results shed further light on the dense quark-gluon plasma (QGP) thought to have existed shortly after the Big Bang.

At high transverse momentum, hadrons originate from the fragmentation of partons produced in hard-scattering processes. These processes are well understood in pp collisions and can be modelled using perturbative quantum chromodynamics.

In PbPb collisions, the spectra are modified by the energy loss that the partons suffer when propagating in the QGP. Proton–lead collisions serve as a baseline for initial-state effects such as the modification of the gluon density of the nucleons of colliding lead nuclei.

To characterise the change of the spectra in nuclear collisions with respect to the expectation from pp collisions, the nuclear modification factors R_PbPb (R_pPb) are calculated by dividing the pT spectra measured in PbPb (pPb) collisions by the spectra measured in pp collisions, scaled by the number of binary nucleon–nucleon collisions in the PbPb (pPb) collisions (see figure).
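
As a sketch of this definition (with made-up numbers purely for illustration):

    # Nuclear modification factor for one pT bin:
    #   R_AA(pT) = (dN/dpT)_AA / (N_coll * (dN/dpT)_pp)
    # R_AA = 1 means the nuclear collision behaves like a superposition of
    # independent nucleon-nucleon collisions; R_AA < 1 at high pT signals
    # suppression, e.g. from parton energy loss in the quark-gluon plasma.

    def r_aa(yield_aa, yield_pp, n_coll):
        return yield_aa / (n_coll * yield_pp)

    # Made-up numbers for a single high-pT bin:
    print(r_aa(yield_aa=40.0, yield_pp=0.5, n_coll=400))   # 0.2 -> suppressed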

The nuclear modification factor in proton–lead collisions is consistent with unity at high transverse momentum. This shows that initial-state effects from the parton density in the lead nucleus are small and that the strong suppression observed in PbPb collisions is caused by final-state parton energy loss in the QGP. The new results, with higher statistics, have much-improved systematic uncertainties compared with the earlier publications based on Run 1 data. This is possible thanks to improvements in particle reconstruction and its description in Monte Carlo simulations, as well as to data-driven corrections based on identified particles.

The suppression in PbPb collisions at 5.02 TeV is found to be similar to that at 2.76 TeV, despite the harder spectrum at the higher energy; this indicates a stronger parton energy loss, and hence a larger energy density of the medium, at the higher energy.

Theoretical models are able to describe the main features of the ALICE data; the improved precision of the measurements will allow researchers to constrain theoretical uncertainties further and to determine transport coefficients of the QGP. The upcoming PbPb run scheduled for November this year, together with the large pp reference sample collected at the end of 2017, will improve the statistical precision substantially and further extend the covered transverse-momentum range.

CMS observes rarest Z boson decay mode

The amazing performance of the LHC provides CMS with a large sample of Z bosons. With such high statistics, the CMS collaboration can now probe rare decay channels that were not accessible to experiments at the former Large Electron Positron (LEP) collider. One of these channels, first studied theoretically in the early 1990s, is the decay of the Z boson to a J/ψ meson and two additional leptons. Theoretical calculations of this process, illustrated in the top figure, predict a branching fraction of (6.7–7.7) × 10⁻⁷.

The new analysis was performed using proton–proton collision data collected during 2016, corresponding to an integrated luminosity of 35.9 fb⁻¹. To separate signal and background events, a two-dimensional unbinned maximum-likelihood fit was used, which exploits as discriminating variables the invariant masses of the reconstructed J/ψ and Z candidates. Because prompt J/ψ decays can only be partially separated from ψ(2S) → J/ψ X decays, the sum of the two modes is denoted ψ. The decay modes Z → ψ μ⁺μ⁻ and Z → ψ e⁺e⁻ were searched for, yielding 13 and 11 reconstructed candidates in the two channels, respectively. The significance of the Z → ψ ℓ⁺ℓ⁻ observation (where ℓ = μ, e) is greater than five standard deviations.

Using the Z → μ⁺μ⁻μ⁺μ⁻ decay mode as a reference sample, and after removing the ψ(2S) → J/ψ X contribution, the branching-fraction ratio B(Z → J/ψ ℓ⁺ℓ⁻)/B(Z → μ⁺μ⁻μ⁺μ⁻) in the fiducial phase space of the CMS detector is measured to be 0.70 ± 0.18 (stat) ± 0.05 (syst), assuming no J/ψ polarisation.

Extrapolating from the fiducial volume to the full phase space, and assuming that the extrapolation uncertainties of the two channels cancel in the ratio, a qualitative estimate of B(Z → J/ψ ℓ⁺ℓ⁻) can be extracted. The measured value of approximately 8 × 10⁻⁷ is consistent with the prediction of the Standard Model.
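
The arithmetic behind that estimate is straightforward. The sketch below multiplies the measured ratio by a reference value for B(Z → μ⁺μ⁻μ⁺μ⁻) of about 1.2 × 10⁻⁶ – an assumed, approximate input here – to recover the quoted ~8 × 10⁻⁷ figure:

    # Sketch: convert the measured fiducial ratio into an absolute
    # branching fraction. The reference value below is an assumed input.
    import math

    ratio, stat, syst = 0.70, 0.18, 0.05
    bf_z_4mu = 1.2e-6        # assumed B(Z -> mu+mu-mu+mu-), approximate

    bf = ratio * bf_z_4mu                        # ~8.4e-7
    bf_err = math.hypot(stat, syst) * bf_z_4mu   # ratio uncertainty propagated
    print(f"B(Z -> J/psi l+l-) ~ {bf:.1e} +/- {bf_err:.1e}")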

This is the first observation of this decay mode, and it is the rarest Z-decay channel observed to date. With this analysis, CMS has opened a new era of rare Z-decay measurements. Looking forward, the full Run 2 dataset can yield a more precise measurement of this decay’s branching fraction. This is particularly important because the process is a background to the even rarer decay of a Higgs boson into a J/ψ and a lepton pair, and rare decays are a rich hunting ground for new physics.

Hubble expansion discrepancy deepens

In the 1920s, Edwin Hubble discovered that the universe is expanding by showing that more distant galaxies recede faster from Earth than nearby ones. Hubble’s measurements of the expansion rate, now called the Hubble constant, had relatively large errors, but astronomers have since found ways of measuring it with increasing precision. One way is direct and entails measuring the distance to far-away galaxies, whereas another is indirect and involves using cosmic microwave background (CMB) data. However, over the last decade a mismatch between the values derived from the two methods has become apparent. Adam Riess from the Space Telescope Science Institute in Baltimore, US, and colleagues have now made a more precise direct measurement that reinforces the mismatch and could signal new physics.

Riess and co-workers’ new value relies on improved determinations of the distances to far-away galaxies, and builds on previous work by the team. These rest on more precise observations of type Ia supernovae within the galaxies. Such supernovae have a known luminosity profile, so their distances from Earth can be determined from how bright they appear. But their luminosity needs to be calibrated – a process that requires an exact measurement of their distance, which is typically rather large.

To calibrate their luminosity, Riess and his team used Cepheid stars, which are closer to Earth than type Ia supernovae. Cepheids have an oscillating apparent brightness, the period of which is directly related to their luminosity, so their apparent brightness can also be used to measure their distance. Riess and colleagues measured the distance to Cepheids in the Milky Way using parallax measurements from the Hubble Space Telescope, which determine the apparent shift of the stars against the background sky as the Earth moves to the other side of the Sun. The researchers measured this minute shift for several Cepheids, giving a direct measurement of their distance. The team then used this measurement to estimate the distance to distant galaxies containing such stars, which in turn can be used to calibrate the luminosity of supernovae in those galaxies. Finally, they used this calibration to determine the distance to even more distant galaxies with supernovae. Using such a “distance ladder”, the team obtained a value for the Hubble constant of 73.5 ± 1.7 km s⁻¹ Mpc⁻¹. This value is more precise than the 73.2 ± 1.8 km s⁻¹ Mpc⁻¹ value obtained by the team in 2016, and it is 3.7 sigma away from the 66.9 ± 0.6 km s⁻¹ Mpc⁻¹ value derived from CMB observations made by the Planck satellite.
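
The quoted 3.7 sigma tension is simply the difference between the two values divided by the quadrature sum of their uncertainties, assuming independent Gaussian errors:

    # Significance of the Hubble-constant tension, assuming independent
    # Gaussian uncertainties on the two determinations.
    import math

    h0_ladder, err_ladder = 73.5, 1.7   # km/s/Mpc, distance ladder
    h0_cmb, err_cmb = 66.9, 0.6         # km/s/Mpc, Planck CMB

    sigma = (h0_ladder - h0_cmb) / math.hypot(err_ladder, err_cmb)
    print(f"tension ~ {sigma:.1f} sigma")   # ~3.7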

Riess and colleagues’ results therefore reinforce the discrepancy between the two methods. Although each method is complex and may thus be subject to error, the discrepancy is now at a level where a coincidence seems unlikely. It is difficult to imagine that systematic errors in the distance-ladder method are the root cause of the tension, says the team. Figuring out the nature of the discrepancy is pivotal because the Hubble constant is used to calculate several cosmological quantities, such as the age of the universe. If the discrepancy is not due to errors, explaining it will require new physics beyond the current standard model of cosmology. Future data could also help to identify its source: upcoming Cepheid data from ESA’s Gaia satellite could reduce the uncertainty in the distance-ladder value, and new measurements of the expansion rate using a third method, based on observations of gravitational waves, could throw new light on the problem.
