CMS inches to the top of the Higgs-coupling mountain

The discovery of the Higgs boson in 2012, a fundamentally new type of scalar particle, has provided the particle-physics community with a new tool with which to search for new physics beyond the Standard Model (SM). Originally discovered via its decay into two photons or four leptons, the SM Higgs boson is also predicted to interact with fermions with coupling strengths proportional to the fermion masses. The top quark, being the heaviest elementary fermion known, has the largest coupling to the Higgs boson. Precise measurements of such processes therefore provide a sensitive means to search for new physics.
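
For orientation, the proportionality referred to here is the SM Yukawa coupling, fixed by the fermion mass and the Higgs vacuum expectation value v ≈ 246 GeV; using standard values (m_t ≈ 173 GeV, not quoted in the text above), the top coupling comes out close to unity:

$$ y_f = \frac{\sqrt{2}\,m_f}{v}, \qquad y_t \approx \frac{\sqrt{2}\times 173\ \text{GeV}}{246\ \text{GeV}} \approx 1 . $$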

The top-Higgs coupling is crucial for the production of Higgs bosons at the LHC, since the process with the largest production cross-section (gluon–gluon fusion) proceeds via a virtual top-quark loop. In this sense, Higgs production itself provides indirect evidence for the top-Higgs coupling. Direct experimental access to the top-Higgs coupling, on the other hand, comes from the study of the associated production of a Higgs boson and a top-quark pair. This production mode, while proceeding at a rate about 100 times smaller than gluon fusion, provides a highly distinctive signature in the detector, which includes leptons and/or jets from the decay of the two top quarks.

Combined ATLAS and CMS results on ttH production based on the LHC’s Run 1 data set showed an intriguing excess: the measured rate was above the SM prediction with a statistical significance corresponding to 2.3σ. With the increase of the LHC energy from 8 to 13 TeV for Run 2, the ttH production cross-section is expected to increase by a factor four – putting the ttH analyses in the crosshairs of the CMS collaboration in its search for new physics.

Compared with the first evidence for Higgs production in 2012, namely Higgs-boson decays into clean final states containing two photons or four leptons, the ttH process is much rarer, and the expected signal yields in these modes are just a few events. For this reason, searches for ttH production have been driven by the higher sensitivity achieved in Higgs decay modes with larger branching fractions, such as H → bb, H → WW and H → ττ. The search in the H → bb final state is challenging because of the large background from the production of top-quark pairs in association with jets, and the results are currently limited by systematic and theoretical uncertainties.

A compromise between expected signal yield and background uncertainty can be obtained from final states containing leptons. Such analyses target Higgs decays to WW*, ZZ* and ττ pairs, and make use of events with two same-sign leptons or three or more light leptons produced in association with b-quark jets from top-quark decays. Multivariate techniques allow the background due to jets misidentified as leptons to be reduced, while similar algorithms provide discrimination against the irreducible background from tt + W and tt + Z production. Events with reconstructed hadronic τ-lepton decays are studied separately.

The latest results of ttH searches at CMS (see figure) show that we are on the verge of measuring this crucial process with sufficient precision to confirm or disprove the previously observed excess. With a larger data set, it should be possible to establish clear evidence for ttH production by the end of Run 2.

LHCb brings cosmic collisions down to Earth

In an effort to improve our understanding of cosmic rays, the LHCb collaboration has generated high-energy collisions between protons and helium nuclei similar to those that take place when cosmic rays strike the interstellar medium. Such collisions are expected to produce a certain number of antiprotons, and are currently one of the possible explanations for the small fraction of antiprotons (about one per 10,000 protons) observed in cosmic rays outside of the Earth’s atmosphere. By measuring the antimatter component of cosmic rays, we can potentially unveil new high-energy phenomena, notably a possible contribution from the annihilation or decay of dark-matter particles.

In the last few years, space-borne detectors devoted to the study of cosmic rays have dramatically improved our knowledge of the antimatter component. Data published last year from the Alpha Magnetic Spectrometer (AMS-02), which is attached to the International Space Station and operated from a control centre at CERN, are currently the most precise and provide the antiproton-to-proton fraction up to an antiproton energy of 350 GeV (CERN Courier December 2016 p26). The interpretation of these data is currently limited by poor knowledge of the antiproton production cross-sections, however, and no data are available so far on antiproton production in proton–helium collisions.

LHCb physicists were able to mimic cosmic collisions between 6.5 TeV protons and at-rest helium nuclei

LHCb’s recently installed internal gas target “SMOG” (System for Measuring Overlap with Gas) provides the unique possibility to study fixed-target proton collisions at the unprecedented energy offered by the LHC, with the forward geometry of the LHCb detector well suited to this configuration. The SMOG device allows a tiny amount of a noble gas to be injected inside the LHC beam pipe near the LHCb vertex-detector region. The gas pressure is less than a billionth of atmospheric pressure so as not to perturb LHC operations, but this is sufficient to observe hundreds of millions of beam–gas collisions per hour. By operating SMOG with helium, LHCb physicists were able to mimic cosmic collisions between 6.5 TeV protons and at-rest helium nuclei – a configuration that closely matches the energy scale of the antiproton production observed by space-borne experiments. Data-taking was carried out during May 2016 and lasted just a few hours.
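
To see why this fixed-target configuration matches the energy scale relevant for cosmic-ray antiproton production, one can use standard fixed-target kinematics for a 6.5 TeV proton striking a nucleon at rest (a back-of-the-envelope estimate, not a number quoted above):

$$ \sqrt{s_{NN}} \simeq \sqrt{2\,E_p\,m_N c^2} \approx \sqrt{2 \times 6500\ \text{GeV} \times 0.94\ \text{GeV}} \approx 110\ \text{GeV}. $$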

LHCb’s advanced particle-identification capabilities were used to determine the yields of antiprotons, among other charged particles, in the momentum range 12–110 GeV. A novel method has been developed to precisely determine the amount of gas in the target: events are counted in which a single atomic electron, elastically scattered by the beam, is deflected into the detector acceptance. Owing to their distinct signature, these events could be isolated from the much more abundant interactions with the helium nuclei. The cross-section for proton–electron elastic scattering is very well known and allows the density of atomic electrons to be computed.
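
Schematically, the elastic proton–electron sample fixes the integrated luminosity, which in turn normalises the antiproton yield; a sketch of the logic, with σ_pe the well-known elastic cross-section, N the event counts and ε the respective efficiencies (symbols introduced here only for illustration):

$$ \mathcal{L} \simeq \frac{N_{pe}}{\varepsilon_{pe}\,\sigma_{pe}}, \qquad \sigma(p\,\mathrm{He} \to \bar{p}\,X) \simeq \frac{N_{\bar{p}}}{\varepsilon_{\bar{p}}\,\mathcal{L}}. $$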

The result for antiproton production has been compared with the most popular cosmic-ray models describing soft hadronic collisions, revealing significant disagreements with their predictions. The uncertainty of the LHCb measurement is below 10% for most of the accessible phase space, and the result is expected to contribute to the continuing progress in turning high-energy astroparticle physics into a high-precision science.

ALICE reveals dominance of collective flow

The study of anisotropic flow in heavy-ion collisions at the LHC, which measures the momentum anisotropy of the final-state particles, has been effective in characterising the extreme states of matter produced in such collisions. Much evidence of collective anisotropic flow and the production of a quark–gluon plasma (QGP) in heavy-ion collisions has already been reported. However, ALICE recently devised a new technique to test the collective nature of the flow using measurements of differential transverse-momentum correlators, P2. These quantities measure the degree of correlation between the momenta of produced particles and are used to probe the evolution of the QGP fireball produced in heavy-ion collisions. For specific dynamic processes, one can derive how the shape and strength of momentum correlations are related to those of particle-number correlations.

Collective-flow models posit that the enormous energy density achieved in heavy-ion collisions generates large pressure gradients that drive the expansion of the QGP fireball. In non-central collisions, the nuclear overlap region is anisotropic and approximately almond shaped, with the longer axis oriented perpendicular to the reaction plane formed by the impact parameter and the beam direction. This produces pressure gradients that are largest in the reaction plane. Particle production thus becomes an anisotropic and collective process mostly determined by the orientation relative to the reaction plane. The anisotropy in the transverse plane is quantified in terms of Fourier coefficients (vn), whose values depend on the initial spatial anisotropy of the fireball as well as pressure gradients. If the geometry of the system and the pressure gradients dominate correlations of produced particles, one expects a specific scaling relation between vn[P2] coefficients of momentum correlations and the regular flow coefficients vn. The presence of other sources of particle correlation, generically called non-flow, are expected to break this simple scaling, however.
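
The coefficients vn mentioned here are the usual Fourier harmonics of the azimuthal distribution of produced particles with respect to the reaction-plane angle Ψ_R (the standard definition, not specific to this analysis):

$$ \frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\big[n(\varphi - \Psi_R)\big], \qquad v_n = \langle \cos n(\varphi - \Psi_R) \rangle . $$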

ALICE has now found that the scaling relation between vn[P2] and the regular vn coefficients is well verified for particle pairs with a minimum separation of 0.9 units of rapidity (figure, right panel), but breaks down for shorter intervals (left panel), where non-flow effects such as resonance decays and jet fragmentation play an important role. The observed scaling at separations greater than 0.9 units thus confirms that collective flow determined by the geometry of the collision system dominates the correlation dynamics in heavy-ion collisions at the LHC. ALICE also observed, in the five per cent most central collisions, that the third-order coefficients v3[P2] are larger than the second-order coefficients v2[P2]. Such a coefficient hierarchy is also observed in particle-number correlations, but only for the two per cent most central collisions. The observable P2 thus provides better sensitivity to the initial-state fluctuations that engender finite third-harmonic values.

Dark-matter surprise in early universe

New observations using ESO’s Very Large Telescope (VLT) in Chile indicate that massive, star-forming galaxies in the early universe were dominated by normal, baryonic matter. This is in stark contrast to present-day galaxies, where the effects of dark matter on the rotational velocity of spiral galaxies seem to be much greater. The surprising result, published in Nature by an international team of astronomers led by Reinhard Genzel at the Max Planck Institute for Extraterrestrial Physics in Germany, suggests that dark matter was less influential in the early universe than it is today.

Whereas normal matter in the cosmos can be viewed as brightly shining stars, glowing gas and clouds of dust, dark matter does not emit, absorb or reflect light. This elusive, transparent matter can only be observed via its gravitational effects, one of which is a higher speed of rotation in the outer parts of spiral galaxies. The disc of a spiral galaxy rotates with a velocity of hundreds of kilometres per second, making a full revolution in a period of hundreds of millions of years. If a galaxy’s mass consisted entirely of normal matter, the sparser outer regions should rotate more slowly than the dense regions at the centre. But observations of nearby spiral galaxies show that their inner and outer parts actually rotate at approximately the same speed.
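
The expectation for a purely baryonic galaxy follows from Newtonian dynamics: equating the centripetal and gravitational accelerations of a star on a circular orbit of radius r (a textbook estimate, treating the enclosed mass M(<r) as roughly spherical) gives

$$ \frac{v^2(r)}{r} = \frac{G\,M(<r)}{r^2} \;\Rightarrow\; v(r) = \sqrt{\frac{G\,M(<r)}{r}} , $$

so a flat rotation curve, v ≈ constant, requires the enclosed mass to keep growing as M(<r) ∝ r well beyond the visible disc.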

It is widely accepted that the observed “flat rotation curves” indicate that spiral galaxies contain large amounts of non-luminous matter in a halo surrounding the galactic disc. This traditional view is based on observations of numerous galaxies in the local universe, but is now challenged by the latest observations of galaxies in the distant universe. The rotation curves of six massive, star-forming galaxies at the peak of galaxy formation, 10 billion years ago, were measured with the KMOS and SINFONI instruments on the VLT, and the results are intriguing. Unlike in local spiral galaxies, the outer regions of these distant galaxies seem to be rotating more slowly than regions closer to the core – suggesting they contain less dark matter than expected. The same decreasing velocity trend away from the centres of the galaxies is also found in a composite rotation curve that combines data from around 100 other distant galaxies, which have too weak a signal for an individual analysis.

Genzel and collaborators identify two probable causes for the unexpected result. Besides a stronger dominance of normal matter with the dark matter playing a much smaller role, they also suggest that early disc galaxies were much more turbulent than the spiral galaxies we see in our cosmic neighbourhood. Both effects seem to become more marked as astronomers look further back in time into the early universe. This suggests that three to four billion years after the Big Bang, the gas in galaxies had already efficiently condensed into flat, rotating discs, while the dark-matter halos surrounding them were much larger and more spread out. Apparently it took billions of years longer for dark matter to condense as well, so its dominating effect is only seen on the rotation velocities of galaxy discs today.

This explanation is consistent with observations showing that early galaxies were much more gas-rich and compact than today’s galaxies. Embedded in a wider dark-matter halo, their rotation curves would be only weakly influenced by its gravity. It would therefore be interesting to explore whether the suggestion of a slow condensation of dark-matter halos could help shed light on this mysterious component of the universe.

Editor’s note

After 13 years as the Courier’s Astrowatch contributor, astronomer Marc Türler is moving to pastures new. We thank him for his numerous lively columns keeping readers up to date with the latest astro results.

Euclid to pinpoint nature of dark energy

The accelerating expansion of the universe, first recognised 20 years ago, has been confirmed by numerous observations. Remarkably, whatever the source of the acceleration, it is the primary driver of the dynamical evolution of the universe in the present epoch. That we do not know the nature of this so-called dark energy is one of the most important puzzles in modern fundamental physics. Whether due to a cosmological constant, a new dynamical field, a deviation from general relativity on cosmological scales, or something else, dark energy has triggered numerous theoretical models and experimental programmes. Physicists and astronomers are convinced that pinning down the nature of this mysterious component of the universe will lead to a revolution in physics.

Based on the current lambda-cold-dark-matter (ΛCDM) model of cosmology – which has only two ingredients: general relativity with a nonzero cosmological constant, and cold dark matter – we identify three dominant components of the universe: normal baryonic matter, which makes up only 5% of the total energy density; dark matter (27%); and dark energy (68%). This model is extremely successful in fitting observations, such as the Planck mission’s measurements of the cosmic microwave background, but it gives no clues about the nature of the dark-matter or dark-energy components. It should also be noted that the assumption of a nonzero cosmological constant, implying a nonzero vacuum energy density, leads to what has been called the worst prediction ever made in physics: its value as measured by astronomers falls short of what is predicted by the Standard Model of particle physics by well over 100 orders of magnitude.
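
The scale of that mismatch can be illustrated by comparing the measured dark-energy density with a naive vacuum-energy estimate cut off at the Planck scale (standard order-of-magnitude values, not figures taken from the text above):

$$ \rho_\Lambda^{\rm obs} \sim 10^{-47}\ \text{GeV}^4, \qquad \rho_{\rm vac}^{\rm naive} \sim M_{\rm Pl}^4 \sim 10^{76}\ \text{GeV}^4 , $$

a discrepancy of roughly 120 orders of magnitude.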

It is only by combining several complementary probes that the source of the acceleration of the universe can be understood.

Depending on what form it takes, dark energy alters the dynamical evolution of the universe over its expansion history, as predicted by cosmological models. Specifically, dark energy modifies the expansion rate as well as the processes by which cosmic structures form. Whether the acceleration is produced by a new scalar field or by modified laws of gravity affects these observables differently, and the two effects can be decoupled using several complementary cosmological probes. Type Ia supernovae and baryon acoustic oscillations (BAO) are very good probes of the expansion rate, for instance, while gravitational lensing and peculiar velocities of galaxies (as revealed by their redshift) are very good probes of gravity and the growth rate of structures (see panel “The geometry of the universe” below). It is only by combining several complementary probes that the source of the acceleration of the universe can be understood. The changes are extremely small and are currently undetectable at the level of individual galaxies, but by observing many galaxies and treating them statistically it is possible to accurately track the evolution and therefore get a handle on what dark energy physically is. This demands new observing facilities capable of both measuring individual galaxies with high precision and surveying large regions of the sky to cover all cosmological scales.

Euclid science parameters

Euclid is a new space-borne telescope under development by the European Space Agency (ESA). It is a medium-class mission of ESA’s Cosmic Vision programme and was selected in October 2011 as the first-priority cosmology mission of the next decade. Euclid will be launched at the end of 2020 and will measure the accelerating expansion of our universe from the time it kicked in around 10 billion years ago to our present epoch, using four cosmological probes that can explore both dark-energy and modified-gravity models. It will capture a 3D picture of the distribution of the dark and baryonic matter from which the acceleration will be measured to per-cent-level accuracy, and measure possible variations in the acceleration to 10% accuracy, improving our present knowledge of these parameters by a factor 20–60. Euclid will observe the dynamical evolution of the universe and the formation of its cosmic structures over a sky area covering more than 30% of the celestial sphere, corresponding to about five per cent of the volume of the observable universe.

The dark-matter distribution will be probed via weak gravitational-lensing effects on galaxies. Gravitational lensing by foreground objects slightly modifies the shape of distant background galaxies, producing a distortion that directly reveals the distribution of dark matter (see panel “Tracking cosmic structure” below). The way such lensing changes as a function of look-back time, due to the continuing growth of cosmic structure from dark matter, strongly depends on the accelerating expansion of the universe and turns out to be a clear signature of the amount and nature of dark energy. Spectroscopic measurements, meanwhile, will enable us to determine tiny local deviations of the redshift of galaxies from their expected value derived from the general cosmic expansion alone (see image below). These deviations are signatures of peculiar velocities of galaxies produced by the local gravitational fields of surrounding massive structures, and therefore represent a unique test of gravity. Spectroscopy will also reveal the 3D clustering properties of galaxies, in particular baryon acoustic oscillations.
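
The redshift deviations referred to here are, to first order, just the Doppler shift from the galaxy’s peculiar velocity superimposed on the cosmological redshift (the usual textbook relation, quoted only for illustration):

$$ 1 + z_{\rm obs} = (1 + z_{\rm cos})\left(1 + \frac{v_{\rm pec}}{c}\right) \;\Rightarrow\; \delta z \approx (1 + z_{\rm cos})\,\frac{v_{\rm pec}}{c} , $$

so a peculiar velocity of a few hundred km/s shifts the observed redshift at the 10⁻³ level.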

Together, weak-lensing and spectroscopy data will reveal signatures of the physical processes responsible for the expansion and the hierarchical formation of structures and galaxies in the presence of dark energy. A cosmological constant, a new dark-energy component or deviations to general relativity will produce different signatures. Since these differences are expected to be very small, however, the Euclid mission is extremely demanding scientifically and also represents considerable technical, observational and data-processing challenges.

By further analysing the Euclid data in terms of power spectra of galaxies and dark matter and a description of massive nonlinear structures like clusters of galaxies, Euclid can address cosmological questions beyond the accelerating expansion. Indeed, we will be able to address any topic related to power spectra or non-Gaussian properties of galaxies and dark-matter distributions. The relationship between the light- and dark-matter distributions of galaxies, for instance, can be derived by comparing the galaxy power spectrum as derived from spectroscopy with the dark-matter power spectrum as derived from gravitational lensing. The physics of inflation can then be explored by combining the non-Gaussian features observed in the dark-matter distribution in Euclid data with the Planck data. Likewise, since Euclid will map the dark-matter distribution with unprecedented accuracy, it will be sensitive to subtle features produced by neutrinos and thereby help to constrain the sum of the neutrino masses. On these and other topics, Euclid will provide important information to constrain models.

Euclid’s science objectives translate into stringent performance requirements.

The definition of Euclid’s science cases, the development of the scientific instruments and the processing and exploitation of the data are under the responsibility of the Euclid Consortium (EC) and carried out in collaboration with ESA. The EC brings together about 1500 scientists and engineers in theoretical physics, particle physics, astrophysics and space astronomy from around 200 laboratories in 14 European countries, Canada and the US. Euclid’s science objectives translate into stringent performance requirements. Mathematical models and detailed, complete simulations of the mission were used to derive the full set of requirements for the spacecraft pointing and stability, the telescope, the scientific instruments, the data-processing algorithms, the sky survey and the system calibrations. Euclid’s performance requirements can be broadly grouped into three categories: image quality, and radiometric and spectroscopic performance. The spectroscopic performance in particular places stringent demands on the ground-processing algorithms and requires a high level of control over cleanliness during assembly and launch.

Dark-energy payload

The Euclid satellite consists of a service module (SVM) and a payload module (PLM), developed by ESA’s industrial contractors Thales Alenia Space of Turin and Airbus Defence and Space of Toulouse, respectively. The two modules are substantially thermally and structurally decoupled to ensure that the extremely rigid and cold (around 130 K) optical bench located in the PLM is not disturbed by the warmer (290 K±20 K) and more flexible SVM. The SVM comprises all the conventional spacecraft subsystems and also hosts the instrument’s warm electronics units. The Euclid image-quality requirements demand very precise pointing and minimal “jitter”, while the survey requirements call for fast and accurate movements of the satellite from one field to another. The attitude and orbit control system consists of several sensors to provide sub-arc-second stability during an exposure time, and cold gas thrusters with micronewton resolution are used to actuate the fine pointing. Three star trackers provide the absolute inertial attitude accuracy. Since the trackers are mounted on the SVM, which is separate from the telescope structure and thus subject to thermo-elastic deformation, the fine guidance system is located on the same focal plane of the telescope and endowed with absolute pointing capabilities based on a reference star catalogue.

The PLM is designed to provide an extremely stable detection system enabling the sharpest possible images of the sky. The size of the point spread function (PSF), which is the image of a point source such as an unresolved star, closely resembles the Airy disc, the theoretical limit of the optical system. The PSF of Euclid images is comparable to that of the Hubble Space Telescope, considering Euclid’s smaller primary mirror, and is more than three times smaller than what can be achieved by the best ground-based survey telescopes under optimum viewing conditions. The telescope is composed of a 1.2 m-diameter three-mirror “anastigmatic Korsch” arrangement that feeds two instruments: a wide-field visible imager (VIS) for the shape measurement of galaxies, and a near-infrared spectrometer and photometer (NISP) for their spectroscopic and photometric redshift measurements. An important PLM design driver is to maintain a high and stable image quality over a large field of view. Building on the heritage of previous European high-stability telescopes such as Gaia, which is mapping the stars of the Milky Way with high precision, all mirrors, the telescope truss and the optical bench are made of silicon carbide, a ceramic material that combines extreme stiffness with very good thermal conduction. The PLM structure is passively cooled to a stable temperature of around 130 K, and a secondary-mirror mechanism will be employed to refocus the telescope image on the VIS detector plane after launch and cool-down.

The VIS instrument receives light in one broad visible band covering the wavelength range 0.55–0.90 μm. To avoid additional image distortions, it has no imaging optics of its own and is equipped with a camera made up of 36 4 k × 4 k-pixel CCDs with a pixel scale of 0.1 arc second that must be aligned to a precision better than 15 μm over a distance of 30 cm. Pixel-wise, the VIS camera is the second largest camera that will be flown in space after Gaia’s and will produce the largest images ever generated in space. Unlike Gaia, VIS will compress and transmit all raw scientific images to Earth for further data processing. The instrument is capable of measuring the shapes of about 55,000 galaxies per image field of 0.5 square degrees. The NISP instrument, on the other hand, provides near-infrared photometry in the wavelength range 0.92–2.0 μm and has a slit-less spectroscopy mode equipped with three identical grisms (grating prisms) covering the wavelength range 1.25–1.85 μm. The grisms are mounted in different orientations to separate overlapping spectra of neighbouring objects, and the NISP device is capable of delivering redshifts for more than 900 galaxies per image field. The NISP focal plane is equipped with 16 near infrared HgCdTe detector arrays of 2 k × 2 k pixels with 0.3 arcsec pixels, which represents the largest near-infrared focal plane ever built for a space mission.

The exquisite accuracy and stability of Euclid’s instruments will provide certainty that any observed galaxy-shape distortions are caused by gravitational lensing and are not a result of artefacts in the optics. The telescope will deliver a field of view of more than 0.5 square degrees, which is an area comparable to two full Moons, and the flat focal plane of the Korsch configuration places no extra requirements on the surface shape of the sensors in the instruments. As the VIS and NISP instruments share the same field of view, Euclid observations can be carried out through both channels in parallel. Besides the Euclid satellite data, the Euclid mission will combine the photometry of the VIS and NISP instruments with complementary ground-based observations from several existing and new telescopes equipped with wide-field imaging or spectroscopic instruments (such as CFHT, ESO/VLT, Keck, Blanco, JST and LSST). These combined data will be used to derive an estimate of redshift for the two billion galaxies used for weak lensing, and to decouple coherent weak gravitational-lensing patterns from intrinsic alignments of galaxies. Organising the ground-based observations over both hemispheres and making these data compatible with the Euclid data turns out to be a very complex operation that involves a huge data volume, even bigger than the Euclid satellite data volume.

Ground control

One Euclid field of 0.5 square degrees is observed in a sequence lasting about 1 hour and 15 minutes, and the VIS and NISP instruments generate around 520 Gb and 240 Gb of compressed data per day, respectively. All raw science data are transmitted to the ground via a high-bandwidth link. Even though the nominal mission will last for six years, mapping out 36% of the sky at the required sensitivity and accuracy within this time requires large amounts of data to be transmitted, at a rate of around 850 Gb/day during just four hours of contact with the ground station. The complete processing pipeline from Euclid’s raw data to the final data products is a large IT project involving a few hundred software engineers and scientists, and has been broken down into functions handled by almost a dozen separate expert groups. A highly varied collection of data sets must be homogenised for subsequent combination: data from different ground- and space-based telescopes, visible and near-infrared data, and slit-less spectroscopy. Very precise and accurate galaxy shapes must be measured, yielding a two-orders-of-magnitude improvement with respect to current analyses.
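
Taking the figures quoted above at face value, a quick back-of-the-envelope check of the survey cadence and downlink rate can be made (a rough sketch only, using the numbers in this article rather than official mission parameters):

```python
# Rough sanity check of the Euclid survey and data-rate figures quoted above.
FIELD_AREA_DEG2 = 0.5        # area covered by one observed field
FIELD_DURATION_H = 1.25      # ~1 h 15 min per observing sequence
SURVEY_AREA_DEG2 = 15_000    # wide-survey area quoted in the article
DAILY_DATA_GBIT = 850        # science data to be downlinked per day
CONTACT_HOURS = 4            # daily ground-station contact window

fields_total = SURVEY_AREA_DEG2 / FIELD_AREA_DEG2                 # ~30,000 fields
observing_days = fields_total * FIELD_DURATION_H / 24             # ~1,560 days of pure observing
downlink_mbit_s = DAILY_DATA_GBIT * 1e3 / (CONTACT_HOURS * 3600)  # ~59 Mbit/s sustained

print(f"{fields_total:.0f} fields, ~{observing_days:.0f} days of observing, "
      f"downlink ~{downlink_mbit_s:.0f} Mbit/s during contact")
```

The roughly 4.3 years of pure observing implied by these numbers sits within the quoted six-year mission, leaving margin for calibrations and other overheads.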

Based on the current knowledge of the Euclid mission and the present ground-station development, no showstoppers have been identified. Euclid should meet its performance requirements at all levels, including the design of the mission (a survey of 15,000 square degrees in less than six years) and the space and ground segments. This is very encouraging, given the multiplicity of challenges that Euclid presents.

On the scientific side, the Euclid mission meets the precision and accuracy required to characterise the source of the accelerating expansion of the universe and decisively reveal its nature. On the technical side, there are difficult challenges to be met in achieving the required precision and accuracy of the galaxy-shape, photometric and spectroscopic redshift measurements. Our current knowledge of the mission provides a high degree of confidence that we can overcome all of these challenges in time for launch.

The geometry of the universe

Quantum fluctuations

The evolution of structure is seeded by quantum fluctuations in the very early universe, which were amplified by inflation. These seeds grew to create the cosmic microwave background (CMB) anisotropies after approximately 380,000 years and eventually the dark-matter distribution of today. In the same way that supernovae provide a standard candle for astronomical observations, periodic fluctuations in the density of the visible matter called baryon acoustic oscillations (BAO) provide a standard cosmological length scale that can be used to understand the impact of dark energy. By comparing the distance of a supernova or structure with its measured redshift, the geometry of the universe can be obtained.
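
The “standard ruler” idea can be written compactly: the BAO scale r_s imprinted before recombination subtends an angle set by the angular-diameter distance, which encodes the expansion history (the usual relation for a spatially flat universe, quoted here only as an illustration):

$$ \theta_{\rm BAO}(z) \simeq \frac{r_s}{d_A(z)}, \qquad d_A(z) = \frac{1}{1+z}\int_0^{z} \frac{c\,\mathrm{d}z'}{H(z')} , $$

so measuring θ_BAO at several redshifts constrains H(z), and hence the dark-energy contribution to the expansion.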

Hydrodynamical cosmological simulations of a ΛCDM universe at three different epochs (left to right, image left), corresponding to redshift z = 6, z = 2 and our present epoch. Each white point represents a concentration of dark matter, gas and stars, the brightest regions being the densest. The simulation shows the growth rate of structure and the formation of galaxies, clusters of galaxies, filaments and large-scale structures over cosmic time. Euclid uses the large-scale structures made out of matter and dark matter as a standard yardstick: starting from the CMB, we assume that the typical scale of structures (or the peak in the spatial power spectrum) increases proportionally with the expansion of the universe. Euclid will determine the typical scale as a function of redshift by analysing power spectra at several redshifts, from the statistical analysis of the dark-matter structures (using the weak-lensing probe) or of the ordinary-matter structures based on the spectroscopic redshifts from the BAO probe. The structures also evolve with redshift owing to the properties of gravity. Information on the growth of structure at different scales, in addition to different redshifts, is needed to discriminate between models of dark energy and modified gravity.

Tracking cosmic structure

Gravitational-lensing effects produced by cosmic structures on distant galaxies (right). Numerical simulations (below) show the distribution of dark matter (filaments and clumps with brightness proportional to their mass density) over a line of sight of one billion light-years. The yellow lines show how light beams emitted by distant galaxies are deflected by mass concentrations located along the line of sight. Each deflection slightly modifies the original shape of the lensed galaxies, increasing their original intrinsic ellipticity by a small amount.

Since all distant galaxies are lensed, all galaxies eventually show a coherent ellipticity pattern projected on the sky that directly reveals the projected distribution of dark matter and its power spectrum. The 3D distribution of dark matter can then be reconstructed by slicing the universe into redshift bins and recovering the ellipticity pattern at each redshift. The growth rate of cosmic structures derived from this inversion process strongly depends on the nature of dark energy and gravity, and will be accessible thanks to the outstanding image quality of Euclid’s VIS instrument.

How dark matter became a particle

Astronomers have long contemplated the possibility that there may be forms of matter in the universe that are imperceptible, either because they are too far away, too dim or intrinsically invisible. Lord Kelvin was perhaps the first, in 1904, to attempt a dynamical estimate of the amount of dark matter in the universe. His argument was simple yet powerful: if stars in the Milky Way can be described as a gas of particles acting under the influence of gravity, one can establish a relationship between the size of the system and the velocity dispersion of the stars. Henri Poincaré was impressed by Kelvin’s results, and in 1906 he argued that since the velocity dispersion predicted in Kelvin’s estimate is of the same order of magnitude as that observed, “there is no dark matter, or at least not so much as there is of shining matter”.

The Swiss–US astronomer Fritz Zwicky is arguably the most famous and widely cited pioneer in the field of dark matter. In 1933, he studied the redshifts of various galaxy clusters and noticed a large scatter in the apparent velocities of eight galaxies within the Coma Cluster. Zwicky applied the so-called virial theorem – which establishes a relationship between the kinetic and potential energies of a system of particles – to estimate the cluster’s mass. In contrast to what would be expected from a structure of this scale – a velocity dispersion of around 80 km/s – the observed average velocity dispersion along the line of sight was approximately 1000 km/s. From this comparison, Zwicky concluded: “If this would be confirmed, we would get the surprising result that dark matter is present in much greater amount than luminous matter.”
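
Zwicky’s reasoning can be summarised in one line. For a self-gravitating system in equilibrium, the virial theorem relates the velocity dispersion σ and the cluster radius R to the total mass (an order-of-magnitude form, with α a geometry-dependent factor of order unity introduced here for illustration):

$$ 2\langle T\rangle + \langle U\rangle = 0 \;\Rightarrow\; M \sim \frac{\alpha\,\sigma^2 R}{G} , $$

so a velocity dispersion roughly ten times larger than the luminous-matter expectation implies a total mass larger by a factor of order a hundred.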

In the 1950s and 1960s, most astronomers did not ask whether the universe had a significant abundance of invisible or missing mass. Although observations from this era would later be seen as evidence for dark matter, back then there was no consensus that the observations required much, or even any, such hidden material, and certainly there was not yet any sense of crisis in the field. It was in 1970 that the first explicit statements began to appear arguing that additional mass was needed in the outer parts of some galaxies, based on comparisons between predicted and measured rotation curves. The appendix of a seminal paper published by Ken Freeman in 1970, prompted by discussions with radio-astronomer Mort Roberts, concluded that: “If [the data] are correct, then there must be in these galaxies additional matter which is undetected, either optically or at 21 cm. Its mass must be at least as large as the mass of the detected galaxy, and its distribution must be quite different from the exponential distribution which holds for the optical galaxy.” (Figure 1 below.)

Several other lines of evidence began to appear that supported the same conclusion. In 1974, two influential papers (by Jaan Einasto, Ants Kaasik and Enn Saar, and by Jerry Ostriker, Jim Peebles and Amos Yahil) argued that a common solution existed for the mass discrepancies observed in clusters and in galaxies, and made the strong claim that the mass of galaxies had been until then underestimated by a factor of about 10.

By the end of the decade, opinion among many cosmologists and astronomers had crystallised: dark matter was indeed abundant in the universe. Although the same conclusion was reached by many groups of scientists from different subcultures and disciplines, many individuals found different lines of evidence to be compelling during this period. Some astronomers were largely persuaded by new and more reliable measurements of rotation curves, such as those by Albert Bosma, Vera Rubin and others. Others were swayed by observations of galaxy clusters, arguments pertaining to the stability of disc galaxies, or even cosmological considerations. Despite disagreements regarding the strengths and weaknesses of these various observations and arguments, a consensus nonetheless began to emerge by the end of the 1970s in favour of dark matter’s existence.

Enter the particle physicists

From our contemporary perspective, it can be easy to imagine that scientists in the 1970s had in mind halos of weakly interacting particles when they thought about dark matter. In reality, they did not. Instead, most astronomers had much less exotic ideas in the form of comparatively low-luminosity versions of otherwise ordinary stars and gas. Over time, however, an increasing number of particle physicists became aware of and interested in the problem of dark matter. This transformation was not just driven by new scientific results, but also by sociological changes in science that had been taking place for some time.

Half a century ago, cosmology was widely viewed as something of a fringe science, with little predictive power or testability. Particle physicists and astrophysicists did not often study or pursue research in each other’s fields, and it was not obvious what their respective communities might have to offer one another. More than any other problem in science, it was dark matter that brought particle physicists and astronomers together.

As astrophysical alternatives were gradually ruled out one by one, the view that dark matter is likely to consist of one or more as-yet-undiscovered species of subatomic particle came to be held almost universally among particle physicists and astrophysicists alike.

Perhaps unsurprisingly, the first widely studied particle dark-matter candidates were neutrinos. Unlike all other known particle species, neutrinos are stable and do not experience electromagnetic or strong interactions – essential characteristics for almost any viable dark-matter candidate. The earliest discussion of the role of neutrinos in cosmology appeared in a 1966 paper by the Soviet physicists Gershtein and Zeldovich, and several years later the topic began to appear in the West, beginning in 1972 with a paper by Ram Cowsik and J McClelland. Despite the very interesting and important results of these and other papers, it is notable that most of them did not address or even acknowledge the possibility that neutrinos could account for the missing mass that had been observed by astronomers on galactic and cluster scales. One exception was the 1977 paper by Lee and Weinberg, whose final sentence reads: “Of course, if a stable heavy neutral lepton were discovered with a mass of order 1–15 GeV, the gravitational field of these heavy neutrinos would provide a plausible mechanism for closing the universe.”

While this is still a long way from acknowledging the dynamical evidence for dark matter, it was an indication that physicists were beginning to realise that weakly interacting particles could be very abundant in our universe, and may have had an observable impact on its evolution. In 1980, the possibility that neutrinos might make up the dark matter received a considerable boost when a group studying tritium beta decay reported that they had measured the mass of the electron antineutrino to be approximately 30 eV – similar to the value needed for neutrinos to account for the majority of dark matter. Although this “discovery” was eventually refuted, it motivated many particle physicists to consider the cosmological implications of their research.
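
The appeal of a ~30 eV neutrino rested on the standard relic-abundance relation for light neutrinos (the usual cosmological formula, quoted here for context rather than taken from the experiments above):

$$ \Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93\ \text{eV}} , $$

so a single species at about 30 eV gives Ω_ν h² ≈ 0.3 – enough for neutrinos to dominate the matter density of the universe.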

Although we know today that dark matter in the form of Standard Model neutrinos would be unable to account for the observed large-scale structure of the universe, neutrinos provided an important template for the class of hypothetical species that would later be known as weakly interacting massive particles (WIMPs). Astrophysicists and particle physicists alike began to experiment with a variety of other, more viable, dark-matter candidates.

Cold dark-matter paradigm

The idea of neutrino dark matter was killed off in the mid-1980s with the arrival of numerical simulations. These could predict how large numbers of dark-matter particles would evolve under the force of gravity in an expanding universe, and therefore allow astronomers to assess the impact of dark matter on the formation of large-scale structure. In fact, by comparing the results of these simulations with those of galaxy surveys, it was soon realised that no relativistic particle could account for dark matter. Instead, the paradigm of cold dark matter – i.e. made of particles that were non-relativistic at the epoch of structure formation – was well on its way to becoming firmly established.

Meanwhile, in 1982, Jim Peebles pointed out that the observed characteristics of the cosmic microwave background (CMB) also seemed to require the existence of dark matter. If only baryons existed, then one could explain the observed degree of large-scale structure only if the universe had started in a fairly anisotropic or “clumpy” state. But by this time, the available data already set an upper limit on CMB anisotropies at a level of 10⁻⁴ – too meagre to account for the universe’s structure. Peebles argued that this problem would be relieved if the universe was instead dominated by massive weakly interacting particles whose density fluctuations began to grow prior to the decoupling of matter and radiation, during which the CMB was born. This paper, among others, received enormous attention within the scientific community and helped establish cold dark matter as the leading paradigm to describe the structure and evolution of the universe at all scales.

Solutions beyond the Standard Model

Neutrinos might be the only known particles that are stable, electrically neutral and not strongly interacting, but the imagination of particle physicists did not remain confined to the Standard Model for long. Instead, papers started to appear that openly contemplated many speculative and yet undiscovered particles that might account for dark matter. In particular, particle physicists began to find new candidates for dark matter within the framework of a newly proposed space–time symmetry called supersymmetry. The cosmological implications of supersymmetry were discussed as early as the late 1970s. In Piet Hut’s 1977 paper on the cosmological constraints on the masses of neutrinos, he wrote that the dark-matter argument was not limited to neutrinos or even to weakly interacting particles. The abstract of his paper mentions another possibility made within the context of the supersymmetric partner of the graviton, the spin-3/2 gravitino: “Similar, but much more severe, restrictions follow for particles that interact only gravitationally. This seems of importance with respect to supersymmetric theories,” wrote Hut.

In their 1982 paper, Heinz Pagels and Joel Primack also considered the cosmological implications of gravitinos. But unlike Hut’s paper, or the other preceding papers that had discussed neutrinos as a cosmological relic, Pagels and Primack were keenly aware of the dark-matter problem and explicitly proposed that gravitinos could provide the solution by making up the missing mass. In many ways, their paper reads like a modern manuscript on supersymmetric dark matter, motivating supersymmetry by its various attractive features and then discussing both the missing mass in galaxies and the role that dark matter could play in the formation of large-scale structure. Around the same time, supersymmetry was being further developed into its more modern form, leading to the introduction of R-parity and constructions such as the minimal supersymmetric standard model (MSSM). Such supersymmetric models included not only the gravitino as a dark-matter candidate, but also neutralinos – electrically neutral mixtures of the superpartners of the photon, Z and Higgs bosons.

Over the past 35 years, neutralinos have remained the single most studied candidate for dark matter and have been the subject of many thousands of scientific publications. Papers discussing the cosmological implications of stable neutralinos began to appear in 1983. In the first two of these, Weinberg and Haim Goldberg independently discussed the case of a photino (a neutralino whose composition is dominated by the superpartner of the photon) and derived a lower bound of 1.8 GeV on its mass by requiring that the density of such particles does not overclose the universe. A few months later, a longer paper by John Ellis and colleagues considered a wider range of neutralinos as cosmological relics. In Goldberg’s paper there is no mention of the phrase “dark matter” or of any missing-mass problem, and Ellis et al. took a largely similar approach, requiring only that the cosmological abundance of neutralinos not be so large as to overly slow or reverse the universe’s expansion. Although most of the papers on stable cosmological relics written around this time did not yet fully embrace the need to solve the dark-matter problem, occasional sentences could be found that reflected the gradual emergence of a new perspective.
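
The “overclosure” argument used in these papers is essentially the standard thermal relic-abundance estimate, in which the surviving density of a stable particle scales inversely with its annihilation cross-section (an approximate textbook expression, not a formula from the papers cited):

$$ \Omega_\chi h^2 \approx \frac{3\times 10^{-27}\ \text{cm}^3\,\text{s}^{-1}}{\langle \sigma_{\rm ann} v \rangle} , $$

and demanding Ω_χh² ≲ 1 translates into bounds on the masses and couplings of weakly interacting relics such as the photino.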

During the years that followed, an increasing number of particle physicists would further motivate proposals for physics beyond the Standard Model by showing that their theories could account for the universe’s dark matter. In 1983, for instance, John Preskill, Mark Wise and Frank Wilczek showed that the axion, originally proposed to solve the strong CP problem in quantum chromodynamics, could account for all of the dark matter in the universe. In 1993, Scott Dodelson and Lawrence Widrow proposed a scenario in which an additional, sterile neutrino species that did not experience electroweak interactions could be produced in the early universe and realistically make up the dark matter. Both the axion and the sterile neutrino are still considered as well-motivated dark-matter candidates, and are actively searched for with a variety of particle and astroparticle experiments.

The triumph of particle dark matter

In the early 1980s there was still nothing resembling a consensus about whether dark matter was made of particles at all, with other possibilities including planets, brown dwarfs, red dwarfs, white dwarfs, neutron stars and black holes. Kim Griest would later coin the term “MACHOs” – short for massive astrophysical compact halo objects – to denote this class of dark-matter candidates, in contrast to the particle alternative of WIMPs. There is a consensus today, based on searches using gravitational microlensing surveys and on determinations of the cosmic baryon density from measurements of the primordial light-element abundances and the CMB, that MACHOs do not constitute a large fraction of the dark matter.

An alternative to particle dark matter is to assume that there is no dark matter in the first place, and that instead our theory of gravity needs to be modified. This simple idea, which was put forward in 1982 by Mordehai Milgrom, is known as modified Newtonian dynamics (MOND) and has far-reaching consequences. At the heart of MOND is the suggestion that the force due to gravity does not obey Newton’s second law, F = ma. If instead gravity scaled as F = ma²/a₀ in the limit of very low accelerations (a ≪ a₀ ≈ 1.2 × 10⁻¹⁰ m/s²), then it would be possible to account for the observed motions of stars and gas within galaxies without postulating the presence of any dark matter.
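
The link to flat rotation curves follows in two lines: in the deep-MOND regime, equating the modified inertia with the Newtonian gravitational force for a circular orbit of radius r gives (a standard back-of-the-envelope derivation)

$$ \frac{m a^2}{a_0} = \frac{G M m}{r^2}, \qquad a = \frac{v^2}{r} \;\Rightarrow\; v^4 = G M a_0 , $$

so the circular velocity tends to the constant value v = (GMa₀)^{1/4}, independent of radius, without invoking a dark halo.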

In 2006, a group of astronomers including Douglas Clowe transformed the debate between dark matter and MOND with the publication of an article entitled: “A direct empirical proof of the existence of dark matter”. In this paper, the authors described the observations of a pair of merging clusters collectively known as the Bullet Cluster (image above left). As a result of the clusters’ recent collision, the distribution of stars and galaxies is spatially separated from the hot X-ray-emitting gas (which constitutes the majority of the baryonic mass in this system). A comparison of the weak-lensing and X-ray maps of the Bullet Cluster clearly reveals that the mass in this system does not trace the distribution of baryons. Another source of gravitational potential, such as that provided by dark matter, must instead dominate the mass of this system.

Following these observations of the Bullet Cluster and similar systems, many researchers expected that this would effectively bring the MOND hypothesis to an end. This did not happen, although the Bullet Cluster and other increasingly precise cosmological measurements on the scale of galaxy clusters, as well as the observed properties of the CMB, have been difficult to reconcile with all proposed versions of MOND. It is currently unclear whether other theories of modified gravity, in some yet-unknown form, might be compatible with these observations. Until we have a conclusive detection of dark-matter particles, however, the possibility that dark matter is a manifestation of a new theory of gravity remains open.

Today, the idea that most of the mass in the universe is made up of cold and non-baryonic particles is not only the leading paradigm, but is largely accepted among astrophysicists and particle physicists alike. Although dark matter’s particle nature continues to elude us, a rich and active experimental programme is striving to detect and characterise its non-gravitational interactions, ultimately allowing us to learn the identity of this mysterious substance. It has been more than a century since the first pioneering attempts to measure the amount of dark matter in the universe. Perhaps it will not be too many more years before we come to understand what that matter is.

Physics at its limits

Since Democritus, humans have wondered what happens as we slice matter into smaller and smaller parts. After the discovery almost 50 years ago that protons are made of quarks, further attempts to explore smaller distances have not revealed tinier substructures. Instead, we have discovered new, heavier elementary particles, which although not necessarily present in everyday matter are crucial components of nature’s fundamental make-up. The arrangement of the elementary particles and the interactions between them are now well described by the Standard Model (SM), but furthering our understanding of the basic laws of nature requires digging even deeper.

Quantum physics gives us two alternatives to probe nature at smaller scales: high-energy particle collisions, which induce short-range interactions or produce heavy particles, and high-precision measurements, which can be sensitive to the ephemeral influence of heavy particles enabled by the uncertainty principle. The SM was built from these two approaches, with a variety of experiments worldwide during the past 40 years pushing both the energy and the precision frontiers. The discovery of the Higgs boson at the LHC is a perfect example: precise measurements of Z-boson decays at previous lepton machines such as CERN’s Large Electron–Positron (LEP) collider pointed indirectly but unequivocally to the existence of the Higgs. But it was the LHC’s proton–proton collisions that provided the high energy necessary to produce it directly. With exploration of the Higgs fully under way at the LHC and the machine set to operate for the next 20 years, the time is ripe to consider what tool should come next to continue our journey.

Aiming at a high-energy collider with a clean collision environment, CERN has for several years been developing an e⁺e⁻ linear collider called CLIC. With an energy up to 3 TeV, CLIC would combine the precision of an e⁺e⁻ collider with the high-energy reach of a hadron collider such as the LHC. But with the lack so far of any new particles at the LHC beyond the Higgs, evidence is mounting that even higher energies may be required to fully explore the next layer of phenomena beyond the SM. Prompted by the outcome of the 2013 European Strategy for Particle Physics, CERN has therefore undertaken a five-year study for a Future Circular Collider (FCC) facility built in a new 100 km-circumference tunnel (see image below).

Such a tunnel could host an e⁺e⁻ collider (called FCC-ee) with an energy and intensity much higher than LEP, improving by orders of magnitude the precision of Higgs and other SM measurements. It could also house a 100 TeV proton–proton collider (FCC-hh) with a discovery potential more than five times greater than that of the 27 km-circumference LHC. An electron–proton collider (FCC-eh), furthermore, would allow the proton’s substructure to be measured with unmatched precision. Further opportunities include the collision of heavy ions in FCC-hh and FCC-eh, and fixed-target experiments using the injector complex. The earliest that such a machine could enter operation is likely to be the mid-2030s, when the LHC comes to the end of its operational lifetime, but the long lead times for collider projects demand that we start preparing now (see timeline below). A Conceptual Design Report (CDR) for a 100 km collider is expected to be completed by the end of 2018, and hundreds of institutions have joined the international FCC study since its launch in 2014. An independent study for a similar facility is also under way in China.

The CDR will document the accelerator, infrastructures and experiments, as well as a plethora of physics studies proving FCC’s ability to match the long-term needs of global high-energy-physics programmes. The first FCC physics workshop took place at CERN in January to review the status of these studies and discuss the complementarity between the three FCC modes.

The post-LHC landscape

To chart the physics landscape of future colliders, we must first imagine what questions may or may not remain at the end of the LHC programme in the mid-2030s. At the centre of this, and perhaps the biggest guaranteed physics goal of the FCC programme, is our understanding of the Higgs boson. While there is no doubt that the Higgs was the last undiscovered piece of the SM, it is not the closing chapter of the millennia-old reductionist paradigm. The Higgs is the first of its kind – an elementary scalar particle – and it therefore raises deep theoretical questions that beckon a new era of exploration (figure 1).

Consider its mass. In the SM there is no symmetry that protects the Higgs mass from large quantum corrections that drag it up to the mass scale of the particles it interacts with. You might conclude that the relatively low mass of the Higgs implies that it simply does not interact with other heavy particles. But there is good, if largely theoretical, evidence to the contrary. We know that at energies 16 orders of magnitude above the Higgs mass, where general relativity fails to provide a consistent quantum description of matter, there must exist a full quantum theory that includes gravity. The fact that the Higgs is so much lighter than this scale is known as the hierarchy problem, and many candidate theories (such as supersymmetry) exist that require new heavy particles interacting with the Higgs. By comparing precise measurements of the Higgs boson with precision SM predictions, we are indirectly searching for evidence of these theories. The SM provides an uncompromising script for the Higgs interactions, and any deviation from it would demand its extension.
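
Schematically, the problem is that the leading quantum correction to the Higgs mass grows with the square of the heaviest scale Λ that the Higgs couples to; with the top-quark loop dominating, one often writes (a heuristic one-loop estimate, not a precise SM calculation)

$$ \delta m_H^2 \sim -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 , $$

so if Λ lies anywhere near the Planck scale, keeping m_H at 125 GeV requires an extraordinarily delicate cancellation unless new physics intervenes.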

Even setting to one side grandiose theoretical ideas such as quantum gravity, there are other physical reasons why the Higgs may provide a window to undiscovered sectors. As it carries no spin and is electrically neutral, the Higgs may have so-called “relevant” interactions with new neutral scalar particles. These interactions, even if they take place only at very high energies, remain relevant at low energies – contrary to interactions between new neutral scalars and the other SM particles. The possibility of new hidden sectors already has strong experimental support: although we understand the SM very well, it does not account for roughly 80% of all the matter in the universe. We call the missing mass dark matter, and candidate theories abound. Given the importance of the puzzle, searches for dark-matter particles will continue to play a central role at the LHC and certainly at future colliders.

Furthermore, the SM cannot explain the origin of the matter–antimatter asymmetry that left enough matter for us to exist – a process known as baryogenesis. Since the asymmetry was created in the early universe, when temperatures and energies were high, we must explore higher energies to uncover the new particles responsible for it. With the LHC we are only at the beginning of this search. Another outstanding question lies in the origin of the neutrino masses, which the SM alone cannot account for. As with dark matter, there are numerous theories for neutrino masses, such as those involving "sterile" neutrinos, that are within the reach of lepton and hadron colliders. These and other outstanding questions might also imply the existence of further spatial dimensions, or of larger symmetries that unify leptons and quarks or the known forces. The LHC’s findings notwithstanding, future colliders like the FCC are needed to explore these fundamental mysteries more deeply, possibly revealing the need for a paradigm shift.

Electron–positron potential

The capabilities of circular e+e– colliders are well illustrated by LEP, which occupied the LHC tunnel from 1989 to 2000. Its point-like collisions between electrons and positrons and precisely known beam energy allowed the four LEP experiments to test the SM to new levels of precision. Putting such a machine in a 100 km tunnel, and taking advantage of advances in accelerator technology such as superconducting radio-frequency cavities, would offer even greater levels of precision on a larger number of processes. We would be able to change the collision energy in the range 91–350 GeV, for example, allowing data to be collected at the Z pole, at the WW production threshold, at the peak of ZH production, and at the top–antitop quark threshold. Controlling the beam energy at the 100 keV level would allow exquisite measurements of the Z- and W-boson masses, while the high luminosity of FCC-ee would lead to samples of up to 10^13 Z and 10^8 W bosons, not to mention several million Higgs bosons and top-quark pairs. The experimental precision would surpass that of any previous experiment and challenge cutting-edge theory calculations.

FCC-ee would quite literally provide a quantum leap in our understanding of the Higgs. Like the W and Z gauge bosons, the Higgs receives quantum electroweak corrections, typically a few per cent in magnitude, due to fluctuations of massive particles such as the top quark. This aspect of the gauge bosons was successfully explored at LEP, but now it is the turn of the Higgs – the keystone in the electroweak sector of the SM. The millions of Higgs bosons produced by FCC-ee, with its clinically precise environment, would push the accuracy of the measurements to the per-mille level, accessing the quantum underpinnings of the Higgs and probing deep into this hitherto unexplored frontier. In the process e+e– → HZ, the mass recoiling against the Z has a sharp peak that allows a unique and absolute determination of the Higgs decay width and production cross-section. This will provide an absolute normalisation for all Higgs measurements performed at the FCC, enabling exotic Higgs decays to be measured in a model-independent manner.
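
For reference, the recoil-mass technique alluded to here rests on a single kinematic identity (a standard relation written in the centre-of-mass frame; it is not spelt out in the article):

  m_{\mathrm{recoil}}^2 \;=\; \big(\sqrt{s} - E_Z\big)^2 - |\vec p_Z|^2 \;=\; s - 2\sqrt{s}\,E_Z + m_Z^2 ,

so only the Z needs to be reconstructed: events in which the recoil mass peaks at the Higgs mass tag Higgs production independently of how the Higgs decays, which is what makes the absolute normalisation possible.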

The high statistics promised by the FCC-ee programme go far beyond precision Higgs measurements. Other signals of new physics could arise from the observation of flavour-changing neutral currents or lepton-flavour-violating decays, from precise measurements of the Z and H invisible decay widths, or from the direct observation of particles with extremely weak couplings, such as right-handed neutrinos and other exotic particles. Given the energy and luminosity of a 100 km e+e– machine, the precision of the FCC-ee electroweak measurements would allow new-physics effects to be probed at scales as high as 100 TeV. If installed before FCC-hh, FCC-ee would therefore indicate where the hadron machine should focus its searches.

The energy frontier

The future proton–proton collider FCC-hh would operate at seven times the LHC energy and collect about 10 times more data. The discovery reach for high-mass particles – such as Z´ or W´ gauge bosons corresponding to new fundamental forces, or gluinos and squarks in supersymmetric theories – will increase by a factor of five or more, depending on the luminosity. The production rate of particles already within the LHC reach, such as top quarks or Higgs bosons, will increase by even larger factors. During its planned 25 years of data-taking, more than 10^10 Higgs bosons will be created by FCC-hh, which is 10,000 times more than collected by the LHC so far and 100 times more than will be available by the end of LHC operations. These additional statistics will enable the FCC-hh experiments to improve the separation of Higgs signals from the huge backgrounds that afflict most LHC studies, overcoming some of the dominant systematics that limit the precision attainable at the LHC.
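
As a rough cross-check of that yield, the counting is simply cross-section times integrated luminosity. The numbers below are indicative round values chosen for illustration (an inclusive Higgs cross-section of order 1 nb at 100 TeV and a lifetime dataset of order 20 ab–1), not figures taken from this article:

  # Order-of-magnitude estimate of the FCC-hh Higgs yield: N = sigma x integrated luminosity
  sigma_pb = 1000.0                    # assumed inclusive Higgs cross-section at 100 TeV, in pb (~1 nb)
  lumi_ab_inv = 20.0                   # assumed lifetime integrated luminosity, in ab^-1
  lumi_pb_inv = lumi_ab_inv * 1.0e6    # 1 ab^-1 = 10^6 pb^-1
  n_higgs = sigma_pb * lumi_pb_inv
  print(f"expected Higgs bosons: {n_higgs:.1e}")   # ~2e10, consistent with the >10^10 quoted above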

While the ultimate precision on most Higgs properties can only be achieved with FCC-ee, several measurements demand complementary information from FCC-hh. For example, the direct measurement of the coupling between the Higgs and the top quark requires that they be produced together, which in turn demands an energy beyond the reach of FCC-ee. At 100 TeV, almost 10^9 of the 10^12 produced top quarks will radiate a Higgs boson, allowing the top-Higgs interaction to be measured with a statistical precision at the 1% level – a factor-of-10 improvement over what is hoped for from the LHC. Similar precision can be reached for Higgs decays that are too rare to be studied in detail at FCC-ee, such as those to muon pairs or to a Z and a photon. All of these measurements will be complementary to those obtained with FCC-ee, and will use them as reference inputs to precisely correlate the strength of the signals obtained through the various production and decay modes.
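
For orientation, the quoted statistical precision follows from simple counting (a generic argument, not numbers given in the article):

  \frac{\delta\sigma}{\sigma}\bigg|_{\mathrm{stat}} \approx \frac{1}{\sqrt{N_{\mathrm{sel}}}} ,

so a 1% measurement needs of order 10^4 selected signal events after branching fractions and efficiencies – a small fraction of the roughly 10^9 top quarks that radiate a Higgs boson.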

One respect in which a 100 TeV proton–proton collider would come to the fore is in revealing how the Higgs behaves in private. The Higgs is the only particle in the SM that interacts with itself. Because the Higgs scalar potential defines the potential energy contained in a fluctuation of the Higgs field, these self-interactions are neatly defined as the derivatives of the electroweak scalar potential. With the Higgs boson being an excitation about the minimum of this potential, we know that the first derivative is zero. The second derivative of the potential is fixed by the Higgs mass, which is already known to sub-per-cent accuracy. But the third and fourth derivatives are unknown, and unless we gain access to the Higgs self-interactions they could remain so. The rate of Higgs pair-production events, which in part occur through Higgs self-interactions, would be large at FCC-hh and enable this unique property of the Higgs to be measured with an accuracy of 5 per cent. Among many other uses, such a measurement would comprehensively explore classes of baryogenesis models that rely on modifying the Higgs potential, and thus help us to understand the origin of matter.
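
In formulae, expanding the scalar potential about its minimum makes this statement concrete (a textbook parametrisation, not one written out in the article):

  V(h) \;=\; \tfrac{1}{2} m_H^2\, h^2 \;+\; \lambda_3\, h^3 \;+\; \lambda_4\, h^4 ,

where the SM predicts \lambda_3 = m_H^2/(2v) and \lambda_4 = m_H^2/(8v^2), with v \approx 246 GeV. Higgs pair production at FCC-hh is chiefly sensitive to \lambda_3; probing \lambda_4 would require triple-Higgs production.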

FCC-hh would also allow an exhaustive exploration of new TeV-scale phenomena. Indirect evidence for new physics can emerge from the scattering of W bosons at high energy, from the production of Higgs bosons at very large transverse momentum, or from testing the far “off-shell” nature of the Z boson via the measurement of lepton pairs with invariant masses in the multi-TeV region. The plethora of new particles predicted by most alternatives to the SM mechanism of symmetry breaking can be searched for directly, thanks to the immense mass reach of 100 TeV collisions. The search for dark matter, for example, will cover the possible parameter space of many theories relying on weakly interacting massive particles, guaranteeing either a discovery or their exclusion. Theories that address the hierarchy problem will also be conclusively tested. For supersymmetry, the mass reach of FCC-hh pushes beyond the regions motivated by this puzzle alone. For composite Higgs theories, the precision Higgs-coupling measurements and searches for new heavy resonances will fully cover the motivated territory. A 100 TeV proton collider will even confront exotic scenarios, such as the twin Higgs, that are nightmarishly difficult to test. These theories predict very rare or exotic Higgs decays, possibly visible at FCC-hh thanks to its enormous Higgs production rates.

Beyond these examples, a systematic effort is ongoing to categorise the models that can be conclusively tested, and to find the loopholes that might allow some models to escape detection. This work will influence the way detectors for the new collider are designed. Work is already starting in earnest to define the features of these detectors, and efforts in the FCC CDR study will focus on comprehensive simulations of the most interesting physics signals. The experimental environment of a proton–proton collider is difficult because of the large number of background sources and the additional noise caused by multiple interactions among the hundreds of billions of protons crossing each other at the same time. This pile-up of events will greatly exceed that observed at the LHC, and will pose a significant challenge to the detectors’ performance and to the data-acquisition systems. The LHC experience is of immense value for projecting the scale of the difficulties that FCC-hh will have to meet, but also for highlighting the increasing role of proton colliders in precision physics, beyond their conventional role as discovery machines.

Asymmetric collisions

Smashing protons into electrons opens up a whole different type of physics, which until now has only been explored in detail by a single machine: the HERA collider at DESY in Germany. FCC-eh would collide a 60 GeV electron beam from a linear accelerator, external and tangential to the main FCC tunnel, with a 50 TeV proton beam. It would collect roughly a thousand times more luminosity than HERA while pioneering the novel concept of synchronous, symbiotic operation alongside the pp collider. The facility would serve as the most powerful high-resolution microscope ever built for examining the substructure of matter, with high-energy electron–proton collisions providing precise information on the quark and gluon structure of the proton.

This unprecedented facility would enhance Higgs studies, including the study of the coupling to the charm quark, and broaden the new-physics searches also performed at FCC-hh and FCC-ee. Unexpected discoveries such as quark substructure might also arise. Uniquely, in electron–proton collisions new particles can be created in lepton–quark fusion processes or may be radiated in the exchange of a photon or other vector boson. FCC-eh could also provide access to Higgs self-interactions and extended Higgs sectors, including scenarios involving dark matter. If neutrino oscillations arise from the existence of heavy sterile neutrinos, direct searches at the FCC-eh would have great discovery prospects in kinematic regions complementary to FCC-hh and FCC-ee, giving the FCC complex a striking potential to shine light on the origin of neutrino masses.

Unknown unknowns

In principle, the LHC could already have provided – and still could provide – answers to many of these outstanding questions in particle physics. That no new particles beyond the Higgs have yet been found, and no significant deviations from theory detected, does not mean that these questions have somehow evaporated. Rather, it shows that any expectations for early discoveries beyond the SM at the LHC – often based on theoretical, and in some cases aesthetic, arguments – were misguided. In times like this, when theoretical guidance is called into question, we must pursue experimental answers as vigorously as possible. The combination of accelerators being considered for the FCC project offers, through their synergies and complementarities, an extraordinary tool for investigating these questions (figure 2).

There are numerous instances in which the answer nature has offered was not a reply to the question first posed. For example, the Michelson–Morley experiment, designed to study the properties of the ether, ended up disproving its existence and led to Einstein’s theory of special relativity. The Kamiokande experiment in Japan and its successor Super-Kamiokande, originally built to observe proton decay, instead discovered evidence for neutrino masses. The LHC itself could have disproved the SM by discovering that the Higgs boson is not an elementary but a composite particle – and may still do so with future, more precise measurements.

The possibility of unknown unknowns does not diminish the importance of an experiment’s scientific goals. On the contrary, it demonstrates that the physics goals for future colliders can play the crucial role of getting a new facility off the ground, even if a completely unanticipated discovery results. This is true of all expeditions into the unknown. We should not forget that Columbus set sail to find a westerly passage to Asia. Without this goal, he would not have discovered the Americas.

Doubting darkness

What is wrong with the theory of gravity we have?

The current description of gravity in terms of general relativity has various shortcomings. Perhaps the most important is that we cannot simply apply Einstein’s laws at a subatomic level without generating notorious infinities. There are also conceptual puzzles related to the physics of black holes that indicate that general relativity is not the final answer to gravity, and important lessons learnt from string theory suggesting that gravity is emergent. Besides these theoretical issues, there are also strong experimental motivations to rethink our understanding of gravity. The first is the observation that our universe is undergoing accelerated expansion, suggesting it contains an enormous amount of additional energy. The second is dark matter: additional gravitating but non-luminous mass invoked to explain anomalous galaxy dynamics. Together these entities account for 95 per cent of all the energy in the universe.

Isn’t the evidence for dark matter overwhelming?

It depends who you ask. There is a lot of evidence that general relativity works very well at length scales that are long compared to the Planck scale, but when we apply general relativity at galactic and cosmological scales we see deviations. Most physicists regard this as evidence that there exists an additional form of invisible matter that gravitates in the same way as normal matter, but this assumes that gravity itself is still described by general relativity. Furthermore, although the most direct evidence for the existence of dark matter comes from the study of galaxies and clusters, not all astronomers are convinced that what they observe is due to particle dark matter – for example, there appears to be a strong correlation between the amount of ordinary baryonic matter and galactic rotation velocities that is hard to explain with particle dark matter. On the other hand, the physicists who are carrying out numerical work on particle dark matter are trying to explain these correlations by including complicated baryonic feedback mechanisms and tweaking the parameters that go into their models. Finally, there is a large community of experimental physicists who simply take the evidence for dark matter as a given.

Is your theory a modification of general relativity, or a rewrite?

The aim of emergent gravity is to derive the equations that govern gravity from a microscopic quantum description, using ingredients from quantum-information theory. One of the main ideas is that different parts of space–time are glued together via quantum entanglement. This idea is due to van Raamsdonk and has been extended and popularised by Maldacena and Susskind with the slogan “EPR = ER”, where EPR is a reference to Einstein–Podolsky–Rosen and ER refers to the Einstein–Rosen bridge: a “wormhole” that connects the two parts of the black-hole geometry on opposite sides of the horizon. These ideas are being developed by many theorists, in particular in the context of the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence. The goal is then to derive the Einstein equations from this microscopic quantum perspective. The first steps in this programme had already been taken before my work, but until now most results were derived for AdS space, which describes a universe with a negative cosmological constant and therefore differs from our own. In my recent paper [arXiv:1611.02269] I extended these ideas to de Sitter space, which contains a positive dark energy and has a cosmological horizon. My insight has been that, due to the presence of positive dark energy, the derivation of the Einstein equations breaks down precisely in the circumstances where we observe the effects of “dark matter”.

How did the idea emerge?

The idea of emergent gravity from thermodynamics has been around since the discovery by Hawking and Bekenstein of black-hole entropy and the laws of black-hole thermodynamics in the 1970s. Ted Jacobson took an important step in 1995 by deriving the Einstein equations from the assumption of the Bekenstein–Hawking formula, which expresses the microscopic entropy in terms of the area of the horizon measured in Planck units. In my 2010 paper [arXiv:1001.0785] I clarified the origin of the force of inertia and its relation to the microscopic entropy in space, assuming that this is given by the area of an artificial horizon. After this work I started thinking about cosmology, and learnt about the observations that indicate a close connection between the acceleration scale in galaxies and the acceleration at the cosmological horizon, which is determined by the Hubble parameter. I immediately realised that this implied a relation between the observed phenomena associated with dark matter and the presence of dark energy.
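
For reference, the two quantities mentioned here can be written compactly (standard textbook numbers and expressions, not taken from the interview):

  S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}, \qquad c H_0 \approx 7\times10^{-10}\ \mathrm{m\,s^{-2}} ,

where A is the horizon area, and c H_0 lies within an order of magnitude of the acceleration scale a_0 \approx 1.2\times10^{-10} m s^{-2} below which the galactic anomalies usually attributed to dark matter set in.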

Your paper is 50 pages long. Can you summarise it here?

The idea is that gravity emerges by applying an analogue of the laws of thermodynamics to the entanglement entropy of the vacuum. Just as the normal laws of thermodynamics can be understood from the statistical treatment of microscopic molecules, gravity can be derived from the microscopic units that make up space–time. These “space–time molecules” are units of quantum information (qubits) that are entangled with each other, with the amount of entanglement measured by the entanglement entropy. I realised that in a universe with a positive dark energy, there is an additional contribution to the entanglement entropy that grows in proportion to the volume rather than the area. This leads to an additional force on top of the usual gravity law, because the dark energy “pushes back” like an elastic medium, and results in the phenomena that we currently attribute to dark matter. In short, the laws of gravity differ in the low-acceleration regime that occurs in galaxies and other cosmological structures.

How did the community react to the paper?

Submitting work that goes against a widely supported theory requires some courage, and the fact that I have already demonstrated serious work in string theory helped. Nevertheless, I do experience some resistance – mainly from researchers who have been involved in particle dark-matter research. Some string theorists find my work interesting and exciting, but most of them take a “wait and see” attitude. I am dealing with a number of different communities with different attitudes and scientific backgrounds. A lot of it is driven by sociology and past investments.

How often do you work on the idea?

Emergent gravity from quantum entanglement is now an active field worldwide, and I have worked on the idea for a number of years. I mostly work in the evening for around three hours and perhaps one hour in the morning. I also discuss these ideas with my PhD students, colleagues and visitors. In the Netherlands we have quite a large community working on gravity and quantum entanglement, and recently we received a grant together with theorists from the universities of Groningen, Leiden, Utrecht and Amsterdam, to work on this topic.

Within a month of your paper, Brouwer et al. published results supporting your idea. How significant is this?

My theory predicts that the gravitational force due to a centralised mass exhibits a certain scaling relation. This relation was already known to hold for galaxy rotation curves, but these can only be measured out to distances of about 100 kiloparsecs, because there are no visible stars beyond this distance. Brouwer and her collaborators used weak gravitational lensing to determine the gravitational force due to a massive galaxy out to distances of one megaparsec, and confirmed that the same relation still holds. Particle dark-matter models can also explain these observations, but they do so by adjusting a free parameter to fit the data. My prediction has no free parameters, and hence I find this more convincing, but more observations are needed before definite conclusions can be drawn.

Is there a single result that would rule your theory in or out?

If a dark-matter particle were discovered that possesses all the properties needed to explain all the observations, then my idea would be proven false. Personally I am convinced this will not happen, although I am still developing the theory further to be able to address important dynamical situations such as the Bullet Cluster (see “How dark matter became a particle”) and the acoustic oscillations that explain the power spectrum of the cosmic microwave background. One of the problems is that particle dark-matter models are so flexible that they can easily be made consistent with the data. By improving and extending the observations of gravitational phenomena that are currently attributed to dark matter, we can make better comparisons with the theory. I am hopeful that within the next decade the precision of the observations will have improved, and the theory will have been developed, to a level at which decisive tests can be performed.

How would emergent gravity affect the rest of physics?

Our perspective on the building blocks of nature would change drastically. We will no longer think in terms of elementary particles and fundamental forces, but units of quantum information. Hence, the gauge forces responsible for the electroweak and strong interactions will also be understood as being emergent, and this is the way that the forces of nature will become unified. In this sense, all of our current laws of nature will be seen as emergent.

Birth of the high-energy network

With over 60 years of history and currently more than 13,000 users from all over the world, CERN clearly has great potential to bring together a varied alumni community. Today, CERN alumni are distributed around the world, pursuing their careers and passions across many fields including industry, economics, information technology, medicine and finance. Several have gone on to launch successful start-ups, some of them directly applying CERN-inspired technologies.

Setting up and nurturing this important network is a strategic objective for CERN management. Following 12 months of careful preparation, the new CERN Alumni Programme will be launched in June this year.

The new community, united by a shared pride in having contributed to CERN’s scientific endeavours, will provide an opportunity for alumni to maintain links with the Organization. It will allow them to continue to share CERN’s values and support its activities, and will serve as a valuable resource for members of personnel in the transition to work outside the laboratory. Physicists, in particular, often consider CERN a “prime environment”, ranking just after academia itself. The prospect of having to leave CERN may be daunting, with no guarantee that one’s professional future will offer a similar environment and possibilities. However, preliminary statistics on the CERN alumni community demonstrate that professional experience at CERN nurtures skills and talents that are highly sought after by employers and can aid the development of alumni careers in many different fields.

The CERN Alumni Programme has been purposely designed to be inclusive. Former users, associates, students, fellows, staff, and any other member of personnel who has held a contract of either employment or association with CERN, may join the alumni community simply by registering. Current members of personnel will also be able to register and to interact with the alumni, as well as with partner companies. The final objective is to establish a dynamic, long-lasting and high-energy network of engaged members.

Since November 2016 it has been possible for previous and current members of personnel who wish to become members of the network to leave their contact details on the alumni webpage (see below), which CERN will use to contact them once the new web platform is up and running. Registered members of the CERN alumni community will have access to dedicated editorial content, opportunities to exchange experiences and establish contacts with other alumni, in addition to career development opportunities. The aim is to gather a large number of members, whether they are former colleagues still working in academia, have set up their own businesses, have moved into completely different professional environments, or have retired but wish to stay connected.

CERN alumni will themselves be actively involved in building the community, which will evolve with them. The advisory board of the new programme will include representatives from the community as well as members of CERN management. Alumni will be able to set up thematic groups within the community based on factors such as regional interests and scientific topics. A mobile app will help them to stay connected with news, events and networking activities that are published by the community.

The CERN Alumni Programme kick-off event will be held on 2–3 February 2018. In addition to offering unique networking opportunities, the event will include visits to the LHC and its experiments, as well as to experimental areas that are usually not accessible to the public. Inspiring seminars and several panel sessions will complete the programme. The event is designed to be a valuable experience for all types of alumni, from young scientists who have recently left CERN to those with long-standing careers in different fields, and many others.

We are aware that it will be a challenge to reach all of our alumni, spread as they are across the planet and across several decades of CERN’s history. If you are one of them, do not hesitate to leave your contact details at https://alumni.cern/. It is the best way to show your interest, join the new community and stay connected with CERN. We also invite you to get in touch with any questions by emailing alumni.relations@cern.ch. We will be very happy to welcome you back to CERN!
