The electromagnetic field of the highly charged lead ions in the LHC beams provides an intense flux of high-energy quasi-real photons that can be used to probe the structure of the proton in lead–proton collisions. The exclusive photoproduction of a J/ψ vector meson is of special interest because it samples the gluon density in the proton. Previous ALICE measurements have shown that this process can be studied over a wide range of centre-of-mass energies of the photon–proton system (Wγp), enlarging the kinematic reach by more than a factor of two with respect to the measurements performed at the former HERA collider.
Recently, the ALICE collaboration measured the exclusive photoproduction of J/ψ mesons off protons in proton–lead collisions at a centre-of-mass energy of 5.02 TeV at the LHC using two new detector configurations. In both cases, the J/ψ meson is reconstructed from its decay into a lepton pair. In the first, the leptons are measured at mid-rapidity using ALICE’s central-barrel detectors, whose excellent particle-identification capabilities allow both the e+e– and μ+μ– channels to be measured. The second configuration combines a muon measured with the central-barrel detectors with a second muon measured by the muon spectrometer located at forward rapidity. This use of two detector configurations significantly extends the rapidity coverage of the J/ψ measurement.
The energy of the photon–proton collision, Wγp, is determined by the rapidity (a function of the polar angle) of the produced J/ψ with respect to the beam axis. Since the directions of the proton and lead beams were swapped halfway through the data-taking period, ALICE covers both backward and forward rapidities even though it has only a single-arm muon spectrometer.
These two configurations, plus the one used previously where both muons were measured in the muon spectrometer, allow ALICE to cover – in a continuous way – the range in Wγp from 20 to 700 GeV. The momentum at which the structure of the proton is probed is conventionally given as the fraction, x, of the proton momentum carried by the probed gluons, and the new measurements extend over three orders of magnitude in x, from 2 × 10–2 to 2 × 10–5. The measured cross section for this process as a function of Wγp is shown in figure 1 and compared with previous measurements and with models based on different assumptions, such as the validity of DGLAP evolution (JMRT), the vector-dominance model (STARlight), next-to-leading-order BFKL, the colour–glass condensate (CGC) and the inclusion of fluctuating sub-nucleonic degrees of freedom (CCT). The last two models include the phenomenon of saturation, in which nonlinear effects tame the growth of the gluon density in the proton at small x.
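As a rough guide, the momentum fraction probed in exclusive J/ψ photoproduction is set by the meson mass and the photon–proton energy through the standard leading-order relation (an illustrative estimate rather than the exact kinematics used in the analysis):
\[
x \;\approx\; \left(\frac{M_{J/\psi}}{W_{\gamma p}}\right)^{2}:
\qquad
\left(\frac{3.1}{20}\right)^{2}\approx 2\times10^{-2},
\qquad
\left(\frac{3.1}{700}\right)^{2}\approx 2\times10^{-5},
\]
matching the x range quoted above for Wγp between 20 and 700 GeV.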
The new measurements are compatible with previous HERA data where available, and all models agree reasonably well with the data. Nonetheless, it is seen that at the largest energies, or equivalently the smallest x, some of the models predict a slower growth of the cross section with energy. This is being studied by ALICE with data taken in 2016 in p–Pb collisions at a centre-of-mass energy of 8.16 TeV, allowing exploration of the Wγp energy range up to 1.5 TeV, potentially shedding new light on the question of gluon saturation.
In 2007, while studying archival data from the Parkes radio telescope in Australia, Duncan Lorimer and his student David Narkevic of West Virginia University in the US found a short, bright burst of radio waves. It turned out to be the first observation of a fast radio burst (FRB), and further studies revealed additional events in the Parkes data dating from 2001. The origin of several of these bursts, which were slightly different in nature, was later traced back to the microwave oven in the Parkes Observatory visitors centre. After discarding these events, however, a handful of real FRBs in the 2001 data remained, while more FRBs were being found in data from other radio telescopes.
The cause of FRBs has puzzled astronomers for more than a decade. But dedicated searches under way at the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and the Australian Square Kilometre Array Pathfinder (ASKAP), among others, are intensifying the hunt for their origin. Recently, while still in its pre-commissioning phase, CHIME detected no fewer than 13 new FRBs – one of them classed as a “repeater” because the same source has been seen to burst repeatedly – setting the field up for an exciting period of discovery.
Dispersion
All FRBs have one thing in common: they last just a few milliseconds and have a relatively broad spectrum in which the radio waves with the highest frequencies arrive first, followed by those with lower frequencies. This dispersion is characteristic of radio waves travelling through a plasma, in which free electrons delay lower frequencies more than higher ones. Measuring the amount of dispersion thus indicates the column of free electrons the pulse has traversed and therefore the distance it has travelled. In the case of FRBs, the measured delay cannot be explained by signals travelling within the Milky Way alone, strongly indicating an extragalactic origin.
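Quantitatively, the delay follows the standard cold-plasma dispersion law of radio astronomy (given here for orientation, with representative numbers):
\[
\Delta t \;\approx\; 4.15\,\mathrm{ms}\times\frac{\mathrm{DM}}{\mathrm{pc\,cm^{-3}}}\left[\left(\frac{\nu_{1}}{\mathrm{GHz}}\right)^{-2}-\left(\frac{\nu_{2}}{\mathrm{GHz}}\right)^{-2}\right],
\]
where the dispersion measure DM is the column density of free electrons along the line of sight and ν1 < ν2 are the observing frequencies. A dispersion measure far larger than the Milky Way can provide along a given line of sight is what points to an extragalactic origin.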
The size of the emission region responsible for FRBs can be deduced from their duration. The most likely sources are compact km-sized objects such as neutron stars or black holes. Apart from their extragalactic origin and their size, not much more is known about the 70 or so FRBs that have been detected so far. Theories about their origin range from the mundane, such as pulsar or black-hole emission, to the spectacular – such as neutron stars travelling through asteroid belts or FRBs being messages from extraterrestrials.
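The size estimate is a light-crossing-time argument: a source cannot vary coherently on timescales shorter than the time light needs to cross it, so
\[
R \;\lesssim\; c\,\Delta t \;\approx\; 300\,\mathrm{km}\times\left(\frac{\Delta t}{1\,\mathrm{ms}}\right),
\]
which for millisecond bursts restricts the emitting region to at most a few thousand kilometres, the scale of compact objects such as neutron stars and stellar-mass black holes.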
For one particular FRB, however, the location was measured precisely and found to coincide with a faint, previously unknown radio source within a dwarf galaxy, showing clearly that the FRB was extragalactic. This FRB could be localised because it was one of several bursts to come from the same source, allowing more detailed studies and long-term observations. For a while it was the only FRB known to repeat, earning it the title “The Repeater”. But the recent detection by CHIME has now doubled the number of such sources. The detection of repeating FRBs could be seen as evidence that FRBs are not the result of a cataclysmic event, since the source must survive in order to repeat. Another interpretation, however, is that there are actually two classes of FRBs: those that repeat and those that come from cataclysmic events.
Until recently, theories on the origin of FRBs outnumbered the detected FRBs themselves, showing how difficult it is to constrain theoretical models with the available data. The experience of a similar field – gamma-ray burst (GRB) research, which aims to explain the bright flashes of gamma rays discovered during the 1960s – suggests that an increase in the number of detections, together with searches for counterparts at other wavelengths or in gravitational waves, will enable quick progress. As the number of detected GRBs rose into the thousands, the number of theories (which initially also included extraterrestrial origins) fell rapidly to a handful. The start of data taking by ASKAP and the increasing sensitivity of CHIME mean we can look forward to exponential growth in the number of detected FRBs, and a corresponding decrease in the number of theories about their origin.
One of the most fascinating particles studied at the LHC is the top quark. The heaviest elementary particle known, the top quark lives for less than a trillionth of a trillionth of a second (10–24 s) and decays long before it can form hadrons. It is therefore the only quark whose properties can be studied “bare”, free of hadronisation effects. This allows physicists to explore its spin, the quark’s intrinsic angular momentum, which can be inferred from its decay products: a bottom quark and a W boson, the latter subsequently decaying into leptons or quarks.
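To see why the top quark decays before hadronising, one can compare its lifetime with the typical hadronisation timescale, using approximate textbook numbers:
\[
\tau_{t}=\frac{\hbar}{\Gamma_{t}}\approx\frac{6.6\times10^{-25}\,\mathrm{GeV\,s}}{1.4\,\mathrm{GeV}}\approx 5\times10^{-25}\,\mathrm{s},
\qquad
\tau_{\mathrm{had}}\sim\frac{\hbar}{\Lambda_{\mathrm{QCD}}}\approx 3\times10^{-24}\,\mathrm{s},
\]
so the top quark decays roughly an order of magnitude faster than QCD can bind it into a hadron, passing its spin information directly to its decay products.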
The CMS collaboration has analysed proton–proton collisions in which pairs of top quarks and antiquarks are produced. The Standard Model (SM) makes precise predictions for how often the spin of the top quark is aligned with (correlated with) the spin of the top antiquark, so a measurement of this correlation is a highly sensitive test of the SM. If, for example, an exotic heavier Higgs boson existed in addition to the one discovered in 2012 at the LHC, it could decay into pairs of top quarks and antiquarks and change their spin correlation significantly. A high-precision measurement of the spin correlation therefore opens a window onto physics beyond our current knowledge.
The CMS collaboration studied more than one million top-quark–antiquark pairs in dilepton final states recorded in 2016. To cover all the spin and polarisation effects accessible in top-quark–antiquark pair production, nine event quantities sensitive to the top-quark spin correlations and three sensitive to the top-quark polarisation were measured. The measured observables were corrected for experimental effects (“unfolded”) and compared directly with precise theoretical predictions.
The observables studied in this analysis show good agreement between data and theory; for example, no angular dependence is seen for unpolarised top quarks (see figure 1, left). A moderate discrepancy with respect to one of the Monte Carlo simulations (POWHEGv2+PYTHIA) is seen in one of the spin-sensitive distributions, the azimuthal opening angle between the two leptons. This discrepancy is consistent with an observation made by the ATLAS collaboration last year, although CMS finds that other simulations (“MG5_aMC@NLO”) and calculations that should give similar results agree with the data within the uncertainties.
In summary, a good agreement with the SM prediction is observed in CMS data, except for the case of one particular but commonly used observable, suggesting further input from theory calculations is probably necessary. The full Run-2 data set already recorded by CMS contains four times more top quarks than were used for this result. This larger sample will allow an even more precise measurement, increasing the chances for a first glimpse of new physics.
In our current understanding of the energy content of the universe, there are two major unknowns: the nature of a non-luminous component of matter (dark matter) and the origin of the accelerating expansion of the universe (dark energy). Both are supported by astrophysical and cosmological measurements but their nature remains unknown. This has motivated a myriad of theoretical models, most of which assume dark matter to be a weakly interacting massive particle (WIMP).
WIMPs may be produced in high-energy proton collisions at the LHC, and are therefore intensively searched for by the LHC experiments. Since dark matter is not expected to interact with the detectors, its production leaves a signature of missing transverse momentum (ETmiss). It can be detected if the dark-matter particles recoil against a visible particle X, which could be a quark or gluon, a photon, or a W, Z or Higgs boson. These are commonly known as X + ETmiss signatures. To interpret these searches, a variety of simplified models are used that describe dark-matter production kinematics with a minimal number of free parameters. These models introduce new spin-0 or spin-1 mediator particles that propagate the interaction between the visible and the dark sectors. Because the mediators must couple to Standard Model (SM) particles in order to be produced in the proton–proton collisions, the mediators can also be directly searched for through their decays to jets, top-quark pairs and potentially even leptons. For certain model parameters, these direct searches can be more sensitive than the X + ETmiss ones.
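Schematically, the missing transverse momentum is the imbalance of everything the detector does reconstruct (the standard collider definition, quoted here for orientation):
\[
\vec{E}_{\mathrm{T}}^{\,\mathrm{miss}} = -\sum_{i\,\in\,\mathrm{visible}}\vec{p}_{\mathrm{T},i},
\qquad
E_{\mathrm{T}}^{\mathrm{miss}} = \left|\vec{E}_{\mathrm{T}}^{\,\mathrm{miss}}\right|,
\]
so dark-matter particles recoiling against a visible object X appear as a large ETmiss balancing the transverse momentum of X.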
However, simplified models are not full theories like, for example, supersymmetry. Recent theoretical work has therefore focused on developing more complete, renormalisable models of dark matter, such as two-Higgs doublet models (2HDM) with an additional mediator particle. These models introduce a larger number of free parameters, allowing for a richer phenomenology.
Similarly, for dark energy, effective field theory implementations may introduce a stable and non-interacting scalar field that universally couples to matter. This also leads to a characteristic ETmiss signature at the LHC.
ATLAS has recently released a summary gathering the results of more than 20 experimental searches for dark matter and a first collider search for dark energy. The wide range of analyses gives good coverage of the different dark-matter models studied. For newer models, such as the 2HDM with an additional pseudoscalar mediator, multiple regions of the parameter space are explored to probe the interplay between the masses, mixing angles and vacuum expectation values. For the 2HDM with an additional vector mediator, the exclusion limits are further improved by combining the ETmiss + Higgs analyses in which the Higgs boson decays to a pair of photons or of b-quarks. For the dark-energy models, two operators in the lowest-order effective Lagrangian allow for interactions between SM particles and the new scalar particles; because these operators are proportional to the mass or momenta of the SM particles, the most sensitive final states are ETmiss + top–antitop and ETmiss + jet.
To date, no significant excess over the SM backgrounds has been observed in any of the ATLAS searches for dark matter or dark energy. Limits on the simplified models are set in the plane of mediator versus dark-matter mass (figure 1), which can also be compared with those obtained by direct-detection experiments. For the 2HDM with a pseudoscalar mediator, limits are placed in the plane of the heavy-pseudoscalar versus mediator masses, highlighting the complementarity of different channels in different regions of the parameter space (figure 2). Finally, collider limits on the scalar dark-energy model (see Colliders join the hunt for dark energy) are also set and, for the models studied, improve on the limits obtained from astronomical observations and laboratory measurements by several orders of magnitude. With the full dataset of LHC collisions collected by ATLAS during Run 2, the sensitivity to these models will continue to improve.
Colloquially, a theory is natural if its underlying parameters are all of the same size in appropriate units. A more precise definition involves the notion of an effective field theory – the idea that a given quantum field theory might only describe nature at energies below a certain scale, or cutoff. The Standard Model (SM) is an effective field theory because it cannot be valid up to arbitrarily high energies even in the absence of gravity. An effective field theory is natural if all of its parameters are of order unity in units of the cutoff. Without fine-tuning, a parameter can only be much smaller than this if setting it to zero increases the symmetry of the theory. All couplings and scales in a quantum theory are connected by quantum effects unless symmetries distinguish them, making it generic for them to coincide.
When did naturalness become a guiding force in particle physics?
We typically trace it back to Eddington and Dirac, though it had precedents in the cosmologies of the Ancient Greeks. Dirac’s discomfort with large dimensionless ratios in observed parameters – among others, the ratio of the gravitational and electromagnetic forces between protons and electrons, which amounts to the smallness of the proton mass in units of the Planck scale – led him to propose a radical cosmology in which Newton’s constant varied with the age of the universe. Dirac’s proposed solutions were readily falsified, but this was a predecessor of the more refined notion of naturalness that evolved with the development of quantum field theory, which drew on observations by Gell-Mann, ’t Hooft, Veltman, Wilson, Weinberg, Susskind and other greats.
Does the concept appear in other disciplines?
There are notions of naturalness in essentially every scientific discipline, but physics, and particle physics in particular, is somewhat unique. This is perhaps not surprising, since one of the primary goals of particle physics is to infer the laws of nature at increasingly higher energies and shorter distances.
Isn’t naturalness a matter of personal judgement?
One can certainly come up with frameworks in which naturalness is mathematically defined – for example, quantifying the sensitivity of some parameter in the theory to variations of the other parameters. However, what one does with that information is a matter of personal judgement: we don’t know how nature computes fine-tuning (i.e. departure from naturalness), or what amount of fine-tuning is reasonable to expect. This is highlighted by the occasional abandonment of mathematically defined naturalness criteria in favour of the so-called Potter Stewart measure: “I know it when I see it.” The element of judgement makes it unproductive to obsess over minor differences in fine-tuning, but large fine-tunings potentially signal that something is amiss. Also, one can’t help but notice that the degree of fine-tuning that is considered acceptable has changed over time.
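A widely used example of such a definition is the Barbieri–Giudice sensitivity measure, which compares fractional changes in an observable O with fractional changes in the input parameters p_i:
\[
\Delta_{i} = \left|\frac{\partial \ln O}{\partial \ln p_{i}}\right|,
\qquad
\Delta = \max_{i}\,\Delta_{i},
\]
with larger Δ indicating finer tuning; where to draw the line between acceptable and unacceptable values of Δ is precisely the judgement call described above.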
What evidence is there that nature is natural?
Dirac’s puzzle, the smallness of the proton mass, is a great example: we understand it now as a consequence of the asymptotic freedom of the strong interaction. A natural (of order-unity) value of the QCD gauge coupling at high energies gives rise to an exponentially smaller mass scale on account of the logarithmic evolution of the gauge coupling. Another excellent example, relevant to the electroweak hierarchy problem, is the mass splitting of the charged and neutral pions. From the perspective of an effective field theorist working at the energies of these pions, their mass splitting is only natural if the cutoff of the theory is around 800 MeV. Lo and behold, going up in energy from the pions, the rho meson appears at 770 MeV, revealing the composite nature of the pions and changing the picture in precisely the right way to render the mass splitting natural.
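One common back-of-envelope version of this estimate treats the electromagnetic contribution to the pion mass splitting as quadratically sensitive to the cutoff Λ:
\[
m_{\pi^{\pm}}^{2}-m_{\pi^{0}}^{2}\;\sim\;\frac{3\alpha}{4\pi}\,\Lambda^{2}
\quad\Longrightarrow\quad
\Lambda\;\sim\;\sqrt{\frac{4\pi}{3\alpha}\left(m_{\pi^{\pm}}^{2}-m_{\pi^{0}}^{2}\right)}\;\approx\;850\,\mathrm{MeV},
\]
using the measured splitting of about 1260 MeV²; the appearance of the ρ meson just below this scale is exactly the kind of new physics the naturalness argument calls for.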
Which is the most troublesome observation for naturalness today?
The cosmological-constant (CC) problem, which is the disagreement by 120 orders of magnitude between the observed and expected value of the vacuum energy density. We understand the SM to be a valid effective field theory for many decades above the energy scale of the observed CC, which makes it very hard to believe that the problem is solved in a conventional way without considerable fine-tuning. Contrast that with the SM hierarchy problem, which is a statement about the naturalness of the mass of the Higgs boson. Data so far show that the cutoff of the SM as an effective field theory might not be too far above the Higgs mass, bringing naturalness within reach of experiment. On the other hand, the CC is only a problem in the context of the SM coupled to gravity, so perhaps its resolution lies in yet-to-be-understood features of quantum gravity.
What about the tiny values of the neutrino masses?
Neutrino masses are not remotely troublesome for naturalness. A parameter can be much smaller than the natural expectation if setting it to zero increases the symmetry of the theory (we call such parameters “technically natural”). For the neutrino, as for any SM fermion, there is an enhanced symmetry when neutrino masses are set to zero. This means that your natural expectation for the neutrino masses is zero, and if they are non-zero, quantum corrections to neutrino masses are proportional to the masses themselves. Although the SM features many numerical hierarchies, the majority of them are technically-natural ones that could be explained by physics at inaccessibly high energies. The most urgent problems are the hierarchies that aren’t technically natural, like the CC problem and the electroweak hierarchy problem.
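Schematically, the chiral symmetry that protects fermion masses makes their quantum corrections multiplicative, whereas a scalar mass such as the Higgs boson’s receives additive corrections (a standard illustration, not a detailed calculation):
\[
\delta m_{\nu}\;\propto\; m_{\nu}\,\ln\frac{\Lambda}{\mu},
\qquad
\delta m_{H}^{2}\;\propto\;\Lambda^{2},
\]
so a tiny fermion mass stays tiny once set, while a scalar mass is dragged up towards the cutoff Λ unless something intervenes.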
Has applying the naturalness principle led directly to a discovery?
It’s fair to say that Gaillard and Lee predicted the charm-quark mass by applying naturalness arguments to the mass-splitting of neutral kaons. Of course, the same arguments were also used to (incorrectly) predict a wildly different value of the weak scale! This is a reminder that naturalness principles can point to a problem in the existing theory, and a scale at which the theory should change, but they don’t tell you precisely how the problem is resolved. The naturalness of the neutral kaon mass splitting, or the charged-neutral pion mass splitting, suggests to me that it is more useful to refer to naturalness as a strategy, rather than as a principle.
A slightly more flippant example is the observation of neutrinos from Supernova 1987A. This marked the beginning of neutrino astronomy and opened the door to unrelated surprises, yet the large water-Cherenkov detectors that detected these neutrinos were originally constructed to look for proton decay predicted by grand unified theories (which were themselves motivated by naturalness arguments).
While it would be great if naturalness-based arguments successfully predict new physics, it’s also worthwhile if they ultimately serve only to draw experimental attention to new places.
What has been the impact of the LHC results so far on naturalness?
There have been two huge developments at the LHC. The first is the discovery of the Higgs boson, which sharpens the electroweak hierarchy problem: we seem to have found precisely the sort of particle whose mass, if natural, points to a significant departure from the SM around the TeV scale. The second is the non-observation of new particles predicted by the most popular solutions to the electroweak hierarchy problem, such as supersymmetry. While evidence for these solutions could lie right around the corner, its absence thus far has inspired both a great deal of uncertainty about the naturalness of the weak scale and a lively exploration of new approaches to the problem. The LHC null results teach us only about specific (and historically popular) models that were inspired by naturalness. It is therefore an ideal time to explore naturalness arguments more deeply. The last few years have seen an explosion of original ideas, but we’re really only at the beginning of the process.
The situation is analogous to the search for dark matter, where gravitational evidence is accumulating at an impressive rate despite numerous null results in direct-detection experiments. These null results haven’t ruled out dark matter itself; they’ve only disfavoured certain specific and historically popular models.
How can we settle the naturalness issue once and for all?
The discovery of new particles around the TeV scale whose properties suggest they are related to the top quark would very strongly suggest that nature is more or less natural. In the event of non-discovery, the question becomes thornier – it could be that the SM is unnatural; it could be that naturalness arguments are irrelevant; or it could be that there are signatures of naturalness that we haven’t recognised yet. Kepler’s symmetry-based explanation of the naturalness of planetary orbits in terms of platonic solids ultimately turned out to be a red herring, but only because we came to realise that the features of specific planetary orbits are not deeply related to fundamental laws.
Without naturalness as a guide, how do theorists go beyond the SM?
Naturalness is but one of many hints at physics beyond the SM. There are some incredibly robust hints based on data – dark matter and neutrino masses, for example. There are also suggestive hints, such as the hierarchical structure of fermion masses, the preponderance of baryons over antibaryons and the apparent unification of gauge couplings. There is also a compelling argument for constructing new-physics models purely motivated by anomalous data. This sort of “ambulance chasing” does not have a stellar reputation, but it’s an honest approach which recognises that the discovery of new physics may well come as another case of “Who ordered that?” rather than the answer to a theoretical problem.
What sociological or psychological aspects are at work?
If theoretical considerations are primarily shaping the advancement of a field, then sociology inevitably plays a central role in deciding what questions are most pressing. The good news is that the scales often tip, and data either clarify the situation or pose new questions. As a field we need to focus on lucidly articulating the case for (and against) naturalness as a guiding principle, and let the newer generations make up their minds for themselves.
Deep in a mine in Greater Sudbury, Ontario, Canada, you will find the deepest flush toilets in the world. Four of them, actually, ensuring the comfort of the staff and users of SNOLAB, an underground clean lab with very low levels of background radiation that specialises in neutrino and dark-matter physics.
Toilets might not be the first thing that comes to mind when discussing a particle-physics laboratory, but they are one of numerous logistical considerations when hosting 60 people per day at a depth of 2 km for 10 hours at a time. SNOLAB is the world’s deepest cleanroom facility, a class-2000 cleanroom (see panel below) the size of a shopping mall situated in the operational Vale Creighton nickel mine. It is an expansion of the facility that hosted the Sudbury Neutrino Observatory (SNO), a large, heavy-water detector designed to detect neutrinos from the Sun. In 2001, SNO contributed to the discovery of neutrino oscillations, leading to the joint award of the 2015 Nobel Prize in Physics to SNO spokesperson Arthur B McDonald and Super-Kamiokande spokesperson Takaaki Kajita.
Initially, there were no plans to maintain the infrastructure beyond the timeline of SNO, which was just one experiment and not a designated research facility. However, following the success of the SNO experiment, there was increased interest in low-background detectors for neutrino and dark-matter studies.
Building on SNO’s success
The SNO collaboration was first formed in 1984, with the goal of solving the solar-neutrino problem. The problem surfaced during the 1960s, when the Homestake experiment in the Homestake Mine at Lead, South Dakota, began looking for neutrinos produced by fusion reactions in the Sun. That experiment and its successors, using different target materials and technologies, consistently observed only 30–50% of the neutrinos predicted by the standard solar model. What seemed a minor discrepancy posed a major problem, one that required a large-scale solution.
SNO used a 12 m-diameter spherical vessel containing 1000 tonnes of heavy water to count solar neutrino interactions. Canada had vast reserves of heavy water for use in its nuclear reactors, making it an ideal location for such a detector. The experiment also required an extreme level of cleanliness, so that the signals physicists were searching for would not be confused with background events coming from dust, for instance. The SNO collaboration also had to develop new techniques to measure the inherent radioactivity of their detector materials and the heavy water itself.
Using heavy water gave SNO the ability to observe three different neutrino reactions: one that could only be initiated by electron neutrinos; one sensitive to all neutrino flavours (electron, muon and tau); and a third that provided directionality, pointing back to the Sun. These complementary interactions let the team test the hypothesis that solar neutrinos change flavour as they travel to Earth, and, in contrast to previous experiments, allowed SNO to measure the parameters describing neutrino oscillations without relying on solar models. SNO’s data confirmed the deficit of electron neutrinos seen by earlier experiments while finding a total flux, summed over all flavours, in agreement with solar-model predictions, implying that neutrinos do indeed change flavour during their Sun–Earth journey. The experiment ran for seven years and produced 178 papers, with more than 275 authors contributing.
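The three reactions on deuterium (d) and on electrons that made this possible are, schematically:
\[
\begin{aligned}
\text{charged current:}\quad & \nu_{e} + d \to p + p + e^{-} \quad (\text{electron neutrinos only}),\\
\text{neutral current:}\quad & \nu_{x} + d \to p + n + \nu_{x} \quad (\text{all flavours}),\\
\text{elastic scattering:}\quad & \nu_{x} + e^{-} \to \nu_{x} + e^{-} \quad (\text{all flavours, directional}),
\end{aligned}
\]
and comparing the charged-current and neutral-current rates gives the flavour-changed fraction independently of the solar model.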
In 2002, the Canadian community secured funding to create an extended underground laboratory with SNO as the starting point. Construction of SNOLAB’s underground facility was completed in 2009 and two years later the last experimental hall entered “cleanroom” operation. Some 30 letters of interest were received from different collaborations proposing potential experiments, helping to define the requirements of the new lab.
SNOLAB’s construction was made possible by capital funds totalling CAD$73 million, with more than half coming from the Canada Foundation for Innovation through the International Joint Venture programme. Rather than a single giant cavern, the local company Redpath Mining excavated several small halls and two large ones to hold experiments. The smaller halls helped the engineers manage the enormous stress placed on the rock in larger underground cavities. Bolts 10 m long stabilise the rock in the ceilings of the remaining large caverns, and throughout the lab the rock is covered with a 10 cm-thick layer of spray-on concrete for further stability, with an additional hand-trowelled layer to help keep the walls dust-free. This latter task was carried out by Béton Projeté MAH, the same company that finished the bobsleigh track for the 2010 Vancouver Winter Olympics.
In addition to the experimental halls, SNOLAB is equipped with a chemistry laboratory, a machine shop, storage areas, and a lunchroom. Since the SNO experiment was still running when new tunnels and caverns were excavated, the connection between the new space and the original clean lab area was completed late in the project. The dark-matter experiments DEAP-1 and PICASSO were also already running in the SNO areas before construction of SNOLAB was completed.
Dark matter, neutrinos, and more
Today, SNOLAB employs a staff of over 100 people, working on engineering design, construction, installation, technical support and operations. In addition to providing expert and local support to the experiments, SNOLAB research scientists undertake research in their own right as members of the collaborations.
With so much additional space, SNOLAB’s physics programme has expanded greatly during the past seven years. SNO has evolved into SNO+, in which a liquid scintillator replaces the heavy water to increase the detector’s sensitivity. The scintillator will be doped with tellurium, making SNO+ sensitive to the hypothetical process of neutrinoless double-beta decay. Two of tellurium’s natural isotopes (128Te and 130Te) are known to undergo conventional double-beta decay, making them good candidates in the search for the long-sought neutrinoless version. This decay violates lepton-number conservation, and observing it would prove that the neutrino is its own antiparticle (a Majorana particle). SNO+ is one of several experiments currently hunting for this process.
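For 130Te, for example, the two processes differ only in whether antineutrinos are emitted:
\[
^{130}\mathrm{Te} \to {}^{130}\mathrm{Xe} + 2e^{-} + 2\bar{\nu}_{e}
\quad(\text{ordinary double-beta decay}),
\qquad
^{130}\mathrm{Te} \to {}^{130}\mathrm{Xe} + 2e^{-}
\quad(\text{neutrinoless mode}).
\]
In the neutrinoless case the two electrons carry the full decay energy, producing a sharp peak at the endpoint of the summed electron-energy spectrum; this is the signature SNO+ will look for.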
Another active SNOLAB experiment is the Helium and Lead Observatory (HALO), which uses 76 tons of lead blocks instrumented with 128 helium-3 neutron detectors to capture the intense neutrino flux generated when the core of a star collapses at the early stages of a supernova. Together with similar detectors around the world, HALO is part of a supernova early-warning system, which allows astronomers to orient their instruments to observe the phenomenon before it is visible in the sky.
With no fewer than six active projects, dark-matter searches make up a large fraction of SNOLAB’s physics programme. Many different technologies are employed to search for the dark-matter candidate of choice: the weakly interacting massive particle (WIMP). The PICASSO and COUPP collaborations both used bubble chambers to search for WIMPs, and merged into the very successful PICO project. Through successive improvements, PICO has enhanced its sensitivity to spin-dependent WIMP interactions by roughly an order of magnitude every couple of years, with the best sensitivity for WIMP masses around 20 GeV/c2. The PICO collaboration is now developing a much larger version with up to 500 litres of active mass.
DEAP-3600, successor to DEAP-1, is one of the biggest dark-matter detectors ever built, and it has been taking data for almost two years now. It seeks to detect spin-independent interactions between WIMPs and 3300 kg of liquid argon contained in a 1.7 m-diameter acrylic vessel. The best sensitivity will be achieved for a WIMP mass of 100 GeV/c2. Using a different technology, the DAMIC (Dark Matter In CCDs) experiment employs CCD sensors, which have low intrinsic noise levels, and is sensitive to WIMP masses as low as 1 GeV/c2.
Although the science at SNOLAB focuses primarily on neutrinos and dark matter, the low-background underground environment is also useful for biology experiments. REPAIR explores how very low radiation levels affect cell development and the repair of DNA damage; one hypothesis is that removing background radiation may actually be detrimental to living systems, and REPAIR can help determine whether this is correct and characterise any negative impacts. Another experiment, FLAME, uses fruit flies as a model to study the effect of prolonged time spent underground on living organisms. The findings from this research could be used by mining companies to support a healthier workforce.
Future research
There are many exciting new experiments under construction at SNOLAB, including several dark-matter experiments. While the PICO experiment is increasing its detector mass, other experiments are using different technologies to cover a wide range of possible WIMP masses. The SuperCDMS experiment and the CUTE test facility use solid-state silicon and germanium detectors kept at temperatures near absolute zero to search for dark matter, while the NEWS-G experiment will use gases such as hydrogen, helium and neon in a 1.4 m-diameter copper sphere.
SNOLAB still has space available for additional experiments requiring a deep underground cleanroom environment. The Cryopit, the largest remaining cavern, will be used for a next-generation double-beta-decay experiment. Additional spaces outside the large experimental halls can host several small-scale experiments. While the results of today’s experiments will influence future detectors and detector technologies, the astroparticle physics community will continue to demand clean underground facilities to host the world’s most sensitive detectors. From an underground cavern carved out to host a novel neutrino detector to the deepest cleanroom facility in the world, SNOLAB will continue to seek out and host world-class physics experiments to unravel some of the universe’s deepest mysteries.
The CMS collaboration has published the first direct observation of the coupling between the Higgs boson and the top quark, offering an important probe of the consistency of the Standard Model (SM). In the SM, the Higgs boson interacts with fermions via a Yukawa coupling, the strength of which is proportional to the fermion mass. Since the top quark is the heaviest particle in the SM, its coupling to the Higgs boson is expected to be the largest and thus the dominant contribution to many loop processes, making it a sensitive probe of hypothetical new physics.
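In the SM, the Yukawa coupling of a fermion f is fixed by its mass and the Higgs vacuum expectation value, v ≈ 246 GeV:
\[
y_{f} = \frac{\sqrt{2}\,m_{f}}{v},
\qquad
y_{t} \approx \frac{\sqrt{2}\times 173\ \mathrm{GeV}}{246\ \mathrm{GeV}} \approx 1,
\]
which is why the top quark, with a Yukawa coupling of order unity, dominates loop processes such as Higgs production through gluon–gluon fusion.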
The associated production of a Higgs boson with a top quark–antiquark pair (ttH) is the best direct probe of the top-Higgs Yukawa coupling with minimal model dependence, and thus a crucial element to verify the SM nature of the Higgs boson. However, its small production rate – constituting only about 1% of the total Higgs production cross-section – makes the ttH measurement a considerable challenge.
The CMS and ATLAS collaborations reported first evidence for the process last year, based on LHC data collected at a centre-of-mass energy of 13 TeV (CERN Courier May 2017 p49 and December 2017 p12). The new observation, with a statistical significance above five standard deviations, is based on an analysis of the full 2016 CMS dataset recorded at 13 TeV, combined with results from data collected at lower energies.
The ttH process gives rise to a wide variety of final states, and the new CMS analysis combines results from a number of them. Top quarks decay almost exclusively to a bottom quark (b) and a W boson, the latter subsequently decaying either to a quark and an antiquark or to a charged lepton and its associated neutrino. The Higgs-boson decay channels include the decay to a bb quark pair, a τ+τ– lepton pair, a photon pair, and combinations of quarks and leptons from the decay of intermediate on- or off-shell W and Z bosons. These five Higgs-boson decay channels were analysed by CMS using sophisticated methods, such as multivariate techniques, to separate signal from background events. Each channel poses different experimental challenges: the bb channel has the largest rate but suffers from a large background of events containing a top-quark pair and jets, while the photon and Z-boson pair channels offer the highest signal-to-background ratio at a very small rate.
CMS observed an excess of events with respect to the background-only hypothesis at a significance of 5.2 standard deviations. The measured values of the signal strength in the considered channels are consistent with each other, and a combined value of 1.26 +0.31/–0.26 times the SM expectation is obtained (see figure). The measured production rate is thus consistent with the SM prediction within one standard deviation. The result establishes the direct Yukawa coupling of the Higgs boson to the top quark, marking an important milestone in our understanding of the properties of the Higgs boson.
For hundreds of years, discoveries in astronomy were all made in the visible part of the electromagnetic spectrum. This changed in the past century when new objects started being discovered at both longer wavelengths, such as radio, and shorter wavelengths, up to gamma-ray wavelengths corresponding to GeV energies. The 21st century then saw another extension of the range of astronomical observations with the birth of TeV astronomy.
The High Energy Stereoscopic System (HESS) – an array of five telescopes located in Namibia in operation since 2002 – was the first large ground-based telescope capable of measuring TeV photons (followed shortly afterwards by the MAGIC observatory in the Canary Islands and, later, VERITAS in Arizona). To celebrate its 15th anniversary, the HESS collaboration has published its largest set of scientific results to date in a special edition of Astronomy and Astrophysics. Among them is the detection of three new candidates for supernova remnants that, despite being almost the size of the full Moon on the sky, had thus far escaped detection.
Supernova remnants are what is left after massive stars die. They are the prime suspects for producing the bulk of the cosmic rays in the Milky Way and are the means by which chemical elements produced in supernovae are spread through the interstellar medium. They are therefore of great interest to several fields of astrophysics.
HESS observes the Milky Way in the energy range 0.03–100 TeV, but its telescopes do not directly detect TeV photons. Rather, they measure the Cherenkov radiation produced by showers of particles generated when these photons enter Earth’s atmosphere. The energy and direction of the primary TeV photons can then be determined from the shape and direction of the Cherenkov radiation.
Using the characteristics of known TeV-emitting supernova remnants, such as their shell-like shape, the HESS search revealed three new objects at gamma-ray wavelengths, prompting the team to search for counterparts of these objects in other wavelengths. Only one, called HESS J1534-571 (figure, left), could be connected to a radio source and thus be classified as a supernova remnant. For the two other sources, HESS J1614-518 and HESS J1912+101, no clear counterparts were found. These objects thus remain candidates for supernova remnants.
The lack of an X-ray counterpart to these sources could have implications for cosmic-ray acceleration mechanisms. The cosmic rays thought to originate from supernova remnants should be directly connected to the production of high-energy photons. If the emission of TeV photons is a result of low-energy photons being scattered by high-energy cosmic-ray electrons originating from a supernova remnant (as described by leptonic emission models), soft X-rays would also be produced while such electrons travelled through magnetic fields around the remnant. The lack of detection of such X-rays could therefore indicate that the TeV photons are not linked to such scattering but are instead associated with the decay of high-energy cosmic-ray pions produced around the remnant, as described by hadronic emission models. Searches in the X-ray band with more sensitive instruments than those available today are required to confirm this possibility and bring deeper insight into the link between supernova remnants and cosmic rays.
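Schematically, the two competing mechanisms are
\[
\text{leptonic:}\;\; e^{-} + \gamma_{\mathrm{low}} \to e^{-} + \gamma_{\mathrm{TeV}},
\qquad
\text{hadronic:}\;\; p + p \to \pi^{0} + X,\;\; \pi^{0} \to \gamma\gamma.
\]
In the leptonic case the same high-energy electrons should also radiate synchrotron X-rays in the remnant’s magnetic field, which is why the absence of an X-ray counterpart would favour the hadronic picture.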
The new supernova-remnant detections by HESS demonstrate the power of TeV astronomy to identify new objects. The latest findings increase the anticipation for a range of discoveries from the future Cherenkov Telescope Array (CTA). With more than 100 telescopes, CTA will be more sensitive to TeV photons than HESS, and it is expected to substantially increase the number of detected supernova remnants in the Milky Way.
Decays of the Higgs boson to vector bosons (WW, ZZ, γγ) provide precise measurements of the boson’s coupling strength to other Standard Model (SM) particles. In new analyses, ATLAS has measured these decays for different production modes using the full 2015 and 2016 LHC datasets recorded at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 36.1 fb–1.
With a predicted branching fraction of 21%, the Higgs-boson decay to two W bosons (H → WW) is the second most common decay mode after its decay to two b quarks. The new analysis follows a similar strategy to the earlier ones carried out using the LHC datasets recorded at 7 and 8 TeV. It focuses on the gluon–gluon fusion (ggF) and vector-boson fusion (VBF) production modes, with the subsequent decay to an electron, a muon and two neutrinos (H → WW → eνμν). The main backgrounds come from SM production of W and top-quark pairs; other backgrounds involve Z → ττ with leptonic τ decays and single-W production with misidentified leptons from associated jets.
Events are classified according to the number of jets they contain: events with zero or one jet are used to probe ggF production, while events with two or more jets are used to target VBF production. Due to the spin-zero nature of the Higgs boson, the electron and muon are preferentially emitted in the same direction. The ggF analysis exploits this and other kinematic information via a sequence of selection requirements, while the VBF analysis combines lepton and jet variables in a boosted decision tree to separate the Higgs-boson signal from background processes.
The transverse mass of the selected events from the zero- and one-jet signal regions is shown in the left figure, with red denoting the expectation from the Higgs boson and other colours representing background processes. These events are combined with those from the two-jet signal region to derive cross sections times branching fractions for ggF and VBF production of 12.3 +2.3/–2.1 pb and 0.50 +0.30/–0.29 pb, respectively, to be compared to the SM predictions of 10.4 ± 0.6 pb and 0.81 ± 0.02 pb.
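The transverse mass used in such H → WW analyses is conventionally built from the dilepton system and the missing transverse momentum (the standard definition, given here for reference):
\[
m_{\mathrm{T}} = \sqrt{\left(E_{\mathrm{T}}^{\ell\ell} + E_{\mathrm{T}}^{\mathrm{miss}}\right)^{2} - \left|\vec{p}_{\mathrm{T}}^{\,\ell\ell} + \vec{E}_{\mathrm{T}}^{\,\mathrm{miss}}\right|^{2}},
\qquad
E_{\mathrm{T}}^{\ell\ell} = \sqrt{\left|\vec{p}_{\mathrm{T}}^{\,\ell\ell}\right|^{2} + m_{\ell\ell}^{2}},
\]
a quantity that accumulates below the Higgs-boson mass for signal events, helping to separate them from the non-resonant WW background.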
ATLAS also performed a combination of inclusive and differential cross-section measurements using Higgs-boson decays to two photons and two Z bosons, where each Z decays to a pair of oppositely charged electrons or muons. The combination of the two channels allows the study of Higgs-boson production rates versus event properties with unprecedented precision. For example, the measurement of the Higgs-boson rapidity distribution can provide information about the underlying parton density functions. The transverse momentum distribution (figure) is sensitive to the coupling between the Higgs boson and light quarks at low transverse momentum, and to possible couplings to non-SM particles at high values. The measured cross sections are found to be consistent with SM predictions.
In the 1920s, Edwin Hubble discovered that the universe is expanding by showing that more distant galaxies recede faster from Earth than nearby ones. Hubble’s measurements of the expansion rate, a quantity now called the Hubble constant, had relatively large errors, but astronomers have since found ways of measuring it with increasing precision. One method is direct and entails measuring the distance to far-away galaxies, whereas another is indirect and relies on cosmic microwave background (CMB) data. Over the last decade, however, a mismatch between the values derived from the two methods has become apparent. Adam Riess from the Space Telescope Science Institute in Baltimore, US, and colleagues have now made a more precise direct measurement that reinforces the mismatch and could signal new physics.
Riess and co-workers’ new value relies on improved determinations of the distances to far-away galaxies and builds on previous work by the team. It is based on more precise observations of type Ia supernovae within those galaxies. Such supernovae have a known luminosity profile, so their distances from Earth can be determined from how bright they appear. But their luminosity first needs to be calibrated – a process that requires an independent, exact measurement of distances that are typically rather large.
To calibrate their luminosity, Riess and his team used Cepheid stars, which are closer to Earth than type Ia supernovae. Cepheids have an oscillating apparent brightness, the period of which is directly related to their luminosity, so their apparent brightness can also be used to measure their distance. Riess and colleagues measured the distance to Cepheids in the Milky Way using parallax measurements from the Hubble Space Telescope, which determine the apparent shift of the stars against the background sky as the Earth moves to the other side of the Sun. The researchers measured this minute shift for several Cepheids, giving a direct measurement of their distance. The team then used this measurement to estimate the distance to distant galaxies containing such stars, which in turn can be used to calibrate the luminosity of supernovae in those galaxies. Finally, they used this calibration to determine the distance to even more distant galaxies with supernovae. Using such a “distance ladder”, the team obtained a value for the Hubble constant of 73.5 ± 1.7 km s–1 Mpc–1. This value is more precise than the 73.2 ± 1.8 km s–1 Mpc–1 value obtained by the team in 2016, and it is 3.7 sigma away from the 66.9 ± 0.6 km s–1 Mpc–1 value derived from CMB observations made by the Planck satellite.
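The logic of the ladder, and the size of the tension, can be summarised with the standard relations (illustrative, using the numbers quoted above):
\[
d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]},
\qquad
m - M = 5\log_{10}\frac{d}{10\,\mathrm{pc}},
\qquad
v = H_{0}\,d.
\]
Parallax fixes the distances to nearby Cepheids, the distance modulus m − M transfers that calibration first to Cepheid-hosting galaxies and then to the supernovae, and the Hubble law finally yields H0. The quoted tension follows directly: (73.5 − 66.9)/√(1.7² + 0.6²) ≈ 6.6/1.8 ≈ 3.7 standard deviations.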
Riess and colleagues’ results therefore reinforce the discrepancy between the two methods. Although each method is complex and may thus be subject to error, the discrepancy is now at a level at which a coincidence seems unlikely, and it is difficult to imagine that systematic errors in the distance-ladder method are the root cause of the tension, says the team. Figuring out the nature of the discrepancy is pivotal because the Hubble constant is used to calculate several cosmological quantities, such as the age of the universe. If the discrepancy is not due to errors, explaining it will require new physics beyond the current standard model of cosmology. But future data could also help to identify the source of the discrepancy: upcoming Cepheid data from ESA’s Gaia satellite could reduce the uncertainty in the distance-ladder value, and new measurements of the expansion rate using a third method, based on observations of gravitational waves, could throw new light on the problem.