The electroweak mixing angle, θW, is a fundamental parameter of the Standard Model: it quantifies the relative strengths of electromagnetism and the weak force, and governs the Z-boson couplings to fermions. It is also something of a puzzle. The two most accurate determinations of the angle, carried out at LEP and SLD, differ by some three standard deviations. More recent determinations at the Tevatron experiments, and by ATLAS and CMS at the LHC, have started to probe the difference. Now, LHCb has published a measurement based on LHC data taken in the forward region.
LHCb has measured the forward–backward asymmetry, AFB, in the angular distribution of muons in dimuon final states, as a function of dimuon mass. The asymmetry depends on sin²θW, the squared sine of the electroweak mixing angle, and can be used to determine its value once the directions of the interacting quark and antiquark, needed to define the sign of the asymmetry, are known. LHCb’s unique kinematic region benefits the analysis: dilution of the asymmetry is reduced, because the incoming quark direction can be identified correctly 90% of the time, and theoretical uncertainties due to parton-density functions are lower than in the central region. In addition, LHCb’s ability to swap the direction of its magnetic field allows many valuable cross-checks to be performed.
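The effect of quark-direction misidentification can be illustrated with a short calculation. This is a sketch under a simple assumption: if a fraction (1 − f) of events have the wrong sign assigned, the asymmetry is linearly diluted. Only the 90% figure comes from the text; the asymmetry value is invented for illustration.

```python
# Dilution of a forward-backward asymmetry when the quark direction
# is identified correctly only a fraction f of the time: events with
# a flipped sign cancel part of the asymmetry, so
#   A_observed = (2f - 1) * A_true.

def diluted_asymmetry(a_true, f):
    """Observed asymmetry given true asymmetry and correct-sign fraction f."""
    return (2.0 * f - 1.0) * a_true

a_true = 0.06     # illustrative true asymmetry (not a measured value)
f_forward = 0.90  # correct-direction fraction quoted for LHCb's forward region

a_obs = diluted_asymmetry(a_true, f_forward)
print(a_obs)  # 80% of the true asymmetry survives
```

With f = 0.9 the observed asymmetry retains 80% of its true size, which is why the forward acceptance is so valuable compared with a region where f is closer to 0.5.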
An example of the angular asymmetry, for data taken at a centre-of-mass energy of 8 TeV, is shown in figure 1 as measurement points compared with a (shaded) Standard Model prediction. The effective electroweak mixing angle is found by comparing this asymmetry distribution with a series of Standard Model templates, corresponding to a range of values of the angle, and choosing the one that best matches the data. The analysis is performed on both the 7 and 8 TeV data sets, and the results are combined. The corresponding value of sin²θWeff is determined to be 0.23142±0.00073 (stat.)±0.00052 (sys.)±0.00056 (theory).
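The template-comparison step can be sketched as a simple χ² scan. Everything numerical below (mass binning, template shape, sensitivity, uncertainties) is invented for the example; it illustrates the technique, not LHCb's actual templates.

```python
import numpy as np

# Chi-square template scan: choose the value of sin^2(theta_W) whose
# predicted asymmetry distribution best matches the "measured" one.
# All numbers here are toy values for illustration only.

s_true = 0.2314                          # value used to generate pseudo-data
s_grid = np.linspace(0.229, 0.234, 501)  # scan range of sin^2(theta_W)

mass_bins = np.linspace(70.0, 110.0, 9)  # dimuon mass bin centres (GeV), toy
slope = 0.05 * (mass_bins - 91.2)        # toy mass dependence of A_FB
sensitivity = -2.0                       # toy dA_FB / d sin^2(theta_W)

def template(s):
    """Toy A_FB prediction in each mass bin for a given sin^2(theta_W)."""
    return 0.01 * slope + sensitivity * (s - 0.2315)

rng = np.random.default_rng(1)
sigma = 0.002  # toy per-bin measurement uncertainty
data = template(s_true) + rng.normal(0.0, sigma, size=mass_bins.size)

chi2 = np.array([np.sum(((data - template(s)) / sigma) ** 2) for s in s_grid])
s_best = s_grid[np.argmin(chi2)]
print(s_best)  # close to s_true, within the statistical precision
```

In the real analysis the templates come from full Standard Model event generation and detector simulation, but the "scan and pick the minimum" logic is the same.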
The value is one of the most precise measurements obtained at a hadron collider. Its accuracy is currently limited by statistics, and does not yet allow a final word to be said on previous results from LEP, SLD, the Tevatron and the LHC. In LHC Run 2 and beyond, there is scope not just to increase the number of events that can be analysed, but also for improved parton-density functions (which dominate the theoretical error) to become available. The measurement should improve much further.
A new radiolabelled molecule obtained by the association of a 177Lu isotope and a somatostatin-analogue peptide is showing potential as a cancer killer for certain types of tumour. It is being developed by Advanced Accelerator Applications (AAA), a radiopharmaceutical company that was set up in 2002 by Stefano Buono, a former CERN scientist. With its roots in the nuclear-physics expertise acquired at CERN, AAA started its commercial activity with the production of radiotracers for medical imaging. The successful commercial activity made it possible for AAA to invest in nuclear research to produce innovative radiopharmaceuticals.
177Lu emits both β particles, which can kill cancerous cells, and γ rays, which are useful for SPECT (Single-Photon Emission Computed Tomography) imaging. Advanced neuroendocrine tumours can be inoperable, and for many patients there are no therapeutic options. However, about 80% of all neuroendocrine tumours overexpress somatostatin receptors, and the radiolabelled molecule is able to target those receptors selectively. The new radiopharmaceutical acts by releasing its high-energy electrons after internalisation in the tumour cells through the receptors. The tumour cells are destroyed by the radiation, and the drug is rapidly cleared from the body via urine. A complete treatment consists of only four injections, one every six to eight weeks.
The radiolabelled molecule is currently being used for the treatment of all neuroendocrine tumours on a compassionate-use and named-patient basis in 10 European countries, and approval is being sought in both the EU and the US. A phase-III clinical trial (the NETTER-1 clinical study), conducted in 51 clinical centres in the US and Europe, is testing the product in patients with inoperable, progressive, somatostatin-receptor-positive, mid-gut neuroendocrine tumours. The results of this trial were presented on 27 September in a prestigious Presidential Session at the European Cancer Congress in Vienna, Austria. The NETTER-1 trial demonstrated a statistically significant and clinically meaningful increase in progression-free survival in patients treated with the radiolabelled molecule, compared with patients treated under the current standard of care. The median progression-free survival (PFS) was not reached during the trial in the Lutathera arm, and was 8.4 months in the comparator group (p < 0.0001, hazard ratio: 0.21).
Another labelling radionuclide, the positron emitter 68Ga, is a good candidate for the production of a novel radiotracer to be used, with PET (positron emission tomography), in the precise diagnosis and follow-up of the same family of diseases.
Nearly 400 days of data taken by the XENON collaboration were used to look for the telltale signature of dark matter, an event rate that varies periodically over the course of a year.
The null result of this search – the first of its kind using a liquid-xenon detector – strongly challenges dark-matter interpretations of the annual modulation observed by the DAMA/LIBRA experiments. Both subterranean experiments are operated at the Laboratori Nazionali del Gran Sasso (LNGS).
An annually varying flux of dark matter through the Earth is expected due to the Earth’s orbital motion around the Sun, which results in a change of relative velocity between the Earth and the dark-matter halo thought to encompass the Milky Way. The observation of such an annual modulation is considered to be a crucial aspect of the direct detection of dark matter.
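The expected signature can be written as a small cosine modulation on top of a constant rate, peaking in early June when the Earth's orbital velocity adds most constructively to the Sun's motion through the halo. The rate values below are placeholders, not experimental numbers; only the one-year period and the approximate phase follow from the standard halo picture.

```python
import numpy as np

# Standard-halo expectation for an annually modulated event rate:
#   R(t) = R0 + Rm * cos(2*pi*(t - t0) / T),
# with T = 1 year and the maximum near t0 ~ June 2 (day ~152.5),
# when the Earth's orbital velocity adds most constructively to the
# Sun's motion through the dark-matter halo.
# R0 and Rm are placeholder values, not experimental numbers.

def rate(t_days, r0=1.0, rm=0.02, t0=152.5, period=365.25):
    """Modulated rate (arbitrary units) as a function of time in days."""
    return r0 + rm * np.cos(2.0 * np.pi * (t_days - t0) / period)

print(rate(152.5))                     # maximum of the modulation
print(rate(152.5 + 365.25 / 2.0))      # minimum, half a year later
```

A modulation search then fits data like XENON100's ~400 days of exposure for the amplitude Rm, period and phase, and checks whether a nonzero Rm is preferred.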
The DAMA/LIBRA experiments have observed an annual modulation of the residual rate in their sodium-iodide detectors since 1998. However, previous null results from several experiments searching for dark-matter-induced nuclear recoils, including XENON100, have challenged such an interpretation of the DAMA/LIBRA signal.
An alternative explanation, that the DAMA/LIBRA signal is instead due to dark-matter interactions with electrons, is challenged strongly by the new results from XENON100. In studies recently published in Science and Physical Review Letters, three models that predict dark-matter interactions with electrons were considered. The very low rate of electronic recoils in XENON100 allowed these models to be ruled out with high probability.
The studies highlight the overall stability and low background of XENON100, a landmark performance for this type of technology. Liquid-xenon detectors continue to lead the field of direct dark-matter detection in terms of their sensitivity to these rare processes. The commissioning of the next-generation XENON experiment at the underground site at LNGS is nearing completion. The detector, XENON1T, is expected to be 100 times more sensitive than its predecessor, and will hopefully shed more light on the elusive nature of dark matter.
In July, the Borexino collaboration reported a geoneutrino signal from the Earth’s mantle at 98% C.L. Geoneutrinos are electron antineutrinos produced by β decays in the 238U and 232Th chains, and of 40K. These isotopes are naturally present in the interior of the Earth and have lifetimes comparable to the age of the planet. Their radioactive decays contribute significantly to the heat released by the planet. Therefore, the detection of these antineutrinos can give geophysicists key information about the relative distribution of the various components in specific layers of the Earth’s interior (crust and mantle).
In Borexino, geoneutrinos are detected in the 278 tonnes of ultra-pure organic liquid scintillator via the inverse β-decay process, νe + p → e+ + n, with a threshold in the neutrino energy of 1.806 MeV. Data reported in the recent publication were collected between 15 December 2007 and 8 March 2015 for a total of 2055.9 days before any selection cut. In this data set, the total geoneutrino signal (from the crust and mantle) has been measured for the first time at more than 5σ.
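The 1.806 MeV threshold follows from the kinematics of inverse β decay on a proton at rest, and can be checked with a back-of-the-envelope calculation using standard (PDG, rounded) particle masses:

```python
# Kinematic threshold of inverse beta decay, nu_e-bar + p -> e+ + n,
# for a proton at rest:
#   E_thr = ((m_n + m_e)^2 - m_p^2) / (2 * m_p).
# Masses in MeV (PDG values, rounded).

m_p = 938.272   # proton
m_n = 939.565   # neutron
m_e = 0.511     # positron

e_thr = ((m_n + m_e) ** 2 - m_p ** 2) / (2.0 * m_p)
print(round(e_thr, 3))  # ~1.806 MeV, as quoted in the text
```

Antineutrinos below this energy, including all of those from 40K decay, are therefore invisible to the inverse-β-decay channel.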
The signal is disentangled from background by applying selection cuts based on the properties of the interaction process. The combined efficiency of the cuts, determined by Monte Carlo techniques, is estimated to be (84.2±1.5)%. A total of 77 antineutrino candidates survived the cuts; they include signals from the Earth and background events. The latter are mainly antineutrinos coming from nuclear reactors; their contribution, corresponding to some 53 events, has been calculated from data provided by the International Atomic Energy Agency. From previous studies, the contribution from the crust is estimated to be (23.4±2.8) terrestrial neutrino units (TNU), corresponding to 13 events. To estimate the significance of a positive signal from the mantle, the collaboration has determined the likelihood of Sgeo(mantle) = Sgeo – Sgeo(crust) using the experimental likelihood profile of Sgeo and a Gaussian approximation for the crust contribution. This approach gives a signal from the mantle equal to Sgeo(mantle) = 20.9 +15.1/–10.3 TNU (corresponding to 11 events), with the null hypothesis rejected at 98% C.L.
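The event bookkeeping above can be checked with simple arithmetic, using only the numbers quoted in this article:

```python
# Borexino geoneutrino bookkeeping, using the numbers quoted in the text.
total_candidates = 77  # antineutrino candidates passing all cuts
reactor_events = 53    # expected reactor-antineutrino background
crust_events = 13      # expected crust geoneutrino contribution

mantle_events = total_candidates - reactor_events - crust_events
print(mantle_events)  # consistent with the ~11 mantle events quoted
```

The full analysis of course propagates the uncertainties on each component through the likelihood rather than subtracting central values, but the counts line up.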
Although limited by the detection volume and the exposure time, the Borexino researchers could also perform spectroscopy studies (figure 1) that show how their detection technique allows separation of the contributions from uranium (the dark-blue area) and thorium (the light-blue area).
After a spectacular launch from the Tanegashima Space Center on 19 August on board the Japanese H2-B rocket operated by the Japan Aerospace Exploration Agency (JAXA), the CALorimetric Electron Telescope (CALET) docked on the International Space Station on 24 August (EDT). From its privileged position at 400 km altitude, CALET will perform long-duration observations of high-energy charged particles and photons coming from space.
CALET is a space mission led by JAXA, with the participation of the Italian Space Agency and NASA. It is a CERN-recognised experiment and the second high-energy astroparticle experiment installed on the International Space Station (ISS) after AMS-02, which has been taking data since 2011. After berthing with the ISS, CALET was extracted by a robotic arm from the Japanese H-II transfer vehicle and installed on the external platform JEM-EF of the Japanese module. The instrument is now completing its check-out phase. Dedicated calibration runs will precede the start of the science data-taking period, which is expected to continue for several years.
CALET is a space observatory designed to identify electrons, nuclei and γ rays coming from space, and to measure their energies. A high-resolution measurement of the energy is provided by a deep, homogeneous calorimeter preceded by a high-granularity pre-shower calorimeter with imaging capabilities. To ensure very accurate calibration of the calorimetric instruments, the CALET collaboration has carried out several calibration tests at CERN, the most recent one in February 2015.
CALET’s science programme includes measurement of the detailed shape of the electron spectrum above 1 TeV. High-energy electrons are expected to originate less than a few thousand light-years from Earth, because they are known to lose energy quickly when travelling in space. Their detection might be able to reveal the presence of nearby astronomical source(s) where electrons are accelerated. The high end of the spectrum will be particularly interesting to scientists because it will help to resolve the interpretation of the electron and positron spectra reported by AMS-02, and could provide a clue to possible signatures of dark matter.
Thanks to its excellent energy resolution and ability to identify cosmic nuclei from hydrogen to beyond iron, CALET will also be able to study the hadronic component of cosmic rays. The collaboration will investigate the deviation from a pure power law that has been observed recently in the energy spectra of light nuclei, extending the present data to higher energies and measuring accurately the curvature of the spectrum as a function of energy. CALET will also measure the abundance ratio of secondary to primary nuclei, an important ingredient to understand cosmic-ray propagation in the Galaxy.
The Daya Bay Reactor Neutrino Experiment has recently published a new measurement of the disappearance of electron antineutrinos emitted by nuclear reactors. The observation improves the precision of the mixing angle θ13 and the associated mass-squared difference |Δm²ee| by almost a factor of two.
This is the first measurement obtained with the completed Daya Bay detector configuration consisting of eight modular antineutrino detectors, providing a total target mass of 160 tonnes. The gadolinium-doped organic liquid scintillator detects electron antineutrinos via inverse beta decay (νe + p → e+ + n). Oscillation converts some of the νe to νμ and ντ, reducing the νe flux. Six commercial pressurised-water nuclear reactors (17.4 GW of thermal power in total) of the Daya Bay Nuclear Power Complex are an intense source, producing about 1021 electron antineutrinos per second. Four detectors located around 300 to 500 m from the reactors measure the initial νe rate from the reactors, while four detectors at around 1.6 km from the reactors observe the subsequent disappearance.
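The disappearance measured at the far detectors is governed by the standard three-flavour survival probability, written here in the effective two-parameter form used in reactor analyses:

```latex
P(\bar\nu_e \to \bar\nu_e) \simeq 1
  - \sin^2 2\theta_{13}\,\sin^2\!\left(\frac{\Delta m^2_{ee}\,L}{4E}\right)
  - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\!\left(\frac{\Delta m^2_{21}\,L}{4E}\right)
```

At Daya Bay's ~1.6 km baseline and few-MeV antineutrino energies, the first oscillating term dominates: its amplitude gives θ13 and its energy dependence gives |Δm²ee|.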
This result builds on previous measurements by the Daya Bay and RENO experiments, which provided the first proof that θ13 is nonzero. The improved statistical precision came from a 3.6-fold increase in exposure, yielding a data sample of 1.2 million νe interactions. The systematic uncertainties were also reduced, through improved characterisation of the detectors and reduction of background.
The analysis found sin²2θ13 = 0.084±0.005 from the amplitude of the νe disappearance, while the energy dependence of this disappearance provided a measurement of the oscillation frequency, expressed in terms of the effective mass-squared difference |Δm²ee| = (2.42±0.11) × 10⁻³ eV² (see figure 1). This is related to the two almost-equal neutrino mass-squared differences, |Δm²32| and |Δm²31| = |Δm²32 + Δm²21|. One measure of how far neutrino physics has progressed is that the interpretation of this mixing parameter is now a step closer to being sensitive to the neutrino mass hierarchy. If the mass hierarchy is normal, then |Δm²32| = (2.37±0.11) × 10⁻³ eV², while if it is inverted, |Δm²32| = (2.47±0.11) × 10⁻³ eV².
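The two hierarchy-dependent values can be reproduced from the effective splitting with a short calculation. The flavour-averaged relation Δm²ee ≈ cos²θ12 Δm²31 + sin²θ12 Δm²32 and the solar-sector values used below are assumptions of this sketch (standard values, but not quoted in the article):

```python
# Reproduce |Delta m^2_32| under the two hierarchies from |Delta m^2_ee|,
# using Dm_ee ~ cos^2(th12) * Dm31 + sin^2(th12) * Dm32 and
# |Dm31| = |Dm32| +/- Dm21 (+ for normal, - for inverted hierarchy).
# Solar parameters are assumed values: Dm21 = 7.5e-5 eV^2, sin^2(th12) = 0.31.

dm_ee = 2.42e-3         # measured |Delta m^2_ee| in eV^2
dm_21 = 7.5e-5          # solar mass splitting (assumed)
cos2_th12 = 1.0 - 0.31  # cos^2(theta_12), from assumed sin^2(theta_12)

dm32_normal = dm_ee - cos2_th12 * dm_21    # normal hierarchy
dm32_inverted = dm_ee + cos2_th12 * dm_21  # inverted hierarchy

# In units of 1e-3 eV^2, these match the quoted 2.37 and 2.47.
print(round(dm32_normal * 1e3, 2), round(dm32_inverted * 1e3, 2))
```

The ±0.05 × 10⁻³ eV² shift between the two hypotheses is exactly cos²θ12 Δm²21, which is why improving the precision on |Δm²ee| edges reactor experiments towards hierarchy sensitivity.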
The Daya Bay Reactor Neutrino Experiment continues to collect data, and aims at achieving a further factor-of-two improvement in precision by 2017.
Earlier this year, astronomers discovered what appeared to be a pair of supermassive black holes (SMBHs) circling towards a collision, which would send out a burst of gravitational waves. A new study of the periodic signal from the quasar PG 1302-102 seems to confirm this interpretation by showing that it could naturally arise due to relativistic Doppler boosting.
Black-hole binaries are expected to be common in large elliptical galaxies, because these most likely form by the merger of spiral galaxies, each hosting a central SMBH. A way to find binary systems in quasars is to search for a periodic signal repeating over several years. This is quite challenging, owing to the erratic variability of these distant active galactic nuclei. Until recently, only one rather peculiar object, called OJ287, was clearly identified as a double black-hole system, with a smaller black hole plunging twice through the extended accretion disc of the primary black hole along its inclined, eccentric, 12-year-long orbit.
In January, a team led by Matthew Graham, a computational astronomer at the California Institute of Technology, designed an algorithm to detect sinusoidal intensity variations from 247,000 quasars monitored by optical telescopes in Arizona and Australia. Of the 20 pairs of black-hole candidates discovered, they focused on the most compelling bright quasar – PG 1302-102. They showed that PG 1302-102 appeared to brighten by 14 per cent every five years, suggesting the pair was less than a tenth of a light-year apart.
In a new study, also published in Nature, Daniel D’Orazio and his group at Columbia University interpret the sinusoidal modulation as due to relativistic Doppler boosting. They find that the signal is consistent with a model in which most of the optical emission comes from a smaller black hole orbiting a heavier one at nearly a tenth of the speed of light. At that speed – via the relativistic Doppler beaming effect – the smaller black hole would appear to brighten slightly as it approaches the Earth and fade as it moves away on its orbit. They note that hydrodynamical simulations do, indeed, suggest that the smaller black hole should be the brighter one.
According to this new interpretation, the quasi-sinusoidal signal observed in the optical emission of the quasar should also be seen in the ultraviolet (UV). Analysing archival UV observations collected by NASA’s Hubble and GALEX space telescopes, D’Orazio and colleagues found the same period with a 2–3 times larger amplitude. The stronger signal matches the model expectation, once the difference in spectral slope between the optical and the UV is taken into account.
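The wavelength dependence of the boost can be illustrated with a first-order estimate. For a power-law spectrum Fν ∝ ν^α, Doppler boosting modulates the observed flux by roughly (3 − α)(v∥/c); the UV-to-optical amplitude ratio then depends only on the two spectral slopes. The slope and velocity values below are illustrative placeholders, not the measured PG 1302-102 numbers:

```python
# Relativistic Doppler-boost modulation of a power-law flux F_nu ~ nu^alpha:
# to first order in v/c, the fractional modulation is
#   dF/F ~ (3 - alpha) * (v_parallel / c).
# The UV-to-optical amplitude ratio therefore depends only on the slopes.
# All values below are illustrative, not measurements.

def modulation_amplitude(alpha, beta_parallel):
    """First-order fractional flux modulation from Doppler boosting."""
    return (3.0 - alpha) * beta_parallel

alpha_optical = 1.1  # illustrative optical spectral slope
alpha_uv = -1.0      # illustrative (steeper) UV spectral slope
beta = 0.07          # illustrative line-of-sight orbital speed, ~7% of c

ratio = (modulation_amplitude(alpha_uv, beta)
         / modulation_amplitude(alpha_optical, beta))
print(round(ratio, 2))  # an amplitude ratio of about 2, in the quoted 2-3 range
```

Because the velocity cancels in the ratio, a steeper UV spectrum alone predicts a larger UV modulation, which is the consistency check D'Orazio and colleagues exploited.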
By estimating the combined and relative mass of the black holes in PG 1302-102, they narrow down the predicted time until the black holes coalesce to between 20,000 and 350,000 years from now, with a best estimate of 100,000 years – a very long time for humans but not in the life of stars and black holes. If confirmed by more observations in the years to come, this discovery and that of other binary black-hole candidates will improve the chances of witnessing a merger and the gravitational waves predicted, but not yet detected, by the theory of general relativity laid down by Einstein 100 years ago.
There are now quite a few discrepancies, or “tensions”, between laboratory experiments and the predictions of the Standard Model (SM) of particle physics. All of them are of the 2–3σ variety, exactly the kind that physicists learn not to take seriously early on. But many have shown up in a series of related measurements, and this is what has attracted physicists’ attention.
In this article, I will concentrate on two sets of discrepancies, both associated with data taken at √s = 7 and 8 TeV in LHC’s Run 1:
1. Using 3 fb–1 of data, LHCb has reported discrepancies with more or less precise SM predictions, all relating to the rare semileptonic transitions b → sl+l−, particularly with l = μ. If real, they would imply the presence of new lepton non-universal (LNU) interactions at an energy scale ΛLNU ≳ 1 TeV, well above the scale of electroweak symmetry breaking. Especially enticing, such effects would suggest lepton flavour violation (LFV) at rates much larger than expected in the Standard Model.
2. Using 20 fb–1 of data, ATLAS and CMS have reported 2–3σ excesses near 2 TeV in the invariant mass of dibosons VV = WW, WZ, ZZ and VH = WH, ZH, where H is the 125 GeV Higgs boson discovered in Run 1. To complicate matters, there is also a ~3σ excess near 2 TeV in a CMS search for a right-handed-coupling WR decaying to l+l−jet jet (for l = e, but not μ), and a 2.3σ excess near Mjj = 1.9 TeV in dijet production. (Stop! I hear you say, and I can’t blame you!)
If either set of discrepancies were to be confirmed in Run 2, the Standard Model would crack wide open, with new particles and their new interactions providing high-energy experimentalists and theorists with many years of exciting exploration and discovery. If both should be confirmed, Katy bar the door!
But first, I want to tip my hat to one of the longest-standing of all such SM discrepancies: the 2004 measurement of g−2 for the muon is 2.2–2.7σ higher than calculated. For a long time, this has been down-played by many, including me. After all, who pays attention to 2.5σ? (Answer: more than 1000 citations!) But now other things are showing up and, for LHCb, muons seem to be implicated. Maybe there’s something there. We should know in a few years. The new muon g-2 experiment, E989 at Fermilab, is expected to have first results in 2017–2018.
b → sµ+µ– at LHCb
Features of LHCb’s measurements of B-meson decays involving b → sl+l− transitions hint consistently at a departure from the SM:
1. The measured ratio, RK, of branching ratios of B+ → K+μ+μ− to B+ → K+e+e− is 25% lower than the SM prediction, a 2.6σ departure.
2. In an independent measurement, the branching ratio of B+ → K+μ+μ− is 30% lower than the SM prediction, a 2σ deficit. This suggests that the discrepancy lies in the muons rather than the electrons. LHCb’s muon measurement is more robust than its electron one, and all indications on the electron mode, including earlier results from Belle and BaBar, are that B → K(*)e+e− is consistent with the SM.
3. The quantity P’5 in B0 → K*0μ+μ− angular distributions exhibits a 2.9σ discrepancy in each of two bins. The size of the theoretical error is being questioned, however.
4. CMS and LHCb jointly measured the branching ratio of Bs → μ+μ−. The result is consistent with the SM prediction but, interestingly, its central value is also 25% lower (at 1σ).
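As a rough cross-check of the first item, the quoted 2.6σ can be reproduced from the published central value and uncertainties. The numbers below are taken from LHCb's 2014 RK result (0.745, with statistical errors +0.090/−0.074 and a systematic of 0.036), not from the text itself; the symmetrised combination is a simplification:

```python
# Rough significance of the R_K deficit relative to the SM expectation
# R_K ~ 1.0. Central value and uncertainties are assumed here from
# LHCb's 2014 measurement; for a deficit, the upward statistical error
# (+0.090) is the relevant one, combined in quadrature with the systematic.
import math

r_k_measured = 0.745
r_k_sm = 1.0
stat_up = 0.090
syst = 0.036

sigma = math.hypot(stat_up, syst)          # quadrature sum, ~0.097
n_sigma = (r_k_sm - r_k_measured) / sigma  # deficit in units of sigma
print(round(n_sigma, 1))  # ~2.6, as quoted
```

This simple Gaussian treatment ignores the asymmetry of the errors, but lands on the published significance.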
The RK and other measurements suggest lepton non-universality in b → sl+l− transitions, and with a strength not very different from that of these rare SM processes. This prospect has inspired an avalanche of theoretical proposals of new LNU physics above the electroweak scale, all involving the exchange of multi-TeV particles such as leptoquarks or Z’ bosons.
As a very exciting consequence, LNU interactions at high energy are, in general, accompanied by lepton flavour-violating interactions, unless the leptons involved are chosen to be mass eigenstates. But, as we know from the mismatch between the gauge and mass eigenstates of quarks in the charged weak-interaction currents, there is no reason to make such a choice. Further, that choice makes no sense at ΛLNU, far above the electroweak scale where those masses are generated. Therefore, if the LHCb anomalies were to be confirmed in Run 2, LFV decays such as B → K(*)μe/μτ and Bs → μe/μτ should occur at rates much larger than expected in the SM. (Note that LNU and LFV processes do occur in the SM but, being due to neutrino-mass differences, they are tiny.)
LHCb is searching for b → sμe and sμτ in Run 1 data, and will continue in Run 2 with much more data. The μe modes are easier targets experimentally than μτ. However, the simplest hypothesis for LNU is that it occurs in the third-generation gauge eigenstates, e.g., a b’b’τ’τ’ interaction. Then, through the usual mass-matrix diagonalisation, the lighter generations get involved, with LFV processes suppressed by mixing matrix elements that are analogous to the familiar CKM elements. In this case, b → sμτ likely will be the largest source of LFV in B-meson decays.
A final note: there are slight hints of the LFV decay H → μτ. CMS and ATLAS have reported small branching ratios that amount to 2.4σ and 1.2σ, respectively. These are tantalizing, and certainly will be clarified in Run 2.
Diboson excesses at ATLAS and CMS
I will limit this discussion to diboson, VV and VH, excesses near 2 TeV, even though the WR → l+l−jet jet and dijet excesses are of similar size and should not be forgotten. ATLAS and CMS measured high-invariant-mass VV (V = W, Z) in non-leptonic events in which both highly boosted V bosons decay into qq’ (also called “fat” V-jets) and semi-leptonic events in which one V decays into l±ν or l+l−. In the ATLAS non-leptonic data, a highly boosted V-jet is called a W (Z) if its mass MV is within 13 GeV of 82.4 (92.8) GeV. In its semi-leptonic data, V = W or Z if 65 < MV < 105 GeV. In the non-leptonic events, ATLAS observed excesses in all three invariant-mass “pots”, MWW, MWZ and MZZ, although there may be as much as 30% overlap between neighbouring pots. Each of the three excesses amounts to 5–10 events. The largest excess is in MWZ. It is centred at 2 TeV, with a 3.4σ local, 2.5σ global significance. ATLAS’s WZ data and exclusion plot are in figure 1. The WZ excess has been estimated to correspond to a cross-section times branching ratio of about 10 fb. ATLAS observed no excesses near 2 TeV in its semileptonic data. Given the low statistics of the non-leptonic excesses, this is not yet an inconsistency.
In its non-leptonic data, CMS defined a V-jet to be a W or Z candidate if its mass is between 70 and 100 GeV. The exclusion plot for this data shows a ~1.5σ excess over the expected limit near MVV = 1.9 TeV. In the semi-leptonic data, the V-jet is called a W if 65 < MV < 105 GeV or a Z if 70 < MV < 110 GeV – a quite substantial overlap. There is a 2σ excess over the expected background near 1.8 TeV in the l+l− V-jet but less than 1σ in the l±ν V-jet. When the semi-leptonic and non-leptonic data are combined, there is still a 1.5–2σ excess near 1.8 TeV. The CMS exclusion plots are in figure 2.
ATLAS and CMS also searched for resonant structure in VH production. ATLAS looked in the channels lν/l+l−/νν +bb with one and two b-tags. Exclusion plots up to 1.9 TeV show no deviation greater than 1σ from the expected background. CMS looked in non-leptonic and semi-leptonic channels. The observed non-leptonic exclusion curves look like a sine wave of amplitude 1σ on the expected falling background with, as luck would have it, a maximum at 1.7 TeV and a minimum at 2 TeV. On the other hand, a search for WH → lνbb has a 2σ excess centred at 1.9 TeV in the electron, but not the muon, data.
Many will look at these 2–3σ effects and say they are to be expected when there is so much data and so many analyses; indeed, something would be wrong if there were not. Others, including many theorists, will point to the number, proximity and variety of these fluctuations in both experiments at about the same mass, and say something is going on here. After all, physics beyond the SM and its Higgs boson has been expected for a long time and for good theoretical reasons.
It is no surprise, then, that a deluge of more than 60 papers has appeared since June, vying to explain the 2 TeV bumps. The two most popular explanations are (1) a new weakly coupled W’, Z’ triplet that mixes slightly with the familiar W, Z, and (2) a triplet of ρ-like vector bosons heralding new strong interactions associated with H being a composite Higgs boson. A typical motivation for the W’ scenario is the restoration of right–left symmetry in the weak interactions. The composite Higgs is a favourite of “naturalness theorists” trying to understand why H is so light. The new interactions of both scenarios have an “isospin” SU(2) symmetry. The new isotriplets X are produced at the femtobarn level, mainly in the Drell–Yan process of qq annihilation. Their main decay modes are X± → W±L ZL and X0 → W+L W–L, where VL is a longitudinally polarised weak boson. Generally, the W’, Z’ and the ρ (or its parity partner, an a1-like triplet) can also decay to WL, ZL plus H itself. It follows that the diboson excess attributed to ZZ would really have to be WZ and, possibly, WW. The W, Z-polarisation and the absence of real ZZ are important early tests of these models. (A possibility not considered in the composite Higgs papers, is the production of an f0-like I = 0 scalar, also at 2 TeV, which decays to W+LW–L and ZLZL.)
Although the most likely explanation of the 2 TeV bumps may well be statistics, we should have confirmation soon. The resonant cross-sections are five or more times larger at 13 TeV than at 8 TeV. Thus, the expected LHC running this year and next will produce as much diboson data as all of Run 1, or more.
What if both lepton flavour violation and the VV and VH bumps were to be discovered in Run 2? Both would suggest new interactions at or above a few TeV. Surely they would have to be related, but how? New weak interactions could be flavour non-universal (but, then, not right–left symmetric). New strong interactions of Higgs compositeness could easily be flavour non-universal. The possibilities seem endless. So do the prospects for discovery. Stay tuned!
Developed in the late 1990s, the OPERA detector design was based on a hybrid technology, using both real-time detectors and nuclear emulsions. The construction of the detector at the Gran Sasso underground laboratory in Italy started in 2003 and was completed in 2007 – a giant detector of around 4000 tonnes, with 2000 m3 volume and nine million photographic films, arranged in around 150,000 target units, the so-called bricks. The emulsion films in the bricks act as tracking devices with micrometric accuracy, and are interleaved with lead plates acting as neutrino targets. The longitudinal size of a brick is around 10 radiation lengths, allowing for the detection of electron showers and the momentum measurement through the detection of multiple Coulomb scattering. The experiment took data for five years, from June 2008 until December 2012, integrating 1.8 × 1020 protons on target.
The aim of the experiment was to perform the direct observation of the transition from muon to tau neutrinos in the neutrino beam from CERN. The distance from CERN to Gran Sasso and the SPS beam energy were well suited to tau-neutrino detection. In 1999, intense discussions took place between CERN management and Council delegations about the opportunity of building the CERN Neutrino to Gran Sasso (CNGS) beam facility and the way to fund it. The Italian National Institute for Nuclear Physics (INFN) was far-sighted in offering a sizable contribution. Many delegations supported the idea, and the CNGS beam was approved in December 1999. Commissioning was performed in 2006, when OPERA (at that time not yet fully equipped) detected the first muon-neutrino interactions.
With the CNGS programme, CERN was joining the global experimental effort to observe and study neutrino oscillations. The first experimental hints of neutrino oscillations were gathered from solar neutrinos in the 1970s. According to theory, neutrino oscillations originate from the fact that mass and weak-interaction eigenstates do not coincide and that neutrino masses are non-degenerate. Neutrino mixing and oscillations were introduced by Pontecorvo and by the Sakata group, assuming the existence of two sorts (flavours) of neutrinos. Neutrino oscillations with three flavours including CP and CPT violation were discussed by Cabibbo and by Bilenky and Pontecorvo, after the discovery of the tau lepton in 1975. The mixing of the three flavours of neutrinos can be described by the 3 × 3 Pontecorvo–Maki–Nakagawa–Sakata matrix with three angles – that have since been measured – and a CP-violating phase, which remains unknown at present. Two additional parameters (mass-squared differences) are needed to describe the oscillation probabilities.
Several experiments on solar, atmospheric, reactor and accelerator neutrinos have contributed to the understanding of neutrino oscillations. In the atmospheric sector, the strong deficit of muon neutrinos reported by the Super-Kamiokande experiment in 1998 was the first compelling observation of neutrino oscillations. Given that the deficit of muon neutrinos was not accompanied by an increase of electron neutrinos, the result was interpreted in terms of νμ → ντ oscillations, although in 1998 the tau neutrino had not yet been observed. The first direct evidence for tau neutrinos was announced by Fermilab’s DONuT experiment in 2000, with four reported events. In 2008, the DONuT collaboration presented its final results, reporting nine observed events and an expected background of 1.5. The Super-Kamiokande result was later confirmed by the K2K and MINOS experiments with terrestrial beams. However, for an unambiguous confirmation of three-flavour neutrino oscillations, the appearance of tau neutrinos in νμ → ντ oscillations was required.
OPERA comes into play
OPERA reported the observation of the first tau-neutrino candidate in 2010. The tau neutrino was detected through the production and decay of a τ– in one of the lead targets, in the channel τ– → ρ–ντ. A second candidate, in the τ– → π–π+π–ντ channel, was found in 2012, followed in 2013 by a candidate in the fully leptonic τ– → μ–ν̄μντ decay. A fourth event was found in 2014 in the τ– → h–ντ channel (where h– is a pion or a kaon), and a fifth was reported a few months ago in the same channel. Given the extremely low expected background of 0.25±0.05 events, the direct νμ → ντ transition has now been observed with the 5σ statistical significance conventionally required to establish a discovery, confirming the oscillation mechanism.
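The quoted significance can be sanity-checked with simple counting arithmetic. The sketch below (illustrative only; OPERA's published figure comes from a channel-by-channel likelihood that exploits the different signal-to-background ratios of the decay modes) computes the single-bin Poisson probability that the background alone fluctuates up to five or more events:

```python
import math
from statistics import NormalDist

def poisson_pvalue(n_obs: int, mu_b: float) -> float:
    """P(N >= n_obs) for a Poisson background expectation mu_b.
    Single-bin counting estimate only, not OPERA's full likelihood."""
    return 1.0 - sum(math.exp(-mu_b) * mu_b**k / math.factorial(k)
                     for k in range(n_obs))

p = poisson_pvalue(5, 0.25)          # 5 candidates, 0.25 expected background
z = NormalDist().inv_cdf(1.0 - p)    # one-sided Gaussian-equivalent significance

print(f"p-value ~ {p:.1e}, Z ~ {z:.1f} sigma")
```

Even this crude estimate gives a p-value of order 10–5, i.e. above 4σ; the full likelihood treatment brings the result past the conventional 5σ threshold.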
The extremely accurate detection technique of OPERA relies on the micrometric resolution of its nuclear emulsions, which can resolve both the neutrino-interaction point and the decay vertex of the tau lepton, a few hundred micrometres away. Tau-neutrino identification is first topological; kinematic cuts are then applied to suppress the residual background, giving a signal-to-noise ratio larger than 10. In general, the detection of tau neutrinos is extremely difficult because of two conflicting requirements: a large target mass and micrometric tracking accuracy. The concept of the OPERA detector was developed in the late 1990s with important contributions from Nagoya – the emulsion group led by Kimio Niwa – and from Naples, under the leadership of Paolo Strolin, who led the initial phase of the project.
The future of nuclear emulsions
Three years after the end of the CNGS programme, the OPERA collaboration – about 150 physicists from 26 research institutions in 11 countries – is finalising the analysis of the collected data. After the discovery of the appearance of tau neutrinos from the oscillation of muon neutrinos, the collaboration now plans to further exploit the capability of the emulsion detector to observe all of the three neutrino flavours at once. This unique feature will allow OPERA to constrain the oscillation matrix by measuring tau and electron appearance together with muon-neutrino disappearance.
An extensive development of fully automated optical microscopes for the scanning of nuclear-emulsion films was carried out along with the preparation and running of the OPERA experiment. These achievements pave the way for using the emulsion technologies in forthcoming experiments, including SHiP (Search for Hidden Particles), a new facility that was recently proposed to CERN. If approved, SHiP will not only search for hidden particles in the GeV mass range, but also study tau-neutrino physics and perform the first direct observation of tau antineutrinos. The tau-neutrino detector of the SHiP apparatus is designed to use nuclear emulsions similar to those used by OPERA. The detector will be able to identify all three neutrino flavours, while the study of muon-neutrino scattering with large statistics is expected to provide additional insights into the strange-quark content of the proton, through the measurement of neutrino-induced charmed hadron production.
Currently, the R&D work on emulsions continues mainly in Italy and Japan. Teams at Nagoya University have successfully produced emulsions with AgBr crystals of about 40 nm diameter – one order of magnitude smaller than those used in OPERA. In parallel, significant developments of fully automated optical-scanning systems, carried out in Italy and Japan with innovative analysis technologies, have overcome the intrinsic optical limit and achieved the unprecedented position resolution of 10 nm. Both achievements make it possible to use emulsions for the detection of sub-micrometric tracks, such as those left by nuclear recoils induced by dark-matter particles (Weakly Interacting Massive Particles, WIMPs). This paves the way for the first large-scale dark-matter experiment with directional information. The NEWS experiment (Nuclear Emulsions for WIMP Search) plans to carry out this search at the Gran Sasso underground laboratory.
Thanks to their extreme accuracy and capability of identifying particles, nuclear emulsions are also successfully employed in fields beyond particle physics. Exploiting the cosmic-ray muon radiography technique, sandwiches of OPERA-like emulsion films and passive materials were used to image the shallow-density structure beneath the Asama Volcano in Japan and, more recently, to image the crater structure of the Stromboli volcano in Italy. Detectors based on nuclear emulsions are also used in hadron therapy to characterize the carbon-ion beams and their secondary interactions in human tissues. The high detection accuracy provided by emulsions allows experts to better understand the secondary effects of radiation, and to monitor the released dose with the aim of optimizing the planning of medical treatments.
The Canfranc Underground Laboratory (LSC) in Spain is one of four European deep-underground laboratories, together with Gran Sasso (Italy), Modane (France) and Boulby (UK). The laboratory is located at Canfranc Estación, a small town in the Spanish Pyrenees about 1100 m above sea level. Canfranc is known for the railway tunnel inaugurated in 1928 to connect Spain and France. The huge station – 240 m long – was built on the Spanish side, and still stands as a reminder of the site's history, although railway operation ceased in 1970.
In 1985, Angel Morales and his collaborators from the University of Zaragoza started to use the abandoned underground space to carry out astroparticle-physics experiments. In the beginning, the group used two service cavities, currently called LAB780. In 1994, during the excavation of the 8 km-long road tunnel (Somport tunnel), an experimental hall of 118 m2 was built 2520 m away from the Spanish entrance. This hall, called LAB2500, was used to install a number of experiments carried out by several international collaborations. In 2006, two additional larger halls – hall A and hall B, collectively called LAB2400 – were completed and ready for use. The LSC was born.
Today, some 8400 m3 are available for experimental installations in Canfranc's main underground site (LAB2400), and a total volume of about 10,000 m3 over a surface area of 1600 m2 is available among the different underground structures. LAB2400 has about 850 m of rock overburden, with a residual cosmic-muon flux of about 4 × 10–3 m–2 s–1. The radiogenic neutron background (< 10 MeV) and the gamma-ray flux from natural radioactivity in the surrounding rock are of the order of 3.5 × 10–6 n/(cm2 s) and 2 γ/(cm2 s), respectively. The neutron flux is about 30 times less intense than at the surface. The radon level underground is kept at 50–80 Bq/m3 by a ventilation system with a fresh-air input of about 19,600 m3/h for hall A and 6300 m3/h for hall B. To reduce the natural radioactivity levels further, a new radon-filtering system and a radon detector with mBq/m3 sensitivity will be installed in hall A in 2016, for use by the experiments.
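To put the overburden numbers in perspective, a few lines of arithmetic compare the quoted underground muon flux with the usual textbook sea-level figure of about one muon per cm2 per minute (the sea-level value is an assumption introduced here, not from the text):

```python
# Rough scale of the cosmic-muon suppression at LAB2400 (illustrative only).
# The underground flux is from the text; the sea-level flux of
# ~1 muon/cm^2/min is the standard textbook figure, assumed here.
SURFACE_FLUX = 1.0 / (1e-4 * 60)   # 1 /cm^2/min -> ~167 muons per m^2 per second
UNDERGROUND_FLUX = 4e-3            # muons per m^2 per second at LAB2400

suppression = SURFACE_FLUX / UNDERGROUND_FLUX
muons_per_day = UNDERGROUND_FLUX * 86400   # through 1 m^2 of horizontal area

print(f"suppression factor ~ {suppression:.0f}")
print(f"muons per m^2 per day underground ~ {muons_per_day:.0f}")
```

A square metre of detector underground thus sees only a few hundred muons per day, roughly forty thousand times fewer than at the surface.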
The underground infrastructure also includes a clean room to support detector assembly and to maintain the high level of cleanliness required for the most important components. A low-background screening facility, equipped with seven high-purity germanium γ-spectrometers, is available to experiments that need to select components with low radioactivity for their detectors. The screening facility has recently been used by the SuperKGd collaboration to measure the radiopurity of gadolinium salts for the Super-Kamiokande gadolinium project.
A network of 18 optical fibres, each equipped with humidity and temperature sensors, is installed in the main halls to monitor the stability of the rock. The sensitivity of the measurement is at the micrometre level; so far, over a four-year period, changes of 0.02% have been measured over 10 m length scales.
The underground infrastructure is complemented by a modern 1800 m2 building on the surface, which houses offices, a chemistry and an electronics laboratory, a workshop and a warehouse. Currently, some 280 scientists from around the world use the laboratory’s facilities to carry out their research.
The scientific programme at the LSC focuses on searches for dark matter and neutrinoless double beta decay, but it also includes experiments on geodynamics and on life in extreme environments.
Neutrinoless double beta decay
Unlike the two-neutrino mode observed in a number of nuclear decays (ββ2ν, e.g. 136Xe → 136Ba + 2e– + 2ν̄e), the neutrinoless mode of double beta decay (ββ0ν, e.g. 136Xe → 136Ba + 2e–) is as yet unobserved. The experimental signature of neutrinoless double beta decay would be two electrons with a total energy equal to the energy released in the nuclear transition. Observing this phenomenon would demonstrate that the neutrino is its own antiparticle, and is one of the main challenges of the physics research carried out in underground laboratories. The NEXT experiment at the LSC searches for this signature in a high-pressure time projection chamber (TPC), using xenon enriched in 136Xe. The NEXT TPC is designed with a plane of photomultipliers behind the cathode and a plane of silicon photomultipliers behind the anode, which allow the collaboration to determine the energy and the topology of the event, respectively. In this way, background from natural radioactivity and from the environment can be efficiently rejected. In its final configuration, NEXT will use 100 kg of 136Xe at 15 bar pressure. A demonstrator of the TPC with 10 kg of Xe, named NEW, is currently being commissioned at the LSC.
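The connection between the decay rate and the neutrino's Majorana nature is usually written in the standard parametrisation below, where G0ν is a phase-space factor and M0ν a nuclear matrix element (textbook notation, not specific to NEXT):

```latex
\left(T_{1/2}^{0\nu}\right)^{-1}
  \;=\; G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,
        \frac{\langle m_{\beta\beta}\rangle^{2}}{m_e^{2}},
\qquad
\langle m_{\beta\beta}\rangle \;=\; \Bigl|\sum_i U_{ei}^{2}\,m_i\Bigr|
```

A measured half-life would therefore constrain the effective Majorana mass ⟨mββ⟩, a coherent combination of the neutrino masses weighted by elements of the mixing matrix.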
The Canfranc Laboratory also hosts R&D programmes in support of projects that will be carried out in other laboratories. An example is BiPo, a high-sensitivity facility that measures the radioactivity on thin foils for planar detectors. Currently, BiPo is performing measurements for the SuperNEMO project proposed at the Modane laboratory. SuperNEMO aims to make use of 100 kg of 82Se in thin foils to search for ββ0ν signatures. These foils must have very low contamination from other radioactive elements. In particular, the contamination must be less than 10 μBq/kg for 214Bi from the 238U decay chain, and less than 2 μBq/kg for 208Tl from the 232Th decay chain. These levels of radioactivity are too small to be measured with standard instruments. The BiPo experiment provides a technical solution to perform this very accurate measurement using a thin 82Se foil (40 mg/cm2) that is inserted between two detection modules equipped with scintillators and photomultipliers to tag 214Bi and 208Tl.
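The radiopurity targets can be translated into absolute decay rates to see why standard γ-spectrometers cannot reach them (illustrative arithmetic only, scaling the per-kilogram limits to SuperNEMO's full 100 kg of foils):

```python
# Convert SuperNEMO's radiopurity limits into absolute decay rates
# (illustrative arithmetic; the 100 kg total foil mass is from the text).
MICRO_BQ = 1e-6   # 1 uBq = 1e-6 decays per second

def decays_per_day(activity_ubq_per_kg: float, mass_kg: float) -> float:
    return activity_ubq_per_kg * MICRO_BQ * mass_kg * 86400

bi214 = decays_per_day(10, 100)   # 214Bi limit over the full 100 kg of foils
tl208 = decays_per_day(2, 100)    # 208Tl limit

print(f"214Bi: < {bi214:.0f} decays/day in 100 kg")
print(f"208Tl: < {tl208:.1f} decays/day in 100 kg")
```

Fewer than about a hundred 214Bi decays per day anywhere in 100 kg of material is far below the reach of conventional counting, hence the dedicated tagging approach of BiPo.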
Dark matter
The direct detection of dark matter is another typical research activity of underground laboratories. At the LSC, two projects are in operation for this purpose: ANAIS and ArDM. In its final configuration, ANAIS will be an array of 20 ultrapure NaI(Tl) crystals aiming to investigate the annual-modulation signature of dark-matter particles from the galactic halo. Each 12.5 kg crystal sits inside a high-purity electroformed-copper shield made at the LSC chemistry laboratory. Ten centimetres of Roman lead, plus further lead structures totalling 20 cm in thickness, are installed around the crystals, together with an active muon veto and passive neutron shielding. In 2016, the ANAIS detector will be in operation with a total of 112 kg of high-purity NaI(Tl) crystals.
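The annual-modulation signature that ANAIS targets has a standard expected form: as the Earth's velocity through the galactic halo varies over the year, the event rate in a given energy bin is expected to follow (textbook parametrisation, with Sm the modulation amplitude, not an ANAIS-specific result):

```latex
R(t) \;\simeq\; R_0 + S_m\,\cos\!\left(\frac{2\pi\,(t - t_0)}{T}\right),
\qquad T = 1~\text{yr},\quad t_0 \approx 2~\text{June}
```

The phase t0 corresponds to the date when the Earth's orbital velocity adds maximally to the Sun's motion through the halo.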
A different experimental approach is adopted by the ArDM detector, which uses two tonnes of liquid argon to search for WIMP interactions in a two-phase TPC. The TPC is viewed by two arrays of 12 PMTs and can operate in single phase (liquid only) or double phase (liquid and gas). The single-phase operation mode was successfully tested up to summer 2015, and the collaboration plans to start two-phase operation by the end of 2015.
In recent decades, the scientific community has shown growing interest in measuring the cross-sections of nuclear interactions taking place in stars. At the energies of interest (the average energies of particles in stellar interiors), the expected interaction rates are so small that the measurements can only be performed in underground laboratories, where background levels are strongly reduced. For this reason, a project has been proposed at the LSC: the Canfranc Underground Nuclear Astrophysics (CUNA) facility. CUNA would require a new experimental hall to host a linear accelerator and the detectors. A feasibility study has been carried out, and further developments are expected in the coming years.
Geodynamics
The geodynamic facility at the LSC aims to study both local and global geodynamic events. The installation consists of a broadband seismometer, an accelerometer and two laser strainmeters underground, plus two GPS stations on the surface in the surroundings of the underground laboratory. This facility allows seismic events to be studied over a wide spectrum, from seismic waves to tectonic deformations. The laser interferometer consists of two orthogonal 70 m-long strainmeters. Non-linear shallow-water tides have been observed with this set-up and compared with predictions, something made possible by the excellent signal-to-noise ratio for strain data at the LSC.
Life in extreme environments
In the 1990s, it became evident that life on Earth extends into the deep subsurface and extreme environments. Underground facilities can be an ideal laboratory for scientists specialising in astrobiology, environmental microbiology or other similar disciplines. The GOLLUM project proposed at the LSC aims to study micro-organisms inhabiting rocks underground. The project plans to sample the rock throughout the length of the railway tunnel and characterize microbial communities living at different depths (metagenomics) by DNA extraction.
Currently operating mainly in the field of dark matter and the search for rare decays, the LSC has the potential to grow as a multidisciplinary underground research infrastructure. Its large infrastructure equipped with specialized facilities allows the laboratory to host a variety of experimental projects. For example, the space previously used by the ROSEBUD experiment is now available to collaborations active in the field of direct dark-matter searches or exotic phenomena using scintillating bolometers or low-temperature detectors. A hut with exceptionally low acoustic and vibrational background, equipped with a 3 × 3 × 4.8 m3 Faraday cage, is available in hall B. This is a unique piece of equipment in an underground facility that, among other things, could be used to characterize new detectors for low-mass dark-matter particles. Moreover, some 100 m2 are currently unused in hall A. New ideas and proposals are welcome, and will be evaluated by the LSC International Scientific Committee.