The Daya Bay Reactor Neutrino Experiment has recently published a new measurement of the disappearance of electron antineutrinos emitted by nuclear reactors. The observation improves the precision of the mixing angle θ13 and the associated mass-squared difference |Δm²ee| by almost a factor of two.
This is the first measurement obtained with the completed Daya Bay detector configuration of eight modular antineutrino detectors, providing a total target mass of 160 tonnes. The gadolinium-doped organic liquid scintillator detects electron antineutrinos via inverse beta decay (ν̄e + p → e⁺ + n). Oscillation converts some of the ν̄e to ν̄μ and ν̄τ, reducing the ν̄e flux. The six commercial pressurised-water reactors of the Daya Bay Nuclear Power Complex (17.4 GW of thermal power in total) form an intense source, producing about 10²¹ electron antineutrinos per second. Four detectors located around 300 to 500 m from the reactors measure the initial ν̄e rate, while four detectors at around 1.6 km observe the subsequent disappearance.
This result builds on previous measurements by the Daya Bay and RENO experiments, which provided the first proof that θ13 is non-zero. The improved statistical precision came from a 3.6-fold increase in exposure, yielding a data sample of 1.2 million ν̄e interactions. The systematic uncertainties were also reduced, through improved characterisation of the detectors and reduction of background.
The analysis found sin²(2θ13) = 0.084±0.005 from the amplitude of the ν̄e disappearance, while the energy dependence of this disappearance provided a measurement of the oscillation frequency, expressed in terms of the effective mass-squared difference |Δm²ee| = (2.42±0.11) × 10⁻³ eV² (see figure 1). This effective quantity is related to the two almost-equal mass-squared differences |Δm²31| and |Δm²32|, where Δm²31 = Δm²32 + Δm²21. One measure of how far neutrino physics has progressed is that the interpretation of this parameter is now a step closer to being sensitive to the neutrino mass hierarchy. If the mass hierarchy is normal, then |Δm²32| = (2.37±0.11) × 10⁻³ eV², while if it is inverted, |Δm²32| = (2.47±0.11) × 10⁻³ eV².
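The quoted amplitude and frequency map directly onto the standard two-flavour survival probability, P = 1 − sin²(2θ13) sin²(1.267 Δm²ee L/E), with Δm²ee in eV², L in metres and E in MeV. Here is a minimal sketch of the near–far comparison; the baselines and the 4 MeV antineutrino energy are illustrative round numbers, not Daya Bay's exact configuration:

```python
import numpy as np

# Two-flavour survival probability for reactor anti-nu_e disappearance:
# P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm2 * L / E),
# with dm2 in eV^2, L in metres and E in MeV (equivalently L in km, E in GeV).
SIN2_2THETA13 = 0.084   # Daya Bay amplitude
DM2_EE = 2.42e-3        # |Δm²ee| in eV²

def survival_probability(L_m, E_MeV):
    """Fraction of anti-nu_e surviving after a baseline L (m) at energy E (MeV)."""
    phase = 1.267 * DM2_EE * L_m / E_MeV
    return 1.0 - SIN2_2THETA13 * np.sin(phase) ** 2

# Near-far comparison for a typical 4 MeV reactor antineutrino
for L in (400.0, 1600.0):   # rough near- and far-hall baselines
    print(f"L = {L:6.0f} m : P(survival) = {survival_probability(L, 4.0):.3f}")
```

Comparing the rates at the two baselines makes the measurement relative, cancelling much of the uncertainty on the absolute reactor flux.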
The Daya Bay Reactor Neutrino Experiment continues to collect data, and aims to achieve a further factor-of-two improvement in precision by 2017.
Earlier this year, astronomers discovered what appeared to be a pair of supermassive black holes (SMBHs) circling towards a collision, which would send out a burst of gravitational waves. A new study of the periodic signal from the quasar PG 1302-102 seems to confirm this interpretation by showing that it could naturally arise due to relativistic Doppler boosting.
Black-hole binaries are expected to be common in large elliptical galaxies, because these most likely form by the merger of spiral galaxies, each hosting a central SMBH. A way to find binary systems in quasars is to search for a periodic signal repeating over several years. This is quite challenging, owing to the erratic variability of these distant active galactic nuclei. Until recently, only one rather peculiar object, called OJ287, had been clearly identified as a double black-hole system, with a smaller black hole plunging twice through the extended accretion disc of the primary black hole along its inclined, eccentric 12-year-long orbit.
In January, a team led by Matthew Graham, a computational astronomer at the California Institute of Technology, applied an algorithm to detect sinusoidal intensity variations in the light curves of 247,000 quasars monitored by optical telescopes in Arizona and Australia. Of the 20 candidate black-hole pairs discovered, they focused on the most compelling: the bright quasar PG 1302-102. They showed that PG 1302-102 appeared to brighten by 14 per cent every five years, suggesting the pair is less than a tenth of a light-year apart.
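Graham's published pipeline is considerably more elaborate, but the core task – picking a period out of an irregularly sampled light curve – can be sketched with a standard Lomb–Scargle periodogram. The light curve below is synthetic and every number in it is illustrative:

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic quasar light curve: a ~5-year sinusoid plus noise,
# sampled irregularly, as in a ground-based survey.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 9.0, 250))      # observation times in years
flux = 1.0 + 0.07 * np.sin(2 * np.pi * t / 5.0) + 0.03 * rng.normal(size=t.size)

# Scan trial periods; scipy's Lomb-Scargle expects angular frequencies.
periods = np.linspace(2.0, 9.0, 2000)        # trial periods in years
power = lombscargle(t, flux - flux.mean(), 2 * np.pi / periods)

print(f"best-fit period: {periods[np.argmax(power)]:.2f} years")
```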
In a new study, also published in Nature, Daniel D’Orazio and his group at Columbia University interpret the sinusoidal modulation as the result of relativistic Doppler boosting. They find that the signal is consistent with a model in which most of the optical emission comes from a smaller black hole orbiting a heavier one at nearly a tenth of the speed of light. At that speed – via the relativistic Doppler-beaming effect – the smaller black hole appears to brighten slightly as it approaches the Earth and to fade as it moves away on its orbit. They note that hydrodynamical simulations do, indeed, suggest that the smaller black hole should be the brighter of the two.
According to this new interpretation, the quasi-sinusoidal signal seen in the optical emission of the quasar should also be present in the ultraviolet (UV). Analysing archival UV observations collected by NASA’s Hubble and GALEX space telescopes, D’Orazio and colleagues found the same period with a 2–3 times larger amplitude. The stronger signal matches the model’s expectations once the difference in spectral slope between the optical and the UV is taken into account.
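The beaming arithmetic behind that amplitude ratio is compact: the observed flux scales as D^(3−α), where D is the Doppler factor and α the spectral index (Fν ∝ ν^α), so a steeper UV spectrum amplifies the same orbital velocity into a larger modulation. In this sketch, the orbital speed, inclination and spectral slopes are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Doppler boosting of a source on a circular orbit: observed flux scales
# as D**(3 - alpha), with D = 1 / (gamma * (1 - beta_los)), where beta_los
# is the line-of-sight velocity in units of c and F_nu ∝ nu**alpha.
def boosted_flux(beta_orb, inclination, phase, alpha):
    beta_los = beta_orb * np.sin(inclination) * np.cos(phase)
    gamma = 1.0 / np.sqrt(1.0 - beta_orb ** 2)
    doppler = 1.0 / (gamma * (1.0 - beta_los))
    return doppler ** (3.0 - alpha)

phase = np.linspace(0.0, 2.0 * np.pi, 500)
beta, inc = 0.07, np.radians(60.0)                    # illustrative orbit
for band, alpha in [("optical", 1.1), ("UV", -2.0)]:  # illustrative slopes
    f = boosted_flux(beta, inc, phase, alpha)
    print(f"{band}: peak-to-trough modulation ≈ {(f.max() - f.min()) / f.mean():.1%}")
```

With these inputs the UV modulation comes out roughly 2.5 times the optical one, which is the sense of the archival-data check described above.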
By estimating the combined and relative masses of the black holes in PG 1302-102, they narrow down the predicted time until the black holes coalesce to between 20,000 and 350,000 years from now, with a best estimate of 100,000 years – an exceedingly long time on human scales, but brief in the lives of stars and black holes. If confirmed by more observations in the years to come, this discovery and that of other binary black-hole candidates will improve the chances of witnessing a merger and the gravitational waves predicted, but not yet detected, by the theory of general relativity laid down by Einstein 100 years ago.
There are now quite a few discrepancies, or “tensions”, between laboratory experiments and the predictions of the Standard Model (SM) of particle physics. All of them are of the 2–3σ variety, exactly the kind that physicists learn early on not to take seriously. But many have shown up in a series of related measurements, and this is what has attracted physicists’ attention.
In this article, I will concentrate on two sets of discrepancies, both associated with data taken at √s = 7 and 8 TeV in the LHC’s Run 1:
1. Using 3 fb⁻¹ of data, LHCb has reported discrepancies with more or less precise SM predictions, all relating to the rare semileptonic transitions b → sl⁺l⁻, particularly with l = μ. If real, they would imply the presence of new lepton non-universal (LNU) interactions at an energy scale ΛLNU ≳ 1 TeV, well above the scale of electroweak symmetry breaking. Especially enticing, such effects would suggest lepton flavour violation (LFV) at rates much larger than expected in the Standard Model.
2. Using 20 fb⁻¹ of data, ATLAS and CMS have reported 2–3σ excesses near 2 TeV in the invariant mass of dibosons VV = WW, WZ, ZZ and VH = WH, ZH, where H is the 125 GeV Higgs boson discovered in Run 1. To complicate matters, there is also a ~3σ excess near 2 TeV in a CMS search for a right-handed-coupling WR decaying to l⁺l⁻ jet jet (for l = e, but not μ), and a 2.3σ excess near Mjj = 1.9 TeV in dijet production. (Stop! I hear you say, and I can’t blame you!)
If either set of discrepancies were to be confirmed in Run 2, the Standard Model would crack wide open, with new particles and their new interactions providing high-energy experimentalists and theorists with many years of exciting exploration and discovery. If both should be confirmed, Katy bar the door!
But first, I want to tip my hat to one of the longest-standing of all such SM discrepancies: the 2004 measurement of g−2 for the muon, which is 2.2–2.7σ higher than calculated. For a long time, this has been downplayed by many, including me. After all, who pays attention to 2.5σ? (Answer: more than 1000 citations!) But now other things are showing up and, for LHCb, muons seem to be implicated. Maybe there’s something there. We should know in a few years: the new muon g−2 experiment, E989 at Fermilab, is expected to have first results in 2017–2018.
b → sμ⁺μ⁻ at LHCb
Features of LHCb’s measurements of B-meson decays involving b → sl⁺l⁻ transitions hint consistently at a departure from the SM:
1. The measured ratio, RK, of the branching ratios of B⁺ → K⁺μ⁺μ⁻ and B⁺ → K⁺e⁺e⁻ is 25% lower than the SM prediction, a 2.6σ departure.
2. In an independent measurement, the branching ratio of B⁺ → K⁺μ⁺μ⁻ is 30% lower than the SM prediction, a 2σ deficit. This suggests that the discrepancy lies in the muons rather than the electrons. LHCb’s measurement of the muon mode is more robust than that of the electron mode; even so, all indications on the electron mode, including earlier results from Belle and BaBar, are that B → K(*)e⁺e⁻ is consistent with the SM.
3. The quantity P′5 in B⁰ → K*⁰μ⁺μ⁻ angular distributions exhibits a 2.9σ discrepancy in each of two bins. The size of the theoretical error is being questioned, however.
4. CMS and LHCb jointly measured the branching ratio of Bs → μ⁺μ⁻. The result is consistent with the SM prediction but, interestingly, its central value is also 25% lower, a 1σ effect.
The RK and other measurements suggest lepton non-universality in b → sl⁺l⁻ transitions, with a strength not very different from that of these rare SM processes. This prospect has inspired an avalanche of theoretical proposals of new LNU physics above the electroweak scale, all involving the exchange of multi-TeV particles such as leptoquarks or Z′ bosons.
As a very exciting consequence, LNU interactions at high energy are, in general, accompanied by lepton flavour-violating interactions, unless the leptons involved are chosen to be mass eigenstates. But, as we know from the mismatch between the gauge and mass eigenstates of quarks in the charged weak-interaction currents, there is no reason to make such a choice. Further, that choice makes no sense at ΛLNU, far above the electroweak scale where those masses are generated. Therefore, if the LHCb anomalies were to be confirmed in Run 2, LFV decays such as B → K(*)μe/μτ and Bs → μe/μτ should occur at rates much larger than expected in the SM. (Note that LNU and LFV processes do occur in the SM but, being due to neutrino-mass differences, they are tiny.)
LHCb is searching for b → sμe and sμτ in Run 1 data, and will continue in Run 2 with much more data. The μe modes are easier targets experimentally than the μτ modes. However, the simplest hypothesis for LNU is that it occurs in the third-generation gauge eigenstates, e.g. through a b′b′τ′τ′ interaction. Then, through the usual mass-matrix diagonalisation, the lighter generations become involved, with LFV processes suppressed by mixing-matrix elements analogous to the familiar CKM elements. In this case, b → sμτ will likely be the largest source of LFV in B-meson decays.
A final note: there are slight hints of the LFV decay H → μτ. CMS and ATLAS have reported small excesses in this channel, with significances of 2.4σ and 1.2σ, respectively. These are tantalizing, and will certainly be clarified in Run 2.
Diboson excesses at ATLAS and CMS
I will limit this discussion to the diboson (VV and VH) excesses near 2 TeV, even though the WR → l⁺l⁻ jet jet and dijet excesses are of similar size and should not be forgotten. ATLAS and CMS measured high-invariant-mass VV (V = W, Z) production in non-leptonic events, in which both highly boosted V bosons decay into qq̄′ (giving so-called “fat” V-jets), and in semi-leptonic events, in which one V decays into l±ν or l⁺l⁻. In the ATLAS non-leptonic data, a highly boosted V-jet is called a W (Z) if its mass MV is within 13 GeV of 82.4 (92.8) GeV. In its semi-leptonic data, V = W or Z if 65 < MV < 105 GeV. In the non-leptonic events, ATLAS observed excesses in all three invariant-mass “pots”, MWW, MWZ and MZZ, although there may be as much as 30% overlap between neighbouring pots. Each of the three excesses amounts to 5–10 events. The largest excess is in MWZ: it is centred at 2 TeV, with a 3.4σ local (2.5σ global) significance. ATLAS’s WZ data and exclusion plot are shown in figure 1. The WZ excess has been estimated to correspond to a cross-section times branching ratio of about 10 fb. ATLAS observed no excesses near 2 TeV in its semi-leptonic data; given the low statistics of the non-leptonic excesses, this is not yet an inconsistency.
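As a concrete illustration of how those mass windows overlap, here is a toy version of the tagging just described. Real analyses also apply jet-substructure requirements, so this is a sketch of the bookkeeping, not the experiments' actual selection:

```python
# Toy version of the ATLAS non-leptonic boosted-V tagging: a fat jet is
# called a W (Z) if its mass lies within 13 GeV of 82.4 (92.8) GeV.
# The two windows overlap, so a single jet can carry both tags.
def v_jet_tags(jet_mass_gev):
    tags = []
    if abs(jet_mass_gev - 82.4) < 13.0:
        tags.append("W")
    if abs(jet_mass_gev - 92.8) < 13.0:
        tags.append("Z")
    return tags

for m in (75.0, 88.0, 101.0):
    print(f"jet mass {m:5.1f} GeV -> tagged as {v_jet_tags(m) or 'none'}")
```

A jet of 88 GeV passes both windows, which is exactly why neighbouring MWW, MWZ and MZZ “pots” can share events.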
In its non-leptonic data, CMS defined a V-jet to be a W or Z candidate if its mass is between 70 and 100 GeV. The exclusion plot for these data shows a ~1.5σ excess over the expected limit near MVV = 1.9 TeV. In the semi-leptonic data, the V-jet is called a W if 65 < MV < 105 GeV or a Z if 70 < MV < 110 GeV – a quite substantial overlap. There is a 2σ excess over the expected background near 1.8 TeV in the l⁺l⁻ + V-jet channel, but less than 1σ in the l±ν + V-jet channel. When the semi-leptonic and non-leptonic data are combined, there is still a 1.5–2σ excess near 1.8 TeV. The CMS exclusion plots are shown in figure 2.
ATLAS and CMS also searched for resonant structure in VH production. ATLAS looked in the channels lν/l⁺l⁻/νν + bb̄ with one and two b-tags. Exclusion plots up to 1.9 TeV show no deviation greater than 1σ from the expected background. CMS looked in non-leptonic and semi-leptonic channels. The observed non-leptonic exclusion curves look like a sine wave of amplitude 1σ on the expected falling background with, as luck would have it, a maximum at 1.7 TeV and a minimum at 2 TeV. On the other hand, a search for WH → lνbb̄ shows a 2σ excess centred at 1.9 TeV in the electron, but not the muon, data.
Many will look at these 2–3σ effects and say they are to be expected when there is so much data and so many analyses; indeed, something would be wrong if there were not. Others, including many theorists, will point to the number, proximity and variety of these fluctuations in both experiments at about the same mass, and say something is going on here. After all, physics beyond the SM and its Higgs boson has been expected for a long time and for good theoretical reasons.
It is no surprise, then, that a deluge of more than 60 papers has appeared since June, vying to explain the 2 TeV bumps. The two most popular explanations are (1) a new weakly coupled W′, Z′ triplet that mixes slightly with the familiar W and Z, and (2) a triplet of ρ-like vector bosons heralding new strong interactions associated with H being a composite Higgs boson. A typical motivation for the W′ scenario is the restoration of right–left symmetry in the weak interactions. The composite Higgs is a favourite of “naturalness theorists” trying to understand why H is so light. The new interactions of both scenarios have an “isospin” SU(2) symmetry. The new isotriplets X are produced at the femtobarn level, mainly in the Drell–Yan process of qq̄ annihilation. Their main decay modes are X± → W±L ZL and X⁰ → W⁺L W⁻L, where VL denotes a longitudinally polarised weak boson. Generally, the W′, Z′ and the ρ (or its parity partner, an a1-like triplet) can also decay to WL, ZL plus H itself. It follows that the diboson excess attributed to ZZ would really have to be WZ and, possibly, WW. The W, Z polarisation and the absence of real ZZ are important early tests of these models. (A possibility not considered in the composite-Higgs papers is the production of an f0-like I = 0 scalar, also at 2 TeV, which decays to W⁺LW⁻L and ZLZL.)
Although the most likely explanation of the 2 TeV bumps may well be statistics, we should have confirmation soon. The resonant cross-sections are five or more times larger at 13 TeV than at 8 TeV, so the LHC running expected this year and next will produce as much diboson data as all of Run 1, or more.
What if both lepton flavour violation and the VV and VH bumps were to be discovered in Run 2? Both would suggest new interactions at or above a few TeV. Surely they would have to be related, but how? New weak interactions could be flavour non-universal (but, then, not right–left symmetric). New strong interactions of Higgs compositeness could easily be flavour non-universal. The possibilities seem endless. So do the prospects for discovery. Stay tuned!
Developed in the late 1990s, the OPERA detector design was based on a hybrid technology, using both real-time detectors and nuclear emulsions. The construction of the detector at the Gran Sasso underground laboratory in Italy started in 2003 and was completed in 2007 – a giant detector of around 4000 tonnes and 2000 m³ volume, with nine million photographic films arranged in around 150,000 target units, the so-called bricks. The emulsion films in the bricks act as tracking devices with micrometric accuracy, and are interleaved with lead plates acting as neutrino targets. The longitudinal size of a brick is around 10 radiation lengths, allowing for the detection of electron showers and for momentum measurement through the detection of multiple Coulomb scattering. The experiment took data for five years, from June 2008 until December 2012, integrating 1.8 × 10²⁰ protons on target.
The aim of the experiment was the direct observation of the transition from muon to tau neutrinos in the neutrino beam from CERN. The distance from CERN to Gran Sasso and the SPS beam energy were well suited to tau-neutrino detection. In 1999, intense discussions took place between the CERN management and Council delegations about the opportunity of building the CERN Neutrinos to Gran Sasso (CNGS) beam facility and the way to fund it. The Italian National Institute for Nuclear Physics (INFN) was far-sighted in offering a sizable contribution. Many delegations supported the idea, and the CNGS beam was approved in December 1999. Commissioning took place in 2006, when OPERA (at that time not yet fully equipped) detected the first muon-neutrino interactions.
With the CNGS programme, CERN was joining the global experimental effort to observe and study neutrino oscillations. The first experimental hints of neutrino oscillations were gathered from solar neutrinos in the 1970s. According to theory, neutrino oscillations originate from the fact that mass and weak-interaction eigenstates do not coincide and that neutrino masses are non-degenerate. Neutrino mixing and oscillations were introduced by Pontecorvo and by the Sakata group, assuming the existence of two sorts (flavours) of neutrinos. Neutrino oscillations with three flavours including CP and CPT violation were discussed by Cabibbo and by Bilenky and Pontecorvo, after the discovery of the tau lepton in 1975. The mixing of the three flavours of neutrinos can be described by the 3 × 3 Pontecorvo–Maki–Nakagawa–Sakata matrix with three angles – that have since been measured – and a CP-violating phase, which remains unknown at present. Two additional parameters (mass-squared differences) are needed to describe the oscillation probabilities.
Several experiments on solar, atmospheric, reactor and accelerator neutrinos have contributed to the understanding of neutrino oscillations. In the atmospheric sector, the strong deficit of muon neutrinos reported by the Super-Kamiokande experiment in 1998 was the first compelling observation of neutrino oscillations. Given that the deficit of muon neutrinos was not accompanied by an increase of electron neutrinos, the result was interpreted in terms of νμ → ντ oscillations, although in 1998 the tau neutrino had not yet been observed. The first direct evidence for tau neutrinos was announced by Fermilab’s DONuT experiment in 2000, with four reported events. In 2008, the DONuT collaboration presented its final results, reporting nine observed events and an expected background of 1.5. The Super-Kamiokande result was later confirmed by the K2K and MINOS experiments with terrestrial beams. However, for an unambiguous confirmation of three-flavour neutrino oscillations, the appearance of tau neutrinos in νμ → ντ oscillations was required.
OPERA comes into play
OPERA reported the observation of the first tau-neutrino candidate in 2010. The tau neutrino was detected through the production and decay of a τ⁻ in one of the lead targets, in the channel τ⁻ → ρ⁻ντ. A second candidate, in the τ⁻ → π⁻π⁺π⁻ντ channel, was found in 2012, followed in 2013 by a candidate in the fully leptonic τ⁻ → μ⁻ν̄μντ decay. A fourth event was found in 2014 in the τ⁻ → h⁻ντ channel (where h⁻ is a pion or a kaon), and a fifth one was reported a few months ago in the same channel. Given the extremely low expected background of 0.25±0.05 events, the direct transition from muon to tau neutrinos has now been established with the 5σ statistical significance conventionally required to claim an observation, confirming the oscillation mechanism.
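To get a feel for why five candidates on such a small background are significant, here is a simple Poisson counting estimate. It is only a sketch: a naive single-bin p-value lands a little below 5σ, and OPERA's quoted significance comes from a more detailed channel-by-channel likelihood analysis:

```python
from scipy import stats

# Poisson counting: probability of observing 5 or more events when the
# expected background is 0.25, converted to a one-sided Gaussian significance.
n_obs, background = 5, 0.25
p_value = stats.poisson.sf(n_obs - 1, background)   # P(N >= 5 | b = 0.25)
z = stats.norm.isf(p_value)                         # one-sided significance
print(f"p-value = {p_value:.1e}, significance ≈ {z:.1f} sigma")
```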
The extremely accurate detection technique provided by OPERA relies on the micrometric resolution of its nuclear emulsions, which are capable of resolving the neutrino-interaction point and the decay vertex of the tau lepton, a few hundred micrometres away. The tau-neutrino identification is first topological; kinematical cuts are then applied to suppress the residual background, giving a signal-to-noise ratio larger than 10. In general, the detection of tau neutrinos is extremely difficult, because it combines two conflicting requirements: a huge, massive detector and micrometric accuracy. The concept of the OPERA detector was developed in the late 1990s with important contributions from Nagoya – the emulsion group led by Kimio Niwa – and from Naples, under the leadership of Paolo Strolin, who led the initial phase of the project.
The future of nuclear emulsions
Three years after the end of the CNGS programme, the OPERA collaboration – about 150 physicists from 26 research institutions in 11 countries – is finalising the analysis of the collected data. After the discovery of the appearance of tau neutrinos from the oscillation of muon neutrinos, the collaboration now plans to further exploit the capability of the emulsion detector to observe all three neutrino flavours at once. This unique feature will allow OPERA to constrain the oscillation matrix by measuring tau and electron appearance together with muon-neutrino disappearance.
An extensive development of fully automated optical microscopes for the scanning of nuclear-emulsion films was carried out along with the preparation and running of the OPERA experiment. These achievements pave the way for using the emulsion technologies in forthcoming experiments, including SHiP (Search for Hidden Particles), a new facility that was recently proposed to CERN. If approved, SHiP will not only search for hidden particles in the GeV mass range, but also study tau-neutrino physics and perform the first direct observation of tau antineutrinos. The tau-neutrino detector of the SHiP apparatus is designed to use nuclear emulsions similar to those used by OPERA. The detector will be able to identify all three neutrino flavours, while the study of muon-neutrino scattering with large statistics is expected to provide additional insights into the strange-quark content of the proton, through the measurement of neutrino-induced charmed hadron production.
Currently, R&D work on emulsions continues mainly in Italy and Japan. Teams at Nagoya University have successfully produced emulsions with AgBr crystals of about 40 nm diameter – one order of magnitude smaller than those used in OPERA. In parallel, significant developments of fully automated optical-scanning systems, carried out in Italy and Japan with innovative analysis techniques, have overcome the intrinsic optical limit and achieved an unprecedented position resolution of 10 nm. Together, these achievements make it possible to use emulsions for the detection of sub-micrometric tracks, such as those left by nuclear recoils induced by dark-matter particles (weakly interacting massive particles, WIMPs). This paves the way for the first large-scale dark-matter experiment with directional information: the NEWS experiment (Nuclear Emulsions for WIMP Search) plans to carry out such a search at the Gran Sasso underground laboratory.
Thanks to their extreme accuracy and capability of identifying particles, nuclear emulsions are also successfully employed in fields beyond particle physics. Exploiting the cosmic-ray muon radiography technique, sandwiches of OPERA-like emulsion films and passive materials were used to image the shallow-density structure beneath the Asama Volcano in Japan and, more recently, to image the crater structure of the Stromboli volcano in Italy. Detectors based on nuclear emulsions are also used in hadron therapy to characterize the carbon-ion beams and their secondary interactions in human tissues. The high detection accuracy provided by emulsions allows experts to better understand the secondary effects of radiation, and to monitor the released dose with the aim of optimizing the planning of medical treatments.
The Canfranc Underground Laboratory (LSC) in Spain is one of four European deep-underground laboratories, together with Gran Sasso (Italy), Modane (France) and Boulby (UK). The laboratory is located at Canfranc Estación, a small town in the Spanish Pyrenees situated about 1100 m above sea level. Canfranc is known for the railway tunnel that was inaugurated in 1928 to connect Spain and France. The huge station – 240 m long – was built on the Spanish side, and still stands as a reminder of the history of the place, although railway operation stopped in 1970.
In 1985, Angel Morales and his collaborators from the University of Zaragoza started to use the abandoned underground space to carry out astroparticle-physics experiments. In the beginning, the group used two service cavities, currently called LAB780. In 1994, during the excavation of the 8 km-long road tunnel (the Somport tunnel), an experimental hall of 118 m² was built 2520 m from the Spanish entrance. This hall, called LAB2500, was used to install a number of experiments carried out by several international collaborations. In 2006, two additional, larger halls – hall A and hall B, collectively called LAB2400 – were completed and ready for use. The LSC was born.
Today, some 8400 m³ are available to experimental installations at Canfranc in the main underground site (LAB2400), and a total volume of about 10,000 m³ on a surface area of 1600 m² is available among the different underground structures. LAB2400 has about 850 m of rock overburden, with a residual cosmic-muon flux of about 4 × 10⁻³ m⁻² s⁻¹. The radiogenic neutron background (< 10 MeV) and the gamma-ray flux from natural radioactivity in the rock environment at the LSC are determined to be of the order of 3.5 × 10⁻⁶ n/(cm² s) and 2 γ/(cm² s), respectively. The neutron flux is about 30 times less intense than at the surface. The radon level underground is kept at the level of 50–80 Bq/m³ by a ventilation system with a fresh-air input of about 19,600 m³/h and 6300 m³/h for halls A and B, respectively. To reduce the natural levels of radioactivity further, a new radon-filtering system and a radon detector with a sensitivity of mBq/m³ will be installed in hall A in 2016, to be used by the experiments.
The underground infrastructure also includes a clean room to support detector assembly and to maintain the high level of cleanliness required for the most important components. A low-background screening facility, equipped with seven high-purity germanium γ-spectrometers, is available to experiments that need to select components with low radioactivity for their detectors. The screening facility has recently been used by the SuperKGd collaboration to measure the radiopurity of gadolinium salts for the Super-Kamiokande gadolinium project.
A network of 18 optical fibres, each equipped with humidity and temperature sensors, is installed in the main halls to monitor the rock stability. The sensitivity of the measurement is at the micrometre level; so far, over a timescale of four years, changes of 0.02% have been measured over 10 m scale lengths.
The underground infrastructure is complemented by a modern 1800 m² building on the surface, which houses offices, a chemistry and an electronics laboratory, a workshop and a warehouse. Currently, some 280 scientists from around the world use the laboratory’s facilities to carry out their research.
The scientific programme at the LSC focuses on searches for dark matter and neutrinoless double beta decay, but it also includes experiments on geodynamics and on life in extreme environments.
Neutrinoless double beta decay
Unlike the two-neutrino mode observed in a number of nuclear decays (ββ2ν, e.g. ¹³⁶Xe → ¹³⁶Ba + 2e⁻ + 2ν̄e), the neutrinoless mode of double beta decay (ββ0ν, e.g. ¹³⁶Xe → ¹³⁶Ba + 2e⁻) is as yet unobserved. The experimental signature of neutrinoless double beta decay would be two electrons with total energy equal to the energy released in the nuclear transition. Observing this phenomenon would demonstrate that the neutrino is its own antiparticle, and is one of the main challenges of the physics research carried out in underground laboratories. The NEXT experiment at the LSC aims to search for this signature in a high-pressure time projection chamber (TPC) using xenon enriched in ¹³⁶Xe. The NEXT TPC is designed with a plane of photomultipliers at the cathode and a plane of silicon photomultipliers behind the anode, allowing the collaboration to determine the energy and the topology of the event, respectively. In this way, background from natural radioactivity and from the environment can be accurately rejected. In its final configuration, NEXT will use 100 kg of ¹³⁶Xe at 15 bar pressure. A demonstrator of the TPC with 10 kg of Xe, named NEW, is currently being commissioned at the LSC.
The Canfranc Laboratory also hosts R&D programmes in support of projects that will be carried out in other laboratories. An example is BiPo, a high-sensitivity facility that measures the radioactivity of thin foils for planar detectors. Currently, BiPo is performing measurements for the SuperNEMO project proposed at the Modane laboratory. SuperNEMO aims to use 100 kg of ⁸²Se in thin foils to search for ββ0ν signatures. These foils must have very low contamination from other radioactive elements: in particular, less than 10 μBq/kg of ²¹⁴Bi from the ²³⁸U decay chain, and less than 2 μBq/kg of ²⁰⁸Tl from the ²³²Th decay chain. These levels of radioactivity are too small to be measured with standard instruments. The BiPo facility provides a technical solution for this very accurate measurement: a thin ⁸²Se foil (40 mg/cm²) is inserted between two detection modules equipped with scintillators and photomultipliers to tag ²¹⁴Bi and ²⁰⁸Tl.
Dark matter
The direct detection of dark matter is another research activity typical of underground laboratories. At the LSC, two projects are in operation for this purpose: ANAIS and ArDM. In its final configuration, ANAIS will be an array of 20 ultrapure NaI(Tl) crystals that aims to investigate the annual-modulation signature of dark-matter particles from the galactic halo. Each 12.5 kg crystal is placed inside a high-purity electroformed-copper shielding made at the LSC chemistry laboratory. Roman lead of 10 cm thickness, plus other lead structures totalling 20 cm in thickness, is installed around the crystals, together with an active muon veto and passive neutron shielding. In 2016, the ANAIS detector will be in operation with a total of 112 kg of high-purity NaI(Tl) crystals.
A different experimental approach is adopted by the ArDM detector, which uses two tonnes of liquid argon to search for WIMP interactions in a two-phase TPC. The TPC is viewed by two arrays of 12 PMTs and can operate in single phase (liquid only) or double phase (liquid and gas). The single-phase operation mode was successfully tested up to summer 2015, and the collaboration will start two-phase operation by the end of 2015.
In recent decades, the scientific community has shown growing interest in measuring the cross-sections of nuclear interactions taking place in stars. At the energies of interest (that is, the average energies of particles at the centres of stars), the expected interaction rates are so small that the measurements can only be performed in underground laboratories, where background levels are strongly reduced. For this reason, a project has been proposed at the LSC: the Canfranc Underground Nuclear Astrophysics (CUNA) facility. CUNA would require a new experimental hall to host a linear accelerator and the detectors. A feasibility study has been carried out, and further developments are expected in the coming years.
Geodynamics
The geodynamic facility at the LSC aims to study local and global geodynamic events. The installation consists of a broadband seismometer, an accelerometer and two laser strainmeters underground, plus two GPS stations on the surface in the surroundings of the underground laboratory. This facility allows seismic events to be studied over a wide spectrum, from seismic waves to tectonic deformations. The laser interferometer consists of two orthogonal 70 m-long strainmeters. Non-linear shallow-water tides have been observed with this set-up and compared with predictions – something made possible by the excellent signal-to-noise ratio for strain data at the LSC.
Life in extreme environments
In the 1990s, it became evident that life on Earth extends into the deep subsurface and extreme environments. Underground facilities can be an ideal laboratory for scientists specialising in astrobiology, environmental microbiology or other similar disciplines. The GOLLUM project proposed at the LSC aims to study micro-organisms inhabiting rocks underground. The project plans to sample the rock throughout the length of the railway tunnel and characterize microbial communities living at different depths (metagenomics) by DNA extraction.
Currently operating mainly in the fields of dark matter and the search for rare decays, the LSC has the potential to grow into a multidisciplinary underground research infrastructure. Its large infrastructure, equipped with specialised facilities, allows the laboratory to host a variety of experimental projects. For example, the space previously used by the ROSEBUD experiment is now available to collaborations active in direct dark-matter searches or in studies of exotic phenomena using scintillating bolometers or low-temperature detectors. A hut with exceptionally low acoustic and vibrational background, equipped with a 3 × 3 × 4.8 m³ Faraday cage, is available in hall B – a unique piece of equipment in an underground facility that, among other things, could be used to characterize new detectors for low-mass dark-matter particles. Moreover, some 100 m² are currently unused in hall A. New ideas and proposals are welcome, and will be evaluated by the LSC International Scientific Committee.
August 1959 saw the first issue of CERN Courier – “the long-expected internal bulletin” and the idea of Cornelis Bakker, who was then CERN’s Director-General. The goals stated on the first page included the aim to “maintain the ideal of European co-operation and the team spirit which are essential to the achievement of our final aim: scientific research on an international scale” (CERN Courier July/August 2009 p30).
From that very first issue, the Courier contained news about other labs – “Other people’s atoms” – and the cover soon dropped the tag line “Published monthly for CERN staff members” as outside interest grew rapidly. Following a readership survey that showed a thirst for “more news from other laboratories”, the magazine’s 10th anniversary year saw the introduction of the laboratory correspondents – a concept that was formalised further in 1975, after a meeting on “Perspectives in High Energy Physics” in New Orleans, attended by lab directors and senior scientists from Europe, Japan, the US and the USSR.
One topic at the meeting concerned international communication in high-energy physics, and here CERN proposed that the Courier could do more, with the help of more active participation from the other labs plus local distribution in several countries. The issue for January 1976 saw the subtitle “Journal of High-Energy Physics” discreetly positioned inside the front cover above the list of distribution centres and lab correspondents. Five years later, an editorial advisory panel was named for the first time, and the subtitle extended to “International Journal of the High-Energy Physics Community”.
Changing times
That was 35 years ago, and since then CERN Courier has developed through mainly incremental changes to its content. Book reviews, opinion pieces (“Viewpoint”), “Astrowatch”, “Sciencewatch” and an archive page have become regular items, and feature articles, in particular, are signed by their authors. The “look and feel” of the magazine has also changed, from predominantly black and white to full colour since IOP Publishing took charge of production. But the basic aim has remained the same: the Courier has continued to serve an international high-energy-physics readership, with the help of enthusiastic support from the worldwide community.
Over the same period of time, high-energy physics has seen many remarkable developments. The discoveries of the gluon at DESY, of the W and Z bosons at CERN, and of the top quark at Fermilab provided essential pieces of the Standard Model, with the new boson observed at the LHC in 2012 revealing the final keystone associated with the Brout–Englert–Higgs mechanism for giving mass to elementary particles. Meanwhile, the centre-of-gravity of the field has moved slowly but surely from the US to Europe and CERN, with the LHC currently exploring and extending the high-energy frontier.
In addition, the way that scientists communicate has changed dramatically, largely as a result of the internet, the World Wide Web instigated at CERN by Tim Berners-Lee, and arXiv – the electronic preprint repository created by Paul Ginsparg, which became accessible through the web in 1993. Of course, this has been only part of a communication revolution in which information – and, indeed, misinformation – is today transmitted almost immediately, in formats varying from official press releases to informal blogs and tweets.
A new world
These developments have also transformed the way that new results are communicated. Even results in a journal with strict embargoes, such as Nature, are flashed around the world the instant the embargo lifts, quickly propagating through science news channels and social media. Against this background, news in CERN Courier – and, as is increasingly the case, results presented at conferences – can be “old hat”. So where does that leave this magazine?
When I started as editor in 2003, I had a dream to be able to say “you read it first in CERN Courier” – an idea that was really already dead. Today, a more realistic goal would be to say “for the story behind the headlines, read CERN Courier”. ArXiv and open-access publishing make preprints and papers readily accessible to anyone who savours the details of a specific piece of research; nevertheless, there will always be others who would like a simpler but authoritative summary.
In Physics in the 20th Century, CERN’s former Director-General Victor Weisskopf wrote “…it is beneficial to the scientist to attempt seriously to explain scientific work to a layman or even to a scientist in another field. Usually, if one can not explain one’s work to an outsider, one has not really understood it.” This is, in my opinion, just as true for specialities within a field such as high-energy physics, so it seems to me that CERN Courier should long continue, and so “maintain the ideal of European co-operation and…achievement of our final aim: scientific research on an international scale”.
By Yorikiyo Nagashima, Wiley
Hardback: £105 €131.30
E-book: £94.99 €118.80
Also available at the CERN bookshop
This comprehensive presentation of modern particle physics provides a store of background knowledge on the big open questions that go beyond the Standard Model, concerning, for example, the existence of the Higgs boson or the nature of dark matter and dark energy. For each topic, the author introduces the key ideas and derives the basic formulas needed to understand the phenomenological outcomes. The experimental techniques used for detection are also explained. Finally, the most recent data and future prospects are reviewed. The book can be used to provide a quick look at specialized topics, both by high-energy and theoretical physicists and by astronomers and graduate students.
By Ashok Das and Susumu Okubo, World Scientific
Hardback: £63
E-book: £24
Ashok Das and Susumu Okubo, colleagues at the University of Rochester, are theoretical high-energy particle physicists from different generations. Okubo’s name is probably best known from the mass formula for mesons and baryons that he and Murray Gell-Mann derived independently through the application of the SU(3) Lie group in the quark model, while Das works on questions related to symmetry. Their book is intended for graduate students of theoretical physics (with a background in quantum mechanics) as well as for researchers interested in applications of Lie-group theory and Lie algebras in physics. The emphasis is on the inter-relations of the representation theories of Lie groups and the corresponding Lie algebras.
By Belal E Baaquie et al. (eds), World Scientific
Hardback: £57
Paperback: £29
As the title of this collection of essays on the work of Kenneth Wilson (1936–2013) indicates, his impact on physics was enormous, transforming both high-energy and condensed-matter physics. He also foresaw much of the modern impact of computers and networking, and I can feel that influence even as I type this review.
This is a long book, comprising 385 pages with 21 essays by many of today’s most influential physicists. It should be made clear that while it includes plenty of biographical material, this is, for the most part, a combination of personal reminiscences and highly technical articles. A non-physicist, or even a physicist without a fairly deep understanding of modern quantum field theory, would probably find much of it almost completely impenetrable, with equations and figures that are really only accessible to the cognoscenti.
That said, a reading of selected parts sheds interesting light on a variety of complex topics in ways that are perhaps not so easily found in modern textbooks. I would not hesitate to suggest such a strategy to a philosopher or historian of science, or an undergraduate or graduate student in physics. The chapters are all well written, and whatever fraction is understood will prove valuable.
Some of the most interesting parts are quotations from Wilson himself. A particularly striking example is from Paul Ginsparg’s essay: “I go to graduate school in physics, and I take the first course in quantum field theory, and I’m totally disgusted with the way it’s related. They’re discussing something called renormalization group, and it’s a set of recipes, and I’m supposed to accept that these recipes work – no way. I made a resolution, I would learn to do the problems that they assigned, I would learn how to turn in answers that they would expect, holding my nose all the time, and some day I was going to understand what was really going on.”
He did, and now thanks to him, we do too. This represents just a fraction of the impact that Wilson has had on our field. The book is long, and not an easy read, but well worth the effort and I highly recommend it.
By N N Bogolubov, Jr (ed.), World Scientific
Hardback: £57
E-book: £43
Nicolai Bogolubov (1909–1992) was well known in the world of high-energy physics as one of the founders of JINR, Dubna, and the first director of its Laboratory of Theoretical Physics, now named after him. He was also well known in the wider community for his many contributions to quantum field theory and to statistical mechanics. Part I of this book, which is edited by his son, contains some of the elder Bogolubov’s papers on quantum statistical mechanics, a field in which he obtained a number of fundamental results, in particular in relation to superfluidity and superconductivity. Superfluidity was discovered in Russia in 1938 by Kapitza, and in 1947 Bogolubov published his theory of the phenomenon, based on the correlated interaction of pairs of particles. This later led him to a microscopic theory of superconductivity, which helped to set the Bardeen–Cooper–Schrieffer theory on firm ground. Part II is devoted to methods for studying model Hamiltonians for problems in quantum statistical mechanics, and is based on seminars and lectures that Bogolubov gave at Moscow State University.