New results cast light on semileptonic Bs asymmetry

The LHCb experiment has made the most precise measurement to date of the asymmetry a_sl^s, which is a measure of a flavour-specific matter–antimatter asymmetry in B mesons and a test for physics beyond the Standard Model.

In 2010, and with an update in 2011, the Fermilab DØ collaboration reported an asymmetry in the semileptonic decays of B mesons into muons, observed in the number of events containing same-sign dimuons. The most recent result, using almost the full DØ data sample of 9 fb–1, gives an asymmetry of about –1%, which differs by 3.9 σ from the tiny value predicted within the framework of the Standard Model (Abazov et al. 2011). If confirmed, it would indicate the presence of new physics.

Same-sign dimuons can be produced from the decay of pairs of neutral B mesons, which can mix between their particle and antiparticle states. Owing to the inclusive nature of the DØ measurement, the asymmetry, denoted A_sl^b, is a sum of contributions from the individual asymmetries in the B_d and B_s meson systems, a_sl^d and a_sl^s respectively. It is shown as the diagonal band in the plane of those asymmetries in the figure. The individual asymmetries characterize CP violation in B-meson mixing, similar to the parameter ε_K in the neutral-kaon system.
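In terms of the individual asymmetries, the inclusive DØ observable is a linear combination of the B_d and B_s contributions, weighted by production fractions and mixing probabilities. Schematically (the coefficients quoted here are indicative, close to but not exactly the collaboration's values):

A_sl^b ≈ C_d a_sl^d + C_s a_sl^s, with C_d ≈ 0.6 and C_s ≈ 0.4.

This linear relation is why the DØ measurement appears as a diagonal band in the (a_sl^d, a_sl^s) plane.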

One of the highest priorities in flavour physics has been to measure a_sl^d and a_sl^s separately to establish if there is a disagreement with the Standard Model – and, if so, whether it occurs in the B_d or B_s system. Previous measurements of a_sl^d by the BaBar and Belle collaborations working at the ϒ(4S) resonance, and of a_sl^s in an independent analysis by DØ, have not been sufficiently precise to answer this question.

The new result from LHCb, based on the full 2011 data sample of 1.0 fb–1, and first presented at ICHEP2012 (p53), provides the most precise measurement to date of a_sl^s. The analysis uses B_s^0→D_s^–μ^+X (and charge-conjugate) decays, with D_s^–→φπ^–, and relies on excellent control of asymmetries in the μ± trigger and reconstruction. The result, a_sl^s = (–0.24 ± 0.54 ± 0.33)%, which is shown as the horizontal blue band in the figure, is consistent with the Standard Model prediction (LHCb collaboration 2012). Updated results from DØ on both a_sl^d and a_sl^s, which were also presented at ICHEP2012, continue to leave the situation unclear; more precise measurements are needed (Stone 2012). With the recently announced extension of proton running at the LHC for 2012, the LHCb collaboration expects to more than triple its data sample, so updates on this topic will be most exciting.

A surprising asymmetry and more excited states

The flavour-changing neutral-current decays B → K(*)μ+μ– provide important channels in the search for new physics, as they are highly suppressed in the Standard Model. The predictions in these channels suffer from relatively large theoretical uncertainties, but these can be overcome by measuring asymmetries in which the uncertainties cancel. One example is the isospin asymmetry A_I, which compares the decays B0 → K(*)0μ+μ– and B+ → K(*)+μ+μ–. In the Standard Model, A_I is predicted to be small, around –1%, for the decays to the excited K*, and while there is no precise prediction for the decays to the K, a similar value is expected.
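For reference, the isospin asymmetry is conventionally built from the branching fractions of the neutral and charged modes, corrected by the ratio of the B0 and B+ lifetimes (a standard definition, given here for orientation rather than quoted from the LHCb paper):

A_I = [B(B0 → K(*)0μ+μ–) – (τ0/τ+) B(B+ → K(*)+μ+μ–)] / [B(B0 → K(*)0μ+μ–) + (τ0/τ+) B(B+ → K(*)+μ+μ–)],

so that equal underlying decay rates for the two isospin partners correspond to A_I = 0.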

LHCb has measured A_I for these decays as a function of the dimuon mass squared (q^2), using data corresponding to an integrated luminosity of 1.0 fb–1, with a surprising result. While the measurements for B → K*μ+μ– are consistent with the prediction of negligible isospin asymmetry, the value for B → Kμ+μ– is non-zero. In particular, in the two q^2 bins below 4.3 GeV^2/c^4 and in the highest bin, above 16 GeV^2/c^4, the isospin asymmetry is negative in the B → Kμ+μ– channel. These q^2 regions are furthest from the charmonium resonances and are where the theoretical predictions are cleanest. The measured asymmetry is dominated by the deficit observed in B0 → K0μ+μ–. Integrated over the dimuon mass range, the result for A_I deviates from zero by more than 4σ.

These results were obtained with the full data sample for 2011, which should more than double by the end of 2012. In the meantime, theorists will analyse this puzzling result to establish whether this effect can be accommodated in the framework of the Standard Model – or whether its explanation requires new physics.

In a different study, LHCb observed two excited Λb states for the first time, as predicted within the context of the quark model. The excited states (see figure) were reconstructed in three steps. First, Λc+ particles were reconstructed through their decay Λc+ → pK–π+; then the Λc+ particles were combined with π– to look for Λb0 candidates; finally, the Λb0 candidates were combined with π+π– pairs. In this way the team found about 16 Λb(5912)0→Λb0π+π– decays (4.9σ significance) and about 50 Λb(5920)0→Λb0π+π– decays (10.1σ) among some 6 × 10^13 proton–proton collisions detected during 2011.

Seeing bosons in heavy-ion collisions

Studies of heavy-ion collisions at the LHC are challenging and refining ideas on how to probe QCD – the theory of the strong interaction – at high temperature and density. From precision analyses of particle “flow” that clearly distinguish pre-collision effects from post-collision effects, to the observation of jet quenching, the ATLAS collaboration is releasing many new results. Several of these observations are surprising and unexpected, such as the occurrence of strong jet quenching with almost no jet broadening; and complete explanations are currently lacking. One new set of results, however, spectacularly confirms expectations: photons and the heavy W and Z bosons are unaffected by the hot dense QCD medium.

Direct measurements of energetic photon production, released recently by the collaboration, show that the number of photons produced is just as would be expected from ordinary proton–proton collisions when extrapolated to the multiple nucleon–nucleon collisions within the heavy-ion interactions. This agreement is independent of the “centrality” of the collision, the parameter that distinguishes head-on (central) collisions from grazing collisions. Similar observations have been made at much lower energies. However, by taking advantage not only of the LHC beam energy but also of the capacity of the ATLAS calorimeters to make precision measurements and reject background events, this new study extends the results to energies 10 times higher for central collisions.

ATLAS has also released new measurements of Z-boson production, which show that, like photons, Zs are unaffected by the heavy-ion environment; the number produced is exactly what would be expected from “binary scaling”, i.e. scaling up to the number of nucleon collisions. The Z bosons were measured through their decays both to two muons, using the ATLAS muon spectrometer, and to electron–positron pairs, with the ATLAS calorimeters. The observation of binary scaling not only shows that the Zs are unaffected by the medium, but it reveals that the electrons, positrons and muons produced are also unaffected, as expected.

These results open up a long-dreamt-of possibility in this field: the study of jet–boson correlations. Because the bosons are unaffected by the hot dense medium, they can be used as a “control” to study precisely the suppression of jets. ATLAS is already making prototype measurements of this kind and high precision should be attainable in future LHC runs.

• For more information, see https://twiki.cern.ch/twiki/bin/view/AtlasPublic.

EXO, MINOS and OPERA reveal new results

The first results from the Enriched Xenon Observatory 200 (EXO-200) on the search for neutrinoless double beta decay show no evidence for this hypothesised process, which would shed new light on the nature of the neutrino. Located in the US Department of Energy’s Waste Isolation Pilot Plant in New Mexico, EXO-200 is a large double-beta-decay detector. In 2011 it was the first to measure two-neutrino double beta decay in 136Xe; now it has set a lower limit on the half-life for neutrinoless double beta decay of the same isotope.

Double beta decay, first observed in 1986, occurs when a nucleus is energetically unable to decay via single beta decay, but can instead lose energy through the conversion of two neutrons to protons, with the emission of two electrons and two antineutrinos. The related process without the emission of antineutrinos is theoretically possible but only if the neutrino is a “Majorana” particle, i.e. it is its own antiparticle.
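In 136Xe the two processes are, schematically (the daughter nucleus is the barium isotope whose tagging is mentioned below):

2νββ: 136Xe → 136Ba + 2e– + 2ν̄e
0νββ: 136Xe → 136Ba + 2e–

The neutrinoless mode violates lepton-number conservation and is possible only if the neutrino is a Majorana particle.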

EXO-200 uses 200 kg of 136Xe to search for double beta decay. Xenon can be easily purified and reused, and it can be enriched in the 136Xe isotope using Russian centrifuges, which makes processing large quantities feasible. It also has a decay energy – Q-value – of 2.48 MeV, high enough to be above many of the uranium emission lines. Using 136Xe as a scintillator gives excellent energy resolution through the collection both of ionization electrons and of scintillation light. Finally, using xenon allows for complete background elimination through tagging of the daughter barium ion. This tagging, combined with the detector’s location more than 650 m underground and the use of materials selected and screened for radiopurity, ensures that other traces of radioactivity and cosmic radiation are eliminated or kept to a minimum. The latest results reflect this low background activity and high sensitivity – as only one event was recorded in the region where neutrinoless double beta decay was expected.

In the latest result, no signal for neutrinoless double beta decay was observed for an exposure of 32.5 kg·yr, with a background of about 1.5 × 10^–3 kg^–1 y^–1 keV^–1. This sets a lower limit on the half-life of neutrinoless double beta decay in 136Xe of greater than 1.6 × 10^25 y, corresponding to effective Majorana masses of less than 140–380 meV, depending on details of the calculation (Auger et al. 2012).
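A half-life limit translates into a limit on the effective Majorana neutrino mass through the standard relation (written schematically; G^0ν is a phase-space factor and M^0ν the nuclear matrix element, whose calculated values are what spread the quoted 140–380 meV range):

(T_1/2^0ν)^–1 = G^0ν |M^0ν|^2 ⟨m_ββ⟩^2 / m_e^2.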

The EXO collaboration announced the results at Neutrino 2012, the 25th International Conference on Neutrino Physics and Astrophysics, held in Kyoto, on 3–9 June. This dedicated conference for the neutrino community provided the occasion for many neutrino experiments to publicize their latest results. In the case of the MINOS collaboration, these included the final results from the first phase of the experiment, which studies oscillations between neutrino types.

In 2010 the MINOS collaboration caused a stir when it announced the observation of a surprising difference between neutrinos and antineutrinos. Measurements of a key parameter used in the study of oscillations – Δm^2, the difference in the squares of the masses of two oscillating types – gave different values for neutrinos and antineutrinos. In 2011, additional statistics brought the values closer together and, with twice as much antineutrino data collected since then, the gap has now closed. From a total exposure of 2.95 × 10^20 protons on target, a value of Δm^2 = 2.62+0.31–0.28(stat.)±0.09(syst.) × 10^–3 eV^2 was found for muon antineutrinos, and the antineutrino “atmospheric” mixing angle was constrained to sin^2 2θ greater than 0.75 at 90% confidence level (Adamson et al. 2012). These values are in agreement with those measured for muon neutrinos.
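For context, in the two-flavour approximation used in such fits the muon-(anti)neutrino survival probability over a baseline L is given by the standard expression (with Δm^2 in eV^2, L in km and E in GeV):

P(ν̄μ → ν̄μ) = 1 – sin^2 2θ · sin^2(1.27 Δm^2 L / E),

so the MINOS far detector, 735 km from the neutrino source at Fermilab, is most sensitive to oscillations at neutrino energies of a few GeV.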

Since its debut in 2006, the OPERA experiment in the Gran Sasso National Laboratory has been searching for neutrino oscillations in which muon-neutrinos transform into τ-neutrinos as they travel the 730 km of rock between CERN, where they originate, and the laboratory in Italy. At the conference, the OPERA collaboration announced the observation of their second τ-neutrino, after the first observation two years ago. This new event is an important step towards the accomplishment of the final goal of the experiment.

Results on the time of flight of neutrinos from CERN to the Gran Sasso were also presented by CERN’s director for research and scientific computing, Sergio Bertolucci, on behalf of four experiments. All four – Borexino, ICARUS, LVD and OPERA – measure a neutrino time of flight that is consistent with the speed of light. The indications are that a measurement by OPERA announced last September can be attributed to a faulty element of the experiment’s fibre-optic timing system.

Elements 114 and 116 receive official names

IUPAC has officially approved the names “flerovium” (Fl) for the element with atomic number 114 and “livermorium” (Lv), for the one with atomic number 116. The names were proposed by the collaboration from the Joint Institute for Nuclear Research (JINR), Dubna, and the Lawrence Livermore National Laboratory in California, led by JINR’s Yuri Oganessian. Scientists from the two laboratories share the priority for the discovery of these new elements at the facilities in Dubna.

The name flerovium is in honour of the Flerov Laboratory of Nuclear Reactions, where these superheavy elements were synthesized. Georgy Flerov (1913–1990) was a pioneer in heavy-ion physics and in 1957 founded the JINR Laboratory of Nuclear Reactions, which has borne his name since 1991. Flerov is also known for fundamental work that led to the discovery of new phenomena in the properties and interactions of atomic nuclei.

The name livermorium honours the Lawrence Livermore National Laboratory. A group of researchers from Livermore took part in the work carried out in Dubna on the synthesis of superheavy elements, including element 116. Over the years, researchers at the laboratory have been involved in many areas of nuclear science and investigation of chemical properties of the heaviest elements.

The discoverers of flerovium and livermorium have submitted their claims for the discovery of further heavy elements, with atomic numbers 113, 115, 117 and 118 to the Joint Working Party of independent experts drawn from the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics.

Lead collisions in the LHC top the bill in Cagliari

Hard Probes 2012 – the 5th International Conference on Hard and Electromagnetic Probes of Nuclear Collisions – took place in Cagliari from 27 May to 1 June. As the most important topical meeting devoted to the study of hard processes in ultra-relativistic heavy-ion collisions, it was the first at which the LHC collaborations presented results based on lead–lead data. The main focus was undoubtedly on the wealth of new high-quality results from ALICE, ATLAS and CMS, complemented with significant contributions from the PHENIX and STAR experiments at the Relativistic Heavy Ion Collider (RHIC) in Brookhaven.

Quoting from the inspired opening talk given by Berndt Mueller of Duke University, the hard-probes “manifesto” can be summarized as follows: hard probes are essential to resolve and study a medium of deconfined quarks and gluons at short spatial scales, and they have to be developed into as precise a tool as possible. This is accomplished by studying the production and the propagation in the deconfined medium of heavy quarks, particles with high transverse momentum (pT), jets and quarkonia.

Jet quenching can be addressed by studying the suppression of leading hadrons in nuclear collisions with respect to the proton–proton case. The ALICE and CMS collaborations reported results on the production of open charm and beauty, and results were also presented from the STAR experiment. An important aspect of parton energy loss in the medium is its mass dependence: the energy loss is expected to be strongest for light quarks and gluons and smaller for heavy quarks. The LHC data shown at the conference are suggestive of such a hierarchy, although more statistics are still needed to reach a firm conclusion.

In addition, the high-precision LHC data on light charged hadrons are significantly expanding the kinematic reach. This is fundamental to discriminating among theoretical models, which have been tuned at the lower energy of RHIC.

At the LHC, full reconstruction of high-energy jets has become possible for the first time, allowing ATLAS and CMS to present high-statistics results on jet–jet correlations. The emerging picture is consistent with one in which partons lose a large fraction of their energy while traversing the hot QCD medium – before fragmenting essentially in vacuum. First results on γ-jet correlations were also presented by the CMS and PHENIX collaborations; these allow the tagging of quark jets and give a better estimation of the initial parton energy. During the conference, an intense debate developed on how to exploit fully the information provided by full jet reconstruction.

Quarkonium suppression was another of the striking observables for which results from the LHC had been eagerly awaited. CMS presented the first exciting precision results on the suppression of the ϒ states. These reveal a clear indication of a much larger suppression for the more weakly bound ϒ(2S) and ϒ(3S) states with respect to the strongly bound ϒ(1S), in accordance with predictions based on colour screening. The ALICE collaboration presented new data on the rapidity and pT dependence of J/ψ suppression. The results show that, despite the higher initial temperatures reached at the LHC, the size of the suppression remains significantly smaller than at RHIC. This is an intriguing hint that a regeneration mechanism from the large number of charm quarks present in the deconfined medium may take place at LHC energies.

Part of the conference was devoted to the study of initial-state phenomena. In particular, at high energy peculiar features related to the saturation of the gluon phase-space should emerge, leading to a state called the “colour glass condensate”. A discussion took place on how the existence of this state could be proved or disproved at the LHC. The study of initial-state phenomena also came under debate because of its importance in disentangling the effects of cold nuclear matter from genuine final-state effects in hot matter.

With the advent of high-precision data, theory is being increasingly challenged, since the understanding of the bulk properties of the medium produced in heavy-ion collisions is rapidly advancing. As several speakers discussed, significant advances are being made both in the understanding of the parton energy-loss mechanism and in that of quarkonium production, for which a quantitative picture is emerging.

Still, as CERN’s Jürgen Schukraft pointed out in his summary talk, there is a need for measurements of even higher precision, as well as a wish list for new measurements: for example, in the heavy-flavour sector, lowering the pT reach to measure the total charm cross-section; and reconstructing charmed and beauty baryons to gain further insight into thermalization of the medium.

On a shorter time scale, the next crucial step is the measurement of effects in cold nuclear matter, which will be possible in the forthcoming proton–nucleus run at the LHC. Based on the experience from past lower-energy measurements, new surprises might be just around the corner.

The conference was preceded by introductory student lectures covering aspects of quarkonia production and jet quenching. About 40 students were supported by the organization, thanks to generous contributions by several international laboratories (CERN, EMMI, INFN) and, in particular, by the University of Cagliari and by the government of Sardinia. The conference was broadcast to a wider audience worldwide as a webcast.

• For more information, see the conference website www.ca.infn.it/hp12/.

Domenico Pacini and the origin of cosmic rays

In 1785 Charles-Augustin de Coulomb presented three reports on electricity and magnetism to France’s Royal Academy of Sciences. In the third of these he described his experiments showing that isolated electrified bodies can spontaneously discharge and that this phenomenon was not a result of defective insulation. After dedicated studies by Michael Faraday around 1835, William Crookes observed in 1879 that the speed of discharge decreased when the pressure was reduced: the ionization of air was thus the direct cause. But what was ionizing air? Trying to answer this question paved the way in the early 20th century towards a revolutionary scientific discovery – that of cosmic rays.

Spontaneous radioactivity had been discovered at the end of the 19th century and researchers observed that a charged electroscope promptly discharges in the presence of radioactive material. The discharge rate of an electroscope could then be used to gauge the level of radioactivity. A new era of research into discharge physics opened up, this period being strongly influenced by the discoveries of the electron and positive ions.

During the first decade of the 20th century, results on ionization phenomena came from several researchers in Europe and North America. Around 1900, Charles Wilson in Scotland and, independently, two high-school teachers and good friends in Germany, Julius Elster and Hans Geitel, improved the technique for the careful insulation of electroscopes in a closed vessel, thus improving the sensitivity of the electroscope itself (figure 1). As a result, they could make measurements of the rate of spontaneous discharge. They concluded that ionizing agents were coming from outside the vessel and that part of this radioactivity was highly penetrating: it could ionize the air in an electroscope shielded by metal walls a few centimetres thick. This was confirmed in 1902 by quantitative measurements performed by Ernest Rutherford and Henry Cooke, as well as by John McLennan and F Burton, who immersed an electroscope in a tank filled with water.

The obvious questions concerned the nature of such radiation and whether it was of terrestrial or extra-terrestrial origin. The simplest hypothesis was that its origin was related to radioactive materials in the Earth’s crust, which were known to exist following the studies by Marie and Pierre Curie on natural radioactivity. A terrestrial origin was thus a commonplace assumption – an experimental proof, however, seemed difficult to achieve. In 1901, Wilson made the visionary suggestion that the origin of this ionization could be an extremely penetrating extra-terrestrial radiation. Nikola Tesla in the US even patented in 1901 a power generator based on the fact that “the Sun, as well as other sources of radiant energy, throws off minute particles of matter […which] communicate an electrical charge”. However, Wilson’s investigations in tunnels with solid rock overhead showed no reduction in ionization and so did not support an extra-terrestrial origin. The hypothesis was dropped for many years.

New heights

A review by Karl Kurz summarizes the situation in 1909. The spontaneous discharge observed was consistent with the hypothesis that background radiation did exist even in insulated environments and that this radiation had a penetrating component. There were three possible sources for the penetrating radiation: an extra-terrestrial radiation, perhaps from the Sun; radioactivity from the crust of the Earth; and radioactivity in the atmosphere. Kurz concluded from ionization measurements made in the lower part of the atmosphere that an extra-terrestrial radiation was unlikely and that (almost all of) the radiation came from radioactive material in the crust. Calculations were made of how such radiation should decrease with height but measurements were not easy to perform because the electroscope was a difficult instrument to transport and the accuracy was not sufficient.

Although a large effort to build a transportable electroscope was made by the meteorology group in Vienna (leaders in measurements of air ionization at the time), the final realization of such an instrument was made by Father Theodor Wulf (figure 2, left), a German scientist and Jesuit priest serving in the Netherlands and later in Rome. In Wulf’s electroscope, the two metal leaves were replaced by metalized silicon-glass wires, with a tension spring in between, also made of glass. The instrument could be read by a microscope (figure 2, right). To test the origin of the radiation causing the spontaneous discharge, Wulf checked the variation of radioactivity with height: in 1909 he measured the rate of ionization at the top of the Eiffel Tower in Paris (300 m above ground). Supporting the hypothesis of the terrestrial origin of most of the radiation, he expected to find less ionization at the top of the tower than at ground level. However, the rate of ionization showed too small a decrease to confirm this hypothesis. Instead, he found that the amount of radiation “at nearly 300 m [altitude] was not even half of its ground value”, while with the assumption that radiation emerges from the ground there would remain at the top of the tower “just a few per cent of the ground radiation”.

Wulf’s observations were puzzling and demanded an explanation. One possible way to solve this puzzle was to make measurements at altitudes higher than the 300 m of the Eiffel tower. Balloon experiments had been widely used for studies of atmospheric electricity for more than a century and it became evident that they might give an answer to the problem of the origin of the penetrating radiation. In a flight in 1909, Karl Bergwitz, a former pupil of Elster and Geitel, found that the ionization at 1300 m altitude had decreased to about 24% of the value on the ground. However, Bergwitz’s results were questioned because his electrometer was damaged during the flight. He later investigated electrometers on the ground and at 80 m, reporting that no significant decrease of the ionization was observed. Other measurements with similar results were obtained around the same time by Alfred Gockel, from Fribourg, Switzerland, who flew up to 3000 m (and first introduced the term “kosmische Strahlung”, or “cosmic radiation”). The general interpretation was that radioactivity was coming mostly from the Earth’s surface, although the balloon results were puzzling.

The meteorologist Franz Linke had, in fact, made 12 balloon flights in 1900–1903 during his PhD studies at Berlin University, carrying an electroscope built by Elster and Geitel to a height of 5500 m. The thesis was not published, but a published report concludes: “Were one to compare the presented values with those on ground, one must say that at 1000 m altitude […] the ionization is smaller than on the ground, between 1 and 3 km the same amount, and above it is larger … with values increasing up to a factor of 4 (at 5500 m). […] The uncertainties in the observations […] only allow the conclusion that the reason for the ionization has to be found first in the Earth.” Nobody later quoted Linke: although he had made the right measurement, he had reached the wrong conclusion.

Underwater measurements

One person to question the conclusion that radioactivity came mostly from the Earth’s crust was an Italian, Domenico Pacini. An assistant meteorologist in Rome, he made systematic studies of ionization on mountains, on the shoreline and at sea between 1906 and 1910. Pacini’s supervisor was the Austrian-born Pietro Blaserna, who had graduated in physics within the electrology group at the University of Vienna. The instruments used in Rome were state of the art and Pacini could reach a sensitivity of one third of a volt.

In 1910 he placed one electroscope on the ground and one out at sea, a few kilometres off the coast, and made simultaneous measurements. He observed a hint of a correlation and concluded that “in the hypothesis that the origin of penetrating radiations is in the soil […] it is not possible to explain the results obtained”. That same year he looked for a possible increase in radioactivity during a passage of Halley’s comet and found no effect.

Pacini later developed an experimental technique for underwater measurements and in June 1911 compared the rate of ionization at sea level and at 3 m below water, at a distance of 300 m from the shore of the Naval Academy of Livorno. He repeated the measurements in October on the Lake of Bracciano. He reported on his measurements, the results – and their interpretation – in a note entitled, “Penetrating radiation at the surface of and in water”, published in Italian in Nuovo Cimento in February 1912. In that paper, Pacini wrote: “Observations carried out on the sea during the year 1910 led me to conclude that a significant proportion of the pervasive radiation that is found in air had an origin that was independent of the direct action of active substances in the upper layers of the Earth’s surface. … [To prove this conclusion] the apparatus … was enclosed in a copper box so that it could be immersed at depth. … Observations were performed with the instrument at the surface, and with the instrument immersed in water, at a depth of 3 m”.

Pacini measured the discharge rate of the electroscope seven times over three hours. The ionization underwater was 20% lower than at the surface, consistent with absorption by water of radiation coming from outside; the significance was larger than 4 σ. He wrote: “With an absorption coefficient of 0.034 for water, it is easy to deduce from the known equation I/I0 = exp(–d/λ), where d is the thickness of the matter crossed, that, in the conditions of my experiments, the activities of the sea-bed and of the surface were both negligible. The explanation appears to be that, owing to the absorbing power of water and the minimum amount of radioactive substances in the sea, absorption of radiation coming from the outside indeed happens, when the apparatus is immersed.” Pacini concluded: “[It] appears from the results of the work described in this note that a sizable cause of ionization exists in the atmosphere, originating from penetrating radiation, independent of the direct action of radioactive substances in the crust.”
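To make the quoted argument explicit (an illustrative calculation, assuming the absorption coefficient is given per centimetre of water, as was the convention of the time): with μ = 0.034 cm^–1, the transmission through d = 3 m of water is

I/I0 = exp(–μd) = exp(–0.034 × 300) ≈ 4 × 10^–5,

so radiation of that penetrating power, whether from the sea-bed or entering from above, is almost entirely absorbed before reaching the immersed instrument. The 20% drop in the ionization rate therefore measures the share of the surface rate that was due to radiation arriving from outside the water – Pacini’s “sizable cause of ionization … originating from penetrating radiation”.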

Despite Pacini’s conclusions – and the puzzling results of Wulf and Gockel on the dependence of radioactivity on altitude – physicists were reluctant to abandon the hypothesis of a terrestrial origin for the mystery penetrating radiation. The situation was resolved in 1911 and 1912 with the long series of balloon flights by Victor Hess, who established the extra-terrestrial origin of at least part of the radiation causing the observed ionization. However, it was not until 1936 that Hess was rewarded with the Nobel Prize for the discovery of cosmic radiation. By then the importance of this “natural laboratory” was clear, and he shared the prize with Carl Anderson, who had discovered the positron in cosmic radiation four years earlier. Meanwhile, Pacini had died in 1934 – his contributions mainly forgotten through a combination of historical and political circumstances.

LHCf: bringing cosmic collisions down to Earth

Recent observations of ultra-high-energy cosmic rays (UHECRs) by extensive air-shower arrays have revealed a clear cut-off in the energy spectrum at 10^19.5 eV. The results are consistent with the predictions made in the mid-1960s that interactions with the cosmic microwave background would suppress the flux of particles at high energies (Greisen 1966, Zatsepin and Kuz’min 1966). Nevertheless, as the article on page 22 explains, the nature of the cut-off – and, indeed, the origin of the UHECRs – remains unknown.

UHECRs are observed in the large showers of particles created when a high-energy particle (proton or nucleus) interacts in the atmosphere. This means that information about the primary cosmic ray has to be estimated by “interpreting” the observed extensive air shower. Both the longitudinal and lateral shower structures, measured by the fluorescence and surface detectors respectively, are used in the interpretation of the energy and species of the primary particle through comparison with the predictions of Monte Carlo simulations. In high-energy hadronic collisions the energy flow is dominated by the particles emitted in the very forward direction, and the shower development is determined by the energy balance between baryonic and mesonic particle production. However, the lack of knowledge about hadronic interactions at such high energies, especially in the forward region, means that the interpretations tend to be model-dependent. To constrain the models used in the simulations, measurements of the forward production of particles relevant to air-shower development are indispensable at the highest energies possible.

Into the lab

The most important cross-section for cosmic-ray shower development is for the forward production in hadron collisions of neutral pions (π0), which immediately decay to two forward photons. The highest energies accessed in the laboratory are reached in particle colliders and, until the start-up of the LHC, the only dedicated measurement of forward particle production at a collider was made by the UA7 experiment at CERN’s SppS collider (Paré et al. 1990). Now, two decades later, members of the UA7 team have formed a new collaboration for the Large Hadron Collider forward (LHCf) experiment (LHCf 2006). This is dedicated to measuring very-forward particle production at the LHC, where running with proton–proton collisions at the full design energy of 14 TeV will correspond to 10^17 eV in the laboratory frame and so approach the UHECR region.
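The collider-to-laboratory conversion behind that statement is the usual fixed-target equivalence (m_p being the proton mass):

E_lab ≈ s / (2m_p c^2) = (14 TeV)^2 / (2 × 0.938 GeV) ≈ 1 × 10^17 eV,

so LHC collisions probe hadronic interactions only a few hundred times below the GZK energy region.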

The LHCf experiment consists of two independent calorimeters (Arm1 and Arm2) installed 140 m on either side of the interaction point in the ATLAS experiment. The detectors fit in the instrumentation slots of the target neutral absorbers (TANs), which are located where the vacuum chamber for the beam makes a Y-shaped transition from the single beam pipe that passes through the interaction point to the two separate beam tubes that continue into the arcs of the LHC. Charged particles produced in the collision region in the direction of the TAN are swept aside by an inner beam-separation magnet before they reach it. Consequently, only neutral particles produced at the interaction point enter the TAN and the detectors. This location allows the observation of particles at nearly 0° to the proton beam direction.

Both LHCf detectors contain two sampling and imaging calorimeters, each consisting of 44 radiation lengths of tungsten and 16 sampling layers of 3 mm-thick plastic scintillator for the initial runs. The calorimeters in Arm1 have an area transverse to the beam direction of 20 × 20 mm2 and 40 × 40 mm2, while those in Arm2 have areas of 25 × 25 mm2 and 32 × 32 mm2. Four X-Y layers of position-sensitive sensors are interleaved with the tungsten and scintillator to provide the transverse positions of the showers generated in the calorimeters, employing different technologies in the two detectors: Arm1 uses scintillating fibres and multi-anode photomultiplier tubes (MAPMTs); Arm2 uses silicon-strip sensors. In each case, the sensors are installed in pairs in such a way that two pairs are optimized to detect the maximum of gamma-ray-induced showers, while the other two are for hadronic showers developed deep within the calorimeters. Although the lateral dimensions of these calorimeters are small, the energy resolution is expected to be better than 6% and the position resolution better than 0.2 mm for gamma-rays with energy between 100 GeV and 7 TeV. This has been confirmed by test-beam results at CERN’s Super Proton Synchrotron.

LHCf successfully took data right from the first collision at the LHC in 2009 and finished its first phase of data-taking in mid-July 2010, after collecting enough data in proton–proton collisions at both 900 GeV and 7 TeV in the centre of mass. In 2011, the collaboration reported its measurements of inclusive photon spectra at 7 TeV (Adriani et al. 2011). A comparison of the data with predictions from the hadron-interaction models used in the study of air showers and from PYTHIA 8.145, which is popular in the high-energy-physics community, revealed various discrepancies, with none of the models showing perfect agreement with the data.

Now, LHCf has results for the inclusive π0 production rate at rapidities greater than 8.9 in proton–proton data at 7 TeV in the centre of mass. Using data collected in two runs in May 2010, corresponding to integrated luminosities of 2.53 nb–1 in Arm1 and 1.90 nb–1 in Arm2, the collaboration measured instances where two photons emitted into the very-forward regions could be attributed to π0 decays and obtained the transverse momentum (pT) distributions of the π0s. The criteria for the selection of π0 events were based on the position of the incident photons (more than 2 mm from the edge of the calorimeter, to ensure shower containment), the photon energy (above 100 GeV), the number of hits (one in each calorimeter), photon-like particle identification using the energy deposition and, finally, a two-photon invariant mass consistent with the π0 mass.
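The final criterion relies on the two-photon invariant mass, which at such small opening angles is fixed by the photon energies and their separation at the detector. The sketch below (illustrative only, not LHCf analysis code; the 140 m lever arm is taken from the article and the numbers in the example are invented) shows the reconstruction:

```python
from math import sqrt, cos

# Illustrative sketch: diphoton invariant mass for photon pairs measured
# about 140 m from the interaction point, as used to select pi0 candidates.
# Energies in GeV, transverse impact positions in metres.

DETECTOR_DISTANCE = 140.0  # m from the interaction point

def diphoton_mass(e1, pos1, e2, pos2, distance=DETECTOR_DISTANCE):
    """m^2 = 2 E1 E2 (1 - cos(theta)), with the opening angle theta taken
    from the photon separation at the detector (small-angle geometry)."""
    dx, dy = pos1[0] - pos2[0], pos1[1] - pos2[1]
    theta = sqrt(dx * dx + dy * dy) / distance  # radians, valid for theta << 1
    return sqrt(2.0 * e1 * e2 * (1.0 - cos(theta)))

# Invented example: two TeV-scale photons ~17 mm apart give a mass near m(pi0)
m = diphoton_mass(1500.0, (0.010, 0.000), 800.0, (-0.007, 0.003))
print(f"m(gamma gamma) = {m * 1000:.0f} MeV")  # ~135 MeV
```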

The pT spectra were derived in independent analyses of the two detectors, Arm1 and Arm2, in six rapidity intervals covering the range 8.9–11.0. These spectra, which agree within statistical and systematic errors, were then combined and compared with the predictions from various hadronic interaction models: DPMJET 3.04, QGSJET II-03, SIBYLL 2.1, EPOS 1.99 and PYTHIA 8.145 (default parameter set).

Figure 1 shows the combined spectrum for one rapidity interval, 9.2 < y < 9.4, compared with the outcome from these models (Adriani et al. 2012). It is clear that DPMJET 3.04 and PYTHIA 8.145 predict the π0 production rates to be higher than the data from LHCf as pT increases. SIBYLL 2.1 also predicts harder pion spectra than are observed in the experimental data, although the expected π0 yield is generally small. On the other hand, QGSJET II-03 predicts π0 spectra that are softer than both the LHCf data and the other model predictions. Among the hadronic interaction models, EPOS 1.99 shows the best overall agreement with the LHCf data.

In figure 2 the values of average pT (〈pT〉) obtained in this analysis are compared, as a function of y_lab = y_beam – y, with the results from UA7 and with the model predictions. Although the LHCf and UA7 data have limited overlap and the systematic errors for UA7 are relatively large, the values of 〈pT〉 from the two experiments lie mainly along a common curve and there is no evidence of a dependence on collision energy. EPOS 1.99 shows the smallest dependence of 〈pT〉 on the two collision energies among the models compared, and this tendency is consistent with the results from LHCf and UA7. It is also evident from figure 2 that the best agreement with the LHCf data is obtained by EPOS 1.99.

The photon and π0 data from the LHCf experiment can now be used to constrain the mesonic part (or, via the π0s, the electromagnetic part) of the models of air-shower development. The collaboration, meanwhile, is turning to the analysis of baryon production, which will provide complementary information on the hadronic interaction. At the same time, work is ongoing towards taking data in proton–lead collisions at the LHC, planned for the end of 2012. Such nuclear-collision data are important for understanding the interaction between cosmic rays and the atmosphere. Other work is also under way on replacing the plastic scintillators in the calorimeters – which were removed after the runs in July 2010 – with more radiation-resistant crystal scintillator, so as to be ready for 2014, when the LHC will run at 7 TeV per beam. There are also plans to change the position of the silicon sensors to improve the performance of the experiment in measuring the energy of the interacting particles.

Studies of ultra-high-energy cosmic rays look to the future

“Analysis of a cosmic-ray air shower recorded at the MIT Volcano Ranch station in February 1962 indicates that the total number of particles in the shower was 5 × 10^10. The total energy of the primary particle that produced the shower was 1.0 × 10^20 eV.” Thus begins the 1963 paper in which John Linsley described the first detection of a cosmic ray with a surprisingly high energy. Such ultra-high-energy cosmic rays (UHECRs), which arrive at Earth at rates of less than one per square kilometre per century, have since proved challenging both experimentally and theoretically. The International Symposium on Future Directions in UHECR Physics, which took place at CERN on 13–16 February, aimed to discuss these challenges and look to the next step in terms of a future large-scale detector. Originally planned as a meeting of about 100 experts from the particle- and astroparticle-physics communities, the symposium ended up attracting more than 230 participants from 24 countries, reflecting the strong interest in the current and future prospects for cosmic rays at the highest energies.

Soon after Linsley’s discovery, UHECRs became even more baffling when Arno Penzias and Robert Wilson discovered the cosmic microwave background (CMB) radiation in 1965. The reason for this is twofold: first, astrophysical sources delivering particle energies of 10 to 100 million times the beam energy of the LHC are hard to conceive of; and, second, the universe becomes opaque to protons and nuclei at energies above 5 × 10^19 eV because of their interaction with the CMB radiation. In 1966, Kenneth Greisen, and independently Georgy Zatsepin and Vadim Kuz’min, pointed out that protons would suffer pion photoproduction and nuclei photodisintegration in the CMB. These processes limit the cosmic-ray horizon above the so-called “GZK” threshold to less than about 100 Mpc, resulting in strongly suppressed fluxes of protons and nuclei from distant sources.
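The scale involved follows from simple relativistic kinematics (an order-of-magnitude estimate, not a figure from the original papers): for a head-on collision between a proton and a CMB photon of typical energy E_γ ≈ 6 × 10^–4 eV, pion photoproduction p + γ → p + π requires

E_p ≳ m_π(2m_p + m_π) / (4E_γ) ≈ 0.27 GeV^2 / (2.4 × 10^–3 eV) ≈ 10^20 eV;

folding in the thermal spread of photon energies and the rise of the cross-section at the Δ resonance brings the effective suppression down to the quoted few × 10^19 eV.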

The HiRes, Pierre Auger and Telescope Array (TA) collaborations recently reported a suppression of just this type at about the expected threshold. Does this mark the long awaited discovery of the GZK effect? At the symposium, not all participants were convinced because the break in the energy spectrum could also be caused by the sources running out of steam. To shed more light on this most important question of astroparticle physics, information about the mass composition and arrival directions, as well as the precise energy spectrum of the highest-energy cosmic rays, is now paramount.

Searching for answers

Three large-scale observatories, each operated by international collaborations, are currently taking data and trying to provide answers: the Pierre Auger Observatory in Argentina, the flagship in the field, which covers 3000 km2; the more recently commissioned TA in Utah, which samples an area of 700 km2; and the smaller Yakutsk Array in Siberia, which now covers about 10 km2. To make progress in understanding the data from these three different observatories new ground was broken in preparing for the symposium. Before the meeting, five topical working groups were formed comprising members from each collaboration. They were given the task of addressing differences between the respective approaches in the measurement and analysis methods, studying their impact on the physics results and delivering a report at the symposium. These working-group reports – on the energy spectrum, mass composition, arrival directions, multimessenger studies and comparisons of air-shower data to simulations – were complemented by invited overview talks, contributed papers and a large number of posters addressing various topics of analyses, new technologies and concepts for future experiments.

In opening the symposium and welcoming the participants, CERN’s director of research, Sergio Bertolucci, emphasized the organization’s interest in astroparticle physics in general and in cosmic rays in particular – the latter being explicitly named in the CERN convention. Indeed, many major astroparticle experiments have been given the status of “recognized experiment” by CERN. Pierre Sokolsky, a key figure in the legendary Fly’s Eye experiment and its successor HiRes, followed with the first talk, a historical review of the research on the most energetic particles in nature. Paolo Privitera of the University of Chicago then reviewed the current status of measurements, highlighting differences in observations and the understanding of systematic uncertainties. Theoretical aspects of acceleration and propagation were also discussed, as well as predictions of the energy and mass spectrum, by Pasquale Blasi of Istituto Nazionale di Astrofisica/Arcetri Astrophysical Observatory and Venya Berezinsky of Gran Sasso National Laboratory.

Data from the LHC, particularly those measured in the very forward region, are of prime interest for verifying and optimizing the hadronic-interaction event generators that are employed in Monte Carlo simulations of the extensive air showers (EAS) generated by the primary UHECRs. Overviews of recent LHC data by Yoshikata Itow of Nagoya University and, more generally, of the connection between accelerator physics and EAS were therefore given prominence at the meeting. Tanguy Pierog of Karlsruhe Institute of Technology demonstrated that the standard repertoire of interaction models employed in EAS simulations not only covers the LHC data reasonably well but also predicted them better than high-energy-physics generators such as PYTHIA or HERWIG. Nonetheless, no perfect model exists and significant muon deficits in the models are seen at the highest air-shower energies. In a keynote talk, John Ellis, now of King’s College London, highlighted UHECRs as being the most extreme environment for studying particle physics – at a production energy of around 10^11 GeV and more than 100 TeV in the centre of mass – and discussed the potential for exotic physics. In a related talk, Paolo Lipari of INFN Rome La Sapienza discussed the interplay of cross-sections, cosmic-ray composition and interaction properties, highlighting the mutual benefits provided by cosmic rays and accelerator physics.

High-energy photons and neutrinos are directly related to cosmic rays and are different observational probes of the high-energy non-thermal universe. Tom Gaisser of the University of Delaware, Günter Sigl of the University of Hamburg and others addressed this multimessenger aspect and argued that current neutrino limits from IceCube begin to disfavour a UHECR origin inside relativistic gamma-ray bursts and active galactic-nuclei (AGN) jets, and that cosmogenic neutrinos would provide a smoking-gun signal of the GZK effect. However, as Sigl noted, fluxes of diffuse cosmogenic neutrinos and photons depend strongly on the chemical composition, maximal acceleration energy and redshift evolution of sources.

Future options

Looking towards the future, the symposium discussed potentially attractive new technologies for cosmic-ray detection. Radio observations of EAS at frequencies of some tens of megahertz are being performed at the prototype level by a couple of groups and the underlying physical emission processes are being understood in greater detail. Ad van den Berg of the University of Groningen described the status of the largest antenna array under construction, the Auger Engineering Radio Array (AERA). More recently, microwave emission by molecular bremsstrahlung was suggested as another potentially interesting emission process. Unlike the megahertz radiation, the gigahertz emission would occur isotropically, opening the opportunity to observe showers sideways from large distances, a technique known from the powerful EAS fluorescence observations. Thus, huge volumes could be surveyed with minimal off-the-shelf equipment. Pedro Facal of the University of Chicago and Radomir Smida of the Karlsruhe Institute of Technology reported preliminary observations of such radiation, with signals being much weaker than expected from laboratory measurements.

The TA collaboration is pursuing forward-scattered radar detection of EAS, as John Belz of the University of Utah reported; this again potentially allows huge volumes to be monitored for reflected signals. However, the method still needs to be proved to work. Interesting concepts for future giant ground-based observatories based on current and novel technologies were presented by Antoine Letessier-Selvon of the CNRS, Paolo Privitera and Shoichi Ogio of Osaka City University. The goal is to reach huge apertures with particle-physics capability at cost levels of €100 million.

Parallel to pushing for a new giant ground-based observatory, space-based approaches, most notably by JEM-EUSO – the Extreme Universe Space Observatory aboard the Japanese Experiment Module – to be mounted on the International Space Station, were discussed by Toshikazu Ebizusaki of RIKEN, Andrea Santangelo of the Institut für Astronomie und Astrophysik Tübingen and Mario Bertaina of Torino University/INFN. Depending on the effective duty cycle, apertures of almost 10 times that of the Auger Observatory with a uniform coverage of northern and southern hemispheres may be reached. However, the most important weakness as compared with ground-based experiments is the poor sensitivity to the primary mass and the inability to perform particle-physics-related measurements.

The true highlights of the symposium were reports given by the joint working groups. This type of co-operation, inspired by the former working groups for CERN’s Large Electron–Positron Collider, marked a new direction for the community. Yoshiki Tsunesada of the Tokyo Institute of Technology reported detailed comparisons of the energy spectra measured by the different observatories. All spectra are in agreement within the given energy-scale uncertainties of around 20%. Accounting for these overall differences, spectral shapes and positions of the spectral features are in good agreement. Nevertheless, the differences are not understood in detail and studies of the fluorescence yield and photometric calibration – treated differently by the TA and Auger collaborations – are to be pursued.

The studies of the mass-composition working group, presented by Jose Bellido of the University of Adelaide, addressed the apparent discrepancy whereby the composition measured by HiRes and TA is compatible with a proton-dominated spectrum while Auger suggests a significant fraction of heavy nuclei above 10^19 eV. Following many cross-checks and cross-correlations between the experiments, the differences could not be attributed to issues in the data analysis. Even after taking into account the shifts in the energy scale, the results are not fully consistent within the quoted uncertainties, assuming no differences exist between the northern and southern hemispheres.

The anisotropy working group discussed large-scale anisotropies and directional correlations to sources in various catalogues and concluded that there is no major departure from isotropy in any of the data sets, although some hints at the 10–20° scale may have been seen by Auger and TA. Directional correlations to AGN and to the overall nearby matter distribution are found by Auger at the highest energies, but the HiRes collaboration could not confirm this finding. Recent TA data agree with the latest signal strength seen by Auger but, owing to the lack of statistics, they are also compatible with isotropy at the 2% level.

Studies by the photon and neutrino working group, presented by Markus Risse of the University of Siegen and Grisha Rubtsov from the Russian Academy of Sciences, addressed the pros and cons of different search techniques and concluded that the results are similar. No photons and neutrinos have been observed yet but prospects for the coming years seem promising for reaching sensitivities for optimistic GZK fluxes.

Lastly, considerations of the hadronic-interaction and EAS-simulation working group, presented by Ralph Engel of Karlsruhe Institute of Technology, acknowledged the many constraints – so far without surprises – that are provided by the LHC. Despite the good overall description of showers, significant deficits in the muon densities at ground level are observed in the water Cherenkov tanks of Auger. The energy obtained by the plastic scintillator array of TA is around 30% higher than the energies measured by fluorescence telescopes. These differences are difficult to understand and deserve further attention. Nevertheless, proton–air and proton–proton inelastic cross-sections up to √s = 57 TeV have been extracted from Auger, HiRes and Yakutsk data, demonstrating the particle-physics potential of high-energy cosmic rays.

The intense and lively meeting was summarized enthusiastically by Angela Olinto of the University of Chicago and Masaki Fukushima of the University of Tokyo. A round-table discussion, chaired by Alan Watson of the University of Leeds, iterated the most pressing questions to be addressed and the future challenges to be worked on towards a next-generation giant observatory. Clearly, important steps were made at this symposium, marking the start of a coherent worldwide effort towards reaching these goals. The open and vibrant atmosphere of CERN contributed much to the meeting’s success and was highly appreciated by all participants, who agreed to continue the joint working groups and discuss progress at future symposia.

• For more information about the symposium, see http://2012.uhecr.org.

A neutrino telescope deep in the Mediterranean Sea

Particle physicists – like many other scientists – are used to working under well controlled laboratory conditions, with constant temperature, controlled humidity and perhaps even a clean-room environment. They would consider crazy anyone who tried to install an experiment in the field outside the lab environment, without shelter against wind and weather. So what must they think of a group of physicists and engineers planning to install a huge, highly complex detector on the bottom of the open sea?

This is exactly what the KM3NeT project is about: a neutrino telescope that will consist of an array of photo-sensors instrumenting several cubic kilometres of water deep in the Mediterranean Sea (figure 1). The aim is to detect the faint Cherenkov light produced as charged particles emerge from the reactions of high-energy neutrinos in the instrumented volume of ocean or the rock beneath it. Most of the neutrinos that are detected will be “atmospheric neutrinos”, originating from the interactions of charged cosmic rays in the Earth’s atmosphere. Hiding among these events will be a few that have been induced by neutrinos of cosmic origin, and these are the prime objects that the experimenters desire.

Ideal messengers

Why are a few cosmic neutrinos worth the huge effort to construct and operate such an instrument? A century after the discovery of cosmic rays, the start of construction of the KM3NeT neutrino telescope marks a big step forwards in understanding their origin and solving the mystery of the astrophysical processes in which they acquire energies that are many orders of magnitude beyond the reach of terrestrial particle accelerators. This is because neutrinos are ideal messengers from the universe: they are neither absorbed nor deflected, i.e. they can escape from dense environments that would absorb all other particles; they point back to their origin; and they are produced inevitably if protons or heavier nuclei with the energies typical of cosmic rays – up to eight orders of magnitude above the LHC beam energy – scatter on other nuclei or on photons and thereby signal astrophysical acceleration of nuclei.

Only a handful of neutrinos assigned to an astrophysical source would convey the unambiguous message that this source accelerates nuclei – a finding that cannot be achieved in any other way. Of course, much more can be studied with neutrino telescopes. Cosmic neutrinos might signal annihilations of dark-matter particles, and their isotropic flux provides information about sources that cannot be resolved individually. Moreover, atmospheric neutrinos could be used to make measurements of unique importance for particle physics, such as the determination of the neutrino-mass hierarchy.

Driven by the fundamental significance of neutrino astronomy, a first generation of neutrino telescopes with instrumented volumes up to about a per cent of a cubic kilometre was constructed over the past two decades: Baikal, in the homonymous lake in Siberia; AMANDA, in the deep ice at the South Pole; and ANTARES, off the French Mediterranean coast. These detectors have proved the feasibility of neutrino detection in the respective media and provided a wealth of experience on which to build. However, they have not – yet – identified any neutrinos of cosmic origin.

These results and the evolution of astrophysical models of potential classes of neutrino sources over the past few years indicate that, in fact, much larger target volumes are necessary for neutrino astronomy. The first neutrino telescope of cubic-kilometre size, the IceCube observatory at the South Pole, was completed in December 2010. Its integrated exposure is growing rapidly and the discovery of a first source may be just round the corner.

Why then start constructing another large neutrino telescope? Would it not be better to wait and see what IceCube finds? To answer this question it is important to understand in somewhat more detail the way in which neutrinos are actually measured.

The key reaction is the charged-current (mostly deep-inelastic) scattering of a muon-neutrino or muon-antineutrino on a target nucleus. In such a reaction, an outgoing muon is produced that, on average, carries a large fraction of the neutrino energy and is emitted with only a small angular deflection from the neutrino direction. The muon trajectory – and thus the neutrino direction – is reconstructed from the arrival times of the Cherenkov light in the photo-sensors and the positions of the sensors. This method is suitable for the identification of neutrinos if they come from the opposite hemisphere, i.e. through the Earth. If they come from above, then the resulting muons are barely distinguishable from “atmospheric” muons that penetrate to the detector and are much more numerous. Neutrino telescopes therefore look predominantly “downwards” and do not cover the full sky. IceCube, being at the South Pole, can thus observe the Northern sky but not the Galactic centre and the largest part of the Galactic plane (figure 2).
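The geometry that makes this reconstruction possible is the fixed Cherenkov emission angle. For a relativistic muon in sea water, taking a typical refractive index of n ≈ 1.35 at the relevant wavelengths (an assumed, representative value):

cos θ_C = 1/(nβ) ≈ 1/1.35, i.e. θ_C ≈ 42°,

so each photo-sensor hit constrains the track to a cone of known opening angle, and a fit to the hit times and positions then returns the muon direction.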

The KM3NeT telescope will have the Galactic centre and central plane of the Galaxy in its field of view and will be optimized to discover and investigate the neutrino flux from Galactic sources. Shell-type supernova remnants are a particularly interesting kind of candidate source. In these objects the supernova ejecta hit interstellar material, such as molecular clouds, and form shock fronts. Gamma-ray observations show that these are places where particles are accelerated to very high energies – but there is an intense debate as to whether these gamma rays stem from accelerated electrons and positrons or from hadrons. The only way to give a conclusive answer is through observing neutrinos. Figure 3 shows the sensitivity of KM3NeT and other experiments to neutrino point sources. According to simulations based on model calculations using gamma-ray measurements by the High Energy Stereoscopic System (HESS) – an air Cherenkov telescope – KM3NeT could make an observation of the supernova remnant RX J1713.7-3946 (figure 4) with a significance of 5σ within 5 years, if the emission process is purely hadronic.

The construction of a neutrino telescope of this sensitivity within a realistic budget faces a number of challenges. The components have to withstand the hostile environment with several hundred bar of static pressure and extremely aggressive salt water. That limits the choice of materials, in particular as maintenance is difficult or even impossible. In addition, background light from the radioactive decay of potassium-40 and bioluminescence causes high rates of photomultiplier hits, while the deployment of the detector requires tricky sea operations and the use of unmanned submersibles to make cable connections.

When the KM3NeT design effort started out with an EU-funded Design Study (2006–2009), a target cost of €200 million for a cubic-kilometre detector was defined. At the time, this was considered utterly optimistic in view of the investment cost for ANTARES of about €20 million. Now, in 2012, the collaboration is confident that it can construct a detector of 5–6 km3 for €220–250 million. This enormous development is partly a result of optimizing the neutrino telescope for slightly higher energies, which implies larger horizontal and vertical distances between the photo-sensors. The main progress, however, has been in the technical design. Almost all of the components have been newly designed, in many cases pursuing completely new approaches.

The design of the optical module is a prime example. Instead of a large, hemispherical photomultiplier (8- or 10-inch diameter) in a glass sphere (17-inch diameter), the design now uses as many as 31 photomultipliers of 3-inch diameter per sphere (figure 5). This triples the photocathode area for each optical module, allows for a clean separation of hits with one or two photo-electrons and adds some directional sensitivity.
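The quoted factor of three is simple geometry. Comparing with a single 10-inch tube and treating each photocathode as a disc of its nominal diameter (an illustrative estimate, since effective photocathode areas differ somewhat from the nominal figures):

31 × (3 in)^2 / (10 in)^2 ≈ 2.8.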

All data, i.e. all photomultiplier hits, will be digitized in the optical modules and sent to shore via optical fibres. At the shore station, a data filter will run on a computer cluster and select the hit combinations in which the hit pattern and timing are compatible with particle-induced events.
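One simple ingredient of such a filter is causality: hits produced by light from a single relativistic particle can differ in time by no more than the light travel time between the sensors, within some tolerance. The sketch below (illustrative only, not KM3NeT software; the speed of light in water, the tolerance and the threshold are assumed values) shows the idea:

```python
from itertools import combinations

# Illustrative causality-based hit filter for a shore-station trigger.
C_WATER = 0.22     # m/ns, approximate speed of light in sea water (assumed)
TOLERANCE = 20.0   # ns, allowance for scattering and timing resolution (assumed)

def causally_related(hit_a, hit_b):
    """hit = (time [ns], (x, y, z) [m]); True if the pair of hits is
    compatible with light from a single relativistic particle."""
    dt = abs(hit_a[0] - hit_b[0])
    dr = sum((a - b) ** 2 for a, b in zip(hit_a[1], hit_b[1])) ** 0.5
    return dt <= dr / C_WATER + TOLERANCE

def select_event(hits, min_pairs=5):
    """Keep a hit sample only if enough hit pairs are mutually compatible."""
    pairs = sum(1 for a, b in combinations(hits, 2) if causally_related(a, b))
    return pairs >= min_pairs
```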

Three countries (France, Italy and the Netherlands) have committed major contributions to an overall funding of €40 million for a first construction phase; others (Germany, Greece, Romania and Spain) are contributing at a smaller level or have not yet made final decisions. It is expected that final prototyping and validation activities will be concluded by 2013 and that construction will begin in 2013–2014. The installation will soon substantially exceed any existing northern-hemisphere instruments in sensitivity, thus providing discovery potential from an early stage.

Lastly, astroparticle physicists are not alone in looking forward to KM3NeT. For scientists from various areas of underwater research, the detectors will provide access to long-term, continuous measurements in the deep sea. KM3NeT will also provide nodes in a global network of deep-ocean observatories and thus be a truly multidisciplinary research infrastructure.

• For more information, see the KM3NeT Technical Design Report at www.km3net.org.
