One billion J/ψ events in Beijing

In a 40-day run ending on 22 May, the Institute of High-Energy Physics in China accumulated a total of 1.3 billion J/ψ events at the upgraded Beijing Electron Positron Collider (BEPCII) and Beijing Spectrometer (BESIII).

In a two-year run from 1999 until 2001, the earlier incarnations of the facility, BEPC and BESII, had accumulated a highly impressive 58 million J/ψs. Analysis of these, together with 220 million events collected at BESIII, has already produced important results such as the discovery of the X(1835). Now, thanks to the upgrades, data-acquisition efficiency is 120 times higher and as many as 40 million J/ψs were being collected daily towards the end of the latest run.

BEPCII is a two-ring electron–positron collider with a beam energy of 1.89 GeV. With a design luminosity of 1 × 10³³ cm⁻²s⁻¹, it reached a peak of 2.93 × 10³² cm⁻²s⁻¹ in the latest run, 59 times higher than that of its predecessor, BEPC.

The Milky Way’s dark-matter halo reappears

Back in April, a study of the motion of hundreds of stars in the Milky Way found no evidence of a massive dark-matter halo (CERN Courier June 2012 p11). The finding came as a surprise and did not long withstand the assault of sceptical scientists questioning the results. A new study based on the same data set, but proposing a different underlying assumption, now reconciles the observations with the presence of a dark-matter halo in line with expectations.

One of the first pieces of evidence for dark matter was that the rotation velocity of stars in the Milky Way remains constant instead of decreasing with distance from the Galactic centre. This flat rotation curve implies the presence of an extended distribution of dark matter, whose mass compensates for the decreasing stellar density in the outer regions of the Galaxy. A similar dark-matter halo is implied by the flat rotation curve observed in almost every spiral galaxy, but its actual shape and density distribution are difficult to predict.

To determine the amount of dark matter in the vicinity of the Sun, a team of Chilean astronomers measured the motions of more than 400 red giant stars up to 13,000 light-years from the Sun, in a volume four times larger than any previously considered. Visible matter in the form of stars and gas dominates in the plane of the Galaxy, but at greater heights above the Galactic disc dark matter should dominate. Measuring the rotational velocity of stars at different Galactic heights should therefore yield the local density of dark matter in the solar neighbourhood.

To their surprise, Christian Moni Bidin of the Universidad de Concepción and colleagues found no evidence at all for a dark-matter halo. They obtained an upper limit of 0.07 kg of dark matter in a volume the size of the Earth, whereas theories predict a mass in the range of 0.4–1.0 kg. This difference of about an order of magnitude led some astronomers to query the validity of the analysis.

Jo Bovy and Scott Tremaine of the Institute for Advanced Study, Princeton, claim to have found a fault in one of the assumptions made by Moni Bidin and colleagues. The problematic assumption is that the average rotational velocity 〈V〉 is constant with distance from the Galactic centre at all heights above the plane of the Galaxy. For Bovy and Tremaine, this assumption applies to the circular velocity Vc but not to 〈V〉. The difference is subtle but well identified: it is known as the “asymmetric drift” and arises from a sub-population of stars on elliptical orbits, which have on average a lower velocity than Vc. The result is a difference between 〈V〉 and Vc that grows with height above the Galactic plane and would have led the Chilean researchers to underestimate the density of dark matter.

With their modified assumption that the circular velocity curve is flat in the mid-plane, Bovy and Tremaine obtain a local dark-matter density of 0.3 ± 0.1 GeV/cm³, fully consistent with estimates from the usual models. They also claim to demonstrate that this assumption is motivated by observations, while the previous one was implausible.
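
As a cross-check of the figures quoted above, the following back-of-the-envelope calculation (a sketch in Python; the Earth-sized volume is simply the comparison volume used earlier in this article) converts Bovy and Tremaine’s density into the mass of dark matter contained in a volume the size of the Earth.

```python
import math

RHO_DM = 0.3            # GeV/cm^3, Bovy & Tremaine's local density
GEV_TO_KG = 1.783e-27   # 1 GeV/c^2 expressed in kg
R_EARTH = 6.371e8       # cm, mean radius of the Earth

volume = 4.0 / 3.0 * math.pi * R_EARTH**3      # ~1.1e27 cm^3
mass = RHO_DM * GEV_TO_KG * volume
print(f"Dark matter in one Earth-sized volume: {mass:.2f} kg")  # ~0.58 kg,
# inside the 0.4-1.0 kg range that the usual models predict
```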

As with the OPERA result on faster-than-light neutrinos (EXO, MINOS and OPERA reveal new results), this is another example of an unexpected result that was later disproved. Submitting the problem to the scientific community in the form of a paper proved an efficient way to identify the origin of the disagreement quickly. The strength of the scientific community as a whole lies in its ability to resolve major issues more effectively than any single research group.

A discovery of cosmic proportions

“We took off at 6.12 a.m. from Aussig on the Elbe. We flew over the Saxony border by Peterswalde, Struppen near Pirna, Bischofswerda and Kottbus. The height of 5350 m was reached in the region of Schwielochsee. At 12.15 p.m. we landed near Pieskow, 50 km east of Berlin.”

The flight on 7 August 1912 was the last in a series of balloon flights that Victor Hess, an Austrian physicist, undertook in 1912 with the aid of a grant from what is now the Austrian Academy of Sciences in Vienna. The previous year, he had taken two flights to investigate the penetrating radiation that had been found to discharge electroscopes above the Earth’s surface. He had reached an altitude of around 1100 m and found “no essential change” in the amount of radiation compared with observations near the ground. This indicated the existence of some source of radiation in addition to γ-rays emitted by radioactive decays in the Earth’s crust.

For the flights in 1912 he equipped himself with two electroscopes of the kind designed by Wulf, which were “perfectly airtight” and could withstand the pressure changes with altitude. The containers were electrolytically galvanized on the inside to reduce the radiation from the walls. To improve accuracy, the instruments were equipped with a new “sliding lens” that allowed Hess to focus on the electroscopes’ fibres as they discharged without moving the eyepiece and hence changing the magnification.

Hess undertook the first six flights from his base in Vienna, beginning on 17 April 1912, during a partial solar eclipse. Reaching 2750 m, he found no reduction in the penetrating radiation during the eclipse but indications of an increase around 2000 m. However, on the following flights he found that “the weak lifting power of the local gas, as well as the meteorological conditions” did not allow him to ascend higher.

So, on 7 August he took off instead from Aussig [today Ústí nad Labem in the Czech Republic], several hundred kilometres north of Vienna. Although cumulus clouds appeared during the day, the balloon carrying Hess and the electrometers was never close to them; there was only a thin layer above him, at around 6000 m. The results of this flight were more conclusive. “In both γ-ray detectors the values at the greatest altitude are about 22–24 ions higher than at the ground.”

Before reporting these results, Hess combined all of the data from his various balloon flights. At altitudes above 2000 m the measured radiation levels began to rise. “By 3000 to 4000 m the increase amounts to 4 ions, and at 4000 to 5200 m fully to 16 to 18 ions, in both detectors.”

He concluded: “The results of the present observations seem to be most readily explained by the assumption that a radiation of very high penetrating power enters our atmosphere from above … Since I found a reduction … neither by night nor at a solar eclipse, one can hardly consider the Sun as the origin.”

Although subsequent research discovered much more about the particles involved, the exact location of their sources remains a mystery that continues to drive adventurous research in astroparticle physics.

• The extracts are from a translation of the original paper by Hess, taken from Cosmic Rays by A M Hillas, in the series “Selected readings in physics”, Pergamon Press 1972.

Domenico Pacini and the origin of cosmic rays

In 1785 Charles-Augustin de Coulomb presented three reports on electricity and magnetism to France’s Royal Academy of Sciences. In the third of these he described his experiments showing that isolated electrified bodies can discharge spontaneously and that this phenomenon is not a result of defective insulation. After dedicated studies by Michael Faraday around 1835, William Crookes observed in 1879 that the speed of discharge decreased when the pressure was reduced: the ionization of the air was thus the direct cause of the discharge. But what was ionizing the air? Trying to answer this question paved the way in the early 20th century towards a revolutionary scientific discovery – that of cosmic rays.

Spontaneous radioactivity had been discovered at the end of the 19th century and researchers observed that a charged electroscope promptly discharges in the presence of radioactive material. The discharge rate of an electroscope could then be used to gauge the level of radioactivity. A new era of research into discharge physics opened up, this period being strongly influenced by the discoveries of the electron and positive ions.

During the first decade of the 20th century, results on ionization phenomena came from several researchers in Europe and North America. Around 1900, Charles Wilson in Scotland and, independently, two high-school teachers and good friends in Germany, Julius Elster and Hans Geitel, improved the technique for the careful insulation of electroscopes in a closed vessel, thus improving the sensitivity of the electroscope itself (figure 1). As a result, they could measure the rate of spontaneous discharge. They concluded that ionizing agents were coming from outside the vessel and that part of this radiation was highly penetrating: it could ionize the air in an electroscope shielded by metal walls a few centimetres thick. This was confirmed in 1902 by quantitative measurements performed by Ernest Rutherford and Henry Cooke, as well as by John McLennan and F Burton, who immersed an electroscope in a tank filled with water.

The obvious questions concerned the nature of such radiation and whether it was of terrestrial or extra-terrestrial origin. The simplest hypothesis was that its origin was related to radioactive materials in the Earth’s crust, which were known to exist following the studies by Marie and Pierre Curie on natural radioactivity. A terrestrial origin was thus a commonplace assumption; an experimental proof, however, seemed difficult to achieve. In 1901, Wilson made the visionary suggestion that the origin of this ionization could be an extremely penetrating extra-terrestrial radiation. Nikola Tesla in the US even patented a power generator in 1901 based on the notion that “the Sun, as well as other sources of radiant energy, throws off minute particles of matter […which] communicate an electrical charge”. However, Wilson’s investigations in tunnels with solid rock overhead showed no reduction in ionization and so did not support an extra-terrestrial origin. The hypothesis was dropped for many years.

New heights

A review by Karl Kurz summarized the situation in 1909. The spontaneous discharge observed was consistent with the hypothesis that background radiation existed even in insulated environments and that this radiation had a penetrating component. There were three possible sources for the penetrating radiation: an extra-terrestrial radiation, perhaps from the Sun; radioactivity from the crust of the Earth; and radioactivity in the atmosphere. From ionization measurements made in the lower part of the atmosphere, Kurz concluded that an extra-terrestrial radiation was unlikely and that (almost all of) the radiation came from radioactive material in the crust. Calculations were made of how such radiation should decrease with height, but measurements were not easy to perform because the electroscope was a difficult instrument to transport and its accuracy was not sufficient.

Although a large effort to build a transportable electroscope was made by the meteorology group in Vienna (leaders in measurements of air ionization at the time), the final realization of such an instrument came from Father Theodor Wulf (figure 2, left), a German scientist and Jesuit priest serving in the Netherlands and later in Rome. In Wulf’s electroscope, the two metal leaves were replaced by metalized silicon-glass wires, with a tension spring in between, also made of glass. The instrument could be read with a microscope (figure 2, right). To test the origin of the radiation causing the spontaneous discharge, Wulf checked the variation of radioactivity with height: in 1909 he measured the rate of ionization at the top of the Eiffel Tower in Paris (300 m above ground). If most of the radiation were of terrestrial origin, he expected to find much less ionization at the top of the tower than at ground level. However, the decrease he measured was far too small to confirm this hypothesis: he found that the amount of radiation “at nearly 300 m [altitude] was not even half of its ground value”, whereas under the assumption that the radiation emerges from the ground there should have remained at the top of the tower “just a few per cent of the ground radiation”.

Wulf’s observations were puzzling and demanded an explanation. One possible way to solve the puzzle was to make measurements at altitudes higher than the 300 m of the Eiffel Tower. Balloon experiments had been widely used for studies of atmospheric electricity for more than a century and it became evident that they might give an answer to the problem of the origin of the penetrating radiation. In a flight in 1909, Karl Bergwitz, a former pupil of Elster and Geitel, found that the ionization at 1300 m altitude had decreased to about 24% of the value on the ground. However, Bergwitz’s results were questioned because his electrometer was damaged during the flight. He later investigated electrometers on the ground and at 80 m, reporting that no significant decrease of the ionization was observed. Similar results were obtained around the same time by Alfred Gockel, from Fribourg, Switzerland, who flew up to 3000 m (and first introduced the term “kosmische Strahlung”, or “cosmic radiation”). The general interpretation was that the radioactivity was coming mostly from the Earth’s surface, although the balloon results remained puzzling.

The meteorologist Franz Linke had, in fact, made 12 balloon flights in 1900–1903 during his PhD studies at Berlin University, carrying an electroscope built by Elster and Geitel to a height of 5500 m. The thesis was not published, but a published report concludes: “Were one to compare the presented values with those on ground, one must say that at 1000 m altitude […] the ionization is smaller than on the ground, between 1 and 3 km the same amount, and above it is larger … with values increasing up to a factor of 4 (at 5500 m). […] The uncertainties in the observations […] only allow the conclusion that the reason for the ionization has to be found first in the Earth.” Nobody later quoted Linke; although he had made the right measurements, he had reached the wrong conclusions.

Underwater measurements

One person to question the conclusion that radioactivity came mostly from the Earth’s crust was an Italian, Domenico Pacini. An assistant meteorologist in Rome, he made systematic studies of ionization on mountains, on the shoreline and at sea between 1906 and 1910. Pacini’s supervisor was the Austrian-born Pietro Blaserna, who had graduated in physics within the electrology group at the University of Vienna. The instruments used in Rome were state of the art and Pacini could reach a sensitivity of one third of a volt.

In 1910 he placed one electroscope on the ground and one out at sea, a few kilometres off the coast, and made simultaneous measurements. He observed a hint of a correlation and concluded that “in the hypothesis that the origin of penetrating radiations is in the soil […] it is not possible to explain the results obtained”. That same year he looked for a possible increase in radioactivity during a passage of Halley’s comet and found no effect.

Pacini later developed an experimental technique for underwater measurements and in June 1911 compared the rate of ionization at sea level and at 3 m below water, at a distance of 300 m from the shore of the Naval Academy of Livorno. He repeated the measurements in October on the Lake of Bracciano. He reported on his measurements, the results – and their interpretation – in a note entitled, “Penetrating radiation at the surface of and in water”, published in Italian in Nuovo Cimento in February 1912. In that paper, Pacini wrote: “Observations carried out on the sea during the year 1910 led me to conclude that a significant proportion of the pervasive radiation that is found in air had an origin that was independent of the direct action of active substances in the upper layers of the Earth’s surface. … [To prove this conclusion] the apparatus … was enclosed in a copper box so that it could be immersed at depth. … Observations were performed with the instrument at the surface, and with the instrument immersed in water, at a depth of 3 m”.

Pacini measured the discharge rate of the electroscope seven times over three hours. The ionization underwater was 20% lower than at the surface, consistent with absorption by water of radiation coming from outside; the significance was larger than 4σ. He wrote: “With an absorption coefficient of 0.034 for water, it is easy to deduce from the known equation I/I₀ = exp(−d/λ), where d is the thickness of the matter crossed, that, in the conditions of my experiments, the activities of the sea-bed and of the surface were both negligible. The explanation appears to be that, owing to the absorbing power of water and the minimum amount of radioactive substances in the sea, absorption of radiation coming from the outside indeed happens, when the apparatus is immersed.” Pacini concluded: “[It] appears from the results of the work described in this note that a sizable cause of ionization exists in the atmosphere, originating from penetrating radiation, independent of the direct action of radioactive substances in the crust.”
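
The attenuation law that Pacini invokes is simple to evaluate. The sketch below assumes that his coefficient of 0.034 for water is per centimetre – the reading consistent with his conclusion that γ-radiation from the sea-bed and the surface contributed negligibly at the immersed instrument.

```python
import math

# Pacini's attenuation law I/I0 = exp(-d/lambda), written here with an
# absorption coefficient mu = 1/lambda.  Assumption: the quoted 0.034
# for water is per centimetre, the reading under which sea-bed and
# surface activities become negligible, as he concludes.
mu = 0.034        # cm^-1 (assumed units)
d = 300.0         # cm, the 3 m of water above the immersed instrument

print(f"Transmitted fraction: {math.exp(-mu * d):.1e}")
# ~4e-5: gamma-rays from radioactive substances in the sea-bed or the
# water itself would be almost entirely absorbed before reaching the
# detector, so the 20% drop he measured must refer to a far more
# penetrating radiation arriving from above.
```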

Despite Pacini’s conclusions – and the puzzling results of Wulf and Gockel on the dependence of radioactivity on altitude – physicists were reluctant to abandon the hypothesis of a terrestrial origin for the mysterious penetrating radiation. The situation was resolved in 1911 and 1912 with the long series of balloon flights by Victor Hess, who established the extra-terrestrial origin of at least part of the radiation causing the observed ionization. However, it was not until 1936 that Hess was awarded the Nobel Prize for the discovery of cosmic radiation. By then the importance of this “natural laboratory” was clear, and he shared the prize with Carl Anderson, who had discovered the positron in cosmic radiation four years earlier. Meanwhile, Pacini had died in 1934, his contributions largely forgotten through a combination of historical and political circumstances.

LHCf: bringing cosmic collisions down to Earth

Recent observations of ultra-high-energy cosmic rays (UHECRs) by extensive air-shower arrays have revealed a clear cut-off in the energy spectrum at 10¹⁹·⁵ eV. The results are consistent with the predictions made in the mid-1960s that interactions with the cosmic microwave background would suppress the flux of particles at high energies (Greisen 1966, Zatsepin and Kuz’min 1966). Nevertheless, as the article on page 22 explains, the nature of the cut-off – and, indeed, the origin of the UHECRs – remains unknown.

UHECRs are observed via the large showers of particles created when a high-energy particle (proton or nucleus) interacts in the atmosphere. This means that information about the primary cosmic ray has to be estimated by “interpreting” the observed extensive air shower. Both the longitudinal and the lateral shower structures, measured by fluorescence and surface detectors respectively, are used to infer the energy and species of the primary particle through comparison with the predictions of Monte Carlo simulations. In high-energy hadronic collisions, the energy flow is dominated by particles emitted in the very forward direction, and the shower development is governed by the energy balance between baryonic and mesonic particle production. However, the lack of knowledge about hadronic interactions at such high energies, especially in the forward region, means that the interpretations tend to be model-dependent. To constrain the models used in the simulations, measurements of the forward production of particles relevant to air-shower development are indispensable at the highest energies possible.

Into the lab

The most important cross-section for cosmic-ray shower development is that for the forward production in hadron collisions of neutral pions (π⁰), which immediately decay to two forward photons. The highest energies accessible in the laboratory are reached in particle colliders and, until the start-up of the LHC, the only experiment dedicated to forward particle production at a collider was UA7 at CERN’s SppS collider (Paré et al. 1990). Now, two decades later, members of the UA7 team have formed a new collaboration for the Large Hadron Collider forward (LHCf) experiment (LHCf 2006). This is dedicated to measuring very-forward particle production at the LHC, where running with proton–proton collisions at the full design energy of 14 TeV will correspond to 10¹⁷ eV in the laboratory frame and so will reach into the UHECR region.
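
The quoted equivalence between collider and cosmic-ray energies follows from standard two-body kinematics; the sketch below (textbook formulae, not LHCf code) equates the centre-of-mass energy of a symmetric proton–proton collider with that of a cosmic-ray proton striking a nucleon at rest.

```python
# A symmetric collider with sqrt(s) = 14 TeV probes the same
# centre-of-mass energy as a cosmic-ray proton of energy E_lab hitting a
# nucleon at rest, where s = 2*m_p*E_lab + 2*m_p**2 ~ 2*m_p*E_lab.
M_P = 0.9383      # proton mass, GeV
SQRT_S = 14000.0  # design pp centre-of-mass energy, GeV

e_lab = SQRT_S**2 / (2.0 * M_P)          # equivalent fixed-target energy
print(f"E_lab = {e_lab:.2e} GeV = {e_lab * 1e9:.1e} eV")  # ~1.0e17 eV
```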

The LHCf experiment consists of two independent calorimeters (Arm1 and Arm2) installed 140 m on either side of the interaction point in the ATLAS experiment. The detectors fit in the instrumentation slots of the target neutral absorbers (TANs), which are located where the vacuum chamber for the beam makes a Y-shaped transition from the single beam pipe that passes through the interaction point to the two separate beam tubes that continue into the arcs of the LHC. Charged particles produced in the collision region in the direction of the TAN are swept aside by an inner beam-separation magnet before they reach it. Consequently, only neutral particles produced at the interaction point enter the TAN and the detectors. This location allows the observation of particles at nearly 0° to the proton beam direction.

Both LHCf detectors contain two sampling and imaging calorimeters, each consisting of 44 radiation lengths of tungsten and 16 sampling layers of 3 mm-thick plastic scintillator for the initial runs. The calorimeters in Arm1 have areas transverse to the beam direction of 20 × 20 mm² and 40 × 40 mm², while those in Arm2 have areas of 25 × 25 mm² and 32 × 32 mm². Four X-Y layers of position-sensitive sensors are interleaved with the tungsten and scintillator to provide the transverse positions of the showers generated in the calorimeters, employing different technologies in the two detectors: Arm1 uses scintillating fibres and multi-anode photomultiplier tubes (MAPMTs); Arm2 uses silicon-strip sensors. In each case, the sensors are installed in pairs in such a way that two pairs are optimized to detect the maximum of gamma-ray-induced showers, while the other two are for hadronic showers developing deep within the calorimeters. Although the lateral dimensions of these calorimeters are small, the energy resolution is expected to be better than 6% and the position resolution better than 0.2 mm for gamma-rays with energies between 100 GeV and 7 TeV. This has been confirmed by test-beam results at CERN’s Super Proton Synchrotron.

LHCf successfully took data right from the first collision at the LHC in 2009 and finished its first phase of data-taking in mid-July 2010, after collecting enough data in proton–proton collisions at both 900 GeV and 7 TeV in the centre of mass. In 2011, the collaboration reported its measurements of inclusive photon spectra at 7 TeV (Adriani et al. 2011). A comparison of the data with predictions from the hadron-interaction models used in the study of air showers and from PYTHIA 8.145, which is popular in the high-energy-physics community, revealed various discrepancies, with none of the models showing perfect agreement with the data.

Now, LHCf has results for the inclusive π⁰ production rate at rapidities greater than 8.9 in proton–proton data at 7 TeV in the centre of mass. Using data collected in two runs in May 2010, corresponding to integrated luminosities of 2.53 nb⁻¹ in Arm1 and 1.90 nb⁻¹ in Arm2, the collaboration identified events in which two photons emitted into the very-forward region could be attributed to π⁰ decays and obtained the transverse-momentum (pT) distributions of the π⁰s. The criteria for the selection of π⁰ events were based on the position of the incident photons (within 2 mm of the edge of the calorimeter), the photon energy (above 100 GeV), the number of hits (one in each calorimeter), photon-like particle identification using the energy deposition and, last, an invariant mass corresponding to the π⁰ mass.
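
The final step of that selection rests on the standard two-photon invariant mass, m²(γγ) = 2E₁E₂(1 − cosθ). The following minimal illustration sketches the reconstruction; the 20 MeV mass window and the example kinematics are assumptions for the purpose of illustration, not the collaboration’s actual cuts.

```python
import math

M_PI0 = 0.13498  # GeV, neutral-pion mass

def diphoton_mass(e1, e2, opening_angle):
    """Invariant mass (GeV) of two massless photons with energies e1, e2
    (GeV) separated by opening_angle (radians)."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

def looks_like_pi0(e1, e2, opening_angle, window=0.020):
    # Illustrative mass-window cut; the 20 MeV width is an assumption.
    return abs(diphoton_mass(e1, e2, opening_angle) - M_PI0) < window

# A 3 TeV pi0 decaying symmetrically has a minimum opening angle of
# 2*m/E ~ 9e-5 rad, i.e. ~1.3 cm photon separation 140 m downstream.
e_gamma = 1500.0
theta = 2.0 * M_PI0 / 3000.0
print(f"m_gg = {diphoton_mass(e_gamma, e_gamma, theta):.3f} GeV,"
      f" pi0-like: {looks_like_pi0(e_gamma, e_gamma, theta)}")
```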

The pT spectra were derived in independent analyses of the two detectors, Arm1 and Arm2, in six rapidity intervals covering the range 8.9–11.0. These spectra, which agree within statistical and systematic errors, were then combined and compared with the predictions from various hadronic interaction models: DPMJET 3.04, QGSJET II-03, SIBYLL 2.1, EPOS 1.99 and PYTHIA 8.145 (default parameter set).

Figure 1 shows the combined spectrum for one rapidity interval, 9.2 < y < 9.4, compared with the outcome from these models (Adriani et al. 2012). It is clear that DPMJET 3.04 and PYTHIA 8.145 predict π⁰ production rates higher than the LHCf data as pT increases. SIBYLL 2.1 also predicts harder pion spectra than are observed in the experimental data, although its expected π⁰ yield is generally small. On the other hand, QGSJET II-03 predicts π⁰ spectra that are softer than both the LHCf data and the other model predictions. Among the hadronic interaction models, EPOS 1.99 shows the best overall agreement with the LHCf data.

In figure 2 the values of the average pT (〈pT〉) obtained in this analysis are compared, as a function of ylab = ybeam – y, with the results from UA7 and with the model predictions. Although the LHCf and UA7 data have limited overlap and the systematic errors for UA7 are relatively large, the values of 〈pT〉 from the two experiments lie mainly along a common curve and there is no evidence of a dependence on collision energy. Among the models, EPOS 1.99 shows the smallest dependence of 〈pT〉 on collision energy, a tendency consistent with the results from LHCf and UA7. It is also evident from figure 2 that the best agreement with the LHCf data is obtained by EPOS 1.99.

The photon and π⁰ data from the LHCf experiment can now be used to constrain the mesonic part (or, via the π⁰s, the electromagnetic part) of models of air-shower development. The collaboration, meanwhile, is turning to the analysis of baryon production, which will provide complementary information on the hadronic interaction. At the same time, work is ongoing towards taking data on proton–lead collisions at the LHC, planned for the end of 2012. Such nuclear-collision data are important for understanding the interaction between cosmic rays and the atmosphere. Other work is also under way on replacing the plastic scintillators in the calorimeters – which were removed after the runs in July 2010 – with more radiation-resistant crystal scintillator, so as to be ready for 2014 when the LHC will run at 7 TeV per beam. There are also plans to change the position of the silicon sensors to improve the performance of the experiment in measuring the energy of the interacting particles.

Studies of ultra-high-energy cosmic rays look to the future

“Analysis of a cosmic-ray air shower recorded at the MIT Volcano Ranch station in February 1962 indicates that the total number of particles in the shower was 5 × 10¹⁰. The total energy of the primary particle that produced the shower was 1.0 × 10²⁰ eV.” Thus begins the 1963 paper in which John Linsley described the first detection of a cosmic ray with a surprisingly high energy. Such ultra-high-energy cosmic rays (UHECRs), which arrive at Earth at rates of less than 1 km⁻² a century, have since proved challenging both experimentally and theoretically. The International Symposium on Future Directions in UHECR Physics, which took place at CERN on 13–16 February, aimed to discuss these challenges and look to the next step in terms of a future large-scale detector. Originally planned as a meeting of about 100 experts from the particle- and astroparticle-physics communities, the symposium ended up attracting more than 230 participants from 24 countries, reflecting the strong interest in the current and future prospects for cosmic rays at the highest energies.

Soon after Linsley’s discovery, UHECRs became even more baffling when Arno Penzias and Robert Wilson discovered the cosmic microwave background (CMB) radiation in 1965. The reason is twofold: first, astrophysical sources delivering particle energies of 10 to 100 million times the beam energy of the LHC are hard to conceive of; and, second, the universe becomes opaque to protons and nuclei at energies above 5 × 10¹⁹ eV because of their interaction with the CMB radiation. In 1966, Kenneth Greisen, and independently Georgy Zatsepin and Vadim Kuz’min, pointed out that in the CMB protons would suffer pion photoproduction and nuclei photodisintegration. These processes limit the cosmic-ray horizon above the so-called “GZK” threshold to less than about 100 Mpc, resulting in strongly suppressed fluxes of protons and nuclei from distant sources.
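
The GZK threshold can be estimated from the kinematics of pion photoproduction. The sketch below evaluates the head-on threshold for p + γ → p + π⁰; taking the mean CMB photon energy to be about 2.7 kT is an order-of-magnitude assumption.

```python
M_P = 0.9383e9      # proton mass, eV
M_PI = 0.1350e9     # neutral-pion mass, eV
K_B = 8.617e-5      # Boltzmann constant, eV/K
T_CMB = 2.725       # K

# Mean CMB photon energy ~ 2.7*k*T (an order-of-magnitude choice)
eps = 2.7 * K_B * T_CMB

# Head-on threshold from requiring s >= (m_p + m_pi)^2:
e_th = M_PI * (2.0 * M_P + M_PI) / (4.0 * eps)
print(f"GZK threshold (head-on): {e_th:.1e} eV")   # ~1e20 eV; folding in
# the photon spectrum and collision angles brings the effective
# suppression down towards the ~5e19 eV quoted in the text.
```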

The HiRes, Pierre Auger and Telescope Array (TA) collaborations recently reported a suppression of just this type at about the expected threshold. Does this mark the long-awaited discovery of the GZK effect? At the symposium, not all participants were convinced, because the break in the energy spectrum could also be caused by the sources running out of steam. To shed more light on this most important question of astroparticle physics, information about the mass composition and arrival directions, as well as the precise energy spectrum of the highest-energy cosmic rays, is now paramount.

Searching for answers

Three large-scale observatories, each operated by an international collaboration, are currently taking data and trying to provide answers: the Pierre Auger Observatory in Argentina, the flagship in the field, which covers 3000 km²; the more recently commissioned TA in Utah, which samples an area of 700 km²; and the smaller Yakutsk Array in Siberia, which now covers about 10 km². To make progress in understanding the data from these three different observatories, new ground was broken in preparing for the symposium. Before the meeting, five topical working groups were formed comprising members from each collaboration. They were given the task of addressing differences between the respective approaches in the measurement and analysis methods, studying their impact on the physics results and delivering a report at the symposium. These working-group reports – on the energy spectrum, mass composition, arrival directions, multimessenger studies and comparisons of air-shower data with simulations – were complemented by invited overview talks, contributed papers and a large number of posters addressing various topics in analysis, new technologies and concepts for future experiments.

In opening the symposium and welcoming the participants, CERN’s director of research, Sergio Bertolucci, emphasized the organization’s interest in astroparticle physics in general and in cosmic rays in particular – the latter being explicitly named in the CERN convention. Indeed, many major astroparticle experiments have been given the status of “recognized experiment” by CERN. Pierre Sokolsky, a key figure in the legendary Fly’s Eye experiment and its successor HiRes, followed with the first talk, a historical review of the research on the most energetic particles in nature. Paolo Privitera of the University of Chicago then reviewed the current status of measurements, highlighting differences in observations and the understanding of systematic uncertainties. Theoretical aspects of acceleration and propagation were also discussed, as well as predictions of the energy and mass spectrum, by Pasquale Blasi of Istituto Nazionale di Astrofisica/Arcetri Astrophysical Observatory and Venya Berezinsky of Gran Sasso National Laboratory.

Data from the LHC, particularly those measured in the very forward region, are of prime interest for verifying and optimizing the hadronic-interaction event-generators that are employed in Monte Carlo simulations of the extensive air showers (EAS) generated by primary UHECRs. Overviews of recent LHC data by Yoshikata Itow of Nagoya University and, more generally, of the connection between accelerator physics and EAS were therefore given prominence at the meeting. Tanguy Pierog of Karlsruhe Institute of Technology demonstrated that the standard repertoire of interaction models employed in EAS simulations not only covers the LHC data reasonably well but also predicted them better than high-energy-physics models such as PYTHIA or HERWIG. Nonetheless, no perfect model exists and significant muon deficits in the models are seen at the highest air-shower energies. In a keynote talk, John Ellis, now of King’s College London, highlighted UHECRs as providing the most extreme environment for studying particle physics – at a production energy of around 10¹¹ GeV, or more than 100 TeV in the centre of mass – and discussed the potential for exotic physics. In a related talk, Paolo Lipari of INFN Rome La Sapienza discussed the interplay of cross-sections, cosmic-ray composition and interaction properties, highlighting the mutual benefits provided by cosmic rays and accelerator physics.

High-energy photons and neutrinos are directly related to cosmic rays and serve as complementary observational probes of the high-energy, non-thermal universe. Tom Gaisser of the University of Delaware, Günter Sigl of the University of Hamburg and others addressed this multimessenger aspect and argued that current neutrino limits from IceCube begin to disfavour a UHECR origin inside relativistic gamma-ray bursts and the jets of active galactic nuclei (AGN), and that cosmogenic neutrinos would provide a smoking-gun signal of the GZK effect. However, as Sigl noted, the fluxes of diffuse cosmogenic neutrinos and photons depend strongly on the chemical composition, maximal acceleration energy and redshift evolution of the sources.

Future options

Looking towards the future, the symposium discussed potentially attractive new technologies for cosmic-ray detection. Radio observations of EAS at frequencies of some tens of megahertz are being performed at the prototype level by a couple of groups and the underlying physical emission processes are becoming understood in greater detail. Ad van den Berg of the University of Groningen described the status of the largest antenna array under construction, the Auger Engineering Radio Array (AERA). More recently, microwave emission by molecular bremsstrahlung has been suggested as another potentially interesting emission process. Unlike megahertz radiation, gigahertz emission would occur isotropically, opening up the opportunity to observe showers sideways from large distances, a technique known from the powerful EAS fluorescence observations. Thus, huge volumes could be surveyed with minimal equipment available off the shelf. Pedro Facal of the University of Chicago and Radomir Smida of the Karlsruhe Institute of Technology reported preliminary observations of such radiation, with signals much weaker than expected from laboratory measurements.

The TA collaboration is pursuing forward-scattered radar detection of EAS, as John Belz of the University of Utah reported; this again potentially allows huge volumes to be monitored for reflected signals. However, the method still needs to be proved to work. Interesting concepts for future giant ground-based observatories based on current and novel technologies were presented by Antoine Letessier-Selvon of the CNRS, Paolo Privitera and Shoichi Ogio of Osaka City University. The goal is to reach huge apertures with particle-physics capability at cost levels of €100 million.

In parallel with the push for a new giant ground-based observatory, space-based approaches – most notably JEM-EUSO, the Extreme Universe Space Observatory aboard the Japanese Experiment Module, to be mounted on the International Space Station – were discussed by Toshikazu Ebizusaki of RIKEN, Andrea Santangelo of the Institut für Astronomie und Astrophysik Tübingen and Mario Bertaina of Torino University/INFN. Depending on the effective duty cycle, apertures of almost 10 times that of the Auger Observatory, with uniform coverage of the northern and southern hemispheres, may be reached. However, the most important weakness compared with ground-based experiments is the poor sensitivity to the primary mass and the inability to perform particle-physics-related measurements.

The true highlights of the symposium were reports given by the joint working groups. This type of co-operation, inspired by the former working groups for CERN’s Large Electron–Positron Collider, marked a new direction for the community. Yoshiki Tsunesada of the Tokyo Institute of Technology reported detailed comparisons of the energy spectra measured by the different observatories. All spectra are in agreement within the given energy-scale uncertainties of around 20%. Accounting for these overall differences, spectral shapes and positions of the spectral features are in good agreement. Nevertheless, the differences are not understood in detail and studies of the fluorescence yield and photometric calibration – treated differently by the TA and Auger collaborations – are to be pursued.

The studies of the mass-composition working group, presented by Jose Bellido of the University of Adelaide, addressed the question of why the composition measured by HiRes and TA is compatible with proton-dominated spectra, while Auger suggests a significant fraction of heavy nuclei above 10¹⁹ eV. Following many cross-checks and cross-correlations between the experiments, the differences could not be attributed to issues in the data analysis. Even after taking into account the shifts in the energy scale, the results are not fully consistent within the quoted uncertainties, assuming no differences exist between the northern and southern hemispheres.

The anisotropy working group discussed large-scale anisotropies and directional correlations with sources in various catalogues and concluded that there is no major departure from isotropy in any of the data sets, although some hints at the 10–20° scale might have been seen by Auger and TA. Directional correlations with AGN and with the overall nearby matter distribution are found by Auger at the highest energies, but the HiRes collaboration could not confirm this finding. Recent TA data agree with the latest signal strength from Auger but, owing to the lack of statistics, they are also compatible with isotropy at the 2% level.

Studies by the photon and neutrino working group, presented by Markus Risse of the University of Siegen and Grisha Rubtsov of the Russian Academy of Sciences, addressed the pros and cons of different search techniques and concluded that the results are similar. No photons or neutrinos have been observed yet, but the prospects for reaching sensitivities to optimistic GZK fluxes in the coming years seem promising.

Lastly, considerations of the hadronic-interaction and EAS-simulation working group, presented by Ralph Engel of Karlsruhe Institute of Technology, acknowledged the many constraints – so far without surprises – that are provided by the LHC. Despite the good overall description of showers, significant deficits in the muon densities at ground level are observed in the water Cherenkov tanks of Auger. The energy obtained by the plastic scintillator array of TA is around 30% higher than the energies measured by fluorescence telescopes. These differences are difficult to understand and deserve further attention. Nevertheless, proton–air and proton–proton inelastic cross-sections up to √s = 57 TeV have been extracted from Auger, HiRes and Yakutsk data, demonstrating the particle-physics potential of high-energy cosmic rays.

The intense and lively meeting was summarized enthusiastically by Angela Olinto of the University of Chicago and Masaki Fukushima of the University of Tokyo. A round-table discussion, chaired by Alan Watson of the University of Leeds, set out the most pressing questions to be addressed and the future challenges to be tackled on the way towards a next-generation giant observatory. Clearly, important steps were made at this symposium, marking the start of a coherent worldwide effort towards reaching these goals. The open and vibrant atmosphere of CERN contributed much to the meeting’s success and was highly appreciated by all of the participants, who agreed to continue the joint working groups and discuss progress at future symposia.

• For more information about the symposium, see http://2012.uhecr.org.

ALICE looks to the skies

ALICE is one of the four big experiments at CERN’s LHC. It is devoted mainly to the study of a new phase of matter, the quark–gluon plasma, which is created in heavy-ion collisions at very high energies. However, located in a cavern 52 m underground with 28 m overburden of rock, it can also detect muons produced by the interactions of cosmic rays with the Earth’s atmosphere.

The use of high-energy collider detectors for cosmic-ray physics was pioneered during the era of the Large Electron–Positron (LEP) collider at CERN by the L3, ALEPH and DELPHI collaborations. An evolution of these programmes is now possible at the LHC, where the experiments are expected to operate for many years, with the possibility of recording a large amount of cosmic data. In this context, ALICE began a programme of cosmic data-taking, collecting data for physics for 10 days over 2010 and 2011 during pauses in LHC operations. In 2012, in addition to this standard cosmic data-taking, a special trigger now allows the detection of cosmic events during proton–proton collision runs.

A different approach

In a typical cosmic-ray experiment, atmospheric muons are detected using large-area arrays at the surface of the Earth or detectors deep underground. The main purpose of such experiments is to study the mass composition and energy spectrum of primary cosmic rays in the energy range above 10¹⁴ eV, which is not accessible to direct measurements using satellites or balloons. The big advantages of these apparatuses are their large size and, for the surface experiments, the possibility of measuring the different particles – electrons, muons and hadrons – created in extensive air showers. Because the detectors involved in collider experiments are tiny compared with the large-area arrays, the approach and the studies have to be different, so that the remarkable performance of the detectors can be exploited.

The first distinguishing characteristic of experiments at LEP or the LHC is their location, some 50–140 m underground. This is intermediate between surface arrays – where all of the components of the shower can be detected – and detectors deep underground, where only the highest-energy muons (usually of the order of 1 TeV at the surface) are recorded. In ALICE in particular, all of the electromagnetic and hadronic components are absorbed by the rock overburden and, apart from neutrinos, only muons with an energy greater than 15 GeV reach the detectors. The special features that ALICE brings are the ability to detect a clean muon component with a low-energy cut-off, allowing a larger number of detected events compared with deep-underground sites, combined with the ability to measure a greater number of variables – such as momentum, arrival time, density and direction – than was ever achieved by earlier experiments.
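
The 15 GeV figure follows from the ionization energy loss of muons crossing the overburden. This back-of-the-envelope check assumes standard rock with a density of 2.65 g/cm³ and the canonical minimum-ionizing loss of about 2 MeV per g/cm².

```python
RHO_ROCK = 2.65    # g/cm^3, standard-rock density (assumed)
DEPTH = 2800.0     # cm, the 28 m rock overburden quoted above
DEDX = 2.0         # MeV per g/cm^2, minimum-ionizing energy loss

column = RHO_ROCK * DEPTH                 # ~7.4e3 g/cm^2 of rock
e_min = DEDX * column / 1000.0            # GeV lost crossing the overburden
print(f"Minimum muon energy at the surface: ~{e_min:.0f} GeV")   # ~15 GeV
```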

The tradition in collider experiments, ALICE included, is to use these muons mainly for the calibration and alignment of the detectors. However, during the commissioning of ALICE, specific triggers were implemented to develop a programme of cosmic-ray physics. These employ three detectors: A COsmic Ray DEtector (ACORDE), the time-of-flight detector (TOF) and the silicon pixel detector (SPD).

ACORDE is an array of 60 scintillator modules located on the three upper faces of the ALICE magnet yoke, covering 10% of its area. The trigger is given by the coincidence of the signals in at least two different modules. The TOF is a cylindrical array of multi-gap resistive-plate chambers, with a large area that completely surrounds the time-projection chamber (TPC), which is 5 m long and has a diameter of 5 m. The cosmic trigger requires a signal in a read-out channel (a pad) in the upper part of the TOF and another in a pad in the opposite lower part. The SPD consists of two layers of silicon pixel modules located close to the interaction point. The cosmic trigger is given by the coincidence of two signals in the top and bottom halves of the outer layer.
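
Schematically, the three trigger conditions described above can be encoded as follows (a toy illustration only, not the actual ALICE trigger implementation):

```python
# Toy encoding of the three cosmic-trigger conditions (illustration only).
def acorde_trigger(fired_modules):
    """Coincidence of signals in at least two of the 60 scintillator modules."""
    return len(set(fired_modules)) >= 2

def tof_trigger(upper_pad_hit, lower_pad_hit):
    """A pad in the upper part of the TOF and one in the opposite lower part."""
    return upper_pad_hit and lower_pad_hit

def spd_trigger(top_outer_hit, bottom_outer_hit):
    """Coincidence of signals in the top and bottom halves of the outer layer."""
    return top_outer_hit and bottom_outer_hit

print(acorde_trigger({12, 37}), tof_trigger(True, True), spd_trigger(True, False))
```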

The track of an atmospheric muon crossing the apparatus can be reconstructed by the TPC. This detector’s excellent tracking performance can be exploited to measure the main characteristics of the muon – such as momentum, charge, direction and spatial distribution – with good resolution, while the arrival time can be measured with a precision of 100 ps by the TOF. In particular, the ability to track a high density of muons – unimaginable with a standard cosmic-ray apparatus – together with the measurement of all of these observables at the same time, permits a new approach to the analysis of cosmic events, one that has so far not been exploited. For these reasons, the main research related to the physics of cosmic rays with the ALICE experiment has centred on the study of the muon-multiplicity distribution and, in particular, high-density events.

The analysis of the data taken in 2010 and 2011 revealed a muon multiplicity distribution that can be reproduced only by a mixed composition. Figure 1 shows the multiplicity distribution for real data taken in 2011, together with the points predicted for pure-proton and pure-iron composition for the primaries. It is clear from the simulation that the lower multiplicities are closer to the pure-proton points, while at higher multiplicities the data tend to approach the iron points. This behaviour is expected from a mixed composition that on average increases the mass of the primary when its energy increases, a result confirmed by several previous experiments.

High-multiplicity events

However, a few events found both in 2010 and in 2011 (beyond the scale of figure 1) have an unexpectedly large number of muons. In particular, the highest-multiplicity event reconstructed by the TPC has a muon density of 18 muons/m². Figure 2 shows the display of this event and gives an idea of the TPC’s capabilities in tracking such high particle densities without problems of saturation, a performance never achieved in previous experiments.

The estimated energy of the primary cosmic ray for this event is at least 3 × 10¹⁶ eV, assuming that the core of the air shower is inside ALICE and that the primary particle is an iron nucleus. Recalling that the rate of cosmic rays is 1 m⁻² year⁻¹ at the energy of the knee in the spectrum (3 × 10¹⁵ eV), and that over one decade in energy the flux decreases by a factor of 100, an event with this muon density is expected in ALICE only once in 4–5 years of data-taking. Since other high-multiplicity events have been found in only 10 days of data-taking, further investigation and detection will be necessary to understand whether they are caused by standard cosmic rays – and whether the high multiplicity is simply a statistical fluctuation – or whether they have a different production mechanism. A detailed study of these events has not shown any unusual behaviour in the other measured variables.
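
That 4–5 year expectation can be reproduced from the power-law flux quoted above. In the sketch below, the flux normalization at the knee and the factor-100 decrease per decade are taken from the text, while the effective collection area of about 25 m² is an assumption, of the order of the TPC’s projected surface.

```python
FLUX_AT_KNEE = 1.0    # m^-2 yr^-1 above 3e15 eV (figure quoted in the text)
PER_DECADE = 100.0    # flux suppression per decade in energy (from the text)
EFF_AREA = 25.0       # m^2, assumed effective area, of order the TPC surface

flux_3e16 = FLUX_AT_KNEE / PER_DECADE     # m^-2 yr^-1 above ~3e16 eV
rate = flux_3e16 * EFF_AREA               # events per year of live data-taking
print(f"{rate:.2f} events/yr -> one every ~{1.0 / rate:.0f} years")  # ~4 years
```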

For all of these reasons it is important to see whether other unexpected high-multiplicity events are detected in future and at what rate. To this end, in addition to standard cosmic runs, a special trigger requiring the coincidence of at least four ACORDE modules has been implemented this year to record cosmic events during proton–proton collisions, and so increase the time for data-taking to more than 10 times that of the existing data.

It is interesting to note that the three LEP experiments – L3, ALEPH and DELPHI – also found an excess of high-multiplicity events that could not be explained by Monte Carlo models. The hope with ALICE is to find and study a large number of these events in a more quantitative way, to understand their nature properly.

Bruno Alessandro, INFN Torino, and Mario Rodriguez, Autonomous University of Puebla, Mexico.

Cherenkov Telescope Array is set to open new windows

In 2004, as the telescopes of the High Energy Stereoscopic System (HESS) were starting to point towards the skies, there were perhaps 10 astronomical objects that were known to produce very high-energy (VHE) gamma rays – and exactly which 10 was subject to debate. Now, in 2012, well in excess of 100 VHE gamma-ray objects are known and plans are under way to take observations to a new level with the much larger Cherenkov Telescope Array.

VHE gamma-ray astronomy covers three decades in energy, from a few tens of giga-electron-volts to a few tens of tera-electron-volts. At these high energies, even the brightest astronomical objects have fluxes of only around 10⁻¹¹ photons cm⁻² s⁻¹, and the inevitably limited detector area available to satellite-based instruments means that detection from space would require unfeasibly long exposure times. The solution is to use ground-based telescopes, although at first sight this seems improbable, given that no radiation with energies above a few electron-volts can penetrate the Earth’s atmosphere.

The possibility of doing ground-based gamma-ray astronomy was opened up in 1952 when John Jelley and Bill Galbraith measured brief flashes of light in the night sky using basic equipment sited at the UK Atomic Energy Research Establishment in Oxfordshire – then, as now, not famed for its clear skies (The discovery of air-Cherenkov radiation). This confirmed Blackett’s suggestion that cosmic rays, and hence also gamma rays, contribute to the light intensity of the night sky via the Cherenkov radiation produced by the air showers that they induce in the atmosphere. The radiation is faint – constituting about one ten-thousandth of the night-sky background – and each flash lasts only a few nanoseconds. However, it is readily detectable with suitable high-speed photodetectors and large reflectors. The great advantage of this technique is that the effective area of such a telescope is equivalent to the area of the pool of Cherenkov light on the ground, some 10⁴ m².
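
To see why the light pool matters, compare the expected photon rates from a bright source at the flux quoted above. The 10⁴ m² pool area is the figure from the text; the collection area of roughly 1 m² for a satellite instrument is an assumed, representative value.

```python
FLUX = 1e-11              # photons cm^-2 s^-1 for a bright VHE source
AREA_SATELLITE = 1e4      # cm^2 (~1 m^2, an assumed representative value)
AREA_LIGHT_POOL = 1e8     # cm^2 (~1e4 m^2, as quoted above)
YEAR = 3.156e7            # seconds

for name, area in (("satellite", AREA_SATELLITE), ("light pool", AREA_LIGHT_POOL)):
    print(f"{name}: {FLUX * area * YEAR:.0f} photons per year")
# satellite: ~3 photons/yr; light pool: ~3e4 photons/yr (a few per hour)
```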

Early measurements of astronomical gamma rays using this method were difficult to make because there was no method of distinguishing the gamma-ray-induced Cherenkov radiation from that produced by the more numerous cosmic-ray hadrons. However, in 1985 Michael Hillas at Leeds University showed that fundamental differences in the hadron- and photon-initiated air showers would lead to differences in the shapes of the observed flashes of Cherenkov light. Applying this technique, the Whipple telescope team in Arizona made the first robust detection of a VHE gamma-ray source – the Crab Nebula – in 1989. When his technique was combined with the arrays of telescopes developed by the HEGRA collaboration and the high-resolution cameras of the Cherenkov Array at Themis, the imaging atmospheric Cherenkov technique was well and truly born.

The current generation of projects based on this technique includes not only HESS, in Namibia, but also the Major Atmospheric Gamma-Ray Imaging Cherenkov (MAGIC) project in the Canary Islands, the Very Energetic Radiation Imaging Telescope Array System (VERITAS) in Arizona and CANGAROO, a collaborative project between Australia and Japan, which has now ceased operation.

These telescopes have revealed a wealth of phenomena to be studied. They have detected the remains of supernovae, binary star systems, highly energetic jets around black holes in distant galaxies, star-formation regions in our own and other galaxies, as well as many other objects. These observations can help not only with understanding more about what is going on inside these objects but also in answering fundamental physics questions concerning, for example, the nature of both dark matter and gravity.

The field is now reaching the limit of what can be done with the current instruments, yet the community knows that it is observing only the “tip of the iceberg” in terms of the number of gamma-ray sources that are out there. For this reason, some 1000 scientists from 27 countries around the world have come together to build a new instrument – the Cherenkov Telescope Array (CTA).

The Cherenkov Telescope Array

The aim of the CTA consortium is to build two arrays of telescopes – one in the northern hemisphere and one in the southern hemisphere – that will outperform current telescope systems in a number of ways. First, the sensitivity will be a factor of around 10 better than that of any current array, particularly in the “core” energy range around 1 TeV. Second, it will provide an extended energy range, from a few tens of giga-electron-volts to a few hundred tera-electron-volts. Third, its angular resolution at tera-electron-volt energies will be of the order of one arcminute – an improvement of around a factor of four over the current telescope arrays. Last, its wider field of view will allow the array to survey the sky some 200 times faster at 1 TeV.

This unprecedented performance will be achieved using three different telescope sizes, covering the low-, intermediate- and high-energy regimes, respectively. The larger southern-hemisphere array is designed to make observations across the whole energy range. The lowest-energy photons (20–200 GeV) will be detected with a few large telescopes of 23 m diameter. Intermediate energies, from about 200 GeV to 1 TeV, will be covered with some 25 medium-size telescopes of 12 m diameter. Gamma rays at the highest energies (1–300 TeV) produce so many Cherenkov photons that they can be easily seen with small (4–6 m diameter) telescopes. These extremely energetic photons are rare, however, so a large area must be covered on the ground (up to 10 km²), needing as many as 30 to 70 small telescopes to achieve the required sensitivity. The northern-hemisphere array will cover only the low and intermediate energy ranges and will focus on observations of extragalactic objects.

Being both an astroparticle-physics experiment and a true astronomical observatory, with access for the community at large, the CTA’s science remit is exceptionally broad. The unifying principle is that gamma rays at giga- to tera-electron-volt energies cannot be produced thermally and therefore the CTA will probe the “non-thermal” universe.

Gamma rays can be generated when highly relativistic particles – accelerated, for example, in supernova shock waves – collide with ambient gas or interact with photons and magnetic fields. The flux and energy spectrum of the gamma rays reflect the flux and spectrum of the high-energy particles, and they can therefore be used to trace these cosmic rays and electrons in distant regions of the Galaxy or, indeed, in other galaxies. In this way, VHE gamma rays can be used to probe the emission mechanisms of some of the most powerful astronomical objects known and to investigate the origin of cosmic rays.

VHE gamma rays can also be produced in a top-down fashion, by the decay of heavy exotic objects such as cosmic strings or by the annihilation of hypothetical dark-matter particles. Large dark-matter densities arising from the accumulation of the particles in potential wells, such as near the centres of galaxies, might lead to detectable fluxes of gamma rays, especially given that the annihilation rate – and therefore the gamma-ray flux – is proportional to the square of the density. Slow-moving dark-matter particles could give rise to a striking, almost mono-energetic photon emission.

The discovery of such line emission would be conclusive evidence for dark matter, and the CTA could have the capability to detect gamma-ray lines even if the cross-section is “loop-suppressed”, which is the case for the most popular candidates of dark matter, i.e. those inspired by the minimal supersymmetric extensions to the Standard Model and models with extra dimensions, such as Kaluza-Klein theory. Line radiation from these candidates is not detectable by current telescopes unless optimistic assumptions about the dark-matter density distribution are made. The more generic continuum contribution (arising from pion production) is more ambiguous but with its curved shape it is potentially distinguishable from the usual power-law spectra produced by known astrophysical sources.

It is not only the mechanisms by which gamma rays are produced that can provide useful scientific insights. The effects of propagation of gamma rays over cosmological distances can also lead to important discoveries in astrophysics and fundamental physics. VHE gamma rays are prone to photon–photon absorption on the extragalactic background light (EBL) over long distances, and the imprint of this absorption process is expected to be particularly evident in the gamma-ray spectra from active galactic nuclei (AGN) and gamma-ray bursts. The EBL is difficult to measure because of the presence of foreground sources of radiation – yet its spectrum reveals information about the history of star formation in the universe. Already, current telescopes detect more gamma rays from AGN than might have been expected in some models of the EBL, but understanding of the intrinsic spectra of AGN is limited and more measurements are needed.
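
The effect of this absorption on an observed spectrum takes a simple form: for a source at redshift $z$, the measured flux is the intrinsic one attenuated by an optical depth $\tau$ that grows with both energy and distance,

$$F_{\text{obs}}(E) \;=\; F_{\text{int}}(E)\,e^{-\tau(E,\,z)}\,,$$

so sources whose intrinsic spectra can be constrained act as probes of the EBL along the line of sight.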

Building the CTA


How to build this magnificent observatory? This is the question currently preoccupying the members of the CTA consortium. The consortium has a great deal of experience and know-how in building VHE gamma-ray telescopes around the world, but challenges nonetheless remain. Foremost is driving down the cost of components while also ensuring reliability. It is relatively easy to repair and maintain four or five telescopes, such as those found in the current arrays, but maintaining 60, 70 or even 100 presents difficulties on a different scale. Technology is also ever changing, particularly in light detection. The detector of choice for VHE gamma-ray telescopes has until now been the photomultiplier tube – but these are bulky, relatively expensive and have low quantum efficiency. Innovative telescope designs, such as dual-mirror systems, might allow the exploitation of newer, smaller detectors such as silicon photodiodes, at least on some of the telescopes. Mirror technologies are another area of active research, because the CTA will require a large area of robust, easily reproducible mirrors.

The CTA is currently in its preparatory phase, funded by the European Union Seventh Framework Programme and by national funding agencies. Not only are many different approaches to telescope engineering and electronics being prototyped to enable the consortium to choose the best possible solution, but organizational issues, such as the operation of the CTA as an observatory, are also under development. It is hoped that building of the array will commence in 2014 and that it will become the premier instrument in gamma-ray astronomy for decades to come. Many of its discoveries will no doubt bring surprises, as have the discoveries of the current generation of telescopes. There are exciting times ahead.

• For more about the CTA project, see www.cta-observatory.org.

A neutrino telescope deep in the Mediterranean Sea

[Figure 1]

Particle physicists – like many other scientists – are used to working under well controlled laboratory conditions, with constant temperature, controlled humidity and perhaps even a clean-room environment. They would consider crazy anyone who tried to install an experiment in the field outside the lab environment, without shelter against wind and weather. So what must they think of a group of physicists and engineers planning to install a huge, highly complex detector on the bottom of the open sea?

This is exactly what the KM3NeT project is about: a neutrino telescope that will consist of an array of photo-sensors instrumenting several cubic kilometres of water deep in the Mediterranean Sea (figure 1). The aim is to detect the faint Cherenkov light produced as charged particles emerge from the reactions of high-energy neutrinos in the instrumented volume of sea water or in the rock beneath it. Most of the neutrinos that are detected will be “atmospheric neutrinos”, originating from the interactions of charged cosmic rays in the Earth’s atmosphere. Hiding among these events will be a few induced by neutrinos of cosmic origin – and these are the events that the experimenters prize.

Ideal messengers

Why are a few cosmic neutrinos worth the huge effort to construct and operate such an instrument? A century after the discovery of cosmic rays, the start of construction of the KM3NeT neutrino telescope marks a big step towards understanding their origin and solving the mystery of the astrophysical processes in which they acquire energies many orders of magnitude beyond the reach of terrestrial particle accelerators. This is because neutrinos are ideal messengers from the universe: they are neither absorbed nor deflected, so they can escape from dense environments that would absorb all other particles; they point back to their origin; and they are produced inevitably when protons or heavier nuclei with the energies typical of cosmic rays – up to eight orders of magnitude above the LHC beam energy – scatter on other nuclei or on photons, thereby signalling the astrophysical acceleration of nuclei.

Only a handful of neutrinos assigned to an astrophysical source would convey the unambiguous message that this source accelerates nuclei – a finding that cannot be achieved in any other way. Of course, much more can be studied with neutrino telescopes. Cosmic neutrinos might signal annihilations of dark-matter particles, and their isotropic flux provides information about sources that cannot be resolved individually. Moreover, atmospheric neutrinos could be used to make measurements of unique importance for particle physics, such as the determination of the neutrino-mass hierarchy.

Driven by the fundamental significance of neutrino astronomy, a first generation of neutrino telescopes with instrumented volumes of up to about a per cent of a cubic kilometre was constructed over the past two decades: Baikal, in the Siberian lake of the same name; AMANDA, in the deep ice at the South Pole; and ANTARES, off the French Mediterranean coast. These detectors have proved the feasibility of neutrino detection in the respective media and provided a wealth of experience on which to build. However, they have not – yet – identified any neutrinos of cosmic origin.

These results, together with the evolution over the past few years of astrophysical models of potential classes of neutrino sources, indicate that much larger target volumes are in fact necessary for neutrino astronomy. The first neutrino telescope of cubic-kilometre size, the IceCube observatory at the South Pole, was completed in December 2010. Its integrated exposure is growing rapidly and the discovery of a first source may be just around the corner.

Why then start constructing another large neutrino telescope? Would it not be better to wait and see what IceCube finds? To answer this question it is important to understand in somewhat more detail the way in which neutrinos are actually measured.

The key reaction is the charged-current (mostly deep-inelastic) scattering of a muon-neutrino or muon-antineutrino on a target nucleus. In such a reaction, an outgoing muon is produced that, on average, carries a large fraction of the neutrino energy and is emitted with only a small angular deflection from the neutrino direction. The muon trajectory – and thus the neutrino direction – is reconstructed from the arrival times of the Cherenkov light at the photo-sensors and from the positions of the sensors. This method is suitable for the identification of neutrinos only if they come from the opposite hemisphere, i.e. through the Earth. If they come from above, the resulting muons are barely distinguishable from the far more numerous “atmospheric” muons that penetrate down to the detector. Neutrino telescopes therefore look predominantly “downwards” and do not cover the full sky. IceCube, being at the South Pole, can thus observe the northern sky but not the Galactic centre and the largest part of the Galactic plane (figure 2).
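
What makes this reconstruction work is that Cherenkov light is emitted at a fixed angle to the track for relativistic particles. A minimal sketch, assuming a typical refractive index of about 1.35 for deep-sea water:

```python
# Cherenkov emission angle for a relativistic muon in sea water,
# from cos(theta_c) = 1 / (n * beta).
import math

n_water = 1.35   # approximate refractive index of deep-sea water (assumed)
beta = 1.0       # a multi-GeV muon travels at essentially v = c
theta_c = math.degrees(math.acos(1.0 / (n_water * beta)))
print(f"Cherenkov angle: ~{theta_c:.0f} degrees")   # ~42 degrees
```

Because every photon leaves the track at this same angle, the pattern of photon arrival times across many photo-sensors can be fitted for the muon direction.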

[Figure 2]

The KM3NeT telescope will have the Galactic centre and the central plane of the Galaxy in its field of view and will be optimized to discover and investigate the neutrino flux from Galactic sources. Shell-type supernova remnants are a particularly interesting class of candidate source. In these objects the supernova ejecta hit interstellar material, such as molecular clouds, and form shock fronts. Gamma-ray observations show that these are places where particles are accelerated to very high energies – but there is an intense debate as to whether these gamma rays stem from accelerated electrons and positrons or from hadrons. The only way to give a conclusive answer is through observing neutrinos. Figure 3 shows the sensitivity of KM3NeT and of other experiments to neutrino point sources. According to simulations based on model calculations using gamma-ray measurements by the High Energy Stereoscopic System (HESS) – an air-Cherenkov telescope – KM3NeT could observe the supernova remnant RX J1713.7-3946 (figure 4) with a significance of 5σ within five years, if the emission process is purely hadronic.

[Figure 3]

The construction of a neutrino telescope of this sensitivity within a realistic budget faces a number of challenges. The components have to withstand a hostile environment, with several hundred bar of static pressure and extremely aggressive salt water. That limits the choice of materials, particularly as maintenance is difficult or even impossible. In addition, background light from the radioactive decay of potassium-40 and from bioluminescence causes high rates of photomultiplier hits, while the deployment of the detector requires tricky sea operations and the use of unmanned submersibles to make cable connections.
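
The pressure figure follows directly from hydrostatics; a quick check, assuming a site depth of around 3.5 km (actual KM3NeT site depths vary):

```python
# Hydrostatic pressure at an assumed depth of 3.5 km (illustrative).
rho_sea = 1030.0    # kg/m^3, approximate density of deep-sea water
g = 9.81            # m/s^2
depth_m = 3500.0    # assumed site depth for this estimate
pressure_bar = rho_sea * g * depth_m / 1.0e5   # 1 bar = 1e5 Pa
print(f"~{pressure_bar:.0f} bar")              # ~350 bar
```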

[Figure 4]

When the KM3NeT design effort started out with an EU-funded Design Study (2006–2009), a target cost of €200 million for a cubic-kilometre detector was defined. At the time, this was considered highly optimistic in view of the investment cost of about €20 million for ANTARES. Now, in 2012, the collaboration is confident that it can construct a detector of 5–6 km³ for €220–250 million. This dramatic improvement is partly a result of optimizing the neutrino telescope for slightly higher energies, which implies larger horizontal and vertical distances between the photo-sensors. The main progress, however, has been in the technical design. Almost all of the components have been newly designed, in many cases pursuing completely new approaches.

The design of the optical module is a prime example. Instead of a large, hemispherical photomultiplier (8- or 10-inch diameter) in a glass sphere (17-inch diameter), the design now uses as many as 31 photomultipliers of 3-inch diameter per sphere (figure 5). This triples the photocathode area for each optical module, allows for a clean separation of hits with one or two photo-electrons and adds some directional sensitivity.
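
The quoted gain in photocathode area is simple disc arithmetic and can be checked directly, treating each photocathode as a flat disc of the quoted diameter (a simplification – real photocathodes are curved):

```python
# Check of the "triples the photocathode area" claim using the quoted
# diameters; photocathodes are treated as flat discs for simplicity.
import math

def disc_area(diameter_inch):
    return math.pi * (diameter_inch / 2.0) ** 2

multi_pmt = 31 * disc_area(3.0)    # 31 three-inch PMTs per optical module
for d in (8.0, 10.0):              # the single large PMT being replaced
    print(f"vs one {d:.0f}-inch PMT: factor {multi_pmt / disc_area(d):.1f}")
# -> factor 4.4 vs an 8-inch tube and 2.8 vs a 10-inch: roughly a tripling
```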

[Figure 5]

All data, i.e. all photomultiplier hits, will be digitized in the optical modules and sent to shore via optical fibres. At the shore station, a data filter will run on a computer cluster and select the hit combinations in which the hit pattern and timing are compatible with particle-induced events.
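
One widely used ingredient of such filters is a causality criterion: two hits can originate from the same relativistic particle only if their time difference does not exceed the light travel time between the two sensors. A minimal sketch along these lines – the real KM3NeT filter is far more elaborate, and the tolerance and threshold values here are assumed:

```python
# Minimal causality-based hit filter (illustrative sketch only).
import math

C_WATER_M_PER_NS = 0.2998 / 1.35   # approximate speed of light in water
TOLERANCE_NS = 20.0                # allowance for scattering/jitter (assumed)

def causally_linked(hit_a, hit_b):
    """Hits are tuples (t_ns, x_m, y_m, z_m). Two hits may share a source
    only if their time difference is no larger than the light travel time
    between the two sensors, within the tolerance."""
    dt = abs(hit_a[0] - hit_b[0])
    dist = math.dist(hit_a[1:], hit_b[1:])
    return dt <= dist / C_WATER_M_PER_NS + TOLERANCE_NS

def filter_hits(hits, min_links=4):
    """Keep hits causally linked to at least min_links other hits."""
    return [h for h in hits
            if sum(causally_linked(h, o) for o in hits if o is not h) >= min_links]
```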

Three countries (France, Italy and the Netherlands) have committed major contributions to an overall funding of €40 million for a first construction phase; others (Germany, Greece, Romania and Spain) are contributing at a smaller level or have not yet made final decisions. It is expected that final prototyping and validation activities will be concluded by 2013 and that construction will begin in 2013–2014. The installation will soon substantially exceed the sensitivity of any existing northern-hemisphere instrument, providing discovery potential from an early stage.

Finally, astroparticle physicists are not alone in looking forward to KM3NeT. For scientists from various areas of underwater research, the installation will provide access to long-term, continuous measurements in the deep sea. It will also provide nodes in a global network of deep-ocean observatories, making it a truly multidisciplinary research infrastructure.

• For more information, see the KM3NeT Technical Design Report at www.km3net.org.

The discovery of air-Cherenkov radiation


Sixty years ago, in September 1952, two young researchers at the UK’s Atomic Energy Research Establishment went out on a moonless night into a field next to the Harwell facility equipped with little more than a standard-issue dustbin containing a Second World War parabolic signalling mirror only 25 cm in diameter, with a 5 cm diameter photomultiplier tube (PMT) at its focus, along with an amplifier and an oscilloscope. They pointed the mirror at the night sky, adjusted the thresholds on the apparatus and for the first time detected Cherenkov radiation produced in the Earth’s atmosphere by cosmic rays (Galbraith and Jelley 1953).

William (Bill) Galbraith and John Jelley were members of Harwell’s cosmic-ray group, which operated an array of 16 large-area Geiger-Müller counters for studying extensive air showers (EAS) – the huge cascades of particles produced when a primary cosmic particle interacts in the upper atmosphere. Over several nights, by forming suitable coincidences between the Geiger-Müller array and their PMT, Jelley and Galbraith demonstrated – unambiguously – a correlation between signals from the array and light pulses of short duration (<200 ns) with amplitudes exceeding 2–3 times that of the night-sky noise. By cross-calibrating with alpha particles from a ²³⁹Pu source, they were further able to estimate that they were detecting three photons per square centimetre per light flash in the wavelength range of 300–550 nm. A new age of Cherenkov astronomy was born.

The sky at night

Five years before this observation, at a meeting of the Royal Society’s Gassiot Committee in July 1947 on “The emission spectra of the night sky and aurorae”, Patrick Blackett had presented a paper in which he suggested, for the first time, that Cherenkov radiation emitted by high-energy cosmic rays should contribute to the light in the night sky. Blackett estimated the contribution of cosmic-ray-induced Cherenkov light to be 0.01% of the total intensity, concluding: “Presumably such a small intensity of light could not be detected by normal methods.” Blackett’s work went largely unnoticed until a chance meeting at Harwell in 1952, which Jelley later recounted (Jelley 1986): “… hearing of our work on Cherenkov light in water, [Blackett] quite casually mentioned that … he had shown that there should be a contribution to the light of the night sky, amounting to about 10⁻⁴ of the total, due to Cherenkov radiation produced in the upper atmosphere from the general flux of cosmic rays.” Jelley continued: “Blackett was only with us a few hours, and neither he nor any of us ever mentioned the possibility of pulses of Cherenkov light, from EAS. It was a few days later that it occurred to Galbraith and myself that such pulses might exist and be detectable.”

The work of 1952 demonstrated the presence of short-duration pulses of light in coincidence with EAS but it did not prove that the light was, indeed, Cherenkov radiation. In particular, Galbraith and Jelley were aware that the light they had observed could also be produced either by bremsstrahlung or by recombination following ionization in the atmosphere. Thus, in the summer of 1953, they set out to establish the Cherenkov nature of the light pulses that they had observed.

Daunted by the vagaries of the British weather, they headed to the Pic du Midi observatory in France where, over six moonless weeks in July to September 1953, they carried out a series of experiments to determine the polarization and directionality of the light, and also performed a rudimentary wavelength determination. This time they were equipped with four mirrors and two types of PMT. Conscious that the light-pulse counting rate would change with the noise level of the night sky, which in turn would depend on which part of the sky they were looking at, they devised a method of keeping the mean PMT current – and hence the noise – constant by using a small lamp next to the mirror.

Experimental conditions at the top of the mountain were challenging. EAS correlations were provided by requiring coincidences of signals from the PMTs with those from a linear array of five trays of Geiger-Müller counters, each tray 800 cm² in area and aligned over almost 75 m – the positioning of these units was somewhat limited by the available space on the mountain (Galbraith and Jelley 1955). PMT pulses were recorded on an oscilloscope and subsequently photographed. Evidence for polarization of the observed light, a known characteristic of Cherenkov radiation, was clearly established by taking readings from a PMT with a polarizer placed over its photocathode and calculating the ratio of the number of events seen when the polarizer was aligned parallel or perpendicular to the Geiger-Müller array. The result was a ratio of 3.0±0.5 to 1 for events seen in coincidence with two Geiger-Müller counter trays (Jelley and Galbraith 1955).

The two researchers also investigated the directionality of the observed light by plotting the coincidence rate of pulses seen in two light receivers (normalized accordingly) as a function of the angle between the two receivers. This experiment was done using pairs of receivers 1 m apart and was repeated with mirrors having different fields of view. The results fell between the two theoretical curves for Cherenkov and ionization light but they gave additional support for the premise that the light being observed was, indeed, Cherenkov light. In addition, the use of wide-band filters enabled Galbraith and Jelley to demonstrate that the light contained more blue light than green, which was another expected feature of Cherenkov radiation.

During their studies on the Pic du Midi, Jelley and Galbraith went on to explore the relationship between the light yield in the atmosphere and the energy of the shower, confirming, as expected, that larger light pulses were correlated with showers of higher particle density. Finally, aware that their light receivers had both a considerable effective area and good angular resolution, they went on to search for possible point sources of cosmic rays in the night sky. The search yielded no statistically significant variations, and Galbraith and Jelley subsequently estimated that the receiver was sensitive to showers of energies of 10¹⁴ eV and above.

Following these studies in the early 1950s, it soon became apparent that use of the atmosphere as a Cherenkov radiator was a viable experimental technique. By the end of the decade, Cherenkov radiation in the atmosphere had been developed further as a means of studying cosmic rays – far away from the generally unsuitable British climate. In the Soviet Union, Aleksandr Chudakov and N M Nesterova of the Lebedev Physical Institute deployed a series of large-area Geiger counters along with eight light receivers at 3800 m in the Pamir Mountains to measure the lateral distribution of the Cherenkov light and thereby study the vertical structure of cosmic-ray showers. In Australia, around the same time, Max Brennan and colleagues at the University of Sydney used two or more misaligned light receivers to demonstrate the effects of Coulomb scattering of the charged particles in the cosmic-ray shower.

Meanwhile, at the International Cosmic Ray Conference in Moscow in 1959, Giuseppe Cocconi made a key theoretical prediction – that the Crab Nebula should be a strong emitter of gamma rays at tera-electron-volt energies. This stimulated further work, both by a British–Irish collaboration that included Jelley, and by Chudakov and his colleagues. The work at the Lebedev Physical Institute led in the early 1960s to the construction of the first air-Cherenkov telescope, with 12 searchlight mirrors, each 1.5 m in diameter and mounted on railway cars at a site in the Crimea close to the Black Sea.

The legacy

So, just a decade after the initial pioneering steps by Galbraith and Jelley, the first operational air-Cherenkov telescope had been built, setting in motion a chain of events that would ultimately lead in 1989 to the observation of gamma rays from the Crab Nebula by Trevor Weekes and colleagues at the Whipple telescope in the US. This breakthrough came nearly 25 years after Weekes had worked with Jelley in a collaboration between AERE and University College Dublin, making the first attempts to detect gamma rays from quasars – a feat achieved only recently by the MAGIC air-Cherenkov telescope in the Canary Islands. Now, researchers around the world are teaming up to build the most sensitive telescope of this kind yet – the Cherenkov Telescope Array (see above).

Writing only a few years ago about the work at Harwell, Weekes stated: “The account of these elegant experiments is a must-read for all newcomers to the field” (Weekes 2006). He also summed up well that first experiment by Galbraith and Jelley: “It is not often that a new phenomenon can be discovered with such simple equipment and in such a short time, but it may also be true that it is not often that one finds experimental physicists with this adventurous spirit!”
