IUPAC has officially approved the names “flerovium” (Fl) for the element with atomic number 114 and “livermorium” (Lv) for the one with atomic number 116. The names were proposed by the collaboration from the Joint Institute for Nuclear Research (JINR), Dubna, and the Lawrence Livermore National Laboratory in California, led by JINR’s Yuri Oganessian. Scientists from the two laboratories share the priority for the discovery of these new elements at the facilities in Dubna.
The name flerovium is in honour of the Flerov Laboratory of Nuclear Reactions, where these superheavy elements were synthesized. Georgy Flerov (1913–1990) was a pioneer in heavy-ion physics and founder of the JINR Laboratory of Nuclear Reactions in 1957, which has borne his name since 1991. Flerov is also known for his fundamental work in fields of physics that resulted in the discovery of new phenomena in properties and interactions of atomic nuclei.
The name livermorium honours the Lawrence Livermore National Laboratory. A group of researchers from Livermore took part in the work carried out in Dubna on the synthesis of superheavy elements, including element 116. Over the years, researchers at the laboratory have been involved in many areas of nuclear science and investigation of chemical properties of the heaviest elements.
The discoverers of flerovium and livermorium have submitted their claims for the discovery of further heavy elements, with atomic numbers 113, 115, 117 and 118 to the Joint Working Party of independent experts drawn from the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics.
Hard Probes 2012 – the 5th International Conference on Hard and Electromagnetic Probes of Nuclear Collisions – took place in Cagliari from 27 May to 1 June. The most important topical meeting devoted to the study of hard processes in ultra-relativistic heavy-ion collisions, it was the first at which the LHC collaborations presented results based on lead–lead data. The main focus was undoubtedly on the wealth of new high-quality results from ALICE, ATLAS and CMS, complemented by significant contributions from the PHENIX and STAR experiments at the Relativistic Heavy Ion Collider (RHIC) in Brookhaven.
Quoting from the inspired opening talk given by Berndt Mueller of Duke University, the hard probes “manifesto” can be summarized as follows: hard probes are essential to resolve and study a medium of deconfined quarks and gluons at short spatial scales, and they have to be developed into as precise a tool as possible. This is accomplished by studying the production and the propagation in the deconfined medium of heavy quarks, particles with high transverse momentum (pT), jets and quarkonia.
Jet quenching can be addressed by studying the suppression of leading hadrons in nuclear collisions with respect to the proton–proton case. The ALICE and CMS collaborations reported results on the production of open charm and beauty, and results were also presented from the STAR experiment. An important aspect of parton energy loss in the medium is its mass dependence: the energy loss is expected to be largest for gluons and light quarks and smaller for heavy quarks. The LHC data shown at the conference are suggestive of such a hierarchy, although more statistics are still needed to reach a firm conclusion.
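Such suppression is conventionally quantified with the nuclear modification factor, RAA – the ratio of the yield in nucleus–nucleus collisions to the binary-collision-scaled proton–proton yield – although the text above does not spell this out. The following is a minimal illustrative sketch, with invented spectra and an assumed number of binary collisions, not analysis code from any of the experiments.

```python
# Illustrative sketch (not ALICE/CMS analysis code): the nuclear modification
# factor R_AA compares hadron yields in Pb-Pb and pp collisions.
# R_AA = (dN_AA/dpT) / (<N_coll> * dN_pp/dpT); R_AA < 1 signals suppression.
# The yields and <N_coll> below are made-up numbers for illustration only.

n_coll = 1500.0          # assumed average number of binary nucleon-nucleon collisions

def r_aa(dn_aa_dpt, dn_pp_dpt, n_coll=n_coll):
    """Nuclear modification factor for one pT bin."""
    return dn_aa_dpt / (n_coll * dn_pp_dpt)

# Toy per-event yields in one pT bin (arbitrary units):
dn_pp = 1.0e-4           # pp reference yield
dn_aa = 0.03             # Pb-Pb yield in the same bin

print(f"R_AA = {r_aa(dn_aa, dn_pp):.2f}")   # 0.20, i.e. a factor-5 suppression
```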
In addition, the high-precision LHC data on light charged hadrons are significantly expanding the kinematic reach. This is fundamental to discriminating among theoretical models, which have been tuned at the lower energy of RHIC.
At the LHC, full reconstruction of high-energy jets has become possible for the first time, allowing ATLAS and CMS to present high-statistics results on jet–jet correlations. The emerging picture is consistent with one in which partons lose a large fraction of their energy while traversing the hot QCD medium – before fragmenting essentially in vacuum. First results on γ-jet correlations were also presented by the CMS and PHENIX collaborations; these allow the tagging of quark jets and give a better estimation of the initial parton energy. During the conference, an intense debate developed on how to exploit fully the information provided by full jet reconstruction.
Quarkonia suppression was another of the striking observables for which results from the LHC had been eagerly awaited. CMS presented the first exciting precision results on the suppression of the ϒ states. These reveal a clear indication of a much larger suppression of the weakly bound ϒ(2S) and ϒ(3S) states with respect to the strongly bound ϒ(1S), in accordance with predictions based on colour screening. The ALICE collaboration presented new data on the rapidity and pT dependence of J/ψ suppression. The results show that, despite the higher initial temperatures reached at the LHC, the size of the suppression remains significantly smaller than at RHIC. This is an intriguing hint that a regeneration mechanism from the large number of charm quarks present in the deconfined medium may take place at LHC energies.
Part of the conference was devoted to the study of initial-state phenomena. In particular, at high energy peculiar features related to the saturation of the gluon phase space should emerge, leading to a state called the “colour glass condensate”. A discussion took place on how the existence of this state could be proved or disproved at the LHC. The study of initial-state phenomena also came under debate because of its importance in disentangling the effects of cold nuclear matter from genuine final-state effects in hot matter.
With the advent of high-precision data, theory is being increasingly challenged, and the understanding of the bulk properties of the medium produced in heavy-ion collisions is advancing rapidly. As several speakers discussed, significant advances are being made both in the understanding of the parton energy-loss mechanism and in quarkonium production, for which a quantitative picture is emerging.
Still, as CERN’s Jürgen Schukraft pointed out in his summary talk, there is a need for measurements of even higher precision, as well as a wish list for new measurements: for example, in the heavy-flavour sector, lowering the pT reach to measure the total charm cross-section; and reconstructing charmed and beauty baryons to gain further insight into thermalization of the medium.
On a shorter time scale, the next crucial step is the measurement of effects in cold nuclear matter, which will be possible in the forthcoming proton–nucleus run at the LHC. Based on the experience from past lower-energy measurements, new surprises might be just around the corner.
The conference was preceded by introductory student lectures covering aspects of quarkonia production and jet quenching. About 40 students were supported by the organization, thanks to generous contributions by several international laboratories (CERN, EMMI, INFN) and, in particular, by the University of Cagliari and by the government of Sardinia. The conference was broadcast to a wider audience worldwide as a webcast.
In a 40-day run ending on 22 May, the Institute of High-Energy Physics in China accumulated a total of 1.3 billion J/ψ events at the upgraded Beijing Electron Positron Collider (BEPCII) and Beijing Spectrometer (BESIII).
In a two-year run from 1999 to 2001, the earlier BEPC collider and BESII detector accumulated a highly impressive 58 million J/ψs. Analysis of these, together with 220 million events at BESIII, has already produced important results, such as the discovery of the X(1835). Now, thanks to the upgrades, data-acquisition efficiency is 120 times higher, and as many as 40 million J/ψs were being collected daily towards the end of the latest run.
BEPCII is a two-ring electron–positron collider with beam energy of 1.89 GeV. With a design luminosity of 1 × 10^33 cm^-2 s^-1, it reached a peak of 2.93 × 10^32 cm^-2 s^-1 in the latest run, 59 times higher than that of its predecessor, BEPC.
Back in April, a study of the motion of hundreds of stars in the Milky Way found no evidence of a massive dark-matter halo (CERN Courier June 2012 p11). The finding came as a surprise and did not long withstand the assault of sceptical scientists questioning the results. A new study based on the same data set, but proposing a different underlying assumption, now reconciles the observations with the presence of a dark-matter halo in line with expectations.
One of the first pieces of evidence for dark matter was that the rotation velocity of stars in the Milky Way remains constant instead of decreasing with distance from the Galactic centre. This flat rotation curve implies the presence of an extended distribution of dark matter, whose mass compensates the decreasing stellar density in the outer regions of the Galaxy. The presence of a similar dark-matter halo is implied by the flat rotation curve observed in almost every spiral galaxy, but its actual shape and density distribution are difficult to predict.
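The reasoning behind that inference is simple Newtonian dynamics: for a star on a circular orbit, v^2 = GM(<r)/r, so a rotation speed that stays constant with radius requires the enclosed mass to keep growing linearly with r, well beyond where the starlight falls off. A minimal sketch, using a round illustrative rotation speed rather than a fit to Galactic data:

```python
# Minimal sketch of why a flat rotation curve implies extended (dark) mass:
# for a circular orbit, v^2 = G * M(<r) / r, so constant v means M(<r) grows like r.
# The velocity below is a round illustrative value, not a fit to Galactic data.
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
kpc = 3.086e19           # m
M_sun = 1.989e30         # kg

v_flat = 220e3           # m/s, roughly the Milky Way rotation speed (assumption)

def enclosed_mass(r_m, v=v_flat):
    """Mass needed inside radius r to sustain circular speed v."""
    return v * v * r_m / G

for r_kpc in (8, 16, 32):
    m = enclosed_mass(r_kpc * kpc) / M_sun
    print(f"r = {r_kpc:2d} kpc  ->  M(<r) ~ {m:.1e} solar masses")
# The required enclosed mass doubles each time r doubles, while the starlight
# does not -- the shortfall is attributed to a dark-matter halo.
```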
To determine the amount of dark matter in the vicinity of the Sun, a team of Chilean astronomers measured the motions of more than 400 red giant stars up to 13,000 light-years from the Sun, in a volume four times larger than ever previously considered. Visible matter in the form of stars and gas is dominant in the plane of the Galaxy, but at higher elevation above the Galactic disc dark matter should dominate. Measuring the rotational velocity of stars at different Galactic heights should thus provide a measure of the local density of dark matter in the solar neighbourhood.
To their surprise, Christian Moni Bidin of the Universidad de Concepción and colleagues found no evidence at all for a dark-matter halo. They obtained an upper limit of 0.07 kg of dark matter in a volume the size of the Earth, whereas theories predict a mass in the range of 0.4–1.0 kg. This difference of about an order of magnitude led some astronomers to query the validity of the analysis.
Jo Bovy and Scott Tremaine of the Institute for Advanced Study, Princeton, claim that they found a fault in one of the assumptions made by Moni Bidin and colleagues. The problematic assumption is that the average rotational velocity <V> is constant with distance from the Galactic centre at all heights above the plane of the Galaxy. For Bovy and Tremaine, this assumption applies to the circular velocity Vc but not to <V>. The difference is rather subtle, but it is a well-identified effect known as the “asymmetric drift”, which arises from a sub-population of stars with elliptical orbits that have on average a lower velocity than Vc. The result is a difference between <V> and Vc that evolves with the height above the Galactic plane and would have led the Chilean researchers to underestimate the density of dark matter.
With their modified assumption that the circular velocity curve is flat in the mid-plane, Bovy and Tremaine obtain a local dark-matter density of 0.3±0.1 GeV/cm^3, fully consistent with estimates from the usual models. They also claim to demonstrate that this assumption is motivated by observations, while the previous one was implausible.
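As a back-of-envelope consistency check (not part of either analysis), a density of 0.3 GeV/cm^3 integrated over a volume the size of the Earth does indeed correspond to a mass within the 0.4–1.0 kg range quoted above:

```python
# Back-of-envelope check: does 0.3 GeV/cm^3 correspond to ~0.4-1.0 kg of dark
# matter in an Earth-sized volume, as quoted above? (Illustrative only.)
import math

rho = 0.3                      # GeV/cm^3, local dark-matter density from Bovy & Tremaine
gev_to_kg = 1.783e-27          # kg per GeV/c^2
r_earth_cm = 6.371e8           # mean Earth radius in cm

v_earth = 4.0 / 3.0 * math.pi * r_earth_cm**3      # cm^3
mass_kg = rho * v_earth * gev_to_kg

print(f"Dark matter in an Earth-sized volume: {mass_kg:.2f} kg")   # ~0.6 kg
```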
As with the OPERA result on the faster-than-light neutrinos (EXO, MINOS and OPERA reveal new results), this is another example of an unexpected result being later disproved. It seems that submitting the problem to the scientific community in the form of a paper is an efficient way to identify quickly the origin of the disagreement. The strength of the scientific community as a whole is to be able to solve major issues more effectively than a single research group.
“We took off at 6.12 a.m. from Aussig on the Elbe. We flew over the Saxony border by Peterswalde, Struppen near Pirna, Bischofswerda and Kottbus. The height of 5350 m was reached in the region of Schwielochsee. At 12.15 p.m. we landed near Pieskow, 50 km east of Berlin.”
The flight on 7 August 1912 was the last in a series of balloon flights that Victor Hess, an Austrian physicist, undertook in 1912 with the aid of a grant from what is now the Austrian Academy of Sciences in Vienna. The previous year, he had taken two flights to investigate the penetrating radiation that had been found to discharge electroscopes above the Earth’s surface. He had reached an altitude of around 1100 m and found “no essential change” in the amount of radiation compared with observations near the ground. This indicated the existence of some source of radiation in addition to γ-rays emitted by radioactive decays in the Earth’s crust.
For the flights in 1912 he equipped himself with two electroscopes of the kind designed by Wulf, which were “perfectly airtight” and could withstand the pressure changes with altitude. The containers were electrolytically galvanized on the inside to reduce the radiation from the walls. To improve accuracy the instruments were equipped with a new “sliding lens” that allowed Hess to focus on the electroscopes’ fibres as they discharged without moving the eyepiece and hence changing the magnification.
Hess undertook the first six flights from his base in Vienna, beginning on 17 April 1912, during a partial solar eclipse. Reaching 2750 m, he found no reduction in the penetrating radiation during the eclipse but indications of an increase around 2000 m. However, on the following flights he found that “the weak lifting power of the local gas, as well as the meteorological conditions” did not allow him to ascend higher.
So, on 7 August he took off instead from Aussig [today Ústí nad Labem in the Czech Republic], several hundred kilometres north of Vienna. Although cumulus clouds appeared during the day, the balloon carrying Hess and the electrometers never came close to them; there was only a thin layer above him, at around 6000 m. The results of this flight were more conclusive. “In both γ-ray detectors the values at the greatest altitude are about 22–24 ions higher than at the ground.”
Before reporting these results, Hess combined all of the data from his various balloon flights. At altitudes above 2000 m the measured radiation levels began to rise. “By 3000 to 4000 m the increase amounts to 4 ions, and at 4000 to 5200 m fully to 16 to 18 ions, in both detectors.”
He concludes: “The results of the present observations seem to be most readily explained by the assumption that a radiation of very high penetrating power enters our atmosphere from above … Since I found a reduction … neither by night nor at a solar eclipse, one can hardly consider the Sun as the origin.”
Although continuing research discovered more about the particles involved, the exact location of the source remains a mystery that continues to drive adventurous research in astroparticle physics.
• The extracts are from a translation of the original paper by Hess, taken from Cosmic Rays by A M Hillas, in the series “Selected readings in physics”, Pergamon Press 1972.
In 1785 Charles-Augustin de Coulomb presented three reports on electricity and magnetism to France’s Royal Academy of Sciences. In the third of these he described his experiments showing that isolated electrified bodies can spontaneously discharge and that this phenomenon was not a result of defective insulation. After dedicated studies by Michael Faraday around 1835, William Crookes observed in 1879 that the speed of discharge decreased when the pressure was reduced: the ionization of air was thus the direct cause. But what was ionizing air? Trying to answer this question paved the way in the early 20th century towards a revolutionary scientific discovery – that of cosmic rays.
Spontaneous radioactivity had been discovered at the end of the 19th century and researchers observed that a charged electroscope promptly discharges in the presence of radioactive material. The discharge rate of an electroscope could then be used to gauge the level of radioactivity. A new era of research into discharge physics opened up, this period being strongly influenced by the discoveries of the electron and positive ions.
During the first decade of the 20th century, results on ionization phenomena came from several researchers in Europe and North America. Around 1900, Charles Wilson in Scotland and, independently, two high-school teachers and good friends in Germany, Julius Elster and Hans Geitel, improved the technique for the careful insulation of electroscopes in a closed vessel, thus improving the sensitivity of the electroscope itself (figure 1). As a result, they could make measurements of the rate of spontaneous discharge. They concluded that ionizing agents were coming from outside the vessel and that part of this radioactivity was highly penetrating: it could ionize the air in an electroscope shielded by metal walls a few centimetres thick. This was confirmed in 1902 by quantitative measurements performed by Ernest Rutherford and Henry Cooke, as well as by John McLennan and F Burton, who immersed an electroscope in a tank filled with water.
The obvious questions concerned the nature of such radiation and whether it was of terrestrial or extra-terrestrial origin. The simplest hypothesis was that its origin was related to radioactive materials in the Earth’s crust, which were known to exist following the studies by Marie and Pierre Curie on natural radioactivity. A terrestrial origin was thus a commonplace assumption – an experimental proof, however, seemed difficult to achieve. In 1901, Wilson made the visionary suggestion that the origin of this ionization could be an extremely penetrating extra-terrestrial radiation. Nikola Tesla in the US even patented in 1901 a power generator based on the fact that “the Sun, as well as other sources of radiant energy, throws off minute particles of matter […which] communicate an electrical charge”. However, Wilson’s investigations in tunnels with solid rock overhead showed no reduction in ionization and so did not support an extra-terrestrial origin. The hypothesis was dropped for many years.
New heights
A review by Karl Kurz summarizes the situation in 1909. The spontaneous discharge observed was consistent with the hypothesis that background radiation did exist even in insulated environments and that this radiation had a penetrating component. There were three possible sources for the penetrating radiation: an extra-terrestrial radiation, perhaps from the Sun; radioactivity from the crust of the Earth; and radioactivity in the atmosphere. Kurz concluded from ionization measurements made in the lower part of the atmosphere that an extra-terrestrial radiation was unlikely and that (almost all of) the radiation came from radioactive material in the crust. Calculations were made of how such radiation should decrease with height but measurements were not easy to perform because the electroscope was a difficult instrument to transport and the accuracy was not sufficient.
Although a large effort to build a transportable electroscope was made by the meteorology group in Vienna (leaders in measurements of air ionization at the time), the final realization of such an instrument was made by Father Theodor Wulf (figure 2, left), a German scientist and Jesuit priest serving in the Netherlands and later in Rome. In Wulf’s electroscope, the two metal leaves were replaced by metalized silicon-glass wires, with a tension spring in between, also made of glass. The instrument could be read by a microscope (figure 2, right). To test the origin of the radiation causing the spontaneous discharge, Wulf checked the variation of radioactivity with height: in 1909 he measured the rate of ionization at the top of the Eiffel Tower in Paris (300 m above ground). Supporting the hypothesis of the terrestrial origin of most of the radiation, he expected to find less ionization at the top of the tower than at ground level. However, the rate of ionization showed too small a decrease to confirm this hypothesis. Instead, he found that the amount of radiation “at nearly 300 m [altitude] was not even half of its ground value”, while with the assumption that radiation emerges from the ground there would remain at the top of the tower “just a few per cent of the ground radiation”.
Wulf’s observations were puzzling and demanded an explanation. One possible way to solve this puzzle was to make measurements at altitudes higher than the 300 m of the Eiffel tower. Balloon experiments had been widely used for studies of atmospheric electricity for more than a century and it became evident that they might give an answer to the problem of the origin of the penetrating radiation. In a flight in 1909, Karl Bergwitz, a former pupil of Elster and Geitel, found that the ionization at 1300 m altitude had decreased to about 24% of the value on the ground. However, Bergwitz’s results were questioned because his electrometer was damaged during the flight. He later investigated electrometers on the ground and at 80 m, reporting that no significant decrease of the ionization was observed. Other measurements with similar results were obtained around the same time by Alfred Gockel, from Fribourg, Switzerland, who flew up to 3000 m (and first introduced the term “kosmische Strahlung”, or “cosmic radiation”). The general interpretation was that radioactivity was coming mostly from the Earth’s surface, although the balloon results were puzzling.
The meteorologist Franz Linke had, in fact, made 12 balloon flights in 1900–1903 during his PhD studies at Berlin University, carrying an electroscope built by Elster and Geitel to a height of 5500 m. The thesis was not published, but a published report concludes: “Were one to compare the presented values with those on ground, one must say that at 1000 m altitude […] the ionization is smaller than on the ground, between 1 and 3 km the same amount, and above it is larger … with values increasing up to a factor of 4 (at 5500 m). […] The uncertainties in the observations […] only allow the conclusion that the reason for the ionization has to be found first in the Earth.” Nobody later quoted Linke and although he had made the right measurement, he had reached the wrong conclusions.
Underwater measurements
One person to question the conclusion that radioactivity came mostly from the Earth’s crust was an Italian, Domenico Pacini. An assistant meteorologist in Rome, he made systematic studies of ionization on mountains, on the shoreline and at sea between 1906 and 1910. Pacini’s supervisor was the Austrian-born Pietro Blaserna, who had graduated in physics within the electrology group at the University of Vienna. The instruments used in Rome were state of the art and Pacini could reach a sensitivity of one third of a volt.
In 1910 he placed one electroscope on the ground and one out at sea, a few kilometres off the coast, and made simultaneous measurements. He observed a hint of a correlation and concluded that “in the hypothesis that the origin of penetrating radiations is in the soil […] it is not possible to explain the results obtained”. That same year he looked for a possible increase in radioactivity during a passage of Halley’s comet and found no effect.
Pacini later developed an experimental technique for underwater measurements and in June 1911 compared the rate of ionization at sea level and at 3 m below water, at a distance of 300 m from the shore of the Naval Academy of Livorno. He repeated the measurements in October on the Lake of Bracciano. He reported on his measurements, the results – and their interpretation – in a note entitled, “Penetrating radiation at the surface of and in water”, published in Italian in Nuovo Cimento in February 1912. In that paper, Pacini wrote: “Observations carried out on the sea during the year 1910 led me to conclude that a significant proportion of the pervasive radiation that is found in air had an origin that was independent of the direct action of active substances in the upper layers of the Earth’s surface. … [To prove this conclusion] the apparatus … was enclosed in a copper box so that it could be immersed at depth. … Observations were performed with the instrument at the surface, and with the instrument immersed in water, at a depth of 3 m”.
Pacini measured the discharge rate of the electroscope seven times over three hours. The ionization underwater was 20% lower than at the surface, consistent with absorption by water of radiation coming from outside; the significance was larger than 4σ. He wrote: “With an absorption coefficient of 0.034 for water, it is easy to deduce from the known equation I/I0 = exp(–d/λ), where d is the thickness of the matter crossed, that, in the conditions of my experiments, the activities of the sea-bed and of the surface were both negligible. The explanation appears to be that, owing to the absorbing power of water and the minimum amount of radioactive substances in the sea, absorption of radiation coming from the outside indeed happens, when the apparatus is immersed.” Pacini concluded: “[It] appears from the results of the work described in this note that a sizable cause of ionization exists in the atmosphere, originating from penetrating radiation, independent of the direct action of radioactive substances in the crust.”
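One way to appreciate the force of this argument is to turn the observed 20% reduction over 3 m of water into an effective attenuation length using the same exponential law. The sketch below uses only the numbers quoted in the text; the comparison value for γ-rays from ordinary radioactive substances is an assumed typical figure, not one from Pacini’s paper.

```python
# Sketch: turn Pacini's observed 20% drop over 3 m of water into an effective
# attenuation length using I/I0 = exp(-d/lambda).  The ~0.2-0.5 m comparison
# value for gamma-rays from ordinary radioactivity is an assumed typical figure,
# not a number taken from Pacini's paper.
import math

d = 3.0                  # metres of water above the immersed electroscope
ratio = 0.80             # I/I0: ionization under water was ~20% lower

lam_eff = -d / math.log(ratio)
print(f"Effective attenuation length of the penetrating radiation: {lam_eff:.1f} m")
# ~13 m -- far longer than the few tens of centimetres expected for gamma-rays
# from radioactive substances in the sea-bed or the water itself, supporting
# Pacini's conclusion that the residual ionization is caused by a penetrating
# radiation entering from above.
```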
Despite Pacini’s conclusions – and the puzzling results of Wulf and Gockel on the dependence of radioactivity on altitude – physicists were reluctant to abandon the hypothesis of a terrestrial origin for the mystery penetrating radiation. The situation was resolved in 1911 and 1912 with the long series of balloon flights by Victor Hess, who established the extra-terrestrial origin of at least part of the radiation causing the observed ionization. However, it was not until 1936 that Hess was awarded the Nobel Prize for the discovery of cosmic radiation. By then the importance of this “natural laboratory” was clear, and he shared the prize with Carl Anderson, who had discovered the positron in cosmic radiation four years earlier. Meanwhile, Pacini had died in 1934 – his contributions mainly forgotten through a combination of historical and political circumstances.
Recent observations of ultra-high-energy cosmic rays (UHECRs) by extensive air-shower arrays have revealed a clear cut-off in the energy spectrum at 10^19.5 eV. The results are consistent with the predictions made in the mid-1960s that interactions with the cosmic microwave background would suppress the flux of particles at high energies (Greisen 1966, Zatsepin and Kuz’min 1966). Nevertheless, as the article on page 22 explains, the nature of the cut-off – and, indeed, the origin of the UHECRs – remains unknown.
UHECRs are observed in the large showers of particles created when a high-energy particle (proton or nucleus) interacts in the atmosphere. This means that information about the primary cosmic ray has to be estimated by “interpreting” the observed extensive air shower. Both longitudinal and lateral shower structures, measured by the fluorescence and surface detectors respectively, are used to infer the energy and species of the primary particle through comparison with the predictions of Monte Carlo simulations. In high-energy hadronic collisions, the energy flow is dominated by particles emitted in the very forward direction, and the shower development is determined by the energy balance of baryonic and mesonic particle production. However, the lack of knowledge about hadronic interactions at such high energies, especially in the forward region, means that the interpretations tend to be model-dependent. To constrain the models used in the simulations, measurements of the forward production of particles relevant to air-shower development are indispensable at the highest energies possible.
Into the lab
The most important cross-section for cosmic-ray shower development is for the forward production in hadron collisions of neutral pions (π0), which immediately decay to two forward photons. The highest energies accessed in the laboratory are reached in particle colliders and, until the start-up of the LHC, the only experiment dedicated to forward particle production at a collider was UA7 at CERN’s SppS collider (Paré et al. 1990). Now, two decades later, members of the UA7 team have formed a new collaboration for the Large Hadron Collider forward (LHCf) experiment (LHCf 2006). This is dedicated to measuring very-forward particle production at the LHC, where running with proton–proton collisions at the full design energy of 14 TeV will correspond to 10^17 eV in the laboratory frame and so will reach into the UHECR region.
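The quoted equivalence between 14 TeV in the centre of mass and 10^17 eV in the laboratory frame follows from the usual fixed-target relation E_lab ≈ s/(2m_p). A minimal check of the arithmetic:

```python
# Check of the collider <-> cosmic-ray energy equivalence quoted above:
# for a proton hitting a proton at rest, s ~ 2 * m_p * E_lab (for E_lab >> m_p),
# so E_lab ~ s / (2 * m_p).
m_p = 0.938               # proton mass in GeV
sqrt_s = 14000.0          # GeV, LHC design centre-of-mass energy

e_lab_gev = sqrt_s**2 / (2.0 * m_p)
print(f"Equivalent fixed-target energy: {e_lab_gev:.2e} GeV = {e_lab_gev * 1e9:.1e} eV")
# ~1e8 GeV = 1e17 eV, i.e. reaching into the UHECR energy region
```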
The LHCf experiment consists of two independent calorimeters (Arm1 and Arm2) installed 140 m from the interaction point of the ATLAS experiment, one on each side. The detectors fit in the instrumentation slots of the target neutral absorbers (TANs), which are located where the vacuum chamber for the beam makes a Y-shaped transition from the single beam pipe that passes through the interaction point to the two separate beam tubes that continue into the arcs of the LHC. Charged particles produced in the collision region in the direction of the TAN are swept aside by an inner beam-separation magnet before they reach it. Consequently, only neutral particles produced at the interaction point enter the TAN and the detectors. This location allows the observation of particles at nearly 0° to the proton beam direction.
Both LHCf detectors contain two sampling and imaging calorimeters, each consisting of 44 radiation lengths of tungsten and 16 sampling layers of 3 mm-thick plastic scintillator for the initial runs. The calorimeters in Arm1 have an area transverse to the beam direction of 20 × 20 mm^2 and 40 × 40 mm^2, while those in Arm2 have areas of 25 × 25 mm^2 and 32 × 32 mm^2. Four X-Y layers of position-sensitive sensors are interleaved with the tungsten and scintillator to provide the transverse positions of the showers generated in the calorimeters, employing different technologies in the two detectors: Arm1 uses scintillating fibres and multi-anode photomultiplier tubes (MAPMTs); Arm2 uses silicon-strip sensors. In each case, the sensors are installed in pairs in such a way that two pairs are optimized to detect the maximum of gamma-ray-induced showers, while the other two are for hadronic showers developed deep within the calorimeters. Although the lateral dimensions of these calorimeters are small, the energy resolution is expected to be better than 6% and the position resolution better than 0.2 mm for gamma-rays with energy between 100 GeV and 7 TeV. This has been confirmed by test-beam results at CERN’s Super Proton Synchrotron.
LHCf successfully took data right from the first collision at the LHC in 2009 and finished its first phase of data-taking in mid-July 2010, after collecting enough data in proton–proton collisions at both 900 GeV and 7 TeV in the centre of mass. In 2011, the collaboration reported its measurements of inclusive photon spectra at 7 TeV (Adriani et al. 2011). A comparison of the data with predictions from the hadron-interaction models used in the study of air showers and from PYTHIA 8.145, which is popular in the high-energy-physics community, revealed various discrepancies, with none of the models showing perfect agreement with the data.
Now, LHCf has results for the inclusive π0 production rate at rapidities greater than 8.9 in proton–proton data at 7 TeV in the centre of mass. Using data collected in two runs in May 2010, corresponding to integrated luminosities of 2.53 nb^-1 in Arm1 and 1.90 nb^-1 in Arm2, the collaboration measured instances where two photons emitted into the very-forward regions could be attributed to π0 decays and obtained the transverse momentum (pT) distributions of the π0s. The criteria for the selection of π0 events were based on the position of the incident photons (more than 2 mm inside the edges of the calorimeter), the photon energy (above 100 GeV), the number of hits (one in each calorimeter), photon-like particle identification using the energy deposition and, last, an invariant mass corresponding to the π0 mass.
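The kinematics behind such a selection are straightforward: at 140 m from the interaction point the two photons have a tiny opening angle, so the diphoton invariant mass and transverse momentum follow directly from the measured energies and hit positions. The sketch below illustrates this; it is not the LHCf reconstruction code, and the photon energies and positions are invented values.

```python
# Illustrative pi0 reconstruction from two forward photons (not LHCf code).
# At z = 140 m the opening angle theta is tiny, so m^2 ~= 2*E1*E2*(1-cos theta)
# and pT ~= |E1*r1 + E2*r2| / z.  Energies and hit positions are invented values.
import math

Z = 140.0e3                       # distance from the interaction point, in mm

def reconstruct_pi0(e1, e2, pos1, pos2, z=Z):
    """e1, e2 in GeV; pos1, pos2 are (x, y) hit positions in mm."""
    dx = (pos1[0] - pos2[0]) / z
    dy = (pos1[1] - pos2[1]) / z
    theta = math.hypot(dx, dy)              # opening angle (small-angle approx.)
    mass = math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(theta)))
    # transverse momentum of the photon pair
    px = e1 * pos1[0] / z + e2 * pos2[0] / z
    py = e1 * pos1[1] / z + e2 * pos2[1] / z
    return mass, math.hypot(px, py)

m, pt = reconstruct_pi0(600.0, 400.0, (0.0, 10.0), (0.0, -28.6))
print(f"m(gamma gamma) = {m*1000:.0f} MeV, pT = {pt:.2f} GeV")   # 135 MeV, 0.04 GeV
# pairs with m close to 135 MeV are kept as pi0 candidates
```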
The pT spectra were derived in independent analyses of the two detectors, Arm1 and Arm2, in six rapidity intervals covering the range 8.9–11.0. These spectra, which agree within statistical and systematic errors, were then combined and compared with the predictions from various hadronic interaction models: DPMJET 3.04, QGSJET II-03, SIBYLL 2.1, EPOS 1.99 and PYTHIA 8.145 (default parameter set).
Figure 1 shows the combined spectrum for one rapidity interval, 9.2 < y < 9.4, compared with the outcome from these models (Adriani et al. 2012). It is clear that DPMJET 3.04 and PYTHIA 8.145 predict the π0 production rates to be higher than the data from LHCf as pT increases. SIBYLL 2.1 also predicts harder pion spectra than are observed in the experimental data, although the expected π0 yield is generally small. On the other hand, QGSJET II-03 predicts π0 spectra that are softer than both the LHCf data and the other model predictions. Among the hadronic interaction models, EPOS 1.99 shows the best overall agreement with the LHCf data.
In figure 2 the values of average pT (〈pT〉) obtained in this analysis are compared, as a function of ylab = ybeam – y, with the results from UA7 and with the model predictions. Although the LHCf and UA7 data have limited overlap and the systematic errors for UA7 are relatively large, the values of 〈pT〉 from the two experiments lie mainly along a common curve and there is no evidence of a dependence on collision energy. EPOS 1.99 shows the smallest dependence of 〈pT〉 on the collision energy among the models, and this tendency is consistent with the results from LHCf and UA7. It is also evident from figure 2 that the best agreement with the LHCf data is obtained by EPOS 1.99.
The photon and π0 data from the LHCf experiment can now be used in models to constrain the mesonic part (or the electromagnetic part, via π0s) of the air-shower development. The collaboration, meanwhile, is turning to the analysis of baryon production, which will provide complementary information on the hadronic interaction. At the same time, work is ongoing towards taking data on proton–lead collisions at the LHC, planned for the end of 2012. Such nuclear collision data are important for understanding the interaction between cosmic rays and the atmosphere. Other work is also under way on replacing the plastic scintillator in the calorimeters – which were removed after the runs in July 2010 – with more radiation-resistant crystal scintillator, so as to be ready for 2014, when the LHC will run at 7 TeV per beam. There are also plans to change the position of the silicon sensors to improve the performance of the experiment in measuring the energy of the interacting particles.
“Analysis of a cosmic-ray air shower recorded at the MIT Volcano Ranch station in February 1962 indicates that the total number of particles in the shower was 5 × 10^10. The total energy of the primary particle that produced the shower was 1.0 × 10^20 eV.” Thus begins the 1963 paper in which John Linsley described the first detection of a cosmic ray with a surprisingly high energy. Such ultra-high-energy cosmic rays (UHECRs), which arrive at Earth at rates of less than 1 km^-2 a century, have since proved challenging both experimentally and theoretically. The International Symposium on Future Directions in UHECR Physics, which took place at CERN on 13–16 February, aimed to discuss these challenges and look to the next step in terms of a future large-scale detector. Originally planned as a meeting of about 100 experts from the particle- and astroparticle-physics communities, the symposium ended up attracting more than 230 participants from 24 countries, reflecting the strong interest in the current and future prospects for cosmic rays at the highest energies.
Soon after Linsley’s discovery, UHECRs became even more baffling when Arno Penzias and Robert Wilson discovered the cosmic microwave background (CMB) radiation in 1965. The reason for this is twofold: first, astrophysical sources delivering particle energies of 10 to 100 million times the beam energy of the LHC are hard to conceive of; and, second, the universe becomes opaque for protons and nuclei at energies above 5 × 10^19 eV because of their interaction with the CMB radiation. In 1966, Kenneth Greisen, and independently Georgy Zatsepin and Vadim Kuz’min, pointed out that protons would suffer pion photoproduction and nuclei photodisintegration in the CMB. These processes limit the cosmic-ray horizon above the so-called “GZK” threshold to less than about 100 Mpc, resulting in strongly suppressed fluxes of protons and nuclei from distant sources.
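The threshold argument can be made semi-quantitative with a back-of-envelope estimate. The sketch below assumes a head-on collision between a proton and a photon of typical CMB energy; the high-energy tail of the blackbody spectrum is what brings the effective cut-off down towards the 5 × 10^19 eV quoted above.

```python
# Rough estimate of the GZK threshold: pion photoproduction p + gamma -> p + pi0
# opens when the centre-of-mass energy reaches m_p + m_pi.  Head-on collision
# with a photon of typical CMB energy is assumed; averaging over the blackbody
# spectrum and angles lowers the effective cut-off towards ~5e19 eV.
m_p = 0.938e9            # proton mass, eV
m_pi = 0.135e9           # neutral-pion mass, eV
e_cmb = 6.4e-4           # eV, typical CMB photon energy at T = 2.7 K (assumption)

# Threshold: s = m_p^2 + 4*E_p*e_cmb (head-on, ultra-relativistic proton)
#            must reach (m_p + m_pi)^2.
e_threshold = ((m_p + m_pi)**2 - m_p**2) / (4.0 * e_cmb)
print(f"Threshold proton energy: {e_threshold:.1e} eV")   # ~1e20 eV
```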
The HiRes, Pierre Auger and Telescope Array (TA) collaborations recently reported a suppression of just this type at about the expected threshold. Does this mark the long-awaited discovery of the GZK effect? At the symposium, not all participants were convinced, because the break in the energy spectrum could also be caused by the sources running out of steam. To shed more light on this most important question of astroparticle physics, information about the mass composition and arrival directions, as well as the precise energy spectrum of the highest-energy cosmic rays, is now paramount.
Searching for answers
Three large-scale observatories, each operated by international collaborations, are currently taking data and trying to provide answers: the Pierre Auger Observatory in Argentina, the flagship in the field, which covers 3000 km2; the more recently commissioned TA in Utah, which samples an area of 700 km2; and the smaller Yakutsk Array in Siberia, which now covers about 10 km2. To make progress in understanding the data from these three different observatories new ground was broken in preparing for the symposium. Before the meeting, five topical working groups were formed comprising members from each collaboration. They were given the task of addressing differences between the respective approaches in the measurement and analysis methods, studying their impact on the physics results and delivering a report at the symposium. These working-group reports – on the energy spectrum, mass composition, arrival directions, multimessenger studies and comparisons of air-shower data to simulations – were complemented by invited overview talks, contributed papers and a large number of posters addressing various topics of analyses, new technologies and concepts for future experiments.
In opening the symposium and welcoming the participants, CERN’s director of research, Sergio Bertolucci, emphasized the organization’s interest in astroparticle physics in general and in cosmic rays in particular – the latter being explicitly named in the CERN convention. Indeed, many major astroparticle experiments have been given the status of “recognized experiment” by CERN. Pierre Sokolsky, a key figure in the legendary Fly’s Eye experiment and its successor HiRes, followed with the first talk, a historical review of the research on the most energetic particles in nature. Paolo Privitera of the University of Chicago then reviewed the current status of measurements, highlighting differences in observations and the understanding of systematic uncertainties. Theoretical aspects of acceleration and propagation were also discussed, as well as predictions of the energy and mass spectrum, by Pasquale Blasi of Istituto Nazionale di Astrofisica/Arcetri Astrophysical Observatory and Venya Berezinsky of Gran Sasso National Laboratory.
Data from the LHC, particularly those measured in the very forward region, are of prime interest for verifying and optimizing the hadronic-interaction event generators that are employed in the Monte Carlo simulations of extensive air showers (EAS), which are generated by the primary UHECRs. Overviews of recent LHC data by Yoshikata Itow of Nagoya University and, more generally, of the connection between accelerator physics and EAS were therefore given prominence at the meeting. Tanguy Pierog of Karlsruhe Institute of Technology demonstrated that the standard repertoire of interaction models employed in EAS simulations not only covers the LHC data reasonably well but also predicted the LHC data better than high-energy-physics models such as PYTHIA or HERWIG. Nonetheless, no perfect model exists and significant muon deficits in the models are seen at the highest air-shower energies. In a keynote talk, John Ellis, now of King’s College London, highlighted UHECRs as being the most extreme environment for studying particle physics – at a production energy of around 10^11 GeV and more than 100 TeV in the centre of mass – and discussed the potential for exotic physics. In a related talk, Paolo Lipari of INFN Rome La Sapienza discussed the interplay of cross-sections, cosmic-ray composition and interaction properties, highlighting the mutual benefits provided by cosmic rays and accelerator physics.
High-energy photons and neutrinos are directly related to cosmic rays and are different observational probes of the high-energy non-thermal universe. Tom Gaisser of the University of Delaware, Günter Sigl of the University of Hamburg and others addressed this multimessenger aspect and argued that current neutrino limits from IceCube begin to disfavour a UHECR origin inside the relativistic jets of gamma-ray bursts and active galactic nuclei (AGN), and that cosmogenic neutrinos would provide a smoking-gun signal of the GZK effect. However, as Sigl noted, fluxes of diffuse cosmogenic neutrinos and photons depend strongly on the chemical composition, maximal acceleration energy and redshift evolution of sources.
Future options
Looking towards the future, the symposium discussed potentially attractive new technologies for cosmic-ray detection. Radio observations of EAS at frequencies of some tens of megahertz are being performed at the prototype level by a couple of groups and the underlying physical emission processes are being understood in greater detail. Ad van den Berg of the University of Groningen described the status of the largest antenna array under construction, the Auger Engineering Radio Array (AERA). More recently, microwave emission by molecular bremsstrahlung was suggested as another potentially interesting emission process. Unlike megahertz radiation, gigahertz emission would occur isotropically, opening the opportunity to observe showers sideways from large distances, a technique known from the powerful EAS fluorescence observations. Thus, huge volumes could be surveyed with minimal equipment available off the shelf. Pedro Facal of the University of Chicago and Radomir Smida of the Karlsruhe Institute of Technology reported preliminary observations of such radiation, with signals being much weaker than expected from laboratory measurements.
The TA collaboration is pursuing forward-scattered radar detection of EAS, as John Belz of the University of Utah reported; this again potentially allows huge volumes to be monitored for reflected signals. However, the method still needs to be proved to work. Interesting concepts for future giant ground-based observatories based on current and novel technologies were presented by Antoine Letessier-Selvon of the CNRS, Paolo Privitera and Shoichi Ogio of Osaka City University. The goal is to reach huge apertures with particle-physics capability at cost levels of €100 million.
Parallel to pushing for a new giant ground-based observatory, space-based approaches, most notably by JEM-EUSO – the Extreme Universe Space Observatory aboard the Japanese Experiment Module – to be mounted on the International Space Station, were discussed by Toshikazu Ebisuzaki of RIKEN, Andrea Santangelo of the Institut für Astronomie und Astrophysik Tübingen and Mario Bertaina of Torino University/INFN. Depending on the effective duty cycle, apertures of almost 10 times that of the Auger Observatory with a uniform coverage of northern and southern hemispheres may be reached. However, the most important weakness as compared with ground-based experiments is the poor sensitivity to the primary mass and the inability to perform particle-physics-related measurements.
The true highlights of the symposium were reports given by the joint working groups. This type of co-operation, inspired by the former working groups for CERN’s Large Electron–Positron Collider, marked a new direction for the community. Yoshiki Tsunesada of the Tokyo Institute of Technology reported detailed comparisons of the energy spectra measured by the different observatories. All spectra are in agreement within the given energy-scale uncertainties of around 20%. Accounting for these overall differences, spectral shapes and positions of the spectral features are in good agreement. Nevertheless, the differences are not understood in detail and studies of the fluorescence yield and photometric calibration – treated differently by the TA and Auger collaborations – are to be pursued.
The studies of the mass-composition working group, presented by Jose Bellido of the University of Adelaide, addressed the question of why the composition measured by HiRes and TA is compatible with proton-dominated spectra, while Auger suggests a significant fraction of heavy nuclei above 10^19 eV. Following many cross-checks and cross-correlations between the experiments, the differences could not be attributed to issues in the data analysis. Even after taking into account the shifts in the energy scale, the results are not fully consistent within the quoted uncertainties, assuming no differences existed between the northern and southern hemispheres.
The anisotropy working group discussed large-scale anisotropies and directional correlations to sources in various catalogues and concluded that there is no major departure from isotropy in any of the data sets, although some hints at the 10–20° scale might have been seen by Auger and TA. Directional correlations to AGN and to the overall nearby matter-distribution are found by Auger at the highest energies, but the HiRes collaboration could not confirm this finding. Recent TA data agree with the latest signal strength of Auger but, owing to the lack of statistics, they are also compatible with isotropy at the 2% level.
Studies by the photon and neutrino working group, presented by Markus Risse of the University of Siegen and Grisha Rubtsov from the Russian Academy of Sciences, addressed the pros and cons of different search techniques and concluded that the results are similar. No photons and neutrinos have been observed yet but prospects for the coming years seem promising for reaching sensitivities for optimistic GZK fluxes.
Lastly, considerations of the hadronic-interaction and EAS-simulation working group, presented by Ralph Engel of Karlsruhe Institute of Technology, acknowledged the many constraints – so far without surprises – that are provided by the LHC. Despite the good overall description of showers, significant deficits in the muon densities at ground level are observed in the water Cherenkov tanks of Auger. The energy obtained by the plastic scintillator array of TA is around 30% higher than the energies measured by fluorescence telescopes. These differences are difficult to understand and deserve further attention. Nevertheless, proton–air and proton–proton inelastic cross-sections up to √s = 57 TeV have been extracted from Auger, HiRes and Yakutsk data, demonstrating the particle-physics potential of high-energy cosmic rays.
The intense and lively meeting was summarized enthusiastically by Angela Olinto of the University of Chicago and Masaki Fukushima of the University of Tokyo. A round-table discussion, chaired by Alan Watson of the University of Leeds, set out the most pressing questions to be addressed and the future challenges to be worked on towards a next-generation giant observatory. Clearly, important steps were made at this symposium, marking the start of a coherent worldwide effort towards reaching these goals. The open and vibrant atmosphere of CERN contributed much to the meeting’s success and was highly appreciated by all participants, who agreed to continue the joint working groups and discuss progress at future symposia.
ALICE is one of the four big experiments at CERN’s LHC. It is devoted mainly to the study of a new phase of matter, the quark–gluon plasma, which is created in heavy-ion collisions at very high energies. However, located in a cavern 52 m underground with 28 m overburden of rock, it can also detect muons produced by the interactions of cosmic rays with the Earth’s atmosphere.
The use of high-energy collider detectors for cosmic-ray physics was pioneered during the era of the Large Electron–Positron (LEP) collider at CERN by the L3, ALEPH and DELPHI collaborations. An evolution of these programmes is now possible at the LHC, where the experiments are expected to operate for many years, with the possibility of recording a large amount of cosmic data. In this context, ALICE began a programme of cosmic data-taking, collecting data for physics for 10 days over 2010 and 2011 during pauses in LHC operations. In 2012, in addition to this standard cosmic data-taking, a special trigger now allows the detection of cosmic events during proton–proton collision runs.
A different approach
In a typical cosmic-ray experiment, the detection of atmospheric muons is done using large-area arrays at the surface of the Earth or with detectors deep underground. The main purpose of such experiments is to study the mass composition and energy spectrum of primary cosmic rays in an energy range above 10^14 eV, which is not accessible to direct measurements using satellites or balloons. The big advantages of these apparatuses are the large size and, for the surface experiments, the possibility of measuring different particles, such as electrons, muons and hadrons, created in extensive air showers. Because the detectors involved in collider experiments are tiny compared with the large-area arrays, the approach and the studies have to be different so that the remarkable performances of the detectors can be exploited.
The first distinguishing characteristic of experiments at LEP or the LHC is their location, some 50–140 m underground. This puts them in an intermediate situation between surface arrays – where all of the components of the shower can be detected – and detectors deep underground, where only the highest-energy muons (usually of the order of 1 TeV at the surface) are recorded. In ALICE in particular, all of the electromagnetic and hadronic components are absorbed by the rock overburden and, apart from neutrinos, only muons with an energy greater than 15 GeV reach the detectors. The special features of ALICE are the ability to detect a clean muon component with a low energy cut-off, allowing a larger number of detected events compared with deep underground sites, combined with the ability to measure a greater number of variables – such as momentum, arrival time, density and direction – than was ever achieved by earlier experiments.
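The 15 GeV figure is roughly what one expects from minimum-ionizing energy loss in the overburden. A back-of-envelope sketch, in which the rock density and the muon energy-loss rate are assumed textbook values rather than numbers from the ALICE simulation:

```python
# Back-of-envelope check of the ~15 GeV muon energy cut-off quoted above:
# a muon loses roughly 2 MeV per g/cm^2 of material traversed (minimum ionizing),
# and ALICE sits under ~28 m of rock.  Rock density and dE/dx are assumed values.
overburden_m = 28.0          # rock overburden above the ALICE cavern, in metres
rho_rock = 2.65              # g/cm^3, assumed standard-rock density
dedx = 2.0                   # MeV per g/cm^2, approximate muon energy loss

grammage = overburden_m * 100.0 * rho_rock       # g/cm^2
e_min_gev = grammage * dedx / 1000.0
print(f"Minimum surface energy for a muon to reach ALICE: ~{e_min_gev:.0f} GeV")
# ~15 GeV, consistent with the cut-off quoted in the text
```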
The tradition in collider experiments, and also in ALICE, is to use these muons mainly for the calibration and alignment of the detectors. However, during the commissioning of ALICE, specific triggers were implemented to develop a programme of cosmic-ray physics. These employ three detectors: A COsmic Ray DEtector (ACORDE), time-of-flight (TOF) and the silicon pixel detector (SPD).
ACORDE is an array of 60 scintillator modules located on the three upper faces of the ALICE magnet yoke, covering 10% of its area. The trigger is given by the coincidence of the signals in at least two different modules. The TOF is a cylindrical array of multi-gap resistive-plate chambers, with a large area that completely surrounds the time-projection chamber (TPC), which is 5 m long and has a diameter of 5 m. The cosmic trigger requires a signal in a read-out channel (a pad) in the upper part of the TOF and another in a pad in the opposite lower part. The SPD consists of two layers of silicon pixel modules located close to the interaction point. The cosmic trigger is given by the coincidence of two signals in the top and bottom halves of the outer layer.
The track of an atmospheric muon crossing the apparatus can be reconstructed by the TPC. This detector’s excellent tracking performance can be exploited to measure the main characteristics of the muon – such as momentum, charge, direction and spatial distribution – with good resolution, while the arrival time can be measured with a precision of 100 ps with the TOF. In particular the ability to track a high density of muons – unimaginable with a standard cosmic-ray apparatus – together with the measurement of all of these observables at the same time, permits a new approach to the analysis of cosmic events, which has so far not been exploited. For these reasons, the main research related to the physics of cosmic rays with the ALICE experiment has centred on the study of the muon-multiplicity distribution and in particular high-density events.
The analysis of the data taken in 2010 and 2011 revealed a muon multiplicity distribution that can be reproduced only by a mixed composition. Figure 1 shows the multiplicity distribution for real data taken in 2011, together with the points predicted for pure-proton and pure-iron composition for the primaries. It is clear from the simulation that the lower multiplicities are closer to the pure-proton points, while at higher multiplicities the data tend to approach the iron points. This behaviour is expected from a mixed composition that on average increases the mass of the primary when its energy increases, a result confirmed by several previous experiments.
High-multiplicity events
However, a few events found both in 2010 and in 2011 (beyond the scale of figure 1) have an unexpectedly large number of muons. In particular, the highest multiplicity reconstructed by the TPC has a muon density of 18 muons/m^2. Figure 2 shows the display of this event and gives an idea of the TPC’s capabilities in tracking such high particle densities without problems of saturation, a performance never achieved in previous experiments.
The estimated energy of the primary cosmic ray for this event is at least 3 × 10^16 eV, assuming that the core of the air shower is inside ALICE and that the primary particle is an iron nucleus. Recalling that the rate of cosmic rays is 1 m^-2 year^-1 at the energy of the knee in the spectrum (3 × 10^15 eV), and that over one decade in energy the flux decreases by a factor of 100, an event with this muon density is expected in ALICE in 4–5 years of data. Since other events of high multiplicity have been found in only 10 days of data-taking, further investigation and detection will be necessary to understand whether they are caused by standard cosmic rays – and if the high multiplicity is simply a statistical fluctuation – or whether they have a different production mechanism. A detailed study of these events has not shown any unusual behaviour in the other measured variables.
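The 4–5 year expectation can be reproduced from the flux scaling quoted above, given an assumption for the effective collection area of ALICE; the area used below is an illustrative value of the order of the TPC's horizontal footprint, not an official acceptance.

```python
# Sketch of the rate estimate quoted above (illustrative assumptions flagged below).
flux_knee = 1.0              # cosmic rays per m^2 per year above the knee (3e15 eV)
decades_above_knee = 1.0     # 3e16 eV is one decade above the knee
flux_ratio_per_decade = 100.0

flux = flux_knee / flux_ratio_per_decade**decades_above_knee   # per m^2 per year
area = 25.0                  # m^2, assumed effective collection area (~TPC footprint)

rate = flux * area           # expected events per year with E >= 3e16 eV
print(f"Expected rate: {rate:.2f} per year -> one event every {1.0/rate:.0f} years")
# ~0.25 per year, i.e. roughly one such event in 4 years of cosmic data-taking
```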
For all of these reasons it is important to see whether other unexpected high-multiplicity events are detected in future and at what rate. To this end, in addition to standard cosmic runs, a special trigger requiring the coincidence of at least four ACORDE modules has been implemented this year to record cosmic events during proton–proton collisions, and so increase the time for data-taking to more than 10 times that of the existing data.
It is interesting to note that the three LEP experiments – L3, ALEPH and DELPHI – also found an excess of high-multiplicity events that were not explained by Monte Carlo models. The hope with ALICE is to find and study a large number of these events in a more quantitative way to understand properly their nature.
Bruno Alessandro, INFN Torino, and Mario Rodriguez, Autonomous University of Puebla, Mexico.
In 2004, as the telescopes of the High Energy Stereoscopic System (HESS) were starting to point towards the skies, there were perhaps 10 astronomical objects that were known to produce very high-energy (VHE) gamma rays – and exactly which 10 was subject to debate. Now, in 2012, well in excess of 100 VHE gamma-ray objects are known and plans are under way to take observations to a new level with the much larger Cherenkov Telescope Array.
VHE gamma-ray astronomy covers three decades in energy, from a few tens of giga-electron-volts to a few tens of tera-electron-volts. At these high energies, even the brightest astronomical objects have fluxes of only around 10^-11 photons cm^-2 s^-1, and the inevitably limited detector-area available to satellite-based instruments means that their detection from space requires unfeasibly long exposure times. The solution is to use ground-based telescopes, although at first sight this seems improbable, given that no radiation with energies above a few electron-volts can penetrate the Earth’s atmosphere.
The possibility of doing ground-based gamma-ray astronomy was opened up in 1952 when John Jelley and Bill Galbraith measured brief flashes of light in the night sky using basic equipment sited at the UK Atomic Energy Research Establishment in Oxfordshire – then, as now, not famed for its clear skies (The discovery of air-Cherenkov radiation). This confirmed Blackett’s suggestion that cosmic rays, and hence also gamma rays, contribute to the light intensity of the night sky via the Cherenkov radiation produced by the air showers that they induce in the atmosphere. The radiation is faint – constituting about one ten-thousandth of the night-sky background – and each flash is only a few nanoseconds in duration. However, it is readily detectable with suitable high-speed photodetectors and large reflectors. The great advantage of this technique is that the effective area of such a telescope is equivalent to the area of the pool of light on the ground, some 10⁴ m².
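To put these numbers in perspective (the 1 m² satellite detector area below is an assumed, illustrative figure rather than one quoted in the text), the expected photon rates are roughly
\[
R_{\mathrm{space}} \;\approx\; 10^{-11}\ \mathrm{cm^{-2}\,s^{-1}} \times 10^{4}\ \mathrm{cm^{2}} \;=\; 10^{-7}\ \mathrm{s^{-1}} \quad\text{(about one photon every four months)},
\]
\[
R_{\mathrm{ground}} \;\approx\; 10^{-11}\ \mathrm{cm^{-2}\,s^{-1}} \times 10^{8}\ \mathrm{cm^{2}} \;=\; 10^{-3}\ \mathrm{s^{-1}} \quad\text{(a few photons per hour)},
\]
which is why the 10⁴ m² light pool, rather than the physical size of the mirror, is what sets the collecting power of an atmospheric Cherenkov telescope.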
Early measurements of astronomical gamma rays using this method were difficult to make because there was no way of distinguishing the gamma-ray-induced Cherenkov radiation from that produced by the more numerous cosmic-ray hadrons. However, in 1985 Michael Hillas at Leeds University showed that fundamental differences between hadron- and photon-initiated air showers would lead to differences in the shapes of the observed flashes of Cherenkov light. Applying this technique, the Whipple telescope team in Arizona made the first robust detection of a VHE gamma-ray source – the Crab Nebula – in 1989. When his imaging technique was combined with the arrays of telescopes developed by the HEGRA collaboration and the high-resolution cameras of the Cherenkov Array at Themis, the imaging atmospheric Cherenkov technique was well and truly born.
The current generation of projects based on this technique includes not only HESS, in Namibia, but also the Major Atmospheric Gamma-Ray Imaging Cherenkov (MAGIC) project in the Canary Islands, the Very Energetic Radiation Imaging Telescope Array System (VERITAS) in Arizona and CANGAROO, a collaborative project between Australia and Japan, which has now ceased operation.
These telescopes have revealed a wealth of phenomena to be studied. They have detected the remains of supernovae, binary star systems, highly energetic jets around black holes in distant galaxies, star-formation regions in our own and other galaxies, as well as many other objects. These observations can help not only with understanding more about what is going on inside these objects but also in answering fundamental physics questions concerning, for example, the nature of both dark matter and gravity.
The field is now reaching the limit of what can be done with the current instruments, yet the community knows that it is observing only the “tip of the iceberg” in terms of the number of gamma-ray sources that are out there. For this reason, some 1000 scientists from 27 countries around the world have come together to build a new instrument – the Cherenkov Telescope Array (CTA).
The Cherenkov Telescope Array
The aim of the CTA consortium is to build two arrays of telescopes – one in the northern hemisphere and one in the southern hemisphere – that will outperform current telescope systems in a number of ways. First, the sensitivity will be a factor of around 10 better than any current array, particularly in the “core” energy range around 1 TeV. Second, it will provide an extended energy range, from a few tens of giga-electron-volts to a few hundred tera-electron-volts. Third, its angular resolution at tera-electron-volt energies will be of the order of one arc minute – an improvement of around a factor of four on the current telescope arrays. Last, its wider field of view will allow the array to survey the sky some 200 times faster at 1 TeV.
This unprecedented performance will be achieved using three different telescope sizes, covering the low-, intermediate- and high-energy regimes, respectively. The larger southern-hemisphere array is designed to make observations across the whole energy range. The lowest-energy photons (20–200 GeV) will be detected with a few large telescopes of 23 m diameter. Intermediate energies, from about 200 GeV to 1 TeV, will be covered with some 25 medium-size telescopes of 12 m diameter. Gamma rays at the highest energies (1–300 TeV) produce so many Cherenkov photons that they can be easily seen with small (4–6 m diameter) telescopes. These extremely energetic photons are rare, however, so a large area must be covered on the ground (up to 10 km2), needing as many as 30 to 70 small telescopes to achieve the required sensitivity. The northern-hemisphere array will cover only the low and intermediate energy ranges and will focus on observations of extragalactic objects.
Being both an astroparticle-physics experiment and a true astronomical observatory, with access for the community at large, the CTA’s science remit is exceptionally broad. The unifying principle is that gamma rays at giga- to tera-electron-volt energies cannot be produced thermally and therefore the CTA will probe the “non-thermal” universe.
Gamma rays can be generated when highly relativistic particles – accelerated, for example, in supernova shock waves – collide with ambient gas or interact with photons and magnetic fields. The flux and energy spectrum of the gamma rays reflect the flux and spectrum of the high-energy particles, so the gamma rays can be used to trace these cosmic rays and electrons in distant regions of the Galaxy or, indeed, in other galaxies. In this way, VHE gamma rays can be used to probe the emission mechanisms of some of the most powerful astronomical objects known and to investigate the origin of cosmic rays.
VHE gamma rays can also be produced in a top-down fashion, by the decay of heavy exotic objects such as cosmic strings or by the decay or annihilation of hypothetical dark-matter particles. Large dark-matter densities that arise from the accumulation of the particles in potential wells, such as near the centres of galaxies, might lead to detectable fluxes of gamma rays, especially given that the annihilation rate – and therefore the gamma-ray flux – is proportional to the square of the density. Slow-moving dark-matter particles could give rise to a striking, almost mono-energetic photon emission.
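For orientation, the flux expected from annihilating dark matter is often parametrised as below; this is a standard textbook expression rather than one given in the article, and the normalisation shown assumes self-conjugate (e.g. Majorana) particles:
\[
\frac{d\Phi_\gamma}{dE} \;=\; \frac{\langle\sigma v\rangle}{8\pi\, m_\chi^{2}}\,\frac{dN_\gamma}{dE}\;\int_{\Delta\Omega}\!d\Omega\int_{\mathrm{l.o.s.}}\rho^{2}\bigl(r(l,\Omega)\bigr)\,dl,
\]
where m_χ is the dark-matter particle mass, ⟨σv⟩ the velocity-averaged annihilation cross-section, dN_γ/dE the photon spectrum per annihilation and the double integral over the line of sight (the so-called J-factor) carries the ρ² dependence on the density referred to above.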
The discovery of such line emission would be conclusive evidence for dark matter, and the CTA could have the capability to detect gamma-ray lines even if the cross-section is “loop-suppressed”, as it is for the most popular dark-matter candidates, i.e. those inspired by minimal supersymmetric extensions of the Standard Model and by models with extra dimensions, such as Kaluza–Klein theory. Line radiation from these candidates is not detectable by current telescopes unless optimistic assumptions are made about the dark-matter density distribution. The more generic continuum contribution (arising from pion production) is more ambiguous, but its curved shape makes it potentially distinguishable from the usual power-law spectra produced by known astrophysical sources.
It is not only the mechanisms by which gamma rays are produced that can provide useful scientific insights. The effects of propagation of gamma rays over cosmological distances can also lead to important discoveries in astrophysics and fundamental physics. VHE gamma rays are prone to photon–photon absorption on the extragalactic background light (EBL) over long distances, and the imprint of this absorption process is expected to be particularly evident in the gamma-ray spectra from active galactic nuclei (AGN) and gamma-ray bursts. The EBL is difficult to measure because of the presence of foreground sources of radiation – yet its spectrum reveals information about the history of star formation in the universe. Already, current telescopes detect more gamma rays from AGN than might have been expected in some models of the EBL, but understanding of the intrinsic spectra of AGN is limited and more measurements are needed.
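This absorption is usually summarised by a single attenuation factor; the relation below is the standard description rather than a result quoted above:
\[
F_{\mathrm{obs}}(E) \;=\; F_{\mathrm{int}}(E)\,e^{-\tau(E,z)},
\]
where F_int(E) is the intrinsic spectrum of the source, z its redshift and τ(E,z) the gamma–gamma optical depth determined by the EBL photon density. Measuring F_obs for sources whose intrinsic spectra can be modelled therefore constrains τ, and hence the EBL itself and the star-formation history that it encodes.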
How to build this magnificent observatory? This is the question currently preoccupying the members of the CTA consortium. There is much experience and know-how within the consortium of building VHE gamma-ray telescopes around the world but nonetheless challenges remain. Foremost is driving down the costs of components while also ensuring reliability. It is relatively easy to repair and maintain four or five telescopes, such as those found in the current arrays, but maintaining 60, 70 or even 100 presents difficulties on a different scale. Technology is also ever changing, particularly in light detection. The detector of choice for VHE gamma-ray telescopes has until now been the photomultiplier tube – but such tubes are bulky, relatively expensive and have low quantum efficiency. Innovative telescope designs, such as dual-mirror systems, might allow the exploitation of newer, smaller detectors such as silicon photodiodes, at least on some of the telescopes. Mirror technologies are another area of active research because the CTA will require a large area of robust, easily reproducible mirrors.
The CTA is currently in its preparatory phase, funded by the European Union Seventh Framework Programme and by national funding agencies. Not only are many different approaches to telescope engineering and electronics being prototyped to enable the consortium to choose the best possible solution, but organizational issues, such as the operation of the CTA as an observatory, are also under development. It is hoped that building of the array will commence in 2014 and that it will become the premier instrument in gamma-ray astronomy for decades to come. Many of its discoveries will no doubt bring surprises, as have the discoveries of the current generation of telescopes. There are exciting times ahead.