Work during the current long shutdown (LS1) of CERN’s accelerator complex has been making good progress since it began in February this year. Of the LHC’s 1232 dipoles, 15 are being replaced, together with three quadrupole-magnet assemblies. By the beginning of September, all of the replacement magnets had been installed in their correct positions and were awaiting reconnection.
Moving the heavy magnets requires specially adapted cranes and trailers. Moreover, there is only one access shaft – made for the purpose during the installation phase – that is wide enough to lower the dipoles, each 15 m long and weighing 35 tonnes, into the tunnel. Underground, a specialized trailer carried the replacement magnets to where they were needed. Sensors fitted below the trailer enabled it to “read” and follow a white line along the tunnel floor.
Back in April, the first Superconducting Magnets and Circuits Consolidation (SMACC) teams began work in the tunnel. They are responsible for opening the interconnects between the magnets to lay the groundwork for the series of operations needed for the consolidation effort on the magnet circuits.
The superconducting cables that form the LHC’s dipoles and quadrupoles carry a current of up to 11,850 A. The SMACC project was launched in 2009 to avoid the serious consequences of electric arcs that could arise from discontinuities in the splices between the busbars of adjacent magnets (CERN Courier September 2010 p27). The main objective is to install a shunt – a small copper plate 50 mm long, 15 mm wide and 3 mm thick – on each splice, straddling the main electrical connection and the busbars of the neighbouring magnets. Should a quench occur in the superconducting cable, the current will pass through the copper shunt, which must therefore provide an unbroken path. In total, more than 27,000 shunts will have to be put in place – an average of one every three minutes for the teams of technicians, who work on a number of interconnects in parallel.
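As a rough sanity check on the quoted rate, the implied total effort can be estimated – assuming, as a simplification, that the three-minute average is taken over continuous working across the parallel teams:

```python
# Back-of-the-envelope check of the SMACC shunt campaign: 27,000 shunts
# at an average of one every three minutes (assuming continuous working
# across parallel teams, an illustrative simplification).
n_shunts = 27_000
minutes_per_shunt = 3

total_minutes = n_shunts * minutes_per_shunt
total_days = total_minutes / (60 * 24)
print(f"{total_minutes} minutes, about {total_days:.0f} days of continuous work")
```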
By the end of summer, three quarters of the interconnect bellows between magnets had been opened. Almost all of the SMACC consolidation activities had been completed in sector 5-6 and the first bellows were being closed again ready for testing. In sector 6-7, the installation of the shunts was being completed and the procedure was starting in sector 7-8. The aim is for completion of the task in July 2014.
After more than a year of upgrades, Fermilab’s revamped accelerator complex is ready to send beam to its suite of fixed-target experiments, which now includes the new NOvA neutrino detector in northern Minnesota, 810 km north of the laboratory.
On 30 July, a beam of protons passed through the main injector for the first time since April 2012. With a circumference of 3.3 km, this synchrotron is the final stage of acceleration in the Fermilab accelerator complex, propelling protons from 8 to 120 GeV. Prior to the shutdown, the machine achieved a beam power of about 350 kW. The shutdown work paves the way to increase this to 700 kW.
The majority of this beam power from the main injector will be used to make neutrinos for the NOvA, MINOS and Minerva experiments. The first neutrinos were delivered on 4 September. A smaller fraction of the proton beam will go to the SeaQuest experiment and Fermilab’s Test Beam Facility. In the future, the main injector will also provide beam for the planned Muon g-2 and Mu2e experiments and the Long-Baseline Neutrino Experiment.
Following the revamp, Fermilab’s chain of accelerators begins with a new ion source and radio-frequency quadrupole (RFQ) to create a beam of negatively charged hydrogen ions, which are accelerated by the RFQ to an energy of 750 keV. The ions then enter the linac, which accelerates the particles to 400 MeV and sends them into the booster, where the particles pass through a foil that strips off the electrons and yields a proton beam. The upgraded booster, which accelerates protons to 8 GeV, now features solid-state RF stations and a few refurbished RF cavities. Once all of the RF cavities have been refurbished – in about two years from now – it will be able to operate at a repetition rate of up to 15 Hz. This work is part of the laboratory’s Proton Improvement Plan.
A major component of the upgraded accelerator complex is the revamped Recycler storage ring, which will play a major role in achieving higher beam power in the main injector. In the past, the Recycler stored 8 GeV antiprotons for the Tevatron collider. The Recycler is now being used for slip-stacking 8 GeV protons and as a result the main injector can deliver beam to Fermilab’s neutrino experiments every 1.3 s. Previously, it could send beam only every 2.2 s.
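The gain from the faster cycle can be seen with the simple beam-power relation P = E·N·e/T. The sketch below inverts it to estimate the protons needed per cycle; the numbers are illustrative, not official Fermilab parameters:

```python
# Rough estimate of the protons per main-injector cycle implied by a
# given beam power at 120 GeV: P = E * N * e / T, so N = P * T / (E * e).
E_PROTON_EV = 120e9          # beam energy in eV
E_CHARGE = 1.602176634e-19   # elementary charge in C (joules per eV)

def protons_per_cycle(power_watts, cycle_seconds):
    """Protons needed per cycle to sustain the given beam power."""
    energy_per_proton_joules = E_PROTON_EV * E_CHARGE
    return power_watts * cycle_seconds / energy_per_proton_joules

# 700 kW at the post-upgrade 1.3 s cycle vs the old 2.2 s cycle:
# the shorter cycle needs roughly 40% fewer protons per pulse.
print(f"{protons_per_cycle(700e3, 1.3):.1e} protons per 1.3 s cycle")
print(f"{protons_per_cycle(700e3, 2.2):.1e} protons per 2.2 s cycle")
```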
There are four neutral mesons that allow particle–antiparticle transitions – mixing – and so make ideal laboratories for studies of matter–antimatter asymmetries (CP violation). Indeed, such an asymmetry has already been observed for three of these mesons: K0, B0 and B0s. So far, searches for CP violation in the fourth neutral meson – the charm meson D0 – have not yielded a positive result. However, being the only one of the four systems to contain up quarks, the D0 meson provides unique access to effects from physics beyond the Standard Model.
The LHCb collaboration recently presented two new sets of measurements at the CHARM 2013 conference, held in Manchester on 31 August – 4 September. Both measurements use several million decays of D0 mesons into two charged mesons. The first is based on D0 → K+π– decays and their charge conjugates, from data recorded in 2011 and 2012. Owing to the Cabibbo–Kobayashi–Maskawa mechanism, the direct decay is suppressed relative to its Cabibbo-favoured counterpart. However, the final state can also be reached through mixing of the D0 meson into its antimeson, followed by the favoured decay D̄0 → K+π–.
These two components and their interference are distinguished through analysis of the decay-time structure – comparison of the structure for D0 and D̄0 decays measures CP violation. The results give the best measurements to date of the mixing parameters in this system and are consistent with no CP violation at an unprecedented level of sensitivity (LHCb 2013a).
The second measurement is based on decays into a pair of kaons or a pair of pions and uses data that were recorded in 2011. The asymmetry between the mean lifetimes measured in D0 and D̄0 decays is related to a parameter, AΓ, which is the asymmetry between the inverse effective lifetimes of decays to the specific final state. It is a measurement of so-called indirect CP violation. The results for the two final states are AΓ(KK) = (–0.35±0.62±0.12) × 10–3 and AΓ(ππ) = (0.33±1.06±0.14) × 10–3 (LHCb 2013b). This is the first time that a search for indirect CP violation in charm mesons has reached a sensitivity of better than 10–3.
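The quantity AΓ can be written down directly from its description in the text, as the asymmetry of the inverse effective lifetimes; the sign convention in this sketch is an assumption:

```python
def a_gamma(tau_d0, tau_d0bar):
    """A_Gamma built from the effective lifetimes tau of D0 and anti-D0
    decays to the same final state: the asymmetry of the inverse
    lifetimes (effective widths). The sign convention is illustrative."""
    gamma, gamma_bar = 1.0 / tau_d0, 1.0 / tau_d0bar
    return (gamma - gamma_bar) / (gamma + gamma_bar)

# Equal effective lifetimes (no indirect CP violation) give A_Gamma = 0
print(a_gamma(0.4101, 0.4101))
```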
The combination of previous measurements performed by the Heavy Flavor Averaging Group hinted at potentially nonzero values for the parameters of CP violation in D0 mixing, |q/p| and φ. As the figure shows, the new results from LHCb do not support this indication. However, they provide extremely stringent limits on the underlying parameters of charm mixing, thereby constraining the room for physics beyond the Standard Model.
The international Daya Bay collaboration has announced new results, including its first data on how neutrino oscillation varies with neutrino energy, which allow a measurement of the mass splitting between different neutrino mass states. The mass splitting sets the frequency of neutrino oscillation, while the mixing angles set the amplitude; both are crucial for understanding the nature of neutrinos.
The Daya Bay experiment, which is run by a collaboration of more than 200 scientists from six regions and countries, is located close to the Daya Bay and Ling Ao nuclear power plants, 55 km north-east of Hong Kong. It measures neutrino oscillation using electron antineutrinos created by six powerful nuclear reactors. As the antineutrinos travel up to 2 km to the underground detectors, some transform into another type and therefore apparently disappear. The rate at which they transform is the basis for measuring the mixing angle, while the mass splitting is determined by studying how the rate of transformation depends on the antineutrino energy.
Daya Bay’s first results were announced in March 2012 and established an unexpectedly large value for the mixing angle θ13 – the last of three long-sought neutrino mixing angles. The new results, which were announced at the XVth International Workshop on Neutrino Factories, Super Beams and Beta Beams (NuFact2013) in Beijing, give a more precise value – sin2 2θ13 = 0.090±0.009. The improvement in precision is a result both of having more data to analyse and of having the additional measurements on how the oscillation process varies with neutrino energy.
The KamLAND reactor experiment in Japan and solar-neutrino experiments have previously measured the mass splitting Δm²₂₁ by observing, respectively, the disappearance of electron antineutrinos from reactors some 160 km from the detector and the disappearance of electron neutrinos from the Sun. The long-baseline experiments MINOS in the US and Super-Kamiokande and T2K in Japan have determined the effective mass splitting |Δm²μμ| using muon neutrinos. The Daya Bay collaboration has now measured the magnitude of the mass splitting |Δm²ee| to be (2.54±0.20) × 10–3 eV².
The result establishes that electron-neutrino oscillation involves all three mass states, with a splitting consistent with that measured using muon neutrinos by MINOS. Precision measurements of the energy dependence should further the goal of establishing the ordering of the three neutrino mass states.
The ILC site evaluation committee of Japan has announced the result of the assessment of the two candidate sites for an International Linear Collider (ILC). In a press conference held at the University of Tokyo on 23 August, the committee recommended the Kitakami mountains in the Iwate and Miyagi prefectures as the preferred location.
The search for an appropriate candidate site for the construction of an ILC in Japan has been ongoing since 1999, with more than 10 candidates announced in 2003. In 2010, the list was further reduced to two: Kitakami in the north-east of Japan’s main island and Sefuri in Kyushu, on the south-west island. The process of assessing the two remaining candidates from a scientific point of view began in January this year.
A site-evaluation committee of eight members was formed within Japan. In addition, two sub-committees of 16 technical experts and 12 socio-environmental experts were created separately to provide expertise on issues such as geological conditions, environmental impact, possible problems during construction and the social infrastructure of each candidate site.
After more than 300 hours of meetings, the site-evaluation committee made a tentative choice in early July. This choice was then submitted to and reviewed by an international review committee, which recognized that the process to choose the site had been conducted with great care and that the selected site has excellent geological conditions for tunnelling and stability.
• For more information, see the Japanese ILC Strategy Council website http://ilc-str.jp/.
A stunning image of the nearby Andromeda galaxy (M31) captured by the Subaru Telescope’s Hyper Suprime-Cam (HSC) has demonstrated the instrument’s capability to fulfil its goal of using the ground-based telescope for a large-scale survey of the universe. The combination of a large mirror, wide field of view and sharp imaging represents a major step into a new era of observational astronomy and will contribute to answering questions about the nature of dark energy and dark matter. The image marks a successful stage in the HSC’s commissioning process, which involves checking all of its capabilities before it is ready for open use.
The Subaru Telescope, which saw first light in 1999, is an 8.2-m optical-infrared telescope at the summit of Mauna Kea, Hawaii, and is operated by the National Astronomical Observatory of Japan (NAOJ). The HSC – which was installed on the telescope in August last year – substantially increases the field of view beyond that available with the present instrument, the Subaru Prime Focus Camera, Suprime-Cam. The 3-tonne, 3-m-high HSC mounted at the prime focus contains 116 innovative, highly sensitive CCDs. Its 1.5°-diameter field of view is seven times that of Suprime-Cam and, combined with the 8.2-m primary mirror, enables the high-resolution images that will underpin what will be the largest-ever galaxy survey.
First conceived of in 2002, the HSC Project was established in 2008. The major research partners are NAOJ, the Kavli Institute for the Physics and Mathematics of the Universe, the School of Science at the University of Tokyo, KEK, Academia Sinica Institute of Astronomy and Astrophysics and Princeton University, with collaborators from industry, Hamamatsu Photonics KK, Canon Inc. and Mitsubishi Electric Corporation.
Follow-up observations of a recent short-duration gamma-ray burst (GRB) provide the strongest evidence yet that these elusive bursts result from the merger of two neutron stars. The evidence is in the detection with the Hubble Space Telescope (HST) of a new kind of stellar blast – a kilonova.
During the 1990s, the detection of thousands of GRBs by the Burst and Transient Source Experiment (BATSE) revealed two bumps in the distribution of their duration. GRBs were therefore classified as being of either short or long duration, with a dividing line at 2 s. The origin of these brief flashes of gamma rays remained mysterious until the “Rosetta stone” burst, GRB 030329 (CERN Courier September 2003 p15). A supernova explosion was found to be associated with this bright, relatively nearby burst of 29 March 2003, proving that long-duration GRBs result from core collapse in massive stars. The collapse of the core forms a black hole, which powers a pair of relativistic jets that drill their way through the remains of the dying star and produce an energetic flash of gamma rays (CERN Courier June 2013 p12).
So what is the origin of the short-duration GRBs? Are they really of a different nature? The favoured hypothesis is that they are produced by the merger of two neutron stars, or a neutron star and a black hole (CERN Courier December 2005 p20). Theorists expect such mergers to produce neutron-rich radioactive isotopes, whose decay within days would lead to a transient infrared source. Such a hypothetical transient is called a kilonova because its brightness is about a thousand times that of a typical stellar nova, but is still 10 to 100 times less bright than a supernova explosion.
A team of astronomers led by Nial Tanvir of the University of Leicester now claims to have detected the first kilonova associated with the short GRB 130603B. The burst was detected on 3 June by the Burst Alert Telescope on the Swift spacecraft. The subsequent detection of an optical afterglow allowed the team to pinpoint the location of this genuine short GRB, which lasted only about 0.2 s. The burst occurred in a known galaxy at a redshift of z = 0.356, an ideal target for the sharp vision of the Hubble Space Telescope (HST).
Two HST observations were made: one nine days after the burst and the second after 30 days. While no transient source was detected in visible light, the earlier near-infrared image has a point source at the position of the burst’s afterglow, which is no longer present in the later observation. Furthermore, the brightness of this source is significantly in excess of the extrapolation of the afterglow decay to nine days after the burst. This discrepancy reveals the presence of an additional component that Tanvir and his team suggest is the expected kilonova. The time delay, the infrared brightness and the absence of emission in visible light are all consistent with recent calculations for the emission of a kilonova.
If the infrared transient observed by the HST is correctly interpreted, this would be a new milestone in the understanding of GRBs. It would confirm that short GRBs are indeed produced by the merger of two compact stellar objects ejecting neutron-rich radioactive elements decaying in a kilonova blast. This would also be good news for searches for gravitational-wave signals from the merger of compact objects. Detecting the kilonova transient associated with a gravitational-wave signal would allow the location and distance of the source to be obtained, even in the absence of a detectable short GRB when the gamma-ray emission is pointing away from the Earth.
Numerous astronomical observations indicate that about one quarter of the energy content of the universe is made up of a mysterious substance known as dark matter. The Planck collaboration recently measured this fraction precisely, at 26.8%, slightly greater than the previous value from nine years of observations by the Wilkinson Microwave Anisotropy Probe (WMAP). Dark matter, which is five times more abundant than baryonic matter, provides compelling evidence for new physics and could be made of a new particle not present in the Standard Model. Theories beyond the Standard Model, such as supersymmetric models or theories with extra dimensions, suggest promising candidates and naturally predict so-called weakly interacting massive particles (WIMPs), which are stable or have lifetimes longer than the age of the universe.
There are several complementary strategies to detect dark matter. The ATLAS and CMS experiments at the LHC search for such particles produced in proton–proton collisions. Indirect searches, for example by the AMS-02 or IceCube detectors, aim at detecting the products of dark-matter annihilation in cosmic rays.
Because dark-matter particles are expected to be abundant in the Galaxy, with an energy density of about 0.3 GeV/c2/cm3 at the location of the Sun, the most direct strategy is to look for their interactions in laboratory-based detectors. In general, it is possible to study spin-independent WIMP–nucleon interactions – which scale with the square of the target’s mass number, A – or spin-dependent couplings to unpaired nucleons in the target nucleus. Because of their nonrelativistic Maxwellian velocity distribution with a typical speed of around 220 km/s and because the WIMPs interact significantly only with nuclei (and not with the electrons), the expected signal is a featureless exponential nuclear-recoil spectrum. The recoil energies depend on the mass of the WIMP and on the target material and are typically of the order of a few tens of kilo-electron-volts.
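The “few tens of kilo-electron-volts” scale follows from elastic-scattering kinematics. A minimal sketch, assuming the standard head-on recoil expression E_R,max = 2μ²v²/m_N and approximating the nuclear mass as 0.9315·A GeV:

```python
def max_recoil_keV(m_wimp_gev, mass_number, v_kms=220.0):
    """Maximum nuclear-recoil energy, E_R = 2 mu^2 v^2 / m_N, for
    elastic WIMP-nucleus scattering (head-on collision), in keV."""
    c_kms = 299_792.458                        # speed of light, km/s
    m_nucleus = 0.9315 * mass_number           # nuclear mass in GeV (approx.)
    mu = m_wimp_gev * m_nucleus / (m_wimp_gev + m_nucleus)  # reduced mass
    beta = v_kms / c_kms
    return 2.0 * mu**2 * beta**2 / m_nucleus * 1e6          # GeV -> keV

# A 100 GeV/c2 WIMP on xenon (A ~ 131) at 220 km/s gives a few tens of keV
print(f"{max_recoil_keV(100, 131):.1f} keV")
```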
Because the expected interaction rates are small, a sensitive WIMP detector needs to feature a large target mass, an ultralow background and a low energy threshold. In addition, it should allow the distinction of the nuclear-recoil signal (from WIMPs and also from background neutrons) from the overabundant electronic-recoil background from γ and β radiation.
The most sensitive dark-matter detector to date is XENON100, which is operated by the XENON collaboration and situated at the Italian Laboratori Nazionali del Gran Sasso (LNGS), under about 1.3 km of rock that provides a natural shield from cosmic rays. The experiment searches for WIMP interactions in a target of 62 kg of liquid xenon. The noble gas xenon is cooled to around –90°C to bring it to the liquid state with a density of around 3 g/cm3. Its high mass number, A, of around 130 makes it one of the heaviest of all target materials for dark-matter detection.
XENON100 is operated as a dual-phase time-projection chamber (TPC), as figure 1 illustrates. Particle interactions excite the liquid xenon, leading to prompt scintillation light, and also ionize the target atoms. A uniform electric field causes the ionization electrons to drift away from the interaction site to the top of the TPC. Here a strong electric field extracts them into the xenon-gas phase above the liquid. Subsequent scattering on the gas atoms leads to signal amplification and a secondary scintillation signal, which is directly proportional to the ionization extracted. Both the prompt and secondary scintillation light are detected by two arrays of low-radioactivity photomultipliers (PMTs), which are installed above and below the cylindrical target of around 30 cm height and 30 cm diameter (figure 2). The PMTs are immersed in the liquid and gaseous xenon to achieve the highest-possible light-detection efficiency and therefore the lowest threshold. The 3D position of the interaction vertex is obtained by combining the time difference between the prompt and the secondary scintillation signal with the hit pattern of the localized secondary signal on the array of 98 PMTs above the target. The number of secondary signals defines the event multiplicity.
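The depth coordinate of the vertex follows directly from the drift time. A minimal sketch – the drift-velocity value here is an assumption typical for liquid xenon, not an official XENON100 number:

```python
# Minimal sketch of the depth reconstruction in a dual-phase TPC. The
# transverse (x, y) position would come from the hit pattern of the
# secondary (S2) signal on the top PMT array.
DRIFT_VELOCITY_MM_PER_US = 1.73   # liquid xenon, illustrative value

def depth_mm(t_s2_us, t_s1_us):
    """Interaction depth below the liquid surface, from the delay between
    the prompt (S1) and secondary (S2) scintillation signals."""
    return (t_s2_us - t_s1_us) * DRIFT_VELOCITY_MM_PER_US

# An S2 arriving 100 microseconds after the S1 means a vertex ~17 cm deep
print(f"{depth_mm(100.0, 0.0):.0f} mm")
```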
The detector was built from materials selected for their low intrinsic radioactivity. Thanks to its novel detector design – placing most radioactive components outside of a massive passive shield – and the self-shielding provided by the liquid xenon, XENON100 features the lowest published background of all dark-matter experiments. The self-shielding is exploited by selecting only events that interact with the inner part of the detector (“fiducialization”) and by rejecting all events that exhibit a coincident signal in the active veto, which is made of 99 kg of liquid xenon that surrounds the target. Because of their small cross-section, WIMPs will interact only once in the detector, so background can be reduced further by selecting single-scatter interactions with a charge-to-light ratio typical for the expected nuclear-recoil events.
In the summer of 2012, the XENON collaboration published results from a search for spin-independent WIMP–nucleon interactions based on 225 live days of data (XENON collaboration 2012). No indication for dark matter was found but the derived upper limits are the most stringent to date for WIMP masses above 7 GeV/c2. The same data have now been interpreted in terms of spin-dependent interactions and the results published recently (XENON collaboration 2013). This latest analysis requires knowledge of the axial-vector coupling and the nuclear structure of the two xenon isotopes with unpaired nucleons, 129Xe and 131Xe. Improved calculations were employed here, based on chiral effective field-theory currents. Compared with older calculations, these yield superior agreement between calculated and measured nuclear energy spectra (Menendez et al. 2012).
The specific nuclear structure of the relevant xenon isotopes leads to different sensitivities for the two extreme cases that are usually considered. For the case where WIMPs are assumed to couple to protons only, the new XENON100 limit is competitive with other results (figure 3). Indirect dark-matter searches looking for signals from the annihilation of WIMPs trapped in the Sun (which mainly consists of protons) are particularly sensitive to this channel. For the neutron-only coupling, XENON100 sets a new best limit for most masses, improving the previous constraints by more than an order of magnitude (figure 3).
While XENON100 continues to take science data at LNGS, the development of a larger liquid-xenon detector is well under way. XENON1T will be about 35 times larger than XENON100, with a TPC of around 100 cm in height and diameter. The aim is to reach a dark-matter sensitivity two orders of magnitude better than the current best value. This will probe a significant part of the theoretically favoured WIMP parameter space but will require the radioactive background of the new instrument to be 100 times lower than that of XENON100. The greatly increased liquid xenon target mass of more than two tonnes helps to achieve this goal.
The largest background challenge comes from uniformly distributed traces of radioactive radon (mainly 222Rn) and krypton (85Kr, present in natural krypton at a fraction of about 10–11) dissolved in the xenon, because the background from these isotopes cannot be reduced by target fiducialization. To achieve the background goals for XENON1T, the contamination of radon and krypton in the xenon filling will be reduced to below the level of one part in 1012 by careful material selection and surface treatment and by cryogenic distillation, respectively. Additionally, all of the construction materials for the detector are being carefully selected based on their intrinsic radioactivity using ultrasensitive germanium detectors. A few of the world’s most sensitive detectors are owned and operated by institutions in the XENON collaboration.
The XENON1T detector will be placed inside a large water shield to protect it from environmental radioactivity (figure 4). The water will be equipped with PMTs to tag muons via emission of Cherenkov light, because muon-induced neutrons could mimic WIMP signals. The construction of the water tank is underway in Hall B of LNGS and will be finished by the end of 2013. Together with the XENON1T service building, it will be the first visible landmark of the experiment underground. The other XENON1T systems – from detector and cryogenics to massive facilities for the storage and purification of xenon – are currently being designed, built, commissioned and tested at the various collaborating institutions. In particular, the challenges associated with building a TPC of 100 cm drift length, which will be the longest liquid xenon-based TPC ever, are being addressed with dedicated R&D set-ups.
Once the main underground facilities are erected, the XENON1T low-background cryostat – to contain the TPC and more than three tonnes of xenon – will be installed inside the water shield. The infrastructure for storage, purification and liquefaction has been designed to handle more than double the amount of xenon initially used in XENON1T. Its commissioning underground is expected to be completed by the summer of 2014. The timeline foresees commissioning of the full XENON1T experiment by the end of 2014 and the first data by early 2015. After two years of data-taking, XENON1T will reach a sensitivity of 2 × 10–47 cm2 for spin-independent WIMP–nucleon cross-sections at a WIMP mass of 100 GeV/c2. This is a factor of 100 better than the current best WIMP result from XENON100.
ALICE: on the trail of a new state of matter
After two periods of lead–lead collisions at the LHC, complemented by campaigns of proton–lead collisions, new perspectives are opening up for the understanding of matter at high temperature and density, conditions under which quantum chromodynamics predicts the existence of a quark–gluon plasma. Designed to withstand the high particle densities generated by heavy-ion collisions, the ALICE experiment has provided numerous measurements of the medium produced at the LHC, which are summarized here. A new element is that the large cross-section for so-called hard processes, such as the production of jets and heavy flavours, can be used to “see” inside the medium.
The dump of the lead beam in the early morning of 10 February this year marked the end of a successful and exciting first LHC running phase with heavy-ion beams. It started in November 2010 with the first lead–lead (PbPb) collisions at √sNN = 2.76 TeV per nucleon pair, when in one month of running the machine delivered an integrated luminosity of about 10 μb–1 for each experiment. In the second period a year later, the LHC’s heavy-ion performance exceeded expectations: the instantaneous luminosity reached more than 1026 cm–2 s–1 and the experiments collected about 10 times more integrated luminosity. A pilot proton–lead (pPb) run at √sNN = 5.02 TeV took place in September 2012, providing enough events for first surprises and publications. A full run followed in February, delivering 30 nb–1 of pPb collisions – precious reference data for the PbPb studies.
The ALICE experiment is optimized to cope with the large particle-densities produced in PbPb collisions and nothing was left unprepared for the first heavy-ion run in 2010. Nevertheless, immediately before the first collisions the tension was palpable until the first event displays appeared (figure 1). The image of the star-like burst with thousands of particles recorded by the time-projection chamber became an emblem for the accomplishment of a collaboration that had worked for 20 years on developing, building and operating the ALICE detector. With the arrival of a wealth of data, a new era for comprehension of the nature of matter at high temperature and density began, where QCD predicts that quark–gluon plasma (QGP) – a de-confined medium of quarks and gluons – exists.
Before the LHC started up, the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven was the most powerful machine of this kind, producing collisions between gold (Au) ions at a maximum energy of 200 GeV per nucleon pair. After 10 years of analysis – just before the first PbPb data-taking at the LHC – the experimental collaborations at RHIC came to the surprising conclusion that central AuAu collisions create droplets of an almost perfect, dense fluid of partonic matter that is 250,000 times hotter than the core of the Sun. Their results indicate that because of the strength of the colour forces the plasma of partons (quarks and gluons) produced in these collisions has not yet reached its asymptotically gas-like state and, therefore, it has been dubbed strongly interacting QGP (sQGP). This finding raised several questions. What would the newly created medium look like at the LHC? How much denser and hotter would it be? Would it still be a perfect liquid or would it be closer to a weakly coupled gas-like state? How would the abundantly produced hard probes be modified by the medium?
Denser, hotter, bigger
In the most central PbPb collisions at the LHC, the charged-particle density at mid-rapidity amounts to dN/dη ≈ 1600, which is about 2.1 times more per nucleon pair participating in the collision than at RHIC. Since the particles are also, on average, more energetic at the LHC, the transverse-energy density is about 2.5 times higher. This allows a rough estimate of the energy density of the medium that is produced. Assuming the same equilibration time of the plasma at RHIC and the LHC, the energy density has increased at the LHC by at least a factor of three, corresponding to an increase in temperature of more than 30% (CERN Courier June 2011 p17).
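The step from a factor of three in energy density to “more than 30%” in temperature follows from the T⁴ scaling of the energy density of an ultrarelativistic plasma:

```python
# The energy density of an ultrarelativistic plasma scales as T^4, so a
# factor-of-three increase in energy density raises the temperature by
# a factor of 3**(1/4), i.e. roughly one third.
ratio = 3 ** 0.25
print(f"temperature increase: {100 * (ratio - 1):.0f}%")
```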
A more accurate thermometer is provided by the spectrum of the thermal photons emitted by the plasma that reach the detector unscathed. Whereas counting inclusive charged particles is a relatively easy task, the thermal photons have to be arduously separated from a large background of photons from meson decays and photons produced by QCD processes in collisions with large momentum-transfer, pT. The thermal photons appear in the low-energy region of the direct-photon spectrum (pγT < 2 GeV/c) as an excess above the yield expected from next-to-leading order QCD and have an exponential shape (figure 2). The inverse slope of this exponential measured by ALICE gives a value for the temperature, T = 304±51 MeV, about 40% higher than at RHIC. In hydrodynamic models, this parameter corresponds to an effective temperature averaged over the time evolution of the reaction. The measured values suggest initial temperatures that are well above the critical temperature of 150–160 MeV.
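The “inverse slope” extraction can be sketched with a log-linear fit to an exponential spectrum, dN/dpT ∝ exp(−pT/T_eff). The data points below are synthetic, generated with T = 304 MeV purely to illustrate the procedure:

```python
import math

def inverse_slope(pt, yields):
    """Fit ln(yield) = a - pt/T by least squares and return the inverse
    slope T (effective temperature, same units as pt)."""
    n = len(pt)
    y = [math.log(v) for v in yields]
    sx, sy = sum(pt), sum(y)
    sxx = sum(v * v for v in pt)
    sxy = sum(a * b for a, b in zip(pt, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope

# Synthetic exponential spectrum with T = 0.304 GeV; the fit recovers it
pts = [0.5, 1.0, 1.5, 2.0]                       # pT in GeV/c
ys = [math.exp(-p / 0.304) for p in pts]
print(f"T_eff = {inverse_slope(pts, ys) * 1000:.0f} MeV")
```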
In the same way that astronomers determine the space–time structure of extended sources using Hanbury-Brown–Twiss optical intensity interferometry, heavy-ion physicists use 3D momentum correlation-functions of identical bosons to determine the size of the medium produced (the freeze-out volume) and its lifetime. In line with predictions from hydrodynamics, the volume increases between RHIC and the LHC by a factor of two and the system lifetime increases by 30% (CERN Courier May 2011 p6).
Perfect quantum liquids are characterized by a low shear-viscosity to entropy ratio, η/s, for which a lower limit of ℏ/4πkB is postulated. This property is directly related to the ability of the medium to transform spatial anisotropy of the initial energy density into momentum-space anisotropy. Experimentally, the momentum-space anisotropy is quantified by the Fourier decomposition of the distribution in azimuthal angle of the produced particles with respect to the reaction plane. The second Fourier coefficient, v2, is commonly denoted as elliptic flow. With respect to RHIC, v2 is found to increase in mid-central collisions by 30% (CERN Courier April 2011 p7). Calculations based on hydrodynamical models show that the v2 measured at the LHC is consistent with a low η/s, close to or challenging the postulated limit.
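The Fourier analysis in question can be sketched in a few lines: v2 is the average of cos 2(φ − Ψ) over the produced particles, with Ψ the reaction-plane angle. The toy event below samples angles from a distribution with a known v2 of 0.1 (an illustrative magnitude, not an ALICE measurement) and recovers it:

```python
import math
import random

def elliptic_flow(phis, psi_rp=0.0):
    """Second Fourier coefficient of the azimuthal distribution with
    respect to the reaction plane: v2 = <cos 2(phi - Psi_RP)>."""
    return sum(math.cos(2.0 * (p - psi_rp)) for p in phis) / len(phis)

# Toy event sample: accept-reject from dN/dphi ~ 1 + 2*v2*cos 2(phi - Psi).
random.seed(7)
v2_in, psi = 0.10, 0.0
envelope = 1.0 + 2.0 * v2_in          # maximum of the azimuthal distribution
phis = []
while len(phis) < 100_000:
    phi = random.uniform(0.0, 2.0 * math.pi)
    if random.uniform(0.0, envelope) <= 1.0 + 2.0 * v2_in * math.cos(2.0 * (phi - psi)):
        phis.append(phi)

v2_out = elliptic_flow(phis, psi)
```

In a real analysis the reaction-plane angle is not known a priori and must itself be estimated event by event, which is where most of the experimental subtlety lies.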
Further knowledge on the collective behaviour of the medium has been obtained from the spectral analysis of pions, kaons and protons. The so-called blast-wave fit can be used to determine the common radial expansion velocity, <βr>. The velocity measured by ALICE comes to about 0.65 and has increased by 10% with respect to RHIC.
In di-hadron azimuthal correlations, elliptic flow manifests as cosine-shaped modulations that extend over large rapidity ranges. At the LHC, for central collisions, more complicated structures become prominent. These can be quantified by higher-order Fourier coefficients. Hydrodynamical models can then relate them to fluctuations of the initial density distribution of the interacting nucleons. Wavy structures that were formerly discussed at RHIC, such as Mach-cones and soft ridges, have now found a simpler explanation. By selecting events with larger than average higher-order Fourier coefficients, it is possible to select and study certain initial-state configurations. “Event shape engineering” has therefore been born.
As discussed above, using hydrodynamical models, the basic parameters of the medium can be extrapolated in a continuous manner between the energies of RHIC and the LHC and they turn out to show a moderate increase. Although this might not seem to be a spectacular discovery, its importance for the field should not be underestimated: it marks a transition from data-driven discoveries to precision measurements that constrain model parameters.
Hard probes
What is new at the LHC is the large cross-section (several orders of magnitude higher than at RHIC) for so-called hard processes, e.g. the production of jets and heavy flavour. In these cases, the production is decoupled from the formation of the medium, so these quasi-external probes traversing the medium can be used for tomography measurements – in effect, to see inside the medium. Furthermore, they are well-calibrated probes because their production rates in the absence of the medium can be calculated using perturbative QCD. Hard probes open a new window for the study of the QGP through high-pT parton and heavy-quark transport coefficients, as well as the possible thermalization and recombination of heavy quarks.
High-pT partons are produced in hard interactions at the early stage of heavy-ion collisions. They are ideal probes because they traverse the medium and their yield and kinematics are influenced by its presence. The ability of a parton to transfer momentum to the medium is particularly interesting. Described by a transport parameter, it is related to the density of colour charges and the coupling of the medium: the stronger the coupling, the larger the transport coefficient and, therefore, the modification of the probe. Energy loss of partons in the medium is caused by multiple elastic-scattering and gluon radiation (jet quenching). This was first observed at RHIC in the suppression of high-pT particles with respect to the appropriately scaled proton–proton (pp) and proton–nucleus (pA) references (the nuclear-modification factor RAA) and from the disappearance of back-to-back particle correlations.
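The nuclear-modification factor itself is a simple ratio: the measured PbPb yield divided by the pp yield scaled by the average number of binary nucleon–nucleon collisions, Ncoll. A minimal sketch with purely illustrative numbers (not measured yields):

```python
def nuclear_modification_factor(yield_aa: float, n_coll: float, yield_pp: float) -> float:
    """R_AA = (PbPb yield) / (N_coll * pp yield). R_AA = 1 means no
    medium effect; R_AA < 1 means suppression (jet quenching)."""
    return yield_aa / (n_coll * yield_pp)

# Illustrative numbers: a pT bin where the binary-scaled expectation is
# 1000 * 0.35 = 350 counts but only 70 are observed -> R_AA = 0.2.
raa = nuclear_modification_factor(yield_aa=70.0, n_coll=1000.0, yield_pp=0.35)
```

In practice Ncoll is not measured directly but obtained from a Glauber-model calculation for each centrality class.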
At the LHC, rates are high at transverse energies where jets can be reconstructed above the fluctuations of the background-energy contribution from the underlying event. In particular, for jet transverse energies ET >100 GeV, the influence of the underlying event is relatively small, allowing robust jet measurements. The ATLAS and CMS collaborations – whose detectors have almost complete calorimetric coverage – were the first to report direct observation of jet-quenching via the di-jet energy imbalance at the LHC (CERN Courier January/February 2011 p6 and March 2011 p6). However, measurements of the suppression of inclusive single-particle production show that quenching effects are strongest for intermediate transverse momenta, 4 <pT <20 GeV/c, corresponding to parton pT values in the range around 6–30 GeV/c (CERN Courier June 2011 p17).
ALICE can approach this region – while introducing the smallest possible bias on the jet fragmentation – by measuring jet fragments down to low pT (pT >150 MeV/c). Although jet reconstruction recovers more of the original parton energy than single particles do, for the most central collisions the observed inclusive jet suppression is similar to that for single hadrons, with a jet RAA of 0.2–0.4 in the range 30 <pT,jet <100 GeV/c. Furthermore, no indication of energy redistribution within experimental uncertainties is observed from the ratios of jet yields with different cone sizes (CERN Courier June 2013 p8).
The suppression patterns are qualitatively and – to some extent – quantitatively similar for single hadrons and jets. This is best explained by partonic energy loss through radiation mainly outside the cone used by the jet-reconstruction algorithm, followed by in-vacuum (pp-like) fragmentation of the remnant parton. Before the LHC start-up, it was widely believed that jets are more robust objects, i.e. that jet quenching would soften their fragmentation without changing the total energy inside the cone. The study of jet fragmentation can still give insight into the details of the energy-loss mechanism, but the energy lost by the partons has to be searched for at large distances from the jet axis, where the background from the underlying event is large. Detailed studies of the momentum and angular distribution of the radiated energy – which require future higher-statistics jet samples – will provide more detailed information on the nature of the energy-loss mechanisms.
Heavy versus light
At the LHC, high-pT hadron production is dominated by gluon fragmentation. In QCD, quarks have a smaller colour-coupling factor than gluons, so the energy loss for quarks is expected to be smaller than for gluons. In addition, for heavy quarks with pT < mq, small-angle gluon radiation is reduced by the so-called "dead-cone effect", which should reduce the effect of the medium further. ALICE has measured the nuclear-modification factor for the charm mesons D0, D+ and D*+ for 2 <pT <16 GeV/c (figure 3). For central PbPb collisions, a strong in-medium energy loss is observed, with a suppression factor of 0.2–0.34 in the range pT >5 GeV/c. At lower transverse momenta there is a tendency for the suppression of D0 mesons to decrease (CERN Courier June 2012 p15, January/February 2013 p7).
The suppression is almost as large as that observed for charged particles that are dominated by pions from gluon fragmentation. This observation favours models that explain heavy-quark energy loss by additional mechanisms, such as in-medium hadron formation and dissociation or partial thermalization of heavy quarks through re-scatterings and in-medium resonant interactions. Such a scenario is further corroborated by measurement of the D-meson elliptic-flow coefficient, v2. For semi-central PbPb collisions, a positive flow is observed in the range 2 <pT <6 GeV/c indicating that the interactions with the medium transfer information on the azimuthal anisotropy of the system to charm quarks.
The suppression of the J/ψ and other charmonia states, as a result of short-range screening of the strong interaction, was one of the first signals predicted for QGP formation and has been observed both at CERN’s Super Proton Synchrotron and at RHIC. At the LHC, heavy quarks are abundantly produced – about 100 cc̄ pairs per event in central PbPb collisions. If these charm quarks roam freely in the medium and the charm density is high enough, they can recombine to form quarkonia states, competing with the suppression mechanism.
Indeed, in the most central collisions a lower J/ψ suppression than at RHIC is observed. Also, a smaller suppression is observed at low pT compared with high pT and it is lower at mid-rapidity than in the forward direction (CERN Courier March 2012 p14). In line with regeneration models, suppression is reduced in regions where the charm-quark density is highest. In semi-central PbPb collisions, ALICE sees a hint of nonzero elliptic flow of the J/ψ. This also favours a scenario in which a significant fraction of J/ψ particles are produced by regeneration. The significance of these results will be improved with future heavy-ion data-taking.
Surprises from pPb reference data
The analysis of pPb collisions allows the ALICE collaboration to study initial- and final-state effects in cold nuclear matter, establishing a baseline for the interpretation of the heavy-ion results. However, the results from the data taken in the pilot run have already shown that pPb collisions hold surprises of their own. First, from the analysis of two-particle angular correlations in high-multiplicity pPb collisions, the CMS collaboration observed a ridge structure elongated in the pseudo-rapidity direction (CERN Courier January/February 2013 p9). Using low-multiplicity events as a reference, the ALICE and ATLAS collaborations found that this ridge structure actually has a perfectly symmetrical counterpart, back-to-back in azimuth (CERN Courier March 2013 p7). The amplitude and shape of the observed double-ridge structure are similar to the modulations caused by elliptic flow in PbPb collisions, indicating collective behaviour in pPb. Other models attribute the effect to gluon saturation in the lead nucleus or to parton-induced final-state effects. These effects and their similarity to PbPb phenomena are intriguing. Their further investigation and theoretical interpretation will shed new light on the properties of matter at high temperatures and densities.
If pPb collisions do produce a QGP-like medium, its extension is expected to be much smaller than the one produced in PbPb collisions. However, the relevant quantity is not size but the ratio of the system size to the mean-free path of partons. If it is high enough, hydrodynamic models can explain the observed phenomena. If the observations can be explained by coherent effects between strings formed in different proton–nucleon scatterings, we must understand to what extent these effects contribute also to PbPb collisions. While the LHC takes a pause, the ALICE collaboration is looking forward to more exciting results from the existing data.
More than 100 years have passed since the discovery of cosmic rays by Victor Hess in 1912, yet there are still no signs of decreasing interest in the study of the properties of charged leptons, nuclei and photons from outer space. On the contrary, the search for a better understanding and clarification of the long-standing questions – the origin of ultrahigh-energy cosmic rays, the composition as a function of energy, the existence of a maximum energy, the acceleration mechanisms, the propagation and confinement in the Galaxy, the extra-galactic origin, etc. – is more pertinent than ever. In addition, ambitious new experimental initiatives are starting to produce results that could cast light on more recent challenging questions, such as the nature of dark matter, the apparent absence of antimatter in the explored universe and the search for new forms of matter.
The 33rd International Conference on Cosmic Rays (ICRC 2013) – The Astroparticle Physics Conference – took place in Rio de Janeiro on 2–9 July and provided a high-profile platform for the presentation of a wealth of results from solar and heliospheric physics, through cosmic-ray physics and gamma-ray astronomy to neutrino astronomy and dark-matter physics. A full session was devoted to the presentation of new results from the Alpha Magnetic Spectrometer, AMS-02. Sponsored by the US Department of Energy and supported financially by the relevant funding and space agencies in Europe and Asia, this experiment was deployed on the International Space Station (ISS) on 19 May 2011 (figure 1). The results, which were presented for the first time at a large international conference, are based on the data collected by AMS-02 during its first two years of operation on the ISS.
AMS-02 is a large particle detector by space standards, built using the concepts and technologies developed for experiments at particle accelerators but adapted to the extremely hostile environment of space. Measuring 5 × 4 × 3 m, it weighs 7.5 tonnes. Reliability, performance and redundancy are the key features for the safe and successful operation of this instrument in space.
The main scientific goal is to perform a high-precision, large-statistics and long-duration study of cosmic nuclei (from hydrogen to iron and beyond), elementary charged particles (protons, antiprotons, electrons and positrons) and γ rays. In particular, AMS-02 is designed to measure the energy- and time-dependent fluxes of cosmic nuclei to an unprecedented degree of precision, to understand better the propagation models, the confinement mechanisms of cosmic rays in the Galaxy and the strength of the interactions with interstellar media. A second high-priority research topic is an indirect search for dark-matter signals based on looking at the fluxes of particles such as electrons, positrons, protons, antiprotons and photons.
Another important item on the list of priorities – which will be addressed in future – is the search for cosmic antimatter nuclei. This variety of matter is apparently absent in the region of the universe currently explored but – according to the Big Bang theory – it should have been highly abundant in the early phases of the universe. Last but not least, AMS-02 will explore the possible existence of new phenomena or new forms of matter, such as strangelets, which this state-of-the-art instrument will be in a unique position to unravel.
The AMS-02 detector was designed, built and is now operated by a large international collaboration led by Nobel laureate Samuel C C Ting, involving researchers from institutions in America, Europe and Asia. The detector components were constructed and tested in research centres around the world, with large facilities being built or refurbished for this purpose in China, France, Germany, Italy, Spain, Switzerland and Taiwan. The final assembly took place at CERN, benefiting from the laboratory’s significant expertise and experience in the technologies of detector construction. The instrument was then tested extensively with cosmic rays and particle beams at CERN, in the Maxwell electromagnetic compatibility chamber and the large-space thermal simulator at ESA-ESTEC in Noordwijk, as well as in the large facilities at the NASA Kennedy Space Center in the US.
The construction of AMS-02 has stimulated the development of important and novel technologies in advanced instrumentation. These include the first operation in space of a large two-phase CO2 cooling system for the silicon tracker and the two-gas (Xe-CO2) system for the operation of the transition-radiation detector, as well as the overall thermal system. The latter must protect the experiment from the continual changes of temperature that the detector undergoes at every position on its orbit, which affect various parts of the detector subsystems in a manner that is not easy to reproduce. The use of radiation-tolerant fast electronics, a sophisticated trigger, redundant systems for data acquisition, associated protocols for communications with the NASA on-board hardware and a high-rate downlink system for the real-time transmission of data from AMS-02 to the NASA ground facilities, are a few examples that illustrate the complexity and the kind of challenges that the project has had to meet.
The operation of the Payload Operation and Control Center (POCC) at CERN, 24 hours a day and 365 days a year, in permanent connection with the ISS and the NASA Johnson Space Center, has also been a major endeavour. Fast processing of data on reception at the Science Operation Center at CERN has been a formidable tour de force, resulting in the timely reconstruction of 36.5 × 10⁹ cosmic rays during the period 19 May 2011 – August 2013.
After almost 28 months of operation, AMS-02 – with its 300,000 electronics channels, 650 computers, 1100 thermal sensors and 400 thermostats – has worked flawlessly. To maintain performance and reliability, three space-flight simulators operate continuously at CERN, at the NASA Johnson Space Center and at the NASA Marshall Space Flight Center, where they test and certify the numerous upgrades of the software packages for the on-board computers and the communication interfaces and protocols.
First results
At ICRC 2013, the AMS collaboration presented data on two important areas of cosmic-ray physics. One addresses the fluxes, ratios and anisotropies of leptons, while the other concerns charged cosmic nuclei (protons, helium, boron, carbon). The following presents a brief summary of the results and of some of the most critical experimental challenges.
In the case of electrons and positrons, efficient instrumental handles for the suppression of the dominant backgrounds are: the minimal amount of material in the transition-radiation and time-of-flight detectors; the magnet location, separating the transition-radiation detector and the electromagnetic calorimeter; and the capability to match the value of the particle momentum reconstructed in the nine tracker layers of the silicon spectrometer with the value of the energy of the particle showering in the electromagnetic calorimeter.
The performance of the transition-radiation detector results in a proton-rejection factor larger than 10³ at 90% positron efficiency in the rigidity range of interest. The performance of the calorimeter, with its 17 radiation lengths, provides a rejection factor better than 10³ for protons with momenta up to 10³ GeV/c. The combination of the two leads to an overall proton-rejection factor of 10⁶ for most of the energy range under study.
A precision measurement of the positron fraction in primary cosmic rays, based on a sample of 6.8 million positron and electron events in the energy range 0.5–350 GeV – collected during the initial 18 months of operation on the ISS – was recently published and presented at the conference (Aguilar et al. 2013 and Kounine ICRC 2013). The positron-fraction spectrum (figure 2) does not exhibit fine structure, and the highly precise determination shows that the positron fraction increases steadily from 10 to 250 GeV, while from 20 to 250 GeV the slope decreases by an order of magnitude. The AMS-02 measurements have extended the energy ranges covered by recent experiments to higher values and reveal a different behaviour in the high-energy region of the spectrum.
AMS-02 has also extended the measurements of the positron spectrum to 350 GeV – that is, above the energy range of determinations by other experiments. The individual electron and positron spectra, with the E³ multiplication factor, and the combined spectrum were presented at the conference (Schael, Bertucci ICRC 2013). Figure 3 shows the electron spectrum, which appears to follow a smooth, slowly falling curve up to 500 GeV. The positron spectrum, by contrast, rises up to 10 GeV, flattens between 10 and 30 GeV, and rises again above 30 GeV (figure 4). For the time being, it is not obvious that the models or simple parametric estimations currently used to describe the ratio spectrum can also describe the behaviour of the individual electron and positron spectra.
Using a larger data sample of around 9 million electrons and positrons, the collaboration has performed a preliminary measurement of the combined flux of electrons and positrons in the energy range 0.5–700 GeV (Bertucci ICRC 2013). The data do not show significant structures, although a change in the spectral index with increasing lepton energy is clearly observed. The positron flux increases with energy, and a promising approach to identifying the physics origin of this behaviour lies in determining the size of a possible anisotropy, arising in primary sources, in the arrival directions of positrons and electrons measured in galactic co-ordinates. AMS-02 has obtained a limit on the dipole-anisotropy parameter d <0.030 at the 95% confidence level for energies above 16 GeV (Casaus ICRC 2013).
Turning to cosmic nuclei, the first AMS-02 measurements of the proton and helium fluxes were presented at the conference (Haino, Choutko ICRC 2013). The rigidity ranges were 1 GV – 1.8 TV for protons and 2 GV – 3.2 TV for helium (figures 5 and 6). In both cases, the experiment observed gradual changes of the fluxes owing to solar modulation, as well as drastic changes after large solar flares. Otherwise, the spectra are fairly smooth and do not exhibit breaks or fine structures of the kind reported by other recent experiments.
The ratio of the boron to carbon fluxes is particularly interesting because it carries important information about the production and propagation of cosmic rays in the Galaxy. Boron nuclei are produced mainly by spallation of heavier primary elements present in the interstellar medium, whereas primary cosmic rays – such as carbon and oxygen – are predominantly produced at the source. Precision measurements of the boron-to-carbon ratio therefore provide important input for determining the characteristics of the cosmic-ray sources by deconvoluting the propagation effects from the measured data. The capability of AMS-02 to do multiple independent determinations of the electric charges of the cosmic rays allows a separation of carbon from boron with a contamination of less than 10⁻⁴. Figure 7 presents a preliminary measurement of the boron-to-carbon ratio in the kinetic-energy interval 0.5–670 GeV/n (Oliva ICRC 2013).
For the future
After nearly 28 months of successful operation, the results presented at ICRC 2013 already give a taste of the scientific potential of the AMS-02 experiment. In the near future, the measurements sketched in this article will extend the energy or rigidity coverage and the study of systematic uncertainties will be finalized. The experiment will measure the fluxes of more cosmic nuclei with unprecedented precision to constrain further the size and energy dependence of the underlying background processes.
High on the priority list for AMS-02 is the measurement of the antiproton flux and the antiproton-to-proton ratio – a relevant and most sensitive quantity for disentangling, among the possible sources, those that induce the observed increase of the positron flux with energy. With the growing data sample and a deeper assessment of the systematic uncertainties, the searches for cosmic antinuclei will become extremely important, as will the search for unexpected new signatures.
By the end of the decade, AMS-02 will have collected more than 150 × 10⁹ cosmic-ray events. In view of what has been achieved so far, it is reasonable to be fairly confident that this massive amount of new and precise data will contribute significantly to a better understanding of the ever-exciting and lively field of cosmic rays.