Like all of the LHC experiments, LHCb relies on a tremendous amount of CPU power to select interesting events out of the many millions that the LHC produces every second. Indeed, a large part of the ingenuity of the LHCb collaboration goes into developing trigger algorithms that can sift out the interesting physics from a sea of background. The cleverer the algorithms, the better the physics, but often the computational cost is also higher. About 1500 powerful computing servers in an event filter farm are kept 100% busy when LHCb is taking data and still more could be used.
However, this enormous computing power is used less than 20% of the time when averaged over the entire year. This is partly because of the annual shutdown, so preparations are under way to use the power of the filter farm during that period for offline processing of data – the issues to be addressed include feeding the farm with events from external storage. The rest of the idle time comes from the gaps between the periods when protons are colliding in the LHC (the “fills”); these gaps typically last between two and three hours, during which no collisions take place and therefore no computing power is required.
This raises the question of whether it is somehow possible to borrow the CPU power of the idle servers and use it during physics runs for an extra boost. Such thoughts led to the idea of “deferred triggering”: storing events that cannot be processed online on the local disks of the servers and then, when the fill is over, processing them on the now idle servers.
The LHCb Online and Trigger teams quickly worked out the technical details and started the implementation of a deferred trigger early this year. As often happens in online computing, the storing and moving of the data is the easy part, while the true challenge lies in the monitoring and control of the processing, robust error-recovery and careful bookkeeping. After a few weeks, all of the essential pieces were ready for the first successful tests using real data.
Depending on the ratio of the fill length to the inter-fill time, up to 20% of CPU time can be deferred, limited only by the available disk space (currently around 200 TB). Buying that amount of CPU power would correspond to an investment of hundreds of thousands of Swiss francs. Instead, this enterprising idea has allowed LHCb to boost the performance of its trigger, providing time for more complex algorithms (such as the online reconstruction of KS decays) that extend the physics reach of the experiment.
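As a rough illustration of where a figure like 20% comes from, the sketch below combines the fill length, the gap length and the local disk space into a single deferrable fraction. The model and the event-size and input-rate values are illustrative assumptions for the sake of the example, not LHCb parameters.

```python
# Illustrative model of a deferred trigger: events written to disk during a
# fill are processed during the following inter-fill gap. All numbers below
# are round illustrative values, not actual LHCb parameters.

def deferrable_fraction(t_fill_h, t_gap_h, disk_tb, event_size_mb, input_rate_khz):
    """Fraction of the trigger workload that can be shifted to the inter-fill gap."""
    # Time limit: the deferred share of one fill+gap cycle's work must fit in the gap.
    time_limited = t_gap_h / (t_fill_h + t_gap_h)
    # Disk limit: the deferred share of one fill's data must fit on the local disks.
    data_per_fill_tb = input_rate_khz * 1e3 * t_fill_h * 3600 * event_size_mb / 1e6
    disk_limited = disk_tb / data_per_fill_tb
    return min(time_limited, disk_limited, 1.0)

# ~10 h fills, ~2.5 h gaps, 200 TB of disk, 60 kB events at an assumed 100 kHz
# input rate: here the gap length is the limiting factor and the result is ~0.2.
print(deferrable_fraction(10, 2.5, 200, 0.06, 100))
```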
After a flying start, with the first stable beams at the new energy of 4 TeV on 5 April, the LHC successfully operated with 1380 bunches per beam – the maximum planned for 2012 – on 18 April. In the days that followed, the machine reached a record peak luminosity of about 5.6 × 10^33 cm^–2 s^–1, with a bunch intensity of 1.4 × 10^11 protons per bunch and a new highest stored energy of 120 MJ per beam.
As it entered a two-day machine-development period on 21–22 April, almost 1 fb^–1 of data had been delivered to the experiments, a feat that took until June in 2011. The machine development focused on topics relevant for the 2012 physics-beam operation and was followed by a five-day technical stop, the first of the year.
The restart from 27 April onwards was slowed down by several technical faults that led to low machine availability and the ramp back up in intensity took longer than initially planned. LHC operation was further hampered by higher than usual beam losses in the ramp and squeeze. These required time to investigate the causes and to implement mitigation measures.
On 10 May the machine began running again with 1380 bunches and a couple of days later saw one of the year’s best fills, lasting for 13 hours and delivering an integrated luminosity of 120 pb^–1 to ATLAS and CMS. By 15 May, after careful optimization of the beams in the injectors, the luminosity was back up to pre-technical-stop levels. The aim now is for steady running accompanied by a gentle increase in bunch intensity in order to deliver a sizeable amount of data in time for the summer conferences.
The Reactor Experiment for Neutrino Oscillations (RENO) has performed a definitive measurement of the neutrino-oscillation mixing angle, θ13, by observing the disappearance of electron-antineutrinos emitted from a nuclear reactor, with a significance of 4.9 σ.
RENO detects antineutrinos from six reactors, each with a thermal power output of 2.8 GW, at the Yonggwang Nuclear Power Plant in Korea. The reactors are almost equally spaced along a line about 1.3 km long and the experiment uses two identical detectors located at 294 m and 1383 m on either side of the centre of this line, beneath hills that provide, respectively, 120 m and 450 m of water-equivalent rock overburden to reduce the cosmic-ray backgrounds. This symmetric arrangement of reactors and detectors helps to minimize the complexity of the measurement. RENO is the first experiment to measure θ13 – the smallest neutrino-mixing angle and the last to be measured – with two identical detectors.
In the 229-day data-taking period from 11 August 2011 to 26 March 2012, the far (near) detector observed 17,102 (154,088) electron-antineutrino candidate events with a background fraction of 5.5% (2.7%). During this period, all six reactors were operating mainly at full power, with two reactors being off for a month each for fuel replacement.
The two identical antineutrino detectors allow a relative measurement through a comparison of the observed neutrino rates. Measuring the far-to-near ratio of the reactor neutrinos in this way can considerably reduce several systematic errors. The relative measurement is independent of correlated uncertainties and helps in minimizing uncorrelated reactor uncertainties.
Each detector comprises four layers. At the core lies the target volume of 16.5 tonnes of liquid scintillator doped with gadolinium. An electron-antineutrino can interact with a free proton in the scintillator, ν̄e + p → e+ + n. The positron from this inverse β-decay annihilates almost immediately, giving a prompt signal. The neutron wanders through the target volume before being captured by the gadolinium – giving a delayed signal. The delayed coincidence between the positron and neutron signals provides the distinctive signature of inverse β-decay.
The central target volume is surrounded by a 60 cm layer of liquid scintillator without gadolinium, which serves to catch γ-rays escaping from the target volume, thus increasing the detection efficiency. Outside this γ-catcher, a 70 cm buffer-layer of mineral oil shields the inner detectors from radioactivity in the surrounding rocks and in the 354 photomultiplier tubes (10-inch) that are installed on the inner wall of the buffer container. The outermost veto layer consists of 1.5 m of pure water, which serves to identify events coming from the outside through their Cherenkov radiation and to shield against ambient γ-rays and neutrons from the surrounding rocks. Both detectors are calibrated using radioactive sources and cosmic-ray induced background samples.
Based on the number of events observed at the near detector and assuming no oscillation, RENO finds a clear deficit at the far detector, with a far-to-near ratio R = 0.920 ± 0.009 (stat.) ± 0.014 (syst.). The value of sin^2 2θ13 is determined from a χ^2 fit with pull terms on the uncorrelated systematic uncertainties. The number of events in each detector after background subtraction is compared with the expected number, based on the neutrino flux, the detection efficiency, neutrino oscillations and the contribution of each reactor to each detector as determined by the baselines and reactor fluxes. The best-fit value obtained is sin^2 2θ13 = 0.113 ± 0.013 (stat.) ± 0.019 (syst.), which excludes the no-oscillation hypothesis at 4.9 σ.
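The deficit can be read with the standard two-flavour survival probability for reactor antineutrinos – a textbook expression, quoted here for orientation rather than taken from the RENO paper:

\[
P(\bar{\nu}_e \to \bar{\nu}_e) \simeq 1 - \sin^{2}2\theta_{13}\,\sin^{2}\!\left(\frac{1.267\,\Delta m^{2}_{31}\,[\mathrm{eV}^{2}]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right).
\]

For Δm^2_31 of about 2.4 × 10^–3 eV^2 and typical reactor-antineutrino energies of a few MeV, the oscillation term is close to its maximum at the far-detector baseline of about 1.4 km, which is why the far-to-near comparison is so sensitive to θ13.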
The RENO collaboration consists of about 35 researchers from Seoul National University, Chonbuk National University, Chonnam National University, Chung Ang University, Dongshin University, Gyeongsang National University, Kyungpook National University, Pusan National University, Sejong University, Seokyeong University, Seoyeong University and Sungkyunkwan University.
Following a competitive call for tender, CERN has signed a contract with the Wigner Research Centre for Physics in Budapest for an extension to CERN’s data centre. Under the new agreement, the Wigner Centre will host CERN equipment that will substantially extend the capabilities of Tier-0 of the Worldwide LHC Computing Grid (WLCG) and provide the opportunity to implement solutions for business continuity. The contract is initially until 31 December 2015, with the possibility of up to four one-year extensions thereafter.
The WLCG is a global system organized in tiers, with the central hub being the Tier-0 at CERN. Eleven major Tier-1 centres around the world are linked to CERN via dedicated high-bandwidth links. Smaller Tier-2 and Tier-3 centres, linked via the internet, bring the total number of computer centres involved to more than 140 in 35 countries. The WLCG serves a community of some 8000 scientists working on LHC experiments, allowing seamless access to distributed computing and data-storage facilities.
The Tier-0 at CERN currently provides some 30 PB of data storage on disk and includes the majority of the 65,000 processing cores in the CERN Computer Centre. Under the new agreement, the Wigner Research Centre will extend this capacity with 20,000 cores and 5.5 PB of disk storage, figures that are set to double after three years.
Astronomers have gathered the most direct evidence yet of a supermassive black hole shredding a star that wandered too close to it. By following the event over hundreds of days they could identify for the first time the nature of the victim, a helium-rich stellar core.
Supermassive black holes – weighing from a few million to a thousand million times more than the Sun – are known to exist at the centre of most galaxies and, like volcanoes, they can be active or dormant. In active galactic nuclei (AGN) they receive a continuous supply of gas that sustains their activity, whereas in quiet galaxies – such as the Milky Way – they remain dormant for lack of matter to accrete. However, as soon as a star ventures too close, a resting black hole can suddenly awaken in a flare of radiation. Because this happens on average only about once every 10,000 years per galaxy, astronomers have so far detected only a few such events.
Numerical simulations of this kind of event show how the star gets stretched and eventually disrupted in the vicinity of the black hole by the strong gradient of the gravitational field, which pulls more strongly on the near side of the star than on its far side. The observed emission comes from heated gas that is on the verge of falling into the black hole, while the rest of the stellar material forms a long tail of gas that is ejected at high speed.
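The disruption condition is conveniently expressed through the tidal radius – the standard order-of-magnitude estimate, given here for orientation rather than taken from the paper, is

\[
r_{\mathrm{t}} \approx R_{*}\left(\frac{M_{\mathrm{BH}}}{M_{*}}\right)^{1/3},
\]

where R_* and M_* are the radius and mass of the star and M_BH is the mass of the black hole. A star whose orbit brings it inside r_t is torn apart; for a black hole of a few million solar masses this radius lies outside the event horizon, so the flare is visible, whereas for much heavier black holes the tidal radius for a sun-like star falls inside the horizon and the star would be swallowed whole without a visible flare.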
The new tidal disruption event, published in Nature by Suvi Gezari of the Johns Hopkins University in Baltimore and collaborators, is the first to be discovered in the visible range: all previous ones were detected in X-rays (CERN Courier April 2004 p12, CERN Courier July/August 2011 p14). The Pan-STARRS1 telescope on the summit of Haleakala in Hawaii first detected the optical transient PS1-10jh on 31 May 2010. The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) scans the entire night sky for all kinds of transient phenomena, including supernovae. The event was discovered independently as a near-ultraviolet transient by NASA’s Galaxy Evolution Explorer (GALEX) satellite on 17 June 2010. Subsequent observations by both instruments traced its rise in intensity until 12 July 2010 and its slow decay until September 2011. The flare amplitude in the ultraviolet was found to be much too high for an AGN outburst and its brightness rise too slow to be consistent with a supernova explosion.
By comparing the observed evolution of the brightness with the results of numerical simulations of tidal-disruption events, Gezari and colleagues not only confirmed this scenario but could even constrain the internal structure of the incoming star. They find that the disrupted star cannot have been too centrally concentrated and hence suggest that it was not a solar-type star but rather a fully convective star or a degenerate stellar core. The latter interpretation is corroborated by a spectral study of the source, which shows broad helium-emission lines but no Balmer lines from hydrogen. This suggests that the star’s outer hydrogen layer had already been lost before the encounter with the black hole.
A likely scenario is that the star was once a red giant that had lost its hydrogen envelope in one or several previous, less dramatic, fly-bys of the black hole, leaving just a helium-rich stellar core for the final encounter. The stellar core would have only about a quarter of the mass of the Sun, and the observations imply a black hole of about 3 million solar masses – similar to the one at the centre of the Milky Way. This dramatic event was witnessed in a galaxy 2.7 thousand million light-years away. Pan-STARRS is monitoring thousands of galaxies for such events and is expected to detect one every two years or so. This will offer new opportunities to probe the effects of general relativity and may even determine whether a supermassive black hole is spinning or not.
When heavy nuclei collide at high energies, a high-density colour-deconfined state of strongly interacting matter is expected to form. According to lattice QCD calculations, the confinement of coloured quarks and gluons into colourless hadrons vanishes under the conditions of high energy-density and temperature that are reached in these collisions and a phase transition to a quark–gluon plasma (QGP) occurs.
The LHC, operating with heavy ions, is nowadays the frontier machine for exploring the QGP experimentally, but such studies began 25 years ago with fixed-target experiments at the Alternating Gradient Synchrotron at Brookhaven and the Super Proton Synchrotron at CERN. The field entered the collider era in 2000 with Brookhaven’s Relativistic Heavy-Ion Collider (RHIC). Experiments there showed that initial hard partonic collisions produce energetic quarks and gluons that interact with the hot and dense QGP, probing its properties and, more generally, those of the strong interaction in an extended many-body system. The abundant production of these “hard probes” constitutes one of the leading opportunities that have opened up at the LHC – where heavy-ion collisions reach a centre-of-mass energy nearly 14 times that of RHIC – and their extensive study is a leading feature of the heavy-ion programmes of the ALICE, ATLAS and CMS experiments.
Heavy quark probes
High-momentum partons are created in hard-scattering processes that occur in the early stage of the nuclear collision. They subsequently traverse the hot QGP, losing energy as they interact with its constituents. This energy loss is expected to occur via inelastic processes (gluon radiation induced in the medium, or radiative energy loss, analogous to bremsstrahlung in QED) and via elastic processes (collisional energy loss).
The massive c and b quarks (m_c ≈ 1.5 GeV/c^2, m_b ≈ 5 GeV/c^2) are useful probes of these energy-loss mechanisms. In QCD, quarks have a smaller colour coupling than gluons, so the energy loss should be smaller for quarks than for gluons. At LHC energies, hadrons containing light flavours originate mainly from gluons, so charmed mesons provide an experimental tag for a parent parton with the lower, quark colour-charge. In addition, the “dead-cone effect” should suppress small-angle gluon radiation for heavy quarks with moderate energy-over-mass values, i.e. for c and b quarks with momenta up to about 10 GeV/c.
Models based on parton energy loss describe well the measured suppression of high-momentum charmed mesons
The nuclear modification factor, RAA, is one of the observables that are sensitive to the interaction of hard partons with the medium. This quantity is defined as the ratio of particle production measured in nucleus–nucleus (AA) interactions to that expected on the basis of the proton–proton (pp) spectrum, scaled by the average number of binary nucleon–nucleon collisions occurring in the collisions of the nuclei. Loss of energy in the medium leads to a suppression of hadrons at moderate-to-high transverse momentum (pt > 2 GeV/c), so RAA < 1. In the range pt < 10 GeV/c, where the masses of the heavy c and b quarks are not negligible with respect to their momenta, the properties of parton energy-loss described above mean that an increase in RAA (i.e. a smaller suppression) is expected when going from the mostly gluon-originated light-flavour hadrons (such as pions) to D and B mesons with c quarks and b quarks, respectively: RAA(π) < RAA(D) < RAA(B).
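Written out for a given transverse-momentum interval, the standard definition reads

\[
R_{AA}(p_{\mathrm{t}}) = \frac{\mathrm{d}N_{AA}/\mathrm{d}p_{\mathrm{t}}}{\langle N_{\mathrm{coll}}\rangle\;\mathrm{d}N_{pp}/\mathrm{d}p_{\mathrm{t}}},
\]

so that RAA = 1 corresponds to a nucleus–nucleus collision behaving like an incoherent superposition of ⟨Ncoll⟩ independent nucleon–nucleon collisions, while RAA < 1 signals suppression.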
The measurement and comparison of these different probes provides a unique test of how the energy loss of the partons depends on their colour charge and mass. Because these dependences are predicted by QCD, their experimental verification is a crucial step for the understanding of the properties of the strongly interacting medium.
Experiments at RHIC reported a strong suppression, by a factor of 4–5 at pt > 5 GeV/c, for light-flavour hadrons in central collisions of gold nuclei at a centre-of-mass energy √s_NN = 200 GeV. The suppression of heavy-flavour hadrons, measured inclusively from their decay electrons by the PHENIX and STAR experiments, turned out to be similar to that of pions and generally stronger than most expectations based on radiative energy loss. This striking observation raised high expectations for the separate measurements of charm and beauty hadrons in the collisions of lead ions at √s_NN = 2.76 TeV at the LHC. Such a study is favoured by the abundant production yields (e.g. about 50 charm quark–antiquark pairs per central collision, according to perturbative QCD calculations) and by the design of the LHC experiments, all of which have excellent capabilities for the detection of heavy flavour.
In the ALICE experiment, the charmed mesons D0, D+ and D*+ are reconstructed in the central barrel through their decays to charged hadrons, namely D0 → K–π+, D+ → K–π+π+ and D*+ → D0π+, followed by D0 → K–π+. The signal is extracted from the invariant-mass distributions of the combinations of charged tracks reconstructed in the inner tracking system (ITS) and the time-projection chamber (TPC). The high-multiplicity environment of lead–lead (PbPb) interactions, where about 1600 primary charged particles per unit of rapidity are produced for head-on collisions, is particularly challenging for the exclusive reconstruction of D-meson decays because of the large combinatorial background. However, the signal-to-background ratio can be enhanced by requiring the separation of the D0 and D+ decay vertices from the interaction vertex. This separation, typically of a few hundred microns, is resolved thanks to the high-spatial-precision hits measured by the six-layer silicon ITS. Background is reduced further using the excellent particle-identification capabilities provided by the measurement of the specific energy deposit in the TPC and of the particle time-of-flight (TOF) from the interaction vertex to the TOF detector. The D-meson yields are corrected for detector effects and for the contribution from B-meson decays. The nuclear modification factor RAA is then computed using as the pp reference the cross-section measured at 7 TeV centre-of-mass energy and scaled – via perturbative QCD calculations – to the PbPb energy of 2.76 TeV (ALICE collaboration 2012a).
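To illustrate the principle of the invariant-mass reconstruction (a generic two-body calculation, not ALICE’s actual software), a D0 → K–π+ candidate can be formed from two charged tracks as in the sketch below; the track momenta are arbitrary example values.

```python
import math

# PDG masses in GeV/c^2
M_K, M_PI = 0.493677, 0.139570

def invariant_mass(p1, m1, p2, m2):
    """Invariant mass of a two-track candidate from momenta (px, py, pz) and mass hypotheses."""
    e1 = math.sqrt(m1**2 + sum(c * c for c in p1))
    e2 = math.sqrt(m2**2 + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2)**2 - (px**2 + py**2 + pz**2))

# Example: one kaon and one pion track (momenta in GeV/c). A real analysis
# would combine all opposite-charge pairs, apply vertex-displacement and
# particle-identification cuts, and look for a peak in the resulting mass
# distribution near the D0 mass of about 1.865 GeV/c^2.
print(invariant_mass((1.2, 0.4, 3.0), M_K, (0.8, -0.1, 2.1), M_PI))
```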
Figure 1 shows the nuclear modification factor measured by ALICE in the transverse momentum interval 6 < pt < 12 GeV/c, as a function of the collision centrality for the three species of D meson (ALICE collaboration 2012b). The centrality of the collision is determined from the measured particle multiplicity and it is quantified by the average number of participant nucleons, 〈Npart〉, i.e. nucleons that suffered at least one inelastic scattering with a nucleon of the other nucleus. The more central the collision, the larger the number of participant nucleons. The observed suppression increases (RAA decreases) with increasing centrality – as expected because of the larger, hotter and denser medium created in more central collisions – reaching a factor of about four for head-on collisions.
Figure 2 shows the average RAA of the three D-meson species as a function of the transverse momentum for the most central collisions (ALICE collaboration 2012b). To study the expected dependences of the energy loss on colour charge and parton mass, the nuclear modification factor is compared with those of charged hadrons measured by ALICE and those of non-prompt J/ψ mesons (from B decays) measured by the CMS experiment for pt > 6.5 GeV/c (CMS collaboration 2012). The charged-hadron nuclear modification factor is dominated by light flavours and coincides with that of charged pions above pt ≈ 5 GeV/c. The comparison shows that the average nuclear modification factor of the D mesons is close to that of the charged hadrons. However, considering that the systematic uncertainties of the D mesons are not fully correlated with pt, there is an indication that RAA(D) > RAA(charged). The suppression of J/ψ from B decays is clearly weaker than that of charged hadrons, while the comparison with D mesons is not conclusive and requires more differential and precise measurements of the transverse-momentum dependence.
Apart from final-state effects, which are related to the formation of a hot and deconfined medium, initial-state effects are also expected to influence the nuclear modification factor, because it is nuclei rather than nucleons that collide. In particular, the modification of the parton distribution functions (PDFs) of the nucleons in the nuclei affects the initial hard-scattering probability and, thus, the yields of energetic partons, including heavy quarks. In the kinematic range relevant for charm production at LHC energies, the main effect is nuclear shadowing, which induces a reduction in the yields of D mesons at low momentum. As shown in figure 3, a perturbative QCD calculation supplemented with a phenomenological parameterization of the nuclear modification of the PDFs indicates that the shadowing-induced effect on RAA is limited to ±15% for pt > 6 GeV/c. This suggests that the strong suppression observed in the high-pt data is a final-state effect, arising predominantly from energy loss of c quarks in the medium.
Theoretical models based on parton energy loss describe well the measured suppression of high-momentum charmed mesons. Figure 3 displays the comparison with the data of some selected models that compute, within the same framework, the suppression of particles with heavy and light flavour. A thorough validation of the ingredients of the models, which differ from one another, requires a systematic comparison, extended to higher momentum, over a range of collision centralities and for a variety of particle species, in particular beauty hadrons. This will eventually provide important constraints on the energy density of the hot QGP formed at the LHC.
In conclusion, the first ALICE results on the nuclear modification factor RAA for charm hadrons in PbPb collisions at a centre-of-mass energy √s_NN = 2.76 TeV indicate strong in-medium energy loss for charm quarks. There is a possible indication, which is not fully significant with the current level of experimental uncertainties, that RAA(D) > RAA(charged). The precision of the measurements will be improved in the future, using the large sample of PbPb collisions recorded in 2011. In addition, proton–lead collisions will provide insight into possible initial-state effects, which may play an important role, mainly in the low-momentum region.
Three-dimensional silicon sensors are opening a new era in radiation imaging and in radiation-hard, precise particle tracking. Their revolutionary processing concept brings the collecting electrodes close to the carriers generated by ionizing particles and extends the sensitive volume to within a few microns of the physical edge of the sensor. Since the summer of 2011, devices as large as 4 cm^2 with more than 100,000 cylindrical electrodes have become available commercially, thanks to the vision and effort of a group of physicists and engineers in the 3DATLAS and ATLAS Insertable B-Layer (IBL) collaborations, who worked together with the original inventors and several processing laboratories in Europe and the US. This unconventional approach enabled a rapid transition from the R&D phase to industrialization and has opened the way to the use of more than 200 such sensors in the first upgrade of the pixel system of the ATLAS experiment, in 2014.
Radiation effects
Silicon sensors with a 3D design were proposed 18 years ago at the Stanford Nanofabrication Facility (SNF) to overcome the poor signal efficiency of gallium-arsenide sensors, a limitation that also affects silicon sensors after exposure to heavy non-ionizing radiation. The microscopic and macroscopic properties of irradiated silicon were, and still are, the subject of extensive study by several R&D groups, and this work has led to the identification of stable defects generated after exposure to neutral or charged particles. The presence of such defects makes the use of silicon as a detector challenging in the highly exposed inner trackers of high-energy-physics experiments. The studies have shown that while some of these defects act as generation centres, others act as traps for the moving carriers generated by incident particles produced in the primary collisions of accelerator beams. The three most severe macroscopic consequences for silicon tracking detectors are an increase in the leakage current and in the effective doping concentration, both linearly proportional to the fluence, together with a severe signal loss that arises from trapping.
Apart from applications in high-energy physics, 3D sensor technology has potential uses in medical, biological and neutron imaging
However, other studies have found evidence that the spatial proximity of the p+ and n+ electrodes of the p–i–n junction not only allows the junction to be depleted at a reduced bias voltage but also allows the highest useful electric field to be applied homogeneously across the junction, reducing the trapping probability of the generated carriers once radiation-induced defects have formed. This leads to less degradation of the signal efficiency – defined as the ratio of the signal amplitude after irradiation to that before – with increasing radiation fluence.
What now makes 3D radiation sensors one of the most radiation-hard designs is that the distance between the p+ and n+ electrodes can be tailored to match the best signal efficiency, signal amplitude and signal-to-noise or signal-to-threshold ratio to the expected non-ionizing radiation fluence. Figure 1 indicates how this is possible by comparing planar sensors – where the electrodes are implanted on the top and bottom surfaces of the wafer – with 3D ones. The sketch on the left shows how the depletion region between the two electrodes, L, grows vertically to approach the substrate thickness, Δ. This means that there is a direct geometrical correlation between the generated signal and the extent of the depleted volume. By contrast, in 3D sensors (figure 1, right) the electrode distance, L, and the substrate thickness, Δ, are decoupled, because the depletion region grows laterally between electrodes whose separation is much smaller than the substrate thickness. In this case, the full depletion voltage, which depends on L and grows with the radiation-induced space charge, can be reduced dramatically.
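To leading order – a simple parallel-plate estimate that ignores the cylindrical geometry of the 3D electrodes – the full depletion voltage scales with the square of the inter-electrode distance:

\[
V_{\mathrm{dep}} \approx \frac{q\,N_{\mathrm{eff}}\,L^{2}}{2\,\varepsilon_{0}\varepsilon_{\mathrm{Si}}},
\]

where N_eff is the effective space-charge concentration, which grows with the radiation fluence. Reducing L from a planar drift distance of around 200 μm to an inter-electrode spacing of around 50 μm (illustrative values) therefore lowers V_dep by more than an order of magnitude for the same N_eff.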
For the same substrate thickness – before or at moderate irradiation – the amount of charge generated by a minimum-ionizing particle is the same for both types of sensor. However, because the charge-collection distance in 3D sensors is much shorter – and high electric fields as well as saturation of the carrier velocity can be achieved at low bias-voltage – the times for charge collection can be much faster. Apart from making applications that require high speeds easier, this property can counteract the charge–trapping effects expected at high radiation levels. A 3D sensor reaching full depletion at less than 10 V before irradiation can operate at just 20 V and provide full tracking efficiency. After the heavy irradiation expected for the increased LHC luminosity, the maximum operational bias-voltage can be limited to 200–300 V. This has a crucial impact on the complexity of the biasing and cooling systems needed to keep the read-out electronics well below the temperatures at which heat-induced failures occur. By comparison, the voltages required to extract a useful signal when L increases, for example in planar sensors, can be as high as 1000 V.
These 3D silicon sensors are currently manufactured on standard 4-inch float-zone, p-type, high-resistivity wafers, using a combination of two well-established industrial technologies: micro-electro-mechanical systems (MEMS) and very-large-scale integration (VLSI). VLSI is used in microelectronics and in the fabrication of traditional silicon microstrip and pixel trackers in high-energy-physics experiments, as well as in the CCDs used in astronomy and in many kinds of commercial cameras, including those in mobile phones. A unique aspect of the MEMS technology is the use of deep reactive-ion etching (DRIE) to form deep and narrow apertures within the silicon wafer using the so-called “Bosch process”, in which etching steps alternate with the deposition of a protective polymer layer; thermal-diffusion steps then drive in dopants to form the n+ and p+ electrodes.
Currently, two main 3D-processing options exist. The first, called Full3D with active edges, is based on the original idea. It is fabricated at SNF at Stanford and is now also available at SINTEF in Oslo. In this option, column etching for both types of electrodes is performed all through the substrate from the front side of the sensor wafer. At the same time, active ohmic trenches are implemented at the edge to form so-called “active edges”, whereas the underside is oxide-bonded to a support wafer to preserve mechanical robustness. This requires extra steps to attach and remove the support wafer when the single sensors are connected to the read-out electronics chip. An additional feature of this approach is that the columns and trenches are completely filled with poly-silicon (figure 2, left).
The second approach, called “double-side with slim fences”, is a double-side process developed independently, in slightly different versions, by the Centro Nacional de Microelectrónica (CNM) in Barcelona and the Fondazione Bruno Kessler (FBK) in Trento. In both cases junction columns are etched from the front side and ohmic columns from the back side, without a support wafer; in CNM sensors the columns do not pass through the entire wafer thickness but stop a short distance from the opposite surface (figure 2, centre). This was also the case for the first prototypes of FBK sensors, but the technology was later modified to allow the columns to pass all of the way through (figure 2, right).
While all of the processing steps that follow electrode etching and filling are identical for a 3D sensor and a planar silicon sensor – so hybridization with front-end electronics chips and general sensor handling are the same – the overall processing time is longer, which limits the production volume that a single manufacturer can deliver at a given time. For this reason, to speed up the transition from R&D to industrialization, the four 3D-silicon-processing facilities (SNF, SINTEF, CNM and FBK) agreed to combine their expertise for the production of the required volume of sensors for the first ATLAS upgrade, the IBL. Based on the test results obtained in 2007–2009, which demonstrated comparable performance between the different 3D sensors both before and after irradiation, the collaboration decided in June 2009 to go for a common design and a joint processing effort, aiming at full mechanical compatibility and equivalent functional performance of the 3D sensors while maintaining the specific flavours of the different technologies. Figure 3 demonstrates the success of this strategy by showing a compilation of signal efficiencies versus fluence (in neutron equivalent per square centimetre) for samples from different manufacturers after exposure to heavy irradiation. The measured points fit the theoretical parameterization curve within errors.
All of these 3D-processing techniques were used successfully to fabricate sensors compatible with the FE-I4 front-end electronics of the ATLAS IBL. FE-I4 is the largest front-end electronics chip ever designed and fabricated for pixel-vertex detectors in high-energy physics; it covers an area of 2.2 × 1.8 cm^2 with 26,880 pixels, each measuring 250 × 50 μm^2. In the IBL, these sensors will record the charged particles emerging from the primary vertices of proton–proton collisions, just 3.2 cm from the LHC beam. Each 3D sensor uses two n electrodes tied together by an aluminium strip to cover the 250 μm pixel length, which means that each sensor has more than 100,000 holes.
Currently, more than 60 wafers of the kind shown in figure 4, made with double-sided processing – which does not require support-wafer removal and has 200 μm slim fences rather than active edges – are at the IZM laboratory in Berlin, where single sensors will be connected to front-end electronics chips using bump-bonding techniques to produce detector modules for the IBL. Each wafer hosts eight such sensors, 62% of which have the quality required for use in the IBL.
What’s next?
Following the success of the collaborative effort of the 3DATLAS R&D project, the industrialization of active-edge 3D sensors with even higher radiation hardness and a lighter structure is the next goal, in preparation for the LHC High-Luminosity Upgrade beyond 2020. Before that, 3D sensors will be used in the ATLAS Forward Physics project, where sensors will need to be placed as close to the beam as possible to detect diffractive protons at 220 m on either side of the interaction point. Apart from applications in high-energy physics – where microchannels can also be etched underneath integrated-electronics substrates for cooling purposes – 3D sensor technology is used to etch through-silicon vias (TSVs) in vertical integration and to fabricate active-edge sensors with planar central electrodes, and it has potential uses in medical, biological and neutron imaging. The well defined volume offered by the 3D geometry is also ideal for microdosimetry at the cellular level.
A programme of experiments based on innovative detectors aims to take dark-matter detection to a new level of sensitivity.
Dark energy and dark matter together present one of the most challenging mysteries of the universe. While explaining the former seems to be within the reach only of cosmologists and astrophysicists, the latter appears to be accessible also to particle physicists. One of the most recent and innovative experiments designed for the direct detection of dark-matter particles is DarkSide, a prototype for which – DarkSide 10 – is currently being tested in the Gran Sasso National Laboratory in central Italy. The first detector for physics – DarkSide 50 – is scheduled for commissioning underground in December this year.
Astronomical observations suggest that dark matter is made of a new species of non-baryonic particle, which must lie outside the Standard Model. These particles must also be neutral, quite massive, stable and weakly interacting – hence the acronym WIMPs, for weakly interacting massive particles. One of the most promising candidates for a dark-matter particle is the neutralino, the lightest particle that is predicted in theories based on supersymmetry. However, constraints from recent measurements by experiments at CERN’s LHC suggest that WIMPs may have a different origin.
Several potential background sources can mimic the interaction between dark-matter particles and nuclei.
A powerful way of detecting WIMPs in the local galactic halo directly is to look for the nuclear recoils produced when they collide with ordinary matter in a sensitive detector. However, WIMP-induced nuclear recoils are difficult to detect. Theory indicates that they would be extremely rare, with some 10 events expected per year in 100 kg of liquid argon for a WIMP mass of 50 GeV/c^2 and a WIMP–nucleon cross-section of 10^–45 cm^2. They would also produce energy deposits of no more than about 100 keV. Moreover, several potential background sources can mimic the interaction between dark-matter particles and nuclei.
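The quoted rate can be checked at the order-of-magnitude level with a zero-momentum-transfer estimate that neglects the nuclear form factor and any detection threshold; the halo parameters below are conventional assumptions, not values taken from the DarkSide collaboration.

```python
# Order-of-magnitude spin-independent WIMP rate on argon, zero-momentum-transfer limit.
# Standard-halo assumptions: local density 0.3 GeV/cm^3, mean speed ~220 km/s.

RHO = 0.3            # GeV/cm^3, local dark-matter density (assumed)
V_MEAN = 2.2e7       # cm/s, typical WIMP speed (assumed)
M_CHI = 50.0         # GeV/c^2, WIMP mass
SIGMA_N = 1e-45      # cm^2, WIMP-nucleon cross-section
A, M_N, M_A = 40, 0.939, 37.2   # argon mass number, nucleon and nucleus masses (GeV/c^2)
N_AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.15e7

mu_n = M_CHI * M_N / (M_CHI + M_N)            # WIMP-nucleon reduced mass
mu_A = M_CHI * M_A / (M_CHI + M_A)            # WIMP-nucleus reduced mass
sigma_A = SIGMA_N * A**2 * (mu_A / mu_n)**2   # coherent enhancement on the nucleus

n_chi = RHO / M_CHI                  # WIMP number density (1/cm^3)
targets = 100e3 / 39.95 * N_AVOGADRO # argon nuclei in 100 kg
rate = n_chi * V_MEAN * sigma_A * targets * SECONDS_PER_YEAR

# Prints roughly 5, the same order of magnitude as the ~10 events quoted above.
print(f"{rate:.0f} events per year in 100 kg")
```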
Sources of background
In a typical target, there are three main sources of background at energies up to tens of kilo-electron-volts: natural β and γ radioactivity, which induces electron recoils; α decays on the surface of the target in which the daughter nucleus recoils into the target and the α particle remains undetected; and nuclear recoils produced by the elastic scattering of background neutrons. This latter process is nearly indistinguishable from the signals expected for WIMPs and requires an efficient neutron veto in the apparatus.
DarkSide is a new experiment that uses novel techniques to suppress background sources as much as possible, while also understanding them well. The programme centres on a series of detectors of increasing mass, each making possible a convincing claim for the detection of dark matter based on the observation of a few well characterized nuclear-recoil events in an exposure of several years. The design concept involves a two-phase, liquid-argon time-projection chamber (LAr-TPC) in which the energy released in WIMP-induced nuclear recoils can produce both scintillation and ionization. Arrays of photomultiplier tubes at the bottom and top of the cylindrical active volume detect the scintillation light. A pair of novel transparent high-voltage electrodes and a field cage provide a uniform drift field of about 1 kV/cm to extract the ionization produced. A reflective, wavelength-shifting lining renders the scintillation light from the argon (wavelength 128 nm) visible to the photomultipliers.
In a two-phase argon TPC, rejection of background comes from three independent discrimination handles: pulse-shape analysis of the direct liquid-argon scintillation signal (S1); the ratio of the ionization produced in an event to the scintillation, where the ionization is read out by extracting the electrons from the liquid into the gaseous argon phase, in which they are accelerated and emit light through electroluminescence (S2); and reconstruction of the event’s location in 3D using the TPC. The z co-ordinate of the event is determined from the time delay between S2 and S1, while the transverse co-ordinates are determined from the distribution of the S2 light across the array of photomultiplier tubes.
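As a simple illustration of the depth reconstruction, the sketch below converts the S1–S2 time difference into a position; the drift velocity is an assumed round number, as the exact value depends on the field and the argon purity.

```python
# Reconstruct the event depth from the S1-S2 time difference in a two-phase argon TPC.
DRIFT_VELOCITY_MM_PER_US = 2.0   # assumed value for liquid argon at a drift field of ~1 kV/cm

def event_depth_mm(t_s1_us, t_s2_us):
    """Depth of the interaction below the liquid surface, from the electron drift time."""
    return DRIFT_VELOCITY_MM_PER_US * (t_s2_us - t_s1_us)

# An S2 arriving 150 microseconds after S1 corresponds to a depth of about 300 mm.
print(event_depth_mm(0.0, 150.0))
```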
As in other experiments searching for rare events, DarkSide’s detectors will be constructed from materials with low intrinsic radioactivity. In particular, the experiment uses underground argon with extremely low quantities of 39Ar, which is present in atmospheric argon at a level of about 1 Bq/kg as a result of the interaction of cosmic rays, primarily with 40Ar. The DarkSide collaboration has developed processes to extract argon from underground gas wells, where the proportion of 39Ar is low. A particularly good source of underground argon is the Kinder Morgan Doe Canyon Complex in Colorado, where the CO2 natural gas extracted contains about 600 ppm of argon. The DarkSide collaboration has operated an extraction facility at the Kinder Morgan site since February 2010; to date it has extracted some 90 kg of underground, 39Ar-depleted argon and subsequently distilled 23 kg of it to about 99.99% purity. (The throughput is about 1 kg/day, with 99% efficiency.) Studies of the residual 39Ar content of the distilled gas with a low-background detector at the Kimballton Underground Research Facility, Virginia, give an upper limit equivalent to 0.6% of the 39Ar content of atmospheric argon.
It is not only the argon that has to have low intrinsic radioactivity. Nuclear recoils produced by energetic neutrons that scatter only once in the active volume form a background that is, on an event-by-event basis, indistinguishable from dark-matter interactions. Neutrons capable of producing these recoil backgrounds are created by radiogenic processes in the detector material. In detectors made from clean materials, the dominant source of the radiogenic neutrons is typically the photodetectors, so ultralow background photodetectors are another important goal for DarkSide. A long-term collaboration with the Hamamatsu Corporation has resulted in the commercialization of 3-inch photomultiplier tubes with a total γ activity of around only 60 mBq per tube, with a further 10-fold reduction foreseen in the near future.
To measure and exclude the neutron background produced by cosmic-ray muons, the DarkSide TPC will be deployed within an active neutron veto based on liquid scintillator, which will in turn be deployed within 1000 m^3 of water in a tank 10 m high and 11 m in diameter that was previously used for the Borexino Counting Test Facility at Gran Sasso. The liquid-scintillator neutron veto is a unique feature of the DarkSide design and is filled with ultrapure, boron-loaded organic scintillator that has been distilled using the purification system of the Borexino experiment. The water serves as a Cherenkov detector to veto muons. Monte Carlo simulations suggest that with this combined veto system, the number of neutron events generated by cosmic rays at the depth of the Gran Sasso Laboratory should be negligible, even for exposures of the order of tonne-years.
The DarkSide programme will follow a staged approach. The collaboration has been operating DarkSide 10, a prototype detector with a 10 kg active mass, in the underground laboratory at Gran Sasso since September 2011. This has been a valuable test bed during the construction of the veto system. It has allowed the light-collection, high-voltage and TPC field structures – and the data-acquisition and particle-discrimination analysis systems – to be optimized using γ and americium-beryllium sources. The first physics detector in the programme, DarkSide 50, should be deployed inside the completed veto system in the Gran Sasso Laboratory by the end of 2012. Looking forward to the second generation, upgrades to the underground argon plants are planned, and the nearly completed veto system has been designed to accommodate a DarkSide-G2 detector, which will have a fiducial mass of 3.5 tonnes.
Towards the end of July 1958, at a house in the hills south-east of Rome, three Italian scientists discussed key ideas that were to form the foundations of the European Space Agency (ESA). Edoardo Amaldi, who had been instrumental in the establishment of CERN four years previously, was with Giorgio Salvini – whose house it was – and Gino Crocco, who was Goddard Professor of Jet Propulsion at Princeton in the US. During their conversation, the old friends discussed how European countries, in particular Italy, could become involved in space research. Only the previous October, the Soviet Union had opened up the space age with the launch of the first artificial satellite, Sputnik 1. This had been followed in January 1958 by Explorer 1, launched by the US. So what could Europe do?
As Salvini recalls, the conversation was “long and animated”. While Crocco was sceptical about what Italy could achieve, Salvini was more optimistic, and Amaldi, with all of his experience in setting up CERN, saw the case for an organization that would enable European countries to work together on research in space. In particular, Amaldi insisted on two points: that there should be no military involvement and that such an organization should be based on the successful model that had given rise to CERN.
At the end of the year, Amaldi wrote to Crocco at Princeton, describing the contacts that he had made in the meantime with some influential scientists. In the letter, Amaldi went on to describe how he thought the project to launch a “Euroluna” (“Euromoon”) satellite for scientific research should take shape. The letter makes clear his insistence that the underlying organization should not be linked to the military but should be purely scientific and based on the same principles as CERN.
Amaldi insisted on two points: there should be no military involvement and the organization should be based on the model that had given rise to CERN.
As a starting point, Amaldi suggested that a small group of experts from the major European countries could prepare a plan for creating an appropriate organization. By early 1959 he had discovered an ally in an old friend, Pierre Auger, the French cosmic-ray physicist who had also been involved in setting up CERN. By May, after several interactions with Auger, Amaldi had written the first draft of his paper, Space Research in Europe, with the aim of stimulating discussions on the formation of a European organization for space research. A French version, together with supportive comments from several countries, was distributed in December (Amaldi 1959).
In Amaldi’s original vision, not only the development of the satellites – the “Eurolunas” – but also that of their launchers would be the responsibility of the organization, which would need experts in the technology and engineering of rockets as well as space scientists. The idea was to mirror CERN, which had accelerator physicists and engineers to build its own machines for the high-energy-physics community to use in scientific research. By collaborating at CERN, Europe’s scientists had access to accelerators that no country had the means to build on its own.
It soon became clear that this vision was not to be realized, at least not to begin with. There was too much political and commercial interest surrounding the construction of rockets. Governments, in particular the British and French, began the negotiations that would separate the business of building launchers from that of making the satellites for scientific research. On 29 March 1962 in London, seven countries – Belgium, France, Germany, Italy, the Netherlands, the UK and Australia (as an associate member) – signed the convention that created the European Launcher Development Organisation (ELDO). Three months later, on 14 June 1962 in Paris, Belgium, Denmark, France, Germany, Italy, the Netherlands, Spain, Sweden, Switzerland and the UK signed a different convention, in this case to create the European Space Research Organisation (ESRO).
The foundation of these separate bodies may have been counter to Amaldi’s vision for an organization similar to CERN but they were the forebears of ESA, which was established in May 1975. With the formation of ESA, the science and the means to do it were brought into the same fold.
Amaldi’s letter to Crocco, which is translated from Italian on the following pages, constitutes the first document in which a European space organization is mentioned. It is for this reason that 10 copies recently went into space on board a spacecraft taking essential supplies to the International Space Station (ISS). ESA’s 3rd Automated Transfer Vehicle (ATV), named in honour of Amaldi, arrived at the ISS on 29 March 2012, exactly 50 years to the day after the convention creating ELDO was signed in London. Appropriately, the ATV had been launched by an Ariane rocket built by ESA. The copies of the letter will be signed by the astronauts and brought back to Earth by a Soyuz spacecraft. One will be given to CERN.
Amaldi’s 1958 letter, translated
16 December 1958
Prot. No 4674/A
Distinguished Prof.
Gino Crocco
College Road 74
PRINCETON – N.J.
Dear Gino,
After our discussion at Salvini’s home in Rocca di Papa at the end of July, I thought over the possibility to develop an appropriate activity in Europe in the field of rockets and satellites. It is now very much evident that this problem is not at the level of the single states like Italy, but mainly at the continental level. Therefore, if such an endeavour is to be pursued, it must be done on a European scale as already done for the building of the large accelerators for which CERN was created.
The launch of one or more “Euroluna”, performed by a dedicated European organization, would definitely be of the highest importance, both moral and practical, for all the nations of the continent.
With these ideas in mind, at the end of July I wrote a letter to [Luigi] Broglio who replied, at the end of August, expressing his substantial agreement with the theoretical formulation of the problem but also a considerable scepticism with regards to the practical feasibility of an actual project.
During the Conference of Geneva, held in the first half of September, I had the opportunity to discuss it with [Isidor] Rabi who reacted very positively and stated that, if this would have developed further, he would have done everything possible for obtaining the support of the United States. Actually, himself being a representative of the United States in the NATO Science Committee, he thought that this could be the initiating body for this activity; however, I think this wouldn’t be appropriate, as I shall explain later.
In November I spoke to [Harrie] Massey of [University College] London who, however, was rather sceptical; though this is the normal British attitude in front of any continental initiative.
At the beginning of December I spoke about the matter with [Francis] Perrin who was very interested and convinced and he promised me to look for some competent people in this specific field in France that could flag the problem.
The idea I have about this organization is that, in addition to the six EURATOM nations, Britain and the Scandinavian countries should participate in the manufacturing of satellites. Britain would at first limit itself to sending some observers and would probably show some resistance, but would certainly end up contributing substantially, would the project start taking shape.
It should, in my opinion, proceed as follows: some authoritative expert in the field (Broglio, I hoped, but he seems not to have the necessary enthusiasm) should start flagging the problem and obtaining some level of participation of one or two experts of the largest European countries. Some Italian, French and German experts would be needed to start. These five or six people should prepare, within a few months, a plan of technical development containing:
1) a very well defined scope which should be so ambitious to be comparable with the targets that the USA and the USSR have set for themselves in this field, and in order to justify the European character of the endeavour;
2) an assessment of the cost and its time distribution;
3) an assessment of the specialized workforce;
4) a realistic time frame.
Such programme should be submitted to the governments for approval and for the resulting creation of the final organization which should be provided with the necessary resources.
In the case of CERN, things essentially developed as mentioned above; however, that case took advantage of the existence of UNESCO which, by calling the representatives of the governments to a first conference, played the role of the mother and nurse of CERN. I do not know who could be the mother and nurse of the new organization; according to Rabi this could be the “Science Committee” of NATO, but I believe that it wouldn’t be the best mother for such organization. As a matter of fact, I think that it is absolutely imperative for the future organization to be neither military nor linked to any military organization. It must be a purely scientific organization open, like CERN, to all forms of co-operation both inside and outside the participating countries. I have the impression that all attempts to set up international organizations of a military nature have either failed or, if they didn’t fail, present such characteristics that do not minimally satisfy even their own promoters and managers.
The high-level start-up project should include:
a) the construction of common European laboratories for solving the various major problems,
b) a related research programme to be run in the participating countries.
Through either one or the other of these activities, the individual countries would have all the technologies at their disposal, and therefore their scientific-technical structure would be greatly strengthened. Such strengthening would bring, evidently, great advantages also in the military sector in case the defence activity would be necessary but it wouldn’t make the realisation of the programme more difficult and complicated as would occur if the military, directly or indirectly, were the masters.
The financial problem, definitely irresolvable within the economy of one single country, could be solved in the context of the European continent.
The problem of the specialized workforce constitutes a second difficulty, but I believe that this could be solved in such a project; this would have the double advantage of attracting the liveliest part of the new generation and making it possible to recover academics who work outside Europe.
I would like to ask you to think about what I wrote here and to reply, as soon as possible, to the following questions which, in a more or less direct manner and on different levels, are related to the project mentioned above:
1) I would like to know whether you are interested and whether you would like to take an active role or even the leading role in it. Personally I don’t want to be involved in all of this except for launching the idea, at this stage, and later – in a few years – if the idea becomes reality, for participating in collecting the scientific data which can be obtained with this kind of activity;
2) I would like to know from you the names of the most competent and open persons in this field in Italy, France, Germany, Great Britain and in the Scandinavian countries. As I already told you, I contacted Broglio since July, but he seemed to be too sceptical for taking this route for the moment at least;
3) I would like to know which organizations, even of modest size, exist in Italy in this field and can provide an absolute guarantee of trust; for example, I came in contact with SAMI’s engineer Salvatore but I have no idea of neither the value and competence of this person nor the robustness of the company. The seriousness of the people is a very fundamental issue; this venture is destined to fail, if people who are not sufficiently trustworthy slip into the initial organization committee.
Furthermore, I would like to have von Karman’s address; Rabi asked me permission to speak to him about this and I agreed, but I don’t know if he actually did it and whether this would be of any help. I would like to have your opinion on this subject too; nevertheless, I think that an authoritative person like him could, if favourable, have a considerable influence.
I believe that you will be very much surprised by this letter of mine; it is based on my experience with CERN: in 1952 only three or four persons in the whole of Europe believed in the possibility of creating CERN, but in 1958 the laboratories in Geneva have exceeded 800 workers, the first machine has started running giving first class scientific results and the second machine will work before mid-1960.
I believe that, if the European experts in the field of rockets and satellites start moving already now, they will be in a condition, together with the American and Russian groups, to contribute very substantially to the study of space by 1965.
I take this opportunity for sending you and your wife my best wishes, including among them the wish for a Euroluna before 1965.
Cooling with carbon dioxide has benefits that are making it the preferred choice for the latest generation of silicon detectors.
The demand for efficient cooling systems that employ relatively small amounts of material – i.e. “low mass” systems – is becoming increasingly important for the new silicon detectors that are being used in high-energy physics. One solution that is gaining popularity is evaporative cooling with carbon dioxide (CO2). Currently, two detectors are cooled this way: the Vertex Locator (VELO) in the LHCb detector and the silicon detector of the orbiting AMS-02 experiment on board the International Space Station. The CO2 cooling system for the VELO has been working since 2008 and the one on AMS has operated in space since May 2011. Both systems have so far functioned without any major issues and both are stable at their design cooling temperatures: –30°C for the VELO and 0°C for AMS.
The benefit of using CO2 cooling is that it becomes possible to use much smaller cooling pipes than with other methods used to cool particle detectors. The secret of CO2 lies in the fact that evaporation takes place at much higher pressures than with other two-phase refrigerants. Because the vapour is created at high pressure it stays compact, which means that it flows more easily through small channels. The evaporation temperature of high-pressure CO2 in small cooling lines is also more stable, because the pressure drop has only a limited effect on the boiling pressure. The savings in the mass of the cooling hardware inside the detector when using CO2 can be as high as an order of magnitude compared with other methods used to date.
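As a rough, order-of-magnitude illustration of this point, the short Python sketch below uses the open-source CoolProp property library to compare the saturation pressure and saturated-vapour density of CO2 with those of two common refrigerants at –20°C; the choice of comparison fluids and temperature is an assumption made here for illustration and does not reproduce the article’s figures.

```python
# Illustration of why CO2 suits small cooling channels: at a given
# evaporation temperature its saturation pressure -- and hence its
# vapour density -- is far higher than for conventional refrigerants,
# so the vapour produced by boiling occupies much less volume.
# Requires the CoolProp package (pip install CoolProp). The -20 degC
# comparison point and the choice of R134a and ammonia are assumptions
# made for this sketch.
from CoolProp.CoolProp import PropsSI

T = 253.15  # -20 degC in kelvin

for fluid in ("CO2", "R134a", "Ammonia"):
    p_sat = PropsSI("P", "T", T, "Q", 1, fluid)    # saturation pressure [Pa]
    rho_vap = PropsSI("D", "T", T, "Q", 1, fluid)  # saturated-vapour density [kg/m^3]
    print(f"{fluid:8s}  p_sat = {p_sat/1e5:6.1f} bar   "
          f"rho_vapour = {rho_vap:6.1f} kg/m^3")
```

The much denser CO2 vapour is what allows the same heat load to be carried away through a far narrower channel.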
The thermal performance of a cooling tube has two components: the temperature gradient along the tube, caused by the changing boiling pressure, and the temperature gradient from the tube wall into the fluid, which depends on the heat-transfer coefficient. It is difficult to compare different fluids with each other because the combination of these two performance indicators gives different results for different tube geometries, heat-load densities and cooling temperatures. To show the benefits of CO2, a specific case is plotted in figure 2 for a 1-m-long tube with a heat load of 500 W at –20°C. Because minimizing the amount of material in the cooling system is the driving factor in particle detectors, the cooling efficiency is plotted as thermal conductance per unit of cooling-tube volume. The benefit of using CO2 is clear, especially in tubes with a small diameter. The general tendency for high-pressure fluids to perform best is also clear – only ammonia in this example seems to deviate from the trend.
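The two contributions can be put into simple numbers. The sketch below takes the figure-2 case of 500 W in a 1-m tube at –20°C and assumes placeholder values for the tube diameter, the two-phase pressure drop and the boiling heat-transfer coefficient (real values come from two-phase flow correlations and are not given in the article); only the bookkeeping of the two gradients follows the text.

```python
# Minimal sketch of the two temperature-gradient contributions:
# 1) the saturation-temperature shift caused by the pressure drop
#    along the tube, and
# 2) the wall-to-fluid gradient given by the heat flux divided by
#    the boiling heat-transfer coefficient.
from math import pi
from CoolProp.CoolProp import PropsSI

fluid = "CO2"
T_evap = 253.15   # evaporation temperature, -20 degC [K]
Q = 500.0         # heat load [W]
L = 1.0           # tube length [m]
D = 4.0e-3        # inner diameter [m]                     (assumption)
dp = 0.2e5        # two-phase pressure drop [Pa]           (assumption)
h = 10_000.0      # boiling heat-transfer coeff. [W/m^2 K] (assumption)

# 1) gradient along the tube: fall of the saturation temperature
#    when the boiling pressure drops by dp
p_sat = PropsSI("P", "T", T_evap, "Q", 1, fluid)
dT_along = T_evap - PropsSI("T", "P", p_sat - dp, "Q", 1, fluid)

# 2) gradient from the tube wall into the fluid: heat flux / h
q_flux = Q / (pi * D * L)   # [W/m^2]
dT_wall = q_flux / h

print(f"saturation-temperature drop along tube : {dT_along:.2f} K")
print(f"wall-to-fluid temperature gradient     : {dT_wall:.2f} K")
print(f"total thermal penalty                  : {dT_along + dT_wall:.2f} K")
```

Because CO2 boils at high pressure, the same pressure drop costs it a much smaller saturation-temperature shift than it would a low-pressure refrigerant, which is the first of the two advantages plotted in figure 2.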
Apart from its outstanding thermal performance, CO2 is also a practical fluid. It is neither flammable nor toxic, although it can asphyxiate if released in large quantities. In general, the small systems used in laboratories contain less CO2 than a standard fire extinguisher and would not be dangerous if their contents were to leak out. The larger systems used in detectors, however, must be designed with proper safety precautions. Additional benefits of CO2 are its low operating costs, the fact that it is a naturally occurring gas and, importantly, its compatibility with sensitive instruments – contact with CO2 is in general not damaging to electronics or other equipment. CO2 does not exist as a liquid at ambient conditions; when released, it vents as a mixture of gas and solid CO2, known as dry ice. CO2 evaporates from its liquid phase between –56°C and +31°C, and its practical range of application is from –45°C to +25°C.
For LHCb and AMS, a special CO2-cooling method has been developed that differs from ordinary two-phase cooling systems. The best performance of the evaporative CO2 method is achieved with an overflow of liquid, rather than by evaporating the last drop. A liquid-pumped system with external cooling is preferable to a compressor-driven vapour system of the kind used in refrigerators. A big advantage is that a liquid-pumped CO2 system is relatively simple, which is useful when integrating it into a complex detector. The CO2 condensation can be done externally using a standard industrial cooler.
The method that has been developed for cooling detectors is called 2PACL, for 2-Phase Accumulator Controlled Loop. Accumulator control is a proven method in existing two-phase cooling systems for satellites, and the 2PACL method was initially developed for AMS by Nikhef in an international collaboration led by the Netherlands National Aerospace Laboratory (NLR). The novelty is precise pressure regulation using a vessel containing a two-phase CO2 mixture. The benefit for detectors is that the cooling plant, which contains all of the active components, can be set up some distance away from the inaccessible detector, leaving only small-diameter tubing inside or near the detector itself.
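Conceptually, the accumulator regulation can be pictured as a simple control loop: because the accumulator holds a saturated two-phase mixture, heating or cooling the vessel moves its pressure, and with it the evaporation temperature in the detector. The Python sketch below is a deliberately simplified, hypothetical proportional controller written only to illustrate the idea; the names, gain and interfaces are invented and bear no relation to the actual LHCb or AMS control software.

```python
# Conceptual sketch of accumulator pressure regulation: the pressure
# setpoint is the CO2 saturation pressure at the desired evaporation
# temperature, and a proportional controller heats or cools the
# two-phase accumulator to hold that pressure. Gains and names are
# invented for illustration.
from CoolProp.CoolProp import PropsSI

def setpoint_pressure(T_evap_degC: float) -> float:
    """Saturation pressure [Pa] for the desired evaporation temperature."""
    return PropsSI("P", "T", T_evap_degC + 273.15, "Q", 0, "CO2")

def accumulator_control_step(p_measured: float, p_setpoint: float,
                             gain: float = 50.0) -> float:
    """Return a heater-power command [W]: positive heats the accumulator
    (raising its pressure), negative means cooling is requested."""
    return gain * (p_setpoint - p_measured) / 1e5   # error expressed in bar

# Example: regulate towards -30 degC evaporation (the VELO working point)
p_set = setpoint_pressure(-30.0)
print(accumulator_control_step(p_measured=15.0e5, p_setpoint=p_set))
```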
Figure 3 shows the thermodynamic cycle for the 2PACL system in a pressure–enthalpy diagram – a useful representation of the cycle in evaporative-cooling systems. Figure 4 shows the 2PACL principle used in detectors, with the node numbers corresponding to those used in figure 3. For AMS, the external cooler was replaced by cold radiator panels mounted on the outside of the experiment (see figure 1).
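For readers who want to reproduce the backdrop of such a diagram, the sketch below plots the CO2 saturation dome (saturated-liquid and saturated-vapour enthalpy versus pressure) with CoolProp and matplotlib; the node values of the real 2PACL cycle depend on the plant design and are not reproduced here.

```python
# Plot the CO2 saturation dome in the pressure-enthalpy plane, the
# backdrop on which evaporative-cooling cycles are normally drawn.
import numpy as np
import matplotlib.pyplot as plt
from CoolProp.CoolProp import PropsSI

fluid = "CO2"
T_min = 273.15 - 50.0                 # -50 degC
T_crit = PropsSI("Tcrit", fluid)      # critical temperature [K]
temps = np.linspace(T_min, T_crit - 0.5, 200)

p = [PropsSI("P", "T", T, "Q", 0, fluid) / 1e5 for T in temps]      # bar
h_liq = [PropsSI("H", "T", T, "Q", 0, fluid) / 1e3 for T in temps]  # kJ/kg
h_vap = [PropsSI("H", "T", T, "Q", 1, fluid) / 1e3 for T in temps]  # kJ/kg

plt.plot(h_liq, p, label="saturated liquid")
plt.plot(h_vap, p, label="saturated vapour")
plt.yscale("log")
plt.xlabel("enthalpy [kJ/kg]")
plt.ylabel("pressure [bar]")
plt.title("CO2 saturation dome (pressure-enthalpy plane)")
plt.legend()
plt.show()
```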
The 2PACL concept was also successfully applied by Nikhef to cooling LHCb’s VELO with CO2 and it has become the baseline concept for future detectors under development. The pixel detectors for the ATLAS and CMS phase-1 upgrades are being designed to be cooled by 2PACL CO2 systems, and the same technology is under consideration for the silicon detectors of the full phase-2 upgrades of ATLAS and CMS. Elsewhere, CO2 cooling is under development for the Belle-2 detector at KEK and the IL-TPC detector for a future linear collider. Industrial high-tech applications are also showing interest in the technique as an alternative cooling method.
Currently, CERN and Nikhef are developing small, laboratory CO2 coolers for multipurpose use (figure 5). The units, called TRACI, for Transportable Refrigeration Apparatus for CO2 Investigation, are relatively low cost and optimized for a wide operating range and user-friendly operation. Five prototypes have been manufactured and the hope is that results from these units will lead to a design that can be outsourced for manufacture by external companies. In this way, the many research laboratories investigating CO2 for their future detectors could be supplied with test equipment.