
ECLOUD12 sheds light on electron clouds


Electron clouds – abundantly generated in accelerator vacuum chambers by residual-gas ionization, photoemission and secondary emission – can affect the operation and performance of hadron and lepton accelerators in a variety of ways. They can induce increases in vacuum pressure, beam instabilities, beam losses, emittance growth, reductions in the beam lifetime or additional heat loads on a (cold) chamber wall. They have recently regained some prominence: since autumn 2010, all of these effects have been observed during beam commissioning of the LHC.

Electron clouds were recognized as a potential problem for the LHC in the mid-1990s and the first workshop to focus on the phenomenon was held at CERN in 2002. Ten years later, the fifth electron-cloud workshop has taken place, again in Europe. More than 60 physicists and engineers from around the world gathered at La Biodola, Elba, on 5–8 June to discuss the state of the art and review recent electron-cloud experience.

Valuable test beds

Many electron-cloud signatures have been recorded and a great deal of data accumulated, not only at the LHC but also at the CESR Damping Ring Test Accelerator (CesrTA) at Cornell, DAΦNE at Frascati, the Japan Proton Accelerator Research Complex (J-PARC) and PETRA III at DESY. These machines all serve as valuable test beds for simulations of electron-cloud build-up, instabilities and heat load, as well as for new diagnostic methods. The latter include measurements of the synchronous phase shift and cryogenic effects at the LHC, as well as microwave transmission, coded-aperture images and time-resolved shielded pick-ups at CesrTA. The impressive resemblance between simulation and measurement suggests that the existing electron-cloud models correctly describe the phenomenon. The workshop also analysed the means of mitigating electron-cloud effects that are proposed for future projects, such as the High-Luminosity LHC, SuperKEKB in Japan, SuperB in Italy, Project-X in the US, the upgrade of the ISIS machine in the UK and the International Linear Collider (ILC).

An international advisory committee had assembled an exceptional programme for ECLOUD12. As a novel feature for the series, members of the spacecraft community participated, including the Val Space Consortium based in Valencia, the French aerospace laboratory Onera, Massachusetts Institute of Technology, the Instituto de Ciencia de Materiales de Madrid and the École Polytechnique Fédérale de Lausanne (EPFL). Indeed, satellites in space suffer from problems that greatly resemble electron-cloud effects in accelerators and that can be modelled and mitigated with similar countermeasures. These problems include the motion of the satellites through electron clouds in outer space, the relative charging of satellite components under the influence of sunlight and the loss of performance of high-power microwave devices on space satellites. Intriguingly, the “Furman formula” parameterizing the secondary emission yield, which was first introduced around 1996 to analyse electron-cloud build-up for the PEP-II B factory, then under construction at SLAC, is now widely used to describe secondary emission on the surface of space satellites. Common countermeasures for both accelerators and satellites include advanced coatings, and both communities use simulation codes such as BI-RME/ECLOUD and FEST3D. A second community newly involved in the workshop series was that of surface scientists, who at this meeting explained the chemistry and secrets of secondary emission, conditioning and photon reflection. Another important first appearance at ECLOUD12 was the use of Gabor lenses, e.g. at the University of Frankfurt, to study incoherent electron-cloud effects in a laboratory set-up.

Several powerful new simulation codes were presented for the first time at ECLOUD12. These novel codes include: SYNRAD3D from Cornell, for photon tracking, modelling surface properties and 3D geometries; OSMOSEE from Onera, to compute the secondary-emission yield, including at low primary energies; PyECLOUD from CERN, to perform improved and faster build-up simulations; the latest version of WARP-POSINST from Lawrence Berkeley National Laboratory, which allows for self-consistent simulations that combine build-up, instability and emittance growth, and is used to study beam-cloud behaviour over hundreds of turns through the Super Proton Synchrotron (SPS); and BI-RME/ECLOUD from a collaborative effort of EPFL and CERN, to study various aspects of the interaction of microwaves with an electron cloud. New codes also mean more work. For example, the advocated transition from ECLOUD to PyECLOUD implies that substantial code development done at Cornell and EPFL for ECLOUD may need to be redone.

Several open questions remain

ECLOUD12 could not solve all of the puzzles, and several open questions remain. Why, for example, does the betatron sideband signal – characterizing the electron-cloud related instability – at CesrTA differ from similar signals at KEKB and PETRA III? Why was the beam-size growth at PEP-II observed in the horizontal plane, while simulations had predicted it to be vertical? How can the intricate incoherent effects be described fully? Which ingredients are missing for correctly modelling the electron-cloud behaviour for electron beams, e.g. the existence of a certain fraction of high-energy photoelectrons? How does the secondary-emission yield of the copper coating on the LHC beam-screen decrease as a function of incident electron dose and incident electron energy (the search here is for the “correct” equation describing how the primary energy at which the maximum yield is attained, εmax, varies with that maximum yield, δmax, together with the concurrent evolution of the reflectivity of low-energy electrons, R)? Does the conditioning of stainless steel differ from that of copper? If it is the same, then why should the SPS’s beam pipe be coated but not the LHC’s? Can the secondary-emission yield change over a timescale of seconds during the accelerator cycle (a suspicion based on evidence from the Main Injector at Fermilab)? Can the surface conditioning be speeded up by the controlled injection of carbon-monoxide gas?

As for the “electron-cloud safety” of future machines, ECLOUD12 concluded that the design mitigations for the ILC and for SuperKEKB appear to be adequate. The LHC and its upgrades (HL-LHC, HE-LHC) should also be safe with regard to electron cloud if the surface conditioning (“scrubbing”) of the chamber wall progresses as expected. The situations for Project-X, the upgrade for the Relativistic Heavy Ion Collider, J-PARC and SuperB are less finalized and perhaps more challenging.

ECLOUD12 was organized jointly and co-sponsored by INFN-Frascati, INFN-Pisa, CERN, EuCARD-AccNet and the Low Emittance Ring (LER) study at CERN. In addition, the SuperB project provided a workshop pen “Made in Italy”. The participants also enjoyed a one-hour football match (another novel feature) between experimental and theoretical electron-cloud experts – the latter clearly outnumbered – as well as post-dinner discussions until well past midnight. The next workshop of the series could be ECLOUD15, which would coincide with the 50th anniversary of the first observation of the electron-cloud phenomenon at a small proton storage-ring in Novosibirsk and its explanation by Gersh Budker.

• All of the presentations at ECLOUD12.

The ECLOUD12 workshop was dedicated to the memory of the late Francesco Ruggiero, former leader of the accelerator physics group at CERN, who launched an important remedial electron-cloud crash programme for the LHC in 1997.

LHC delivers for the summer conferences


With more luminosity delivered by the LHC between April and June 2012 than in the whole of 2011, the experiments had just what the collaborations wanted: as much data as possible before the summer conferences. By the time that a six-day period of machine development began on 18 June, the integrated luminosity for 2012 had reached about 6.6 fb⁻¹, compared with around 5.6 fb⁻¹ delivered in 2011.

The LHC’s performance over the preceding week had become so efficient that the injection kicker magnets – which heat up while beams continue to pass through them as they circulate – did not have time to cool down between fills. The kickers lose their magnetic properties when the ferrites at their centres become too hot, so on some occasions a few hours of cool-down time had to be included before beam for the next fill could be injected.

As the time constants for warming up and cooling down are both of the order of many hours, the temperature of the magnets turns out to provide a good indicator of the LHC’s running efficiency. The record for luminosity production of more than 1.3 fb⁻¹ in a single week corresponds well with the highest measured kicker-magnet temperature of 70°C. A programme is now under way to reduce further the beam impedance of the injection kickers, which should substantially reduce the heating effect in future.

Routine operation of the LHC for physics is set to continue over the summer, with the machine operating with 1380 proton bunches in each beam – the maximum value for this year – and around 1.5 × 10¹¹ protons per bunch. The higher beam energy of 4 TeV (compared with 3.5 TeV in 2011) and the higher number of collisions are expected to enhance the machine’s discovery potential considerably, opening new possibilities in the searches for new and heavier particles.

One billion J/ψ events in Beijing

In a 40-day run ending on 22 May, the Institute of High-Energy Physics in China accumulated a total of 1.3 billion J/ψ events at the upgraded Beijing Electron Positron Collider (BEPCII) and Beijing Spectrometer (BESIII).

In a two-year run from 1999 until 2001, the earlier incarnations, BEPC and BESII, had accumulated a highly impressive 58 million J/ψs. Analysis of these, together with 220 million events collected at BESIII, has already produced important results, such as the discovery of the X(1835). Now, thanks to the upgrades, the data-acquisition efficiency is 120 times higher, and as many as 40 million J/ψs were being collected daily towards the end of the latest run.

BEPCII is a two-ring electron–positron collider with a beam energy of 1.89 GeV. With a design luminosity of 1 × 10³³ cm⁻² s⁻¹, it reached a peak of 2.93 × 10³² cm⁻² s⁻¹ in the latest run, 59 times higher than that of its predecessor, BEPC.

Cherenkov Telescope Array is set to open new windows


In 2004, as the telescopes of the High Energy Stereoscopic System (HESS) were starting to point towards the skies, there were perhaps 10 astronomical objects that were known to produce very high-energy (VHE) gamma rays – and exactly which 10 was subject to debate. Now, in 2012, well in excess of 100 VHE gamma-ray objects are known and plans are under way to take observations to a new level with the much larger Cherenkov Telescope Array.

VHE gamma-ray astronomy covers three decades in energy, from a few tens of giga-electron-volts to a few tens of tera-electron-volts. At these high energies, even the brightest astronomical objects have fluxes of only around 10⁻¹¹ photons cm⁻² s⁻¹, and the inevitably limited detector area available to satellite-based instruments means that their detection from space requires unfeasibly long exposure times. The solution is to use ground-based telescopes, although at first sight this seems improbable, given that no radiation with energies above a few electron-volts can penetrate the Earth’s atmosphere.

The possibility of doing ground-based gamma-ray astronomy was opened up in 1952 when John Jelley and Bill Galbraith measured brief flashes of light in the night sky using basic equipment sited at the UK Atomic Energy Research Establishment in Oxfordshire – then, as now, not famed for its clear skies (see “The discovery of air-Cherenkov radiation”). This confirmed Blackett’s suggestion that cosmic rays, and hence also gamma rays, contribute to the light intensity of the night sky via the Cherenkov radiation produced by the air showers that they induce in the atmosphere. The radiation is faint – constituting about one ten-thousandth of the night-sky background – and each flash is only a few nanoseconds in duration. However, it is readily detectable with suitable high-speed photodetectors and large reflectors. The great advantage of this technique is that the effective area of such a telescope is equivalent to the area of the pool of Cherenkov light on the ground, some 10⁴ m².
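
To see why the light-pool area matters, here is a minimal back-of-envelope sketch. The 10⁻¹¹ photons cm⁻² s⁻¹ flux and the ~10⁴ m² light pool are taken from the text above; the ~1 m² satellite collecting area is an assumed, illustrative value.

```python
# Back-of-envelope comparison of VHE gamma-ray detection rates from space and
# from the ground, using the flux and light-pool area quoted in the text.
# The satellite collecting area is an assumed, illustrative value.

flux = 1e-11                 # photons per cm^2 per s (bright VHE source, from the text)

satellite_area = 1.0e4       # cm^2 (~1 m^2, assumed order of magnitude for a space instrument)
ground_area = 1.0e8          # cm^2 (~10^4 m^2 Cherenkov light pool, from the text)

seconds_per_day = 86400.0

for name, area in [("satellite", satellite_area), ("ground array", ground_area)]:
    photons_per_day = flux * area * seconds_per_day
    print(f"{name:12s}: {photons_per_day:.2e} photons per day")

# Roughly 0.01 photons per day from space versus ~90 photons per day on the
# ground, which is why ground-based Cherenkov telescopes are the practical choice.
```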

These observations can help in answering fundamental physics questions concerning the nature of both dark matter and gravity

Early measurements of astronomical gamma rays using this method were difficult to make because there was no method of distinguishing the gamma-ray-induced Cherenkov radiation from that produced by the more numerous cosmic-ray hadrons. However, in 1985 Michael Hillas at Leeds University showed that fundamental differences in the hadron- and photon-initiated air showers would lead to differences in the shapes of the observed flashes of Cherenkov light. Applying this technique, the Whipple telescope team in Arizona made the first robust detection of a VHE gamma-ray source – the Crab Nebula – in 1989. When his technique was combined with the arrays of telescopes developed by the HEGRA collaboration and the high-resolution cameras of the Cherenkov Array at Themis, the imaging atmospheric Cherenkov technique was well and truly born.

The current generation of projects based on this technique includes not only HESS, in Namibia, but also the Major Atmospheric Gamma-Ray Imaging Cherenkov (MAGIC) project in the Canary Islands, the Very Energetic Radiation Imaging Telescope Array System (VERITAS) in Arizona and CANGAROO, a collaborative project between Australia and Japan, which has now ceased operation.

These telescopes have revealed a wealth of phenomena to be studied. They have detected the remains of supernovae, binary star systems, highly energetic jets around black holes in distant galaxies, star-formation regions in our own and other galaxies, as well as many other objects. These observations can help not only with understanding more about what is going on inside these objects but also in answering fundamental physics questions concerning, for example, the nature of both dark matter and gravity.

The field is now reaching the limit of what can be done with the current instruments, yet the community knows that it is observing only the “tip of the iceberg” in terms of the number of gamma-ray sources that are out there. For this reason, some 1000 scientists from 27 countries around the world have come together to build a new instrument – the Cherenkov Telescope Array (CTA).

The Cherenkov Telescope Array

The aim of the CTA consortium is to build two arrays of telescopes – one in the northern hemisphere and one in the southern hemisphere – that will outperform current telescope systems in a number of ways. First, the sensitivity will be a factor of around 10 better than any current array, particularly in the “core” energy range around 1 TeV. Second, it will provide an extended energy range, from a few tens of giga-electron-volts to a few hundred tera-electron-volts. Third, its angular resolution at tera-electron-volt energies will be of the order of one arc minute – an improvement of around a factor of four on the current telescope arrays. Last, its wider field of view will allow the array to survey the sky some 200 times faster at 1 TeV.


This unprecedented performance will be achieved using three different telescope sizes, covering the low-, intermediate- and high-energy regimes, respectively. The larger southern-hemisphere array is designed to make observations across the whole energy range. The lowest-energy photons (20–200 GeV) will be detected with a few large telescopes of 23 m diameter. Intermediate energies, from about 200 GeV to 1 TeV, will be covered with some 25 medium-size telescopes of 12 m diameter. Gamma rays at the highest energies (1–300 TeV) produce so many Cherenkov photons that they can be easily seen with small (4–6 m diameter) telescopes. These extremely energetic photons are rare, however, so a large area must be covered on the ground (up to 10 km2), needing as many as 30 to 70 small telescopes to achieve the required sensitivity. The northern-hemisphere array will cover only the low and intermediate energy ranges and will focus on observations of extragalactic objects.

Being both an astroparticle-physics experiment and a true astronomical observatory, with access for the community at large, the CTA’s science remit is exceptionally broad. The unifying principle is that gamma rays at giga- to tera-electron-volt energies cannot be produced thermally and therefore the CTA will probe the “non-thermal” universe.

Gamma rays can be generated when highly relativistic particles – accelerated, for example, in supernova shock waves – collide with ambient gas or interact with photons and magnetic fields. The flux and energy spectrum of the gamma rays reflect the flux and spectrum of the high-energy particles. They can therefore be used to trace these cosmic rays and electrons in distant regions of the Galaxy or, indeed, in other galaxies. In this way, VHE gamma rays can be used to probe the emission mechanisms of some of the most powerful astronomical objects known and to probe the origin of cosmic rays.

VHE gamma rays can also be produced in a top-down fashion, by the decay of heavy objects such as cosmic strings, or by the decay or annihilation of hypothetical dark-matter particles. Large dark-matter densities that arise from the accumulation of the particles in potential wells, such as near the centres of galaxies, might lead to detectable fluxes of gamma rays, especially given that the annihilation rate – and therefore the gamma-ray flux – is proportional to the square of the density. Slow-moving dark-matter particles could give rise to a striking, almost mono-energetic photon emission.
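
The density-squared scaling is the key point; a minimal numerical illustration follows, in which the density contrast of 10 is an arbitrary, assumed number rather than a fitted halo model.

```python
# Toy numbers showing the density-squared scaling of the annihilation signal.
# The density contrast of 10 is an arbitrary, assumed value for illustration.

local_density = 0.3   # GeV/cm^3, an often-quoted local dark-matter density (assumption)
contrast = 10.0       # assumed density enhancement towards a halo centre

rate_local = local_density**2                  # annihilation rate per unit volume ~ rho^2
rate_centre = (contrast * local_density)**2

print(rate_centre / rate_local)                # -> 100.0: a 10x denser region shines 100x brighter
```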

The discovery of such line emission would be conclusive evidence for dark matter, and the CTA could have the capability to detect gamma-ray lines even if the cross-section is “loop-suppressed”, which is the case for the most popular candidates of dark matter, i.e. those inspired by the minimal supersymmetric extensions to the Standard Model and models with extra dimensions, such as Kaluza-Klein theory. Line radiation from these candidates is not detectable by current telescopes unless optimistic assumptions about the dark-matter density distribution are made. The more generic continuum contribution (arising from pion production) is more ambiguous but with its curved shape it is potentially distinguishable from the usual power-law spectra produced by known astrophysical sources.

It is not only the mechanisms by which gamma rays are produced that can provide useful scientific insights. The effects of propagation of gamma rays over cosmological distances can also lead to important discoveries in astrophysics and fundamental physics. VHE gamma rays are prone to photon–photon absorption on the extragalactic background light (EBL) over long distances, and the imprint of this absorption process is expected to be particularly evident in the gamma-ray spectra from active galactic nuclei (AGN) and gamma-ray bursts. The EBL is difficult to measure because of the presence of foreground sources of radiation – yet its spectrum reveals information about the history of star formation in the universe. Already, current telescopes detect more gamma rays from AGN than might have been expected in some models of the EBL, but understanding of the intrinsic spectra of AGN is limited and more measurements are needed.

Building the CTA


How to build this magnificent observatory? This is the question currently preoccupying the members of the CTA consortium. There is much experience and know-how within the consortium of building VHE gamma-ray telescopes around the world, but challenges nonetheless remain. Foremost is driving down the costs of components while also ensuring reliability. It is relatively easy to repair and maintain four or five telescopes, such as those found in the current arrays, but maintaining 60, 70 or even 100 presents difficulties on a different scale. Technology is also ever changing, particularly in light detection. The detector of choice for VHE gamma-ray telescopes has until now been the photomultiplier tube – but these are bulky, relatively expensive and have low quantum efficiency. Innovative telescope designs, such as dual-mirror systems, might allow the exploitation of newer, smaller detectors such as silicon photodiodes, at least on some of the telescopes. Mirror technologies are another area of active research because the CTA will require a large area of robust, easily reproducible mirrors.

The CTA is currently in its preparatory phase, funded by the European Union Seventh Framework Programme and by national funding agencies. Not only are many different approaches to telescope engineering and electronics being prototyped to enable the consortium to choose the best possible solution, but organizational issues, such as the operation of the CTA as an observatory, are also under development. It is hoped that building of the array will commence in 2014 and that it will become the premier instrument in gamma-ray astronomy for decades to come. Many of its discoveries will no doubt bring surprises, as have the discoveries of the current generation of telescopes. There are exciting times ahead.

• For more about the CTA project, see www.cta-observatory.org.

Deferred triggering optimizes CPU use

Like all of the LHC experiments, LHCb relies on a tremendous amount of CPU power to select interesting events out of the many millions that the LHC produces every second. Indeed, a large part of the ingenuity of the LHCb collaboration goes into developing trigger algorithms that can sift out the interesting physics from a sea of background. The cleverer the algorithms, the better the physics, but often the computational cost is also higher. About 1500 powerful computing servers in an event filter farm are kept 100% busy when LHCb is taking data and still more could be used.


However, this enormous computing power is used less than 20% of the time when averaged over the entire year. This is partly because of the annual shutdown, so preparations are under way to use the power of the filter farm during that period for offline processing of data – the issues to be addressed include feeding the farm with events from external storage. The rest of the idle time comes from the gaps between the periods when protons are colliding in the LHC (the “fills”); these gaps typically last between two and three hours, during which no collisions take place and therefore no computing power is required.

This raises the question of whether it is somehow possible to borrow the CPU power of the idle servers and use it during physics runs for an extra boost. Such thoughts led to the idea of “deferred triggering”: storing events that cannot be processed online on the local disks of the servers and then, when the fill is over, processing them on the now-idle servers.

The LHCb Online and Trigger teams quickly worked out the technical details and started the implementation of a deferred trigger early this year. As often happens in online computing, the storing and moving of the data is the easy part, while the true challenge lies in the monitoring and control of the processing, robust error-recovery and careful bookkeeping. After a few weeks, all of the essential pieces were ready for the first successful tests using real data.

Depending on the ratio of the fill length to the inter-fill time, up to 20% of CPU time can be deferred – limited only by the available disk space (currently around 200 TB) and the time between fills in the LHC. Buying that amount of CPU power would correspond to an investment of hundreds of thousands of Swiss francs. Instead, this enterprising idea has allowed LHCb to increase the performance of its trigger, allowing time for more complex algorithms (such as the online reconstruction of KS decays) to extend the physics reach of the experiment.
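
A rough sketch of the resource arithmetic behind deferred triggering is given below. Only the ~200 TB disk-buffer figure comes from the text; the fill length, inter-fill gap, deferred event rate and event size are assumed, illustrative values.

```python
# Rough resource arithmetic for deferred triggering.  Only the ~200 TB disk
# buffer comes from the text; fill length, inter-fill gap, deferred event rate
# and event size are assumed, illustrative values.

fill_hours = 10.0          # assumed length of a physics fill
interfill_hours = 3.0      # assumed gap before the next fill
disk_buffer_tb = 200.0     # local disk available on the farm (from the text)
deferred_rate_hz = 15e3    # assumed rate of events written to disk instead of being processed
event_size_kb = 60.0       # assumed raw-event size

# Fraction of wall-clock time during which the farm would otherwise sit idle
idle_fraction = interfill_hours / (fill_hours + interfill_hours)
print(f"idle fraction available for deferred processing: {idle_fraction:.0%}")

# Disk needed to buffer the deferred events of a single fill
buffered_tb = deferred_rate_hz * event_size_kb * fill_hours * 3600 / 1e9
verdict = "fits within" if buffered_tb <= disk_buffer_tb else "exceeds"
print(f"disk needed per fill: ~{buffered_tb:.0f} TB ({verdict} the {disk_buffer_tb:.0f} TB buffer)")
```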

Ramping up to higher luminosity

After a flying start, with the first stable beams at the new energy of 4 TeV on 5 April, the LHC successfully operated with 1380 bunches per beam – the maximum planned for 2012 – on 18 April. In the days that followed, the machine reached a record peak luminosity of about 5.6 × 10³³ cm⁻² s⁻¹, with a bunch intensity of 1.4 × 10¹¹ protons per bunch and a new highest stored energy of 120 MJ per beam.

As it entered a two-day machine-development period on 21–22 April, almost 1 fb⁻¹ of data had been delivered to the experiments, a feat that took until June in 2011. The machine development focused on topics relevant for the 2012 physics-beam operation and was followed by a five-day technical stop, the first of the year.

The restart from 27 April onwards was slowed down by several technical faults that led to low machine availability and the ramp back up in intensity took longer than initially planned. LHC operation was further hampered by higher than usual beam losses in the ramp and squeeze. These required time to investigate the causes and to implement mitigation measures.

On 10 May the machine began running again with 1380 bunches and a couple of days later saw one of the year’s best fills, lasting for 13 hours and delivering an integrated luminosity of 120 pb⁻¹ to ATLAS and CMS. By 15 May, after careful optimization of the beams in the injectors, the luminosity was back up to pre-technical-stop levels. The aim now is for steady running accompanied by a gentle increase in bunch intensity in order to deliver a sizeable amount of data in time for the summer conferences.

Hungary to host extension to CERN data centre


Following a competitive call for tender, CERN has signed a contract with the Wigner Research Centre for Physics in Budapest for an extension to CERN’s data centre. Under the new agreement, the Wigner Centre will host CERN equipment that will substantially extend the capabilities of Tier-0 of the Worldwide LHC Computing Grid (WLCG) and provide the opportunity to implement solutions for business continuity. The contract is initially until 31 December 2015, with the possibility of up to four one-year extensions thereafter.

The WLCG is a global system organized in tiers, with the central hub being Tier-0 at CERN. Eleven major Tier-1 centres around the world are linked to CERN via dedicated high-bandwidth links. Smaller Tier-2 and Tier-3 centres linked via the internet bring the total number of computer centres involved to more than 140 in 35 countries. The WLCG serves a community of some 8000 scientists working on LHC experiments, allowing seamless access to distributed computing and data-storage facilities.

The Tier-0 at CERN currently provides some 30 PB of data storage on disk and includes the majority of the 65,000 processing cores in the CERN Computer Centre. Under the new agreement, the Wigner Research Centre will extend this capacity with 20,000 cores and 5.5 PB of disk storage, figures that are set to double after three years.

Silicon sensors go 3D

[Figure 1: Schematic cross-sections of planar and 3D silicon sensors]

Three-dimensional silicon sensors are opening a new era in radiation imaging and radiation-hard, precise particle-tracking through a revolutionary processing concept that brings the collecting electrodes close to the carriers generated by ionizing particles and that also extends the sensitive volume to within a few microns of the physical edge of the sensor. Since the summer of 2011, devices as large as 4 cm² with more than 100,000 cylindrical electrodes have become available commercially thanks to the vision and effort of a group of physicists and engineers in the 3DATLAS and ATLAS Insertable B-Layer (IBL) collaborations, who worked together with the original inventors and several processing laboratories in Europe and the US. This unconventional approach enabled a rapid transition from the R&D phase to industrialization, and has opened the way to the use of more than 200 such sensors in the first upgrade of the pixel system in the ATLAS experiment, in 2014.

Radiation effects

Silicon sensors with a 3D design were proposed 18 years ago at the Stanford Nanofabrication Facility (SNF) to overcome the poor signal-efficiency seen in gallium-arsenide sensors – a problem that also affects silicon sensors after exposure to heavy non-ionizing radiation. The study of the microscopic and macroscopic properties of irradiated silicon was, and still is, the subject of extensive work in several R&D groups and has led to the identification of stable defects generated after exposure to neutral or charged particles. The presence of such defects makes the use of silicon as a detector challenging in the highly exposed inner trackers of high-energy-physics experiments. The studies have discovered that while some of these defects act as generation centres, others act as traps for the moving carriers generated by incident particles produced in the primary collisions of accelerator beams. The three most severe macroscopic consequences found for silicon-tracking detectors are increases in the leakage current and in the effective doping concentration, both linearly proportional to the fluence, as well as severe signal loss arising from trapping.

Apart from applications in high-energy physics, 3D sensor technology has potential uses in medical, biological and neutron imaging

However, other studies have found evidence that the spatial proximity of the p+ and n+ electrodes in the p–i–n junction not only allows it to be depleted with a reduced bias-voltage, but also that the highest useful electric field can be applied homogeneously across the junction to reduce the trapping probability of the generated carriers once radiation-induced defects have formed. This leads to less degradation of the signal efficiency – defined as the ratio of the irradiated to the non-irradiated signal amplitude – after exposure to increasing radiation fluences.

What now makes 3D radiation sensors one of the most radiation-hard designs is that the distance between the p+ and n+ electrodes can be tailored to the expected non-ionizing radiation fluence so as to give the best signal efficiency, signal amplitude and signal-to-noise or signal-to-threshold ratio. Figure 1 indicates how this is possible by comparing planar sensors – where electrodes are implanted on the top and bottom surfaces of the wafer – with 3D ones. The sketch on the left shows how the depletion region between the two electrodes, L, grows vertically to become as close as possible to the substrate thickness, Δ. This means that there is a direct geometrical correlation between the generated signal amplitude and the depleted volume. By contrast, in 3D sensors (figure 1, right) the electrode distance, L, and the substrate thickness, Δ, can be decoupled because the depletion region grows laterally between electrodes whose separation is much smaller than the substrate thickness. In this case the full-depletion voltage, which depends on L and grows with the increase of radiation-induced space charge, can be reduced dramatically.
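
The scaling behind this argument can be sketched with the simple parallel-plate expression for the full-depletion voltage, V_fd ≈ q·N_eff·L²/(2ε). The real 3D electrode geometry is cylindrical, so this only illustrates the L² trend, and the effective doping concentration used below is an assumed post-irradiation value, not a measured one.

```python
# Scaling of the full-depletion voltage with electrode distance L, using the
# parallel-plate approximation V_fd ~ q * N_eff * L^2 / (2 * eps).  The real 3D
# electrode geometry is cylindrical, so this only illustrates the L^2 trend;
# N_eff is an assumed post-irradiation effective doping concentration.

q = 1.602e-19                # elementary charge, C
eps = 11.9 * 8.854e-12       # permittivity of silicon, F/m
n_eff = 1e19                 # m^-3 (~1e13 cm^-3, assumed heavily irradiated bulk)

def full_depletion_voltage(electrode_distance_um):
    l_m = electrode_distance_um * 1e-6
    return q * n_eff * l_m**2 / (2 * eps)

print(f"planar-like, L = 200 um: ~{full_depletion_voltage(200):.0f} V")   # ~300 V
print(f"3D-like,     L =  50 um: ~{full_depletion_voltage(50):.0f} V")    # ~20 V
```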

For the same substrate thickness – before or at moderate irradiation – the amount of charge generated by a minimum-ionizing particle is the same for both types of sensor. However, because the charge-collection distance in 3D sensors is much shorter – and high electric fields, as well as saturation of the carrier velocity, can be achieved at low bias-voltage – the charge-collection times can be much shorter too. Apart from making applications that require high speed easier, this property can counteract the charge-trapping effects expected at high radiation levels. A 3D sensor reaching full depletion at less than 10 V before irradiation can operate at just 20 V and provide full tracking efficiency. After the heavy irradiation expected for the increased LHC luminosity, the maximum operational bias-voltage can be limited to 200–300 V. This has a crucial impact on the complexity of the biasing and cooling systems needed to keep the read-out electronics well below the temperatures at which heat-induced failures occur. By comparison, the voltages required to extract a useful signal when L increases, as in planar sensors, can be as high as 1000 V.

These 3D silicon sensors are currently manufactured on standard 4-inch float-zone-produced, p-type, high-resistivity wafers, using a combination of two well established industrial technologies: micro-electro-mechanical systems (MEMS) and very large scale integration (VLSI). VLSI is used in microelectronics and in the fabrication of traditional silicon microstrip and pixel trackers in high-energy-physics experiments, as well as in the CCDs used in astronomy and in many kinds of commercial cameras, including those in mobile phones. A unique aspect of the MEMS technology is the use of deep-reactive ion etching (DRIE) to form deep and narrow apertures within the silicon wafer using the so-called “Bosch process”, where etching is followed first by the deposition of a protective polymer layer and then by thermal-diffusion steps to drive in dopants to form the n+ and p+ electrodes.

Two methods

[Figure 2: Full 3D design]

Currently, two main 3D-processing options exist. The first, called Full3D with active edges, is based on the original idea. It is fabricated at SNF at Stanford and is now also available at SINTEF in Oslo. In this option, column etching for both types of electrodes is performed all through the substrate from the front side of the sensor wafer. At the same time, active ohmic trenches are implemented at the edge to form so-called “active edges”, whereas the underside is oxide-bonded to a support wafer to preserve mechanical robustness. This requires extra steps to attach and remove the support wafer when the single sensors are connected to the read-out electronics chip. An additional feature of this approach is that the columns and trenches are completely filled with poly-silicon (figure 2, left).

The second approach, called “double-side with slim fences”, is a double-sided process developed independently, in slightly different versions, by the Centro Nacional de Microelectrónica (CNM) in Barcelona and the Fondazione Bruno Kessler (FBK) in Trento. In both cases junction columns are etched from the front side and ohmic columns from the back side, without the presence of a support wafer; in CNM sensors, however, the columns do not pass through the entire wafer thickness but stop a short distance from the opposite wafer surface (figure 2, centre). This was also the case for the first prototypes of FBK sensors, but the technology was later modified to allow the columns to pass through (figure 2, right).

[Figure 3: Signal efficiency versus fluence]

While all of the processing steps that remain after electrode etching and filling are identical for a 3D sensor to those of a planar silicon sensor – so that hybridization with front-end electronics chips and general sensor handling are the same – the overall processing time is longer, which limits the production-volume capability of a single manufacturer at a given time. For this reason, to speed up the transition from R&D to industrialization, the four 3D-silicon-processing facilities (SNF, SINTEF, CNM and FBK) agreed to combine their expertise for the production of the required volume of sensors for the first ATLAS upgrade, the IBL. Based on test results obtained in 2007–2009, which demonstrated comparable performance between the different 3D sensors both before and after irradiation, the collaboration decided in June 2009 to go for a common design and a joint processing effort, aiming at full mechanical compatibility and equivalent functional performance of the 3D sensors while maintaining the specific flavours of the different technologies. Figure 3 demonstrates the success of this strategy by showing a compilation of signal efficiencies versus fluence (in neutron equivalent per square centimetre) for samples from different manufacturers after exposure to heavy irradiation. The measured points fit the theoretical parameterization curve within errors.

All of these 3D-processing techniques were successfully used to fabricate sensors compatible with the FE-I4 front-end electronics of the ATLAS IBL. FE-I4 is the largest front-end electronics chip ever designed and fabricated for pixel-vertex detectors in high-energy physics and covers an area of 2.2 × 1.8 cm² with 26,880 pixels, each measuring 250 × 50 μm². These will record images of the production of the primary vertex in proton–proton collisions, 3.2 cm from the LHC beam in the IBL. Each 3D sensor uses two n+ electrodes per pixel, tied together by an aluminium strip, to cover the 250 μm pixel length. This means that each sensor has more than 100,000 holes.

[Figure 4: 3D wafer]

Currently more than 60 wafers of the kind shown in figure 4 made with double-sided processing – which do not require support-wafer removal and have 200 μm slim fences rather than active edges – are at the IZM laboratory in Berlin, where single sensors will be connected with front-end electronics chips using bump-bonding techniques to produce detector modules for the IBL. Each wafer hosts eight such sensors, 62% of which have the required quality to be used for the IBL.

What’s next?

Following the success of the collaborative effort of the 3DATLAS R&D project, the industrialization of active-edge 3D sensors with even higher radiation hardness and a lighter structure is the next goal, in preparation for the LHC High-Luminosity Upgrade beyond 2020. Before that, 3D sensors will be used in the ATLAS Forward Physics project, where sensors will need to be placed as close to the beam as possible to detect diffractive protons at 220 m on either side of the interaction point. Apart from applications in high-energy physics – where microchannels can also be etched underneath integrated electronics substrates for cooling purposes – 3D sensor technology is used to etch through-silicon vias (TSVs) in vertical integration and to fabricate active-edge sensors with planar central electrodes, and it has potential uses in medical, biological and neutron imaging. The well defined volume offered by the 3D geometry is also ideal for microdosimetry at the cell level.

The DarkSide of Gran Sasso


A programme of experiments based on innovative detectors aims to take dark-matter detection to a new level of sensitivity.

Dark energy and dark matter together present one of the most challenging mysteries of the universe. While explaining the former seems to be within the reach of only cosmologists and astrophysicists, the latter appears to be accessible also to particle physicists. One of the most recent and innovative experiments designed for the direct detection of dark-matter particles is DarkSide, a prototype for which – DarkSide 10 – is currently being tested in the Gran Sasso National Laboratory in central Italy. The first detector for physics – DarkSide 50 – is scheduled for commissioning underground in December this year.

Astronomical observations suggest that dark matter is made of a new species of non-baryonic particle, which must lie outside the Standard Model. These particles must also be neutral, quite massive, stable and weakly interacting – hence the acronym WIMPs, for weakly interacting massive particles. One of the most promising candidates for a dark-matter particle is the neutralino, the lightest particle that is predicted in theories based on supersymmetry. However, constraints from recent measurements by experiments at CERN’s LHC suggest that WIMPs may have a different origin.

Several potential background sources can mimic the interaction between dark-matter particles and nuclei.

A powerful way of detecting WIMPs directly in the local galactic halo is to look for the nuclear recoils produced when they collide with ordinary matter in a sensitive detector. However, WIMP-induced nuclear recoils are difficult to detect. Theory indicates that they would be extremely rare, with some 10 events expected per year in 100 kg of liquid argon for a WIMP mass of 50 GeV/c² and a WIMP–nucleon cross-section of 10⁻⁴⁵ cm². They would also produce energy deposits below the order of 100 keV. Moreover, there are several potential background sources that can mimic the interaction between dark-matter particles and nuclei.
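
The quoted rate can be checked with a standard back-of-envelope estimate. The sketch below uses textbook assumptions (a local halo density of 0.3 GeV/cm³, a mean WIMP speed of about 220 km/s and the coherent A² scaling of the cross-section) and ignores nuclear form factors and the detection threshold, so it is only expected to reproduce the order of magnitude.

```python
# Order-of-magnitude check of the quoted rate (~10 events per year in 100 kg of
# argon for a 50 GeV/c^2 WIMP with a 1e-45 cm^2 WIMP-nucleon cross-section).
# Halo density, WIMP speed and the coherent A^2 scaling are textbook assumptions;
# nuclear form factors and the detection threshold are ignored.

m_chi = 50.0              # WIMP mass, GeV/c^2
sigma_nucleon = 1e-45     # WIMP-nucleon cross-section, cm^2
rho_dm = 0.3              # assumed local dark-matter density, GeV/cm^3
v_mean = 2.2e7            # assumed mean WIMP speed, cm/s (~220 km/s)

A = 40                    # argon mass number
m_nucleon = 0.939         # GeV
m_nucleus = A * 0.931     # GeV, approximate argon nuclear mass

mu_n = m_chi * m_nucleon / (m_chi + m_nucleon)            # WIMP-nucleon reduced mass
mu_A = m_chi * m_nucleus / (m_chi + m_nucleus)            # WIMP-nucleus reduced mass
sigma_nucleus = sigma_nucleon * A**2 * (mu_A / mu_n)**2   # coherent spin-independent scaling

n_chi = rho_dm / m_chi                                    # WIMP number density, cm^-3
n_targets = 100e3 / A * 6.022e23                          # argon nuclei in 100 kg

rate_per_s = n_chi * v_mean * sigma_nucleus * n_targets
print(f"~{rate_per_s * 3.15e7:.0f} events per year")      # a few per year, the order quoted
```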

Sources of background

In a typical target, there are three main sources of background at energies up to tens of kilo-electron-volts: natural β and γ radioactivity, which induces electron recoils; α decays on the surface of the target in which the daughter nucleus recoils into the target and the α particle remains undetected; and nuclear recoils produced by the elastic scattering of background neutrons. This latter process is nearly indistinguishable from the signals expected for WIMPs and requires an efficient neutron veto in the apparatus.


DarkSide is a new experiment that uses novel techniques to suppress background sources as much as possible, while also understanding them well. The programme centres on a series of detectors of increasing mass, each making possible a convincing claim for the detection of dark matter based on the observation of a few well characterized nuclear-recoil events in an exposure of several years. The design concept involves a two-phase, liquid-argon time-projection chamber (LAr-TPC) in which the energy released in WIMP-induced nuclear recoils can produce both scintillation and ionization. Arrays of photomultiplier tubes at the bottom and top of the cylindrical active volume detect the scintillation light. A pair of novel transparent high-voltage electrodes and a field cage provide a uniform drift field of about 1 kV/cm to extract the ionization produced. A reflective, wavelength-shifting lining renders the scintillation light from the argon (wavelength 128 nm) visible to the photomultipliers.

In a two-phase argon TPC, rejection of background comes from three independent discrimination parameters: pulse-shape analysis of the direct liquid-argon scintillation signal (S1); the ratio of the ionization produced in an event to its scintillation, where the former is read out by extracting the ionization electrons from the liquid into the gaseous argon phase, where they are accelerated and emit light through electroluminescence (S2); and reconstruction of the event’s location in 3D using the TPC. The z co-ordinate of the event is determined by the time delay between S2 and S1, while the transverse co-ordinates are determined from the distribution of the S2 light across the layer of photomultiplier tubes.
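
As a minimal sketch of the z reconstruction, assuming a typical electron drift velocity in liquid argon of about 2 mm/μs at ~1 kV/cm (an assumed textbook value, not a DarkSide calibration):

```python
# Sketch of the z reconstruction from the S1-S2 time delay.  The drift velocity
# is an assumed typical value for liquid argon at ~1 kV/cm, not a DarkSide
# calibration number.

DRIFT_VELOCITY_MM_PER_US = 2.0     # assumed electron drift velocity

def event_depth_mm(t_s1_us, t_s2_us):
    """Depth of the interaction below the liquid surface, in mm."""
    return DRIFT_VELOCITY_MM_PER_US * (t_s2_us - t_s1_us)

print(event_depth_mm(t_s1_us=0.0, t_s2_us=50.0))   # -> 100.0 mm below the surface
```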

As in other experiments searching for rare events, DarkSide’s detectors will be constructed using materials with low intrinsic radioactivity. In particular, the experiment uses underground argon with extremely low quantities of ³⁹Ar, which is present in atmospheric argon at levels of about 1 Bq/kg as a result of the interaction of cosmic rays, primarily with ⁴⁰Ar. The DarkSide collaboration has developed processes to extract argon from underground gas wells, where the proportion of ³⁹Ar is low. A particularly good source of underground argon is the Kinder Morgan Doe Canyon Complex in Colorado. The CO2 gas extracted there contains about 600 ppm of argon. The DarkSide collaboration has operated an extraction facility at the Kinder Morgan site since February 2010; it has to date extracted some 90 kg of underground depleted argon and subsequently distilled 23 kg to about 99.99% purity. (The throughput is about 1 kg/day, with 99% efficiency.) Studies of the residual ³⁹Ar content of the distilled gas with a low-background detector at the Kimballton Underground Research Facility, Virginia, give an upper limit for the ³⁹Ar content equivalent to 0.6% of the ³⁹Ar in atmospheric argon.

It is not only the argon that has to have low intrinsic radioactivity. Nuclear recoils produced by energetic neutrons that scatter only once in the active volume form a background that is, on an event-by-event basis, indistinguishable from dark-matter interactions. Neutrons capable of producing these recoil backgrounds are created by radiogenic processes in the detector material. In detectors made from clean materials, the dominant source of the radiogenic neutrons is typically the photodetectors, so ultralow background photodetectors are another important goal for DarkSide. A long-term collaboration with the Hamamatsu Corporation has resulted in the commercialization of 3-inch photomultiplier tubes with a total γ activity of around only 60 mBq per tube, with a further 10-fold reduction foreseen in the near future.

To measure and exclude the neutron background produced by cosmic-ray muons, the DarkSide TPC will be deployed within an active neutron veto based on liquid scintillator, which will in turn be deployed within 1000 m³ of water in a tank 10 m high and 11 m in diameter, previously used in the Borexino Counting Test Facility at Gran Sasso. The liquid-scintillator neutron veto is a unique feature of the DarkSide design and is filled with ultrapure, boron-loaded organic scintillator that has been distilled using the purification system of the Borexino experiment. The water serves as a Cherenkov detector to veto muons. Monte Carlo simulations suggest that with this combined veto system, the number of neutron events generated by cosmic rays at the depth of the Gran Sasso Laboratory should be negligible, even for exposures of the order of tonne-years.

The DarkSide programme will follow a staged approach. The collaboration has been operating DarkSide 10, a prototype detector with a 10 kg active mass, in the underground laboratory at Gran Sasso since September 2011. This has been a valuable test bed during the construction of the veto system. It has allowed the light-collection, high-voltage and TPC field structures – and the data-acquisition and particle-discrimination analysis systems – to be optimized using γ and americium-beryllium sources. The first physics detector in the programme, DarkSide 50, should be deployed inside the completed veto system in the Gran Sasso Laboratory by the end of 2012. Looking forward to the second generation, upgrades to the underground argon plants are planned, and the nearly completed veto system has been designed to accommodate a DarkSide-G2 detector, which will have a fiducial mass of 3.5 tonnes.

CO2 cooling is getting hot in high-energy physics


Cooling with carbon dioxide has benefits that are making it the preferred choice for the latest generation of silicon detectors.

Efficient cooling systems that employ relatively small amounts of material – i.e. “low-mass” systems – are becoming increasingly important for the new silicon detectors being used in high-energy physics. One solution that is gaining popularity is evaporative cooling with carbon dioxide (CO2). Currently, two detectors are cooled this way: the Vertex Locator (VELO) in the LHCb detector and the silicon detector of the orbiting AMS-02 space experiment on board the International Space Station. The CO2 cooling system for VELO has been working since 2008 and the one on AMS has operated in space since May 2011. Both systems have so far functioned without any major issues and both are stable at their design cooling temperatures: –30°C for the VELO and 0°C for AMS.

The benefit of using CO2 cooling is that much smaller cooling pipes can be used than with other methods employed to cool particle detectors. The secret of CO2 lies in the fact that evaporation takes place at much higher pressures than for other two-phase refrigerants. In general, the volume of vapour created stays low because it remains compressed, which means that it flows more easily through small channels. The evaporation temperature of high-pressure CO2 in small cooling lines is also more stable, because the pressure drop has a limited effect on the boiling pressure. Savings in the mass of the cooling hardware in the detector when using CO2 can be as much as an order of magnitude compared with other methods used to date.

The thermal performance of a cooling tube is based on two components: the temperature gradient along the tube caused by the changing boiling pressure, and the temperature gradient from the wall of the tube into the fluid – which depends on the heat-transfer coefficient. It is difficult to compare different fluids with each other because the combination of these two performance indicators leads to different results for different tube geometries, heat-load densities and cooling temperatures. To show the benefits of CO2, a specific case is plotted in figure 2 for a 1-m-long tube with a heat load of 500 W at –20°C. As efficiency in terms of the amount of matter in the cooling system is the driving factor in particle detectors, the cooling efficiency is plotted in terms of thermal conduction per cooling-tube volume. The benefit of using CO2 is clear, especially in tubes with a small diameter. The general tendency for high-pressure fluids to have the best performances is also clear – only ammonia in this example seems to deviate from this trend.
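
The two contributions can be combined in a simple estimate such as the sketch below; every number in it is a placeholder chosen for illustration, not a value taken from figure 2.

```python
# The two contributions to the temperature difference in an evaporative cooling
# tube, as described above: the saturation-temperature change caused by the
# pressure drop along the tube, and the wall-to-fluid gradient set by the
# heat-transfer coefficient.  All numbers are placeholders for illustration only.

import math

heat_load_w = 500.0          # W over a 1 m tube (the case used in the text)
tube_diameter_m = 2e-3       # assumed inner diameter
tube_length_m = 1.0          # from the text
htc_w_per_m2k = 1e4          # assumed two-phase heat-transfer coefficient
dtsat_per_bar_k = 1.0        # assumed slope of the CO2 saturation curve near -20 C
pressure_drop_bar = 0.2      # assumed pressure drop over the tube

wetted_area_m2 = math.pi * tube_diameter_m * tube_length_m
dt_wall = heat_load_w / (htc_w_per_m2k * wetted_area_m2)   # wall-to-fluid gradient
dt_sat = dtsat_per_bar_k * pressure_drop_bar               # gradient along the tube

print(f"wall-to-fluid: {dt_wall:.1f} K, along the tube: {dt_sat:.1f} K, "
      f"total: {dt_wall + dt_sat:.1f} K")
```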

Apart from its outstanding thermal performance, CO2 is also a practical fluid. It is neither flammable nor toxic, although it can asphyxiate if released in large quantities. In general, the small systems used in laboratories contain less CO2 than a standard fire extinguisher and are not dangerous if the contents were to leak out. The larger systems used in detectors, however, must be designed with proper safety precautions. Some additional benefits of using CO2 are the low utilization costs, the fact that it is a natural refrigerant and, importantly, its compatibility with sensitive instruments – contact with CO2 is in general not damaging to electronics or other equipment. CO2 does not exist as a liquid at ambient pressure, and when released it flashes into a mixture of gas and solid CO2 (dry ice). CO2 evaporates from its liquid phase between –56°C and +31°C, and a practical range of application is from –45°C to +25°C.

For LHCb and AMS, a special CO2-cooling method has been developed that is different from ordinary two-phase cooling systems. The best performance of the evaporative CO2 method is achieved with an overflow of liquid, rather than evaporating the last drop. A liquid-pumped system with external cooling is preferable to a compressor-driven vapour system of the kind used in refrigerators. A big advantage is that a liquid-pumped CO2 system is relatively simple, which is useful when integrating it into a complex detector. The CO2 condensing can be done externally using a standard industrial cooler.

The method that has been developed for cooling detectors is called 2PACL, for 2-Phase Accumulator Controlled Loop. Accumulator control is a proven method in existing two-phase cooling systems for satellites, and the 2PACL method was initially developed for AMS by Nikhef in an international collaboration led by the Netherlands National Aerospace Laboratory, NLR. The novelty is precise pressure regulation with a vessel containing a two-phase CO2 mixture. The benefit of using this system with detectors is that the cooling plant containing all of the active components can be set up some distance away from the inaccessible detector, leaving only tubing of small diameter inside or near the detector.


Figure 3 shows the thermodynamic cycle for the 2PACL system in a pressure–enthalpy diagram – a useful representation of the cycle in evaporative-cooling systems. Figure 4 shows the 2PACL principle used in detectors, with the node numbers corresponding to those used in figure 3. For AMS, the external cooler was replaced by cold radiator panels mounted on the outside of the experiment (see figure 1).


The 2PACL concept was also successfully applied by Nikhef for cooling the LHCb’s VELO with CO2 and it has become the baseline concept for future detectors that are under development. The pixel detectors for ATLAS and CMS phase-1 upgrades are being designed to be cooled by the 2PACL CO2 system and the same technology is also under consideration for the silicon detectors for the full phase-2 upgrades for ATLAS and CMS. Elsewhere, CO2 cooling is under development for the Belle-2 detector at KEK and the IL-TPC detector for a future linear collider. Industrial hi-tech applications are also showing interest in the technique as an alternative cooling method.


Currently, CERN and Nikhef are developing small, laboratory CO2 coolers for multipurpose use (figure 5). The units, called TRACI, for Transportable Refrigeration Apparatus for CO2 Investigation, are relatively low cost and optimized for a wide operating range and user-friendly operation. Five prototypes have been manufactured and the hope is that results from these units will lead to a design that can be outsourced for manufacture by external companies. In this way, the many research laboratories investigating CO2 for their future detectors could be supplied with test equipment.
