Superkamiokande resumes operation

Just over a year ago, an accident at the Superkamiokande detector in Japan temporarily halted neutrino experiments there. Some 60% of the detector’s 11,200 photomultipliers were destroyed. However, the installation of 5200 new photomultipliers is now complete, and data-taking resumed on 8 October 2002.

The installation of the photomultipliers was complete by the end of September, allowing the detector to be refilled with water starting on 3 October. Data-taking resumed as soon as the first row of photomultipliers was immersed, and the refill was complete by mid-December.

To prevent a similar accident from happening again, the new photomultipliers have been encased in rigid bubbles. Their faces are covered with transparent acrylic, while the rest is made of fibre-reinforced plastic.

LANL develops bespoke cavities for low-energy applications

Superconducting (SC) RF cavities are becoming common in accelerators for high-energy and nuclear physics, and the technologies needed to obtain high fields and high-quality factors in elliptical cavities for electron acceleration have come close to maturity, for example in the TESLA project. However, because mechanical weakness causes some difficulty in adopting elliptical cavities for lower-velocity particles, there is a demand for developing different types of SC cavities, in particular to reduce the costs of future low-energy facilities, such as spallation neutron sources, rare isotope accelerators, and accelerator-driven waste transmutation systems.

One of the promising candidates is the spoke cavity, invented in the late 1980s by Jean Delayen and Ken Shepard at the Argonne National Laboratory. With this, it is easier to extend the acceleration length of half-wave coaxial resonators by adding more spokes in one cavity. A benefit is that for the same frequency a spoke cavity is about half the size of an elliptical one; conversely, a spoke cavity would operate at half the frequency of an elliptical cavity of similar size. This increases the active length by a factor of two, and allows an operating temperature of 4.5 K, with resulting savings in the installation and operating cost of the cryoplant.
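The factor of two in active length follows from a standard design rule: the accelerating gap of a multi-gap cavity scales as βλ/2, so halving the frequency (doubling the wavelength) doubles the gap length for the same particle velocity. A minimal sketch, using the LANL design values of 350 MHz and β = 0.175 from the text (the 700 MHz comparison point is illustrative):

```python
# Gap length ~ beta * lambda / 2: halving the frequency doubles the
# active length per gap for a given particle velocity.
c = 299792458.0   # speed of light, m/s
beta = 0.175      # fraction of light velocity (LANL design value)
for f_mhz in (700, 350):
    lam = c / (f_mhz * 1e6)   # free-space wavelength, m
    gap = beta * lam / 2      # accelerating gap length, m
    print(f"{f_mhz} MHz: gap ~ {gap * 100:.1f} cm")
# 700 MHz: gap ~ 3.7 cm; 350 MHz: gap ~ 7.5 cm (twice as long)
```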

In 2002, the Los Alamos National Laboratory (LANL) began developing spoke cavities as part of its Advanced Accelerator Applications (AAA) programme to develop technology for an accelerator-driven waste transmutation system. The LANL team has designed a 350 MHz two-gap spoke cavity, with β (fraction of light velocity) = 0.175, and procured two cavities from the Italian firm Zanon SpA. The diameter of the cavity is 40 cm, the beam aperture is 5 cm and the accelerating length is 10 cm. The two cavities have reached 12.9 MV/m and 13.5 MV/m respectively at 4 K, exceeding the present AAA design goal of 7.5 MV/m by up to 80%. This will help achieve the very high reliability required for the waste transmutation application. Although there are still issues to be discussed, such as drive couplers, multipacting and higher-order modes, this result has encouraged the LANL team to strive for further development of multispoke cavities, which may also prove to be a cheaper and better option for medium velocity (β ~ 0.6) particles.

Canadians set record for long-distance data transfer

A Canadian team has succeeded in transferring 1 TByte of data over a newly established “lightpath” extending 12,000 km from TRIUMF in Vancouver to CERN in Geneva in under 4 h – a new record rate of 700 Mbps on average. Peak rates of 1 Gbps were seen during the tests, which took place in conjunction with the iGRID 2002 conference held in Amsterdam in late September. The previous record for a transatlantic transfer was 400 Mbps.
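The quoted figures are easy to cross-check with a back-of-the-envelope calculation (assuming decimal units, i.e. 1 TByte = 10^12 bytes):

```python
# Sanity check of the quoted transfer rate.
data_bits = 1e12 * 8    # 1 TByte in bits
duration_s = 4 * 3600   # 4 hours in seconds
rate_mbps = data_bits / duration_s / 1e6
print(f"{rate_mbps:.0f} Mbps")  # 556 Mbps if the transfer took the full 4 h
# At the reported 700 Mbps average, the transfer finishes in under 4 h:
hours_at_700 = data_bits / 700e6 / 3600
print(f"{hours_at_700:.1f} h")  # 3.2 h
```

Both numbers are consistent with "1 TByte in under 4 h at 700 Mbps on average".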

The achievement is particularly notable because the data were transferred from “disk to disk”, making it a realistic representation of a practical data transfer. The data started on disk at TRIUMF and ended up on disk at CERN, where in principle they could be used for physics analysis. The data transferred were the result of Monte Carlo simulations for the ATLAS experiment, which is being constructed at CERN to take data at the Large Hadron Collider.

The transfer used a new technology for network data transfer, called a lightpath. Lightpaths establish a direct optical link between two remote computers, essentially positioning them in a “local-area network” that is anything but local. This avoids the need for more complicated arbitration (or routing) of the network traffic. The link used here to connect TRIUMF and CERN is the longest-known single-hop network.

Karlsruhe Grid computing centre is inaugurated

The inauguration colloquium for the Grid Computing Centre Karlsruhe (GridKa) was held on 30 October at the Forschungszentrum Karlsruhe (FZK). FZK hosts the German Tier 1 centre for the Large Hadron Collider (LHC) experiments (ALICE, ATLAS, CMS and LHCb), as well as four other particle physics experiments (BaBar at the Stanford Linear Accelerator Laboratory, CDF and D0 at Fermilab, and COMPASS at CERN).

To cope with the computational requirements of the LHC experiments, a worldwide virtual computing centre is being developed – a global computational grid of tens of thousands of computers and storage devices. About one-third of the capacity will be at CERN, with the other two-thirds in regional computing centres spread across Europe, America and Asia. At the end of 2001, the German HEP community proposed FZK as the host of the German regional centre for LHC computing, and as the analysis centre for BaBar, CDF, D0 and COMPASS. FZK, a German national laboratory of similar size to CERN, accepted the challenge and established GridKa.

After just nine months a milestone has been reached – more than 300 processors and about 40 TByte of disk space are available for physicists from 41 research groups of 19 German institutes. The application software of the eight experiments, as well as grid middleware, has been installed. BaBar was the pilot user and is still the main customer. CDF and D0 have started to use GridKa for the analysis of Tevatron data. During the summer of 2002, ATLAS and ALICE used the centre for their worldwide distributed data challenges. The University of Karlsruhe CMS group uses the centre for analysis jobs.

On 29-30 October, the first GridKa users’ meeting was held. On the first day, more than 50 participants attended tutorials about grid computing, the Globus toolkit, software of the European DataGrid project, and the ALICE grid environment AliEn. The second day continued with presentations on GridKa and the status and plans of the experiments. An important contribution was a talk by CERN’s Ingo Augustin, who discussed the “European Data Grid: First Steps towards Global Computing”.

The highlight of the users’ meeting was the inauguration colloquium for GridKa, with almost 200 representatives from science, industry and politics. After an introduction by Reinhard Maschuw of FZK, there were talks about grid computing by Hermann Schunck of the German Federal Ministry of Education and Research, Marcel Kunze of FZK, Tony Hey representing the UK e-Science Initiative, Siegfried Bethke of the Max Planck Institute in Munich, Michael Resch of the University of Stuttgart, Philippe Bricard of IBM France, and CERN’s Les Robertson. The central theme of all of the talks was the conviction that grid computing will be an important part of the computing infrastructure of the 21st century. The particle physics community will drive the first large-scale deployment of a worldwide grid, which will have a significant impact on future scientific and industrial applications.

Looking forward to physics at Tevatron Run II

Tevatron Run II is under way at Fermilab, with upgraded detectors addressing some of the most important questions in particle physics. What is the structure and what are the symmetries of space-time? Why is the weak force weak? What is cosmic dark matter? Why is matter-antimatter symmetry not exact? Until CERN’s Large Hadron Collider (LHC) turns on, the Tevatron is the world’s only source of top quarks. It is the only place where we can directly search for supersymmetry, for the Higgs boson, and for signatures of additional dimensions of space-time. And it is also the most likely place to directly observe something totally unexpected.

After a somewhat frustrating year, recent progress with the accelerator has been gratifying. Records are regularly being broken for peak and weekly luminosities. The complex is now performing well in excess of its Run I (1992-1995) bests, and is delivering increased energy (1.96 TeV). These improvements have come from well understood modifications, and there is a detailed plan for the next steps. As Steve Holmes, Fermilab’s associate director for accelerators, has said: “There is no silver bullet to be found.” Rather, we have to make a large number of 10-15% improvements; Holmes notes that (1.15)^10 ≈ 4. The major areas to be tackled are: transfer and acceleration efficiencies; emittance dilution; beam lifetimes in the Tevatron before acceleration; and the role of long-range interactions between the beams. Significant help from other laboratories and from physicists in other parts of Fermilab has also been important. Fermilab and CERN have set up an exchange programme; Frank Schmidt and Frank Zimmermann from CERN are helping Run II to become a success, and in the future, Fermilab will send machine physicists to CERN to help with LHC commissioning.
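Holmes' arithmetic is worth spelling out: many modest gains compound multiplicatively into a large one.

```python
# Ten successive 15% improvements compound to roughly a factor of four.
factor = 1.15 ** 10
print(f"{factor:.2f}")  # 4.05
```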

The CDF and D0 detectors are both working well, emphasizing data-taking efficiency, and are recording physics-quality data. Performance of the tracking, calorimeter and muon detectors is good, and beam-induced backgrounds are under control. CDF is now running a trigger for B-mesons using displaced tracks from the silicon detector. This is a first at a hadron collider, and has already yielded some very impressive heavy flavour samples. Processing the huge quantities of data from modern experiments is a challenge in itself, as is making them available to the large and widely distributed collaborations. There is a natural synergy between these challenges and current ideas about “Grid” computing. Both CDF and D0 are already making something like a Grid a reality, using a Fermilab-developed data distribution system called SAM to send their data out to their collaborators. They are also exploring ways for remote physicists to assist in monitoring detector operations.

Physics in Run II

The physics goals of Run II involve direct searches for as yet unknown particles and forces, including both those that are predicted or expected (like the Higgs boson and supersymmetry) and those that would come as a surprise. At the same time, we confront the Standard Model through precise measurements of the strong interaction, the quark-mixing matrix, and the electroweak force and properties of the W boson, the Z boson and the top quark. The experiments already have first results in all of these areas. Given the amount of data collected so far in Run II, they are not yet competitive with the results already published from Run I or the experiments at CERN’s LEP electron-positron collider, but they show that all the analysis tools are in place and ready.

As the world’s highest-energy collider, the Tevatron is the most likely place to directly discover a new particle or force. We know the Standard Model is incomplete; theoretically the most popular extension is to make it a part of a larger picture called supersymmetry (which is a basic prediction of superstring models). Here each known particle has a so-far unobserved and more massive partner, to which it is related through a change of spin. If it exists, the lightest supersymmetric particle would be stable, and vast numbers of them would pervade the universe, explaining the astronomers’ observations of dark matter. The Tevatron is the only place now available to directly search for supersymmetry. In Run II, the opportunities for discovery include squarks and gluinos, in final states with missing energy (ETmiss) and jets (and lepton(s)); charginos and neutralinos through multilepton final states; gauge-mediated SUSY in ETmiss + photon(s) channels; stop and sbottom; and R-parity violating models. Searches for other new phenomena include leptoquarks, dijet resonances, new heavy W′ and Z′ bosons, massive stable particles, and monopoles.

The Tevatron allows us to experimentally test the new and exciting idea that gravity may propagate in more than four dimensions of space-time. If there are extra dimensions that are open to gravity, but not to the other particles and forces of the Standard Model, then we could not perceive them in our everyday lives. However, particle physics experiments at the TeV scale could see signatures such as a quark or gluon jet recoiling against a graviton, or indirect indications like an increase in high-energy electron-pair production. These studies use the Tevatron to literally measure the shape and structure of space-time.

While it is good to be guided by theory, one should also remain open to the unexpected. Therefore both experiments have developed quasi-model-independent (signature-based) searches, which look for significant deviations from the Standard Model. In the Run I dataset, no significant evidence for new physics was found. Perhaps revealing different psychologies, D0 has quantified its agreement with the Standard Model at the 89% confidence level, while CDF has preferred to highlight some potential anomalies as worth pursuing early in Run II.

The experiments have already embarked on a number of searches using Run II data. Work has started on understanding the ETmiss distribution in multijet events as a prelude to squark and gluino searches; trilepton candidates are also being accumulated. At D0, a gauge-mediated SUSY search has set a limit on the cross section for pp̄ → ETmiss + γγ. Also at D0, virtual effects of extra dimensions are being sought in e+e−, μ+μ− and γγ final states, and limits on the scale of new dimensions at the 0.9 TeV level can already be set. A search for leptoquarks decaying to electron + jet has been carried out. So far, none of the cross sections or mass limits is better than published Run I results, but it serves as a demonstration that the pieces are all in place.

In the Standard Model, the weak force is weak because the W and Z bosons interact with a field (the Higgs field) that permeates the universe. This same field gives masses to all the fundamental fermions. It should be possible to excite this field and observe its quanta – the long sought-after Higgs boson. It is the last piece of the Standard Model, and also the key to understanding any beyond-the-Standard-Model physics like supersymmetry. Finding it is a very high priority. Right now, we are developing the foundations needed for Higgs physics in Run II: good jet resolution; high b-tagging and trigger efficiencies; and a good understanding of all the backgrounds. One area that can be attacked with relatively modest luminosities in 2003 is to search for one or more of the extended suite of Higgs bosons that are predicted in supersymmetric models. Associated production of a SUSY Higgs together with a bb̄ pair is enhanced at high tan β, and we will be able to improve on present limits with only a few hundred inverse picobarns.

In Run II, we will complement our direct searches for new phenomena with indirect probes. New particles and forces can be seen indirectly through their effects on electroweak observables. The tightest constraints will come from improved determination of the masses of the W boson and the top quark. Both experiments now have preliminary results from their Run II samples of W and Z candidates. They have measured the cross sections at the Tevatron’s new centre of mass energy of 1.96 TeV and used the ratio of the W to the Z to indirectly extract the W width. CDF has also taken a first look at the forward-backward asymmetry in e+e− production in Run II. Currently, the W mass is known to be mW = 80 451 ± 33 MeV; the measurement is dominated by LEP data. Our Run I results fixed the W mass at the 60 MeV level, but it will take a Run II dataset of order 1 fb-1 before we can significantly improve the world knowledge of mW – not a short-term prospect. Given 2 fb-1 we will be able to drive the uncertainty down to the 25 MeV level per experiment, with an ultimate capability of 15 MeV per experiment.

The Tevatron collider is the world’s only source of top quarks. The top quark was discovered by CDF and D0 in 1995 on the basis of a few tens of events – they are now gearing up to study top quarks in the thousands. The top is the heaviest known quark and, alone among quarks, couples strongly to the Higgs. We need to test its properties and decays with sufficient precision to confirm the Standard Model – is the top really top? Here we can look forward to significant improvements in the short term, because the Run I dataset was so statistically limited. Both D0 and CDF are on the road to “rediscovering” top for the spring 2003 conferences, and both experiments have candidate events. Per inverse femtobarn, we will collect roughly 500 b-tagged top-pair events in the lepton + jets final state. As well as improving the cross-section and mass measurements, we will look for top-antitop spin correlations which can tell us if the top is really the spin-1/2 object it should be, and observe single top production (which allows a model-independent measurement of the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix element |Vtb|). New techniques are also being developed: D0 has a new, preliminary determination of the top mass from Run I data that uses more information per event, obtains a better discrimination between signal and background than the published 1998 analysis, and improves the statistical error by the equivalent of a factor 2.4 increase in the number of events. Run II will also test beyond-the-Standard-Model theories that predict unusual top properties, states decaying into top, and anomalously enhanced single top production.
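The "factor 2.4 in events" can be translated into an error reduction: since statistical errors scale as 1/√N, the equivalent gain in precision is the square root of the gain in event count.

```python
import math

# Statistical error ~ 1/sqrt(N): an analysis improvement worth a factor 2.4
# in effective event count shrinks the statistical error by sqrt(2.4).
gain_events = 2.4
error_ratio = 1 / math.sqrt(gain_events)
print(f"{error_ratio:.2f}")  # 0.65 -- about a 35% smaller statistical error
```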

The mixing between the three generations of quarks results in subtle violations of the so-called CP symmetry relating particles and antiparticles. Understanding this symmetry will help to explain why the universe is filled with matter, not antimatter. In the decays of B-mesons, these symmetry violations can be large, and so B-hadrons have become an important laboratory to explore the “unitarity triangle”, which relates the elements of the CKM quark-mixing matrix. In Run II, we want to confront the CKM matrix in ways that are complementary to the electron-positron B-factories. CP violation is now established in the B system through the decay Bd → J/ψ Ks. The measured mixing angle is consistent with the Standard Model, but cannot exclude new physics by itself. The BaBar and Belle experiments can and will do much more with their data, but the Tevatron can uniquely access the Bs meson, which is not produced at the B-factories, and has therefore been called the “El Dorado” for hadron-collider B-physics. By measuring the mixing rate between Bs and B̄s, we can determine the length of one of the sides of the unitarity triangle and complement the B-factories’ measurements of its angles. CDF expects to be sensitive to Standard Model mixing with a few hundred inverse picobarns. It will also be interesting to see if there is sizeable CP violation in Bs → J/ψ φ (it is expected to be small), while the decay Bs → KK at the Tevatron complements Bd → ππ measured at the B-factories. Together they can pin down the triangle angle γ. There are many other opportunities, such as Λb properties and searches for rare decays. CDF already has most impressive results from Run II, building on its Run I experience together with new detector capabilities (silicon vertex trigger and time-of-flight detector). Lepton-triggered signals for B± → J/ψ K±, B0 → J/ψ K*0 and Bs → J/ψ φ are seen, while using the silicon vertex trigger, the purely hadronic modes B± → D̄0π → Kππ and B → hadron hadron are being recorded. We can also look forward to CDF exploiting an enormous sample of charm mesons. In D0, the tools are being put in place for a B-physics programme. The inclusive B lifetime has been measured, and B-mesons are being reconstructed. D0 does not exploit purely hadronic triggers, but benefits from its large muon acceptance, forward tracking coverage, and ability to exploit J/ψ → e+e−.

No one doubts that quantum chromodynamics (QCD) describes the strong interaction between quarks and gluons. Its effects are all around us – it is the origin of the masses of hadrons, and thus of the mass of stars and planets. This doesn’t mean it is an easy theory to work with. As well as using hadron colliders to test QCD itself, we find that it is so central to the calculation of both signal and background processes that we need to make sure we can have confidence in our ability to make predictions in this framework. We need to resolve some outstanding puzzles and ensure we understand how to calculate the backgrounds to new physics.

Both CDF and D0 have now measured jet-energy distributions from Run II. CDF are making use of their new forward calorimetry to cover the whole range of pseudorapidity. Jet calibrations are not yet final, but already we see events with transverse energies beyond 400 GeV. With the full Run II dataset this will reach as far as 600 GeV, allowing us to pin down the high-energy behaviour of the cross section, and thus the gluon content of the proton (which remains poorly determined at high momentum and a source of uncertainty). Another issue provoking much discussion is the choice of the algorithm used to define jets. D0’s Run I data have shown that the two most popular jet definitions (the geometrically based “cone” and the momentum-based recombination “kT” algorithms) yield different cross sections for collider data; while qualitatively as expected, quantitatively it is not yet clear whether the differences are understood. We will try to address this question with early Run II data.

Run I left many unanswered questions about heavy flavour (charm and bottom) production. Resolving these is important, because many new particles result in heavy flavour signatures. The inclusive B-meson production cross section lies significantly above the QCD prediction, though it can be made to fit better using resummation and retuned fragmentation functions (from LEP data). For charmonium, the measured cross section requires a large colour-octet component, but that is not consistent with the observed J/ψ polarization. The CDF secondary vertex trigger in Run II is working beautifully, and the resulting huge charm and bottom samples will allow these puzzles to be explored in much more detail. D0 now has preliminary Run II J/ψ and muon + jet cross sections which are the first steps in measuring the charmonium polarization (and thus production process) and the b-jet cross section.

Another QCD-related puzzle is hard diffraction. In these events, a high-momentum-transfer collision occurs, but one of the incoming beam particles appears to leave the collision intact, instead of being destroyed in the process. This observation is rather surprising and needs to be pinned down better and related quantitatively to similar phenomena observed at HERA. Both CDF and D0 have new instrumentation for diffractive physics in Run II. This will allow us to test some of the basic assumptions that have gone into earlier studies, and will provide a sanity check for ideas of Higgs production through this mechanism at the LHC.

Planning for the future

At the same time as the Fermilab accelerator experts have been working to improve Tevatron operations, they have been trying to incorporate the lessons learned into a solid plan for the future. The planning for the accelerator complex is in two phases. The first focuses on US fiscal year 2003, which ends in September 2003. A full plan and schedule are in place. The US Department of Energy (DOE) recently convened a high-level international review committee, chaired by David Sutter of the DOE, to look at this plan. The committee complimented the laboratory on its recent luminosity progress and its focus on the colliding beam programme, and reported that the goals for luminosity in 2003 are highly likely to be met. By summer 2003, each experiment should have recorded around 200 pb-1 of Run II data (almost twice the Run I dataset). The centrepiece will be a greatly increased top quark sample, thanks to the higher beam energy and the much improved b-tagging capabilities of the detectors. A first look at Bs mixing will be possible, together with lifetimes and branching ratio measurements from the B, Bs, Lb and charm samples. Jet distributions at the highest energies will constrain proton structure, and searches will follow up on Run I anomalies and extend the Run I reach for many extensions to the Standard Model.

The second phase covers 2004 and beyond. It is now clear that it will take somewhat longer than had been anticipated to accumulate the large datasets ultimately foreseen for Run II: such is the price of realism. As long as the Tevatron remains the world’s highest-energy collider, it is a unique facility that must be exploited to its limit; this will remain true until the LHC experiments start producing competitive physics results. We are preparing to run the Tevatron until the end of the decade in order to fully realise its physics potential. The Run II physics programme is a broad and deep one, and will answer crucial questions about the universe. There is no threshold at which this starts (figure 1). There is compelling physics to be done each year and with each doubling of luminosity, starting now with a few hundred inverse picobarns, and to the end of the decade with multifemtobarn data sets. To explore the 5-15 fb-1 domain calls for upgrades to the CDF and D0 detectors. Primarily, these involve new trigger systems to handle more than 10 interactions per crossing at the expected luminosity, and new silicon detectors that make use of LHC R&D to sustain the high radiation doses. These upgrades were successfully reviewed by the DOE in September, and are now moving towards approval, with installation planned for 2005-2006.

In summary, Run II at the Tevatron provides extraordinary opportunities for the advancement of our knowledge of particle physics. A measure of the increased sensitivity, using top quark production as an example, is shown in Table 1. With a factor 500 increase in sensitivity, the CDF and D0 collaborations are eager to thoroughly explore the energy frontier before passing the baton to the ATLAS and CMS experiments at the LHC.

As Jay Marx from Lawrence Berkeley National Laboratory pointed out at the accelerator review, Run II is a marathon and not a sprint. The combination of high accelerator energy, excellent detectors, enthusiastic collaborations and data samples that are doubling every year guarantees interesting new physics results at each step. Each step answers important questions, and each leads on to the next. This is how we will lay the foundations for a successful LHC physics programme – and hopefully a linear collider to follow.

Radiation hard silicon detectors lead the way

Silicon is the material of choice for the tracking detectors being made for experiments at CERN’s Large Hadron Collider (LHC). Silicon offers reliable, fast and cheap detectors. Segmented detectors, processed using microelectronic planar technology, have been used successfully to precisely image the tracks of charged particles in many experiments. The operation of such devices, however, is compromised when they are irradiated with high fluences (above 5 × 10^14 particles/cm^2) of neutrons or high-energy hadrons, corresponding to about 5 years of LHC operation at a luminosity of 10^34 cm^-2 s^-1 for detectors closest to the beam. Radiation-induced defects are introduced into the crystal lattice, completely transforming its electrical properties. A dramatic result of these changes is the loss of charge released by a traversing particle, produced for example by a proton-proton interaction in the LHC. This loss of information compromises the reconstruction of important events like the secondary vertices that would be produced by the decay of Higgs bosons into b quarks and antiquarks. Studies by a number of CERN-based research and development collaborations carried out in the past 10 years have helped physicists to understand and reduce radiation damage effects in silicon. The fundamental results obtained by these collaborations have contributed much to approximately doubling the useful life of trackers in the LHC’s ATLAS, CMS, ALICE and LHCb experiments to almost 10 years of operation.

An order-of-magnitude increase of the luminosity after the initial phase of the LHC experiments is the natural next step to improve the statistics for rare events. This would involve a reduction of the proton-proton bunch-crossing interval to 12.5 ns, and an increase of beam intensity producing a consequent increase of the radiation rate (Gianotti et al. 2002). A new form of silicon sensor whose fabrication makes use of micromachining technology as well as the standard processes of planar technology, used for many years both for sensors and their readout chips, can satisfy these severe requirements.

3D solution

3D and planar detector design parameters

3D sensors proposed by Sherwood Parker of the University of Hawaii and colleagues in 1995 (figure 1), initially to solve the problem of charge loss in gallium arsenide detectors, have been fabricated using silicon (Parker et al. 1997). Active-edge 3D sensors, proposed in 1997 and also indicated in figure 1, should have efficient charge collection to within a few microns of their physical edges (Kenney et al. 2001). In this new configuration, the p+ and n+ electrodes penetrate through the silicon bulk, rather than being limited to the silicon wafer surface.

The advantages of 3D design, compared with the traditional planar design, are shown schematically in figure 2. Since the electric field is parallel (rather than orthogonal) to the detector surface, the charge collection distance can be several times shorter, the collection time considerably faster, and the voltage needed to extend the electric field throughout the volume between the electrodes (full depletion) an order of magnitude smaller, for 300 μm thick silicon (table 1).
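The order-of-magnitude gain in depletion voltage follows from the standard abrupt-junction formula V_dep = e·N_eff·d²/(2ε): the voltage grows with the square of the electrode spacing d. A minimal sketch with illustrative numbers (the effective doping value is an assumption, not from the article):

```python
# Full-depletion voltage scales as the square of the electrode spacing.
e = 1.602e-19             # elementary charge, C
eps = 11.9 * 8.854e-12    # permittivity of silicon, F/m
n_eff = 1e12 * 1e6        # assumed effective doping: 1e12 cm^-3, in m^-3

def v_dep(spacing_um):
    d = spacing_um * 1e-6
    return e * n_eff * d**2 / (2 * eps)

print(f"planar, 300 um drift distance: {v_dep(300):.0f} V")  # ~68 V
print(f"3D, ~50 um electrode spacing:  {v_dep(50):.1f} V")   # ~1.9 V
```

Shrinking the collection distance by a factor of six cuts the depletion voltage by a factor of 36, consistent with the "order of magnitude smaller" quoted in the text.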

This technology has many potential applications, for example in extreme radiation environments, luminosity monitors, and medical and biological imaging (Kenney et al. 2001, Parker et al. 2001).

Radiation effects in silicon detectors

Insulating layers on the surface of detectors charge up when traversed by charged particles. This can be tolerated. Damage to the bulk by both charged and neutral particles, however, is more difficult to combat. It takes just 25 eV to knock a silicon atom out of its lattice point. This leads to the formation of defects, in some instances involving impurity atoms in the material. Defects can be electrically active, leading to increased space-charge, leakage current and charge trapping. Increased space-charge prevents the electric field from penetrating the material unless high bias voltages are used. Moreover, radiation-induced space-charge can increase after the radiation source is removed, a phenomenon called reverse annealing. It has proven necessary to cool the detectors to between -10°C and -20°C to reduce the leakage current and reverse annealing. The addition of oxygen into wafers has been found to reduce the space-charge build-up and improve reverse annealing for damage caused by charged particles. The use of multiguard structures, where biased guard rings surround the active detector area, has allowed high-voltage operation at the expense of larger inactive regions at the detector edge. Guard structures are used for achieving long-term stability, to reduce the current in the active area, and to prevent avalanche breakdown when high bias voltage is required. Cooling below 200 K also reduces the radiation-induced space-charge, which aids detector operation – a phenomenon called the Lazarus effect (Palmieri et al. 1998).


The electric field inside a silicon detector must be as large as possible: the risk of carrier trapping decreases as the field increases (figure 3). The effective drift length of a carrier, Ldrift = vdrift × τtr, depends on the electric field through the drift velocity vdrift; τtr is the trapping time of the carriers (the time an electron or hole travels towards the collecting electrode before being trapped by a defect). Electrons therefore make the greater contribution to the signal, since their drift length is about three times longer than that of holes. Drift lengths decrease linearly with fluence, making devices with a large collection distance inefficient at high radiation levels (Da Vià and Watts 2002).
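A rough numerical sketch of the drift-length relation above, using standard silicon mobilities in the linear (pre-saturation) regime; the trapping time is an assumed post-irradiation figure, not a measured one:

```python
# L_drift = v_drift * tau_tr, with v_drift = mu * E below velocity
# saturation. The mobility ratio mu_e / mu_h ~ 2.8 is what makes the
# electron drift length roughly three times the hole drift length.

MU_E = 1350.0   # electron mobility in silicon, cm^2/(V s)
MU_H = 480.0    # hole mobility in silicon, cm^2/(V s)

def drift_length_um(mu, e_field_v_cm, tau_tr_ns):
    """Effective drift length before trapping, in microns."""
    v_drift = mu * e_field_v_cm                 # cm/s
    return v_drift * tau_tr_ns * 1e-9 * 1e4    # cm -> microns

E_FIELD = 1e4   # V/cm (1 V/um), illustrative field
TAU_TR = 2.0    # ns, assumed trapping time after heavy irradiation

le = drift_length_um(MU_E, E_FIELD, TAU_TR)
lh = drift_length_um(MU_H, E_FIELD, TAU_TR)
print(f"electrons: {le:.0f} um, holes: {lh:.0f} um, ratio {le / lh:.1f}")
```

With these assumed numbers the electron drift length comes out near 270 μm against roughly 100 μm for holes, consistent with the factor of about three quoted in the text.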

Prior to irradiation, electrons and holes contribute equally to the signal in the case of a pad detector where both electrodes have the same area. For a detector with a segmented collecting electrode, the larger fraction of the signal is produced by the carrier that travels towards it. This can be derived using a famous theorem due to Simon Ramo and (independently) William Shockley (Ramo 1939; Shockley 1938). Since electrons are harder to trap, it is important that the amplifier is connected to the n+ electrode, which collects electrons in a segmented detector. This is true for the ATLAS and CMS pixel detectors.

An example of the current state of the art is well illustrated by the pixel layers of the ATLAS vertex detector, which will be placed as close as 4 cm from the interaction point. The signal is collected on the n+ side of the detector. The n+-on-n design is made feasible by "spraying" a thin, moderately doped p layer between the n+ contacts, to prevent them from being shorted by electrons attracted to trapped positive charge at the interface between the field oxide and the silicon. The active thickness of the detector is reduced from the standard 300 μm to an average of around 230 μm to enhance the penetration of the electric field in the active area. This thickness, together with a multiguard structure, allows operation up to around 600 V bias – a field of about 3 V/μm. Oxygenation of the wafer prior to detector processing reduces the radiation-induced space-charge, and consequently allows the detector to be fully depleted even after a fluence of 10¹⁵ particles/cm². Under such conditions, about 98% of the signal charge generated in the detector is collected.

This result demonstrates that a combination of oxygenation and electron collection leads to efficient operation at reasonable bias voltages. This is important for power dissipation and thermal runaway, and has been further demonstrated by recent results from a group at Liverpool University, where similar conclusions were reached using a p-type silicon substrate (Casse et al. 2002).

3D detectors

Deep reactive ion etching, developed for micromechanical systems, allows microholes to be etched in silicon with a depth-to-diameter ratio as large as 20:1. In the 3D detectors presently processed at Stanford, US, by a collaboration involving scientists from Brunel University in the UK, as well as Hawaii and Stanford, this technique is used to etch holes as deep as several hundred microns, at distances as short as 50 μm from one another. These holes are then filled with polycrystalline silicon doped with either boron or phosphorus to make the detector electrodes (figure 4). The substrate used for this process is p-type silicon with <100> crystal orientation, where 1, 0, 0 are the crystal plane co-ordinates. Silicon atoms line up in certain directions in the crystal; <100> corresponds to having a particular crystal plane at the surface, and is preferred for its better surface quality. Once the electrodes are filled, the polycrystalline silicon is removed from the surfaces, and the dopant is diffused into the surrounding single-crystal silicon. Aluminium can then be deposited in a pattern that depends on how the individual electrodes are to be read out.

Oscilloscope traces of ionizing particle signals

The response of a 3D detector, where all the electrodes have been connected together by an aluminium microstrip, is shown in the oscilloscope traces of figure 5. The fast, radiation-hard electronics used for this test were designed by the CERN microelectronics group (Anelli et al. 2002). The fast response, observed after 10¹⁵ protons/cm² at room temperature and 40 V bias voltage, confirms that the combination of short collection distance and high electric field can improve the radiation tolerance of silicon detectors by possibly a factor of 10 compared with planar devices. Improvement factors from better materials should multiply this geometric factor.

Active edges

Photomicrograph

An example of a 3D sensor in the process of fabrication is shown in figure 6. The electrodes in this case are distributed in a hexagonal pattern, and the edges are completed by an active trench, doped appropriately to set the electric field distribution inside the detector. In planar devices, the conducting cut edge of the sensor must be prevented from shorting the bias voltage between the two surfaces by spacing the top and/or bottom of the electrodes away from the edges. Guard rings along the edges may also be added to intercept edge leakage current, and to drop the voltage in a controlled way. The resulting dead region at the edge will be at least comparable to the thickness and can be three or four times as large, so a space must be allowed for a series of guard rings.

In 3D devices, the voltage at corresponding points on the top and bottom surfaces is equal, so there is no voltage drop across the edges. Etched trenches, filled with suitably doped polycrystalline silicon, can then be used to make the edge into an electrode, with depletion possible to within a few microns of the physical edges. The freedom of such detectors from insensitive edge regions can be of great advantage when several devices are combined to cover large areas, or when the detector needs to be placed very close to the beam.

Silicon detectors are a good example to demonstrate how the particle physics community can benefit from the technology developed by the microelectronics industry. 3D geometry and active edges would have been an impossible dream only 10 years ago, and now they provide a natural way to construct imagers for charged particles and X-rays. The structural molecular biology community will take advantage of 3D detectors to study protein folding, while research is ongoing to apply this technology to X-ray mammography. Other groups are exploring alternative methods for fabricating 3D structures (Pellegrini et al. 2002).

RHIC limbers up for a new heavy-ion run

cernnews2_12-02

Brookhaven’s Relativistic Heavy Ion Collider (RHIC) began its cool-down on 1 November, ready for the first injection of beam in December. Following a successful run earlier in the year, which included the first polarized proton running, RHIC is set to start the new run with deuterium-gold collisions. This provides a reference point for the gold-gold collisions that RHIC experiments were designed to study, since any departure from simple scaling between proton-gold and gold-gold collisions would point to new physics. Deuterium has been substituted for protons for practical reasons: the large-aperture dipole magnets that bring RHIC’s beams into collision would require realignment to handle proton-gold collisions, whereas deuterium-gold can be handled without intervention.

BES accumulates 14 million ψ(2S) events

This year, the Beijing Spectrometer (BES) experiment running at the Beijing Electron Positron Collider (BEPC) completed a run at the energy of the ψ(2S) resonance. The run began in November 2001 and lasted until March 2002, a total of 111 days. BES accumulated 14 million events, which is the world’s largest sample of ψ(2S) events produced from electron-positron annihilation. BES collected the previous largest data sample of 4 million events in 1993-1995. The new sample will allow the properties and decays of the ψ(2S) and other charm quark bound states produced by ψ(2S) decays to be studied with increased precision.

The ψ(2S) resonance was discovered in 1974 by the Mark I experiment at California’s Stanford Linear Accelerator Center (SLAC) shortly after the discovery of the J/ψ. Both resonances are composed of a charm quark and an anticharm quark. The discovery of the J/ψ and ψ(2S) particles was crucial in establishing the quark model, in which almost all observed mesons and baryons can be described as composite objects made of quarks held together by the strong force. The theory that describes this is quantum chromodynamics (QCD), in which the carriers of the strong interaction are gluons, and the quarks are held together by gluon exchange.

cernnews3_12-02

The ψ(2S) is an excited state of the J/ψ. Although some properties are similar, the ψ(2S) is more massive than the J/ψ, so it can decay into other charm-anticharm states (such as the spin-zero ηc and the three χc states, as well as the J/ψ itself). This allows the physics of many charmonium states to be studied using the ψ(2S) sample (figure 1). Indeed, the study of the ψ(2S) sample collected in 1993-1995 at BES has proved to be very fruitful and important in testing QCD calculations of quarkonium production and decay dynamics. The clean signature and the large production rates of the χc states in ψ(2S) decays are big advantages of this study. Furthermore, the sample can also be used to study light hadron spectroscopy from the decay products of the χc states, where the quantum numbers of the initial states are well defined and differ between χc0, χc1 and χc2.

The biggest mystery in ψ(2S) decays is the so-called ρπ puzzle. In perturbative QCD, the decays of the charmonium states J/ψ and ψ(2S) into light hadrons are expected to be dominated by the annihilation of the charm and anticharm quarks into three gluons. In this simple picture, the partial width for decays into any exclusive hadronic state is proportional to the square of the wave function at the origin, |ψ(0)|², which is well determined from dilepton decays. Since the strong coupling constant does not change much between the J/ψ and ψ(2S) masses, it is reasonable to expect that for any exclusive hadronic state h, the J/ψ and ψ(2S) decay branching fractions will scale as B(ψ(2S)→h)/B(J/ψ→h) = B(ψ(2S)→e+e−)/B(J/ψ→e+e−) ≈ 12%.

This relationship is known as the 12% rule. Although it works reasonably well for a number of specific decay modes, it fails seriously in the case of ψ(2S) two-body decays to the vector-pseudoscalar meson final states ρπ and K*K̄. This anomaly was discovered by SLAC’s Mark II experiment. In addition, the BES group has reported violations of the 12% rule for vector-tensor decay modes. Although a number of theoretical explanations have been proposed, most of them do not provide a satisfactory solution. The large ψ(2S) event sample will allow more precise measurements of the branching ratios to better test the surviving theories.
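For illustration, the 12% figure follows directly from the dilepton branching fractions in the scaling argument above. The values below are approximate numbers of that era, used only to show the arithmetic, not BES measurements:

```python
# The "12% rule": if both charmonium states decay to light hadrons via
# three-gluon annihilation, the ratio of hadronic branching fractions
# should equal the ratio of dilepton branching fractions, since both
# scale with the wave function at the origin squared.

B_PSI2S_EE = 7.3e-3   # B(psi(2S) -> e+ e-), approximate era value
B_JPSI_EE = 5.9e-2    # B(J/psi  -> e+ e-), approximate era value

# Expected B(psi(2S) -> h) / B(J/psi -> h) for any exclusive state h:
q_h = B_PSI2S_EE / B_JPSI_EE
print(f"expected ratio: {q_h:.1%}")
```

The ρπ puzzle is precisely that the measured ratio for the ρπ mode falls far below this expectation.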

cernnews4_12-02

During the BEPC run, a peak luminosity of 1.1 × 10³¹ cm⁻²s⁻¹ was reached, and a record 208 000 ψ(2S) events were accumulated in one day. The first round of reconstruction of all ψ(2S) events has been completed. Careful offline calibration of the data shows that the BES detector performed well, with a barrel time-of-flight resolution of 200 ps, a dE/dx resolution of 8.5%, and a momentum resolution of 1.7%√(1+p²), with p in GeV/c (figure 2 gives a glimpse of detector performance).
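Reading the quoted momentum-resolution expression as σp/p = 1.7%·√(1+p²) with p in GeV/c (a standard parametrization for drift-chamber trackers; the exact form here is a reconstruction of the garbled original), the resolution at a few sample momenta works out as:

```python
# Momentum resolution of a drift-chamber tracker, parametrized as
# sigma_p / p = 1.7% * sqrt(1 + p^2), p in GeV/c: a constant
# multiple-scattering-dominated term at low p, rising linearly with p
# when the curvature-measurement term dominates.
import math

def momentum_resolution_pct(p_gev):
    """sigma_p / p in percent for track momentum p (GeV/c)."""
    return 1.7 * math.sqrt(1.0 + p_gev**2)

for p in (0.5, 1.0, 1.8):
    print(f"p = {p} GeV/c -> sigma_p/p = {momentum_resolution_pct(p):.2f}%")
```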

Earlier, BES obtained a sample of 58 million J/ψ events, which is the world’s largest J/ψ sample from electron-positron collisions. The J/ψ and ψ(2S) samples are complementary, and together will provide information on a wide range of topics, as well as testing the 12% rule.

CMS assembly enters its next phase

cernnews5_12-02

Assembly of the Compact Muon Solenoid (CMS) detector, being built for physics at CERN’s Large Hadron Collider, reached an important turning point in October. The return yoke for the detector’s 4 T superconducting solenoid is now completely assembled, with a central section supporting the 7.6 m diameter, 13 m long outer shell of the solenoid’s vacuum tank. Attention is now shifting towards installation of the solenoid and sub-detectors.

As components arrive at CERN from around the world, an assembly team coordinated by Zurich’s ETH and CERN ensures that each new piece of the puzzle is carefully slotted into place. A magnet test is foreseen for early 2005, by which time several CMS subdetector systems, including the hadronic calorimeter and muon system, will be largely complete, cabled and tested.

So far, this giant 3D puzzle has produced few difficulties, thanks largely to the establishment of an engineering and integration centre, jointly supported by ETH and CERN, 6 years ago. Staffed by designers and engineers from both partners, and supplemented by visiting engineers from other CMS institutes, the centre scrupulously controls the space allocated to each subsystem and the no-go zones separating them, ensuring smooth integration of detector elements.

Work on the hadron calorimeter is well advanced, with the barrel nearing completion and the brass absorber of the first hadron endcap calorimeter gradually taking shape. Barrel and endcap muon chambers were successfully installed during the summer, and a dry run of the solenoid coil insertion was also recently conducted. This involved the inner shell of the vacuum tank, modified to simulate the coil, being rotated from vertical to horizontal and inserted into the outer shell. Particular attention is now being paid to the layout and connections of the hundreds of kilometres of cables, gas lines and water pipes that will supply the CMS detector.

With the first letter of the CMS acronym very much in mind, and hermeticity a watchword for achieving good physics performance, the CMS detector is being built to exacting mechanical tolerances. Muon chamber supports on the 15 m diameter endcap yoke disks, for example, are positioned to an accuracy of 0.2 mm. CMS is placing great emphasis on planning, and so it was with some satisfaction that the collaboration received the news from the committees charged with overseeing installation that “there is every reason to believe that CMS will have a working detector ready for first collisions in April 2007”.

Accelerator conference showcases diversity

cernepac1_12-02

The eighth European Particle Accelerator Conference (EPAC’02), held in Paris on 3-7 June, brought an unprecedented number of accelerator physicists and users together. With about 850 participants, including more than 150 from the US and nearly 60 students, and more than 1000 contributed or invited papers, this year’s gathering was the biggest EPAC to date. Representatives of the European Union and the high-energy physics community gave the opening and summary talks. Presentations ranged from beam dynamics to technology transfer and industrial spin-off, making the conference a remarkable international and interdisciplinary event.

Looking ahead

One of the highlights of the conference came from CERN, where the Compact Linear Collider (CLIC) study group has been investigating materials other than copper for normally conducting accelerating structures. At the fields needed to produce the high accelerating gradients that CLIC aims to achieve, copper structures suffer severe surface damage. The CLIC team believes that this arises from field-emitted electrons that are accelerated from one side of the structure to the other, melting and eroding the coupling irises (regions of smaller radius separating the cells of an accelerating structure). By replacing copper with tungsten irises, the CLIC team has succeeded in making structures able to withstand very high gradients for several thousand hours without any surface damage.

CLIC’s novel two-beam acceleration scheme is a technology for the long-term future. Closer to the present is CERN’s Large Hadron Collider (LHC), and there were reports of activities at the laboratory’s existing accelerator complex to prepare for the new machine. Notable among these was a so-called beam-scrubbing run with LHC-type beam in the Super Proton Synchrotron (SPS), which will be the last link in the LHC injector chain. The idea of the scrubbing run, which took place in May, was to bombard the walls of the SPS vacuum chamber with electrons created by ionization of residual gas, accelerated by the beam, and multiplied by secondary emission (the electron cloud phenomenon). This had the effect of forcing outgassing and reducing the secondary electron yield, thereby improving the vacuum in the accelerator and allowing the SPS to achieve the beam intensities required for the LHC.

Returning to the present, there were reports outlining the excellent performance of the KEK-B and PEP-II B-factories in Japan and the US, as well as from the Italian Frascati laboratory’s DAPHNE accelerator, which supports the KLOE CP-violation experiment. Significant progress was reported from Brookhaven’s Relativistic Heavy Ion Collider (RHIC), where polarization levels of 40% have been maintained to top energy with proton beams.

Reports from Germany’s DESY laboratory and Fermilab in the US brought home the challenges of major luminosity upgrades. At DESY, background problems arising partly from back-scattering from masks behind the interaction points are limiting beam intensity, while at Fermilab’s Tevatron, growth of emittance (a measure of beam size times divergence) in the antiproton accumulator coupled with long-range beam-beam encounters are factors currently limiting the machine’s luminosity. Fermilab’s recycler ring, designed to retrieve, store and recycle antiprotons that would otherwise have been lost, is still in the commissioning phase. When it is fully operational, Tevatron luminosity is set to improve.

Light sources

cernepac2_12-02

Other news from DESY concerned the TESLA Test Facility (TTF), which has been established with the goal of providing a test bed for the TESLA linear collider concept and a free-electron laser. The TTF has achieved lasing down to a wavelength of 80 nm. This follows the announcement at last year’s particle accelerator conference (PAC) in Chicago that the low-energy undulator test line at Argonne National Laboratory’s advanced photon source had lased from the visible down to 130 nm. The TTF result marks a significant milestone on the road towards a free-electron laser in the hard X-ray region.

Other highlights from light sources include the first observations of steady state coherent synchrotron radiation in the far infrared at Berlin’s BESSY II synchrotron, and the successful commissioning of the Swiss Light Source (CERN Courier April p24). Emerging trends in this area are the growing number of free-electron laser projects, and many third-generation synchrotron sources under construction. Reports were given from Soleil in France, the Spanish synchrotron that will be built in Barcelona, and the UK’s Diamond machine.

Novel techniques

Several interesting new possibilities were reported from the Stanford Linear Accelerator Center (SLAC) in California, where work on bunch compression is under way at the laboratory’s 2-mile linac. This is motivated by the linear coherent light source proposal to build a 1-15 Å free-electron laser using linac beams of up to 15 GeV. The bunch compression work will allow the study of light emission in the so-called self-amplified spontaneous emission mode in a free-electron laser. It also opens up avenues for studies of plasma acceleration and wakefields, as well as the intriguing possibility of adding an “after-burner” to the linac. According to this idea, a dense plasma could double the energy of SLAC’s electron beams.

Fourth-generation light sources were a strong theme at the conference, with many proposed machine architectures being put forward. A light source for fast X-ray science proposed by Berkeley would be based on a recirculating linac and on dipole RF cavities plus gratings for photon pulse compression. Furthermore, short-pulse generation schemes based on energy recovery linacs, advanced by Cornell in particular, promise levels of performance well exceeding the present third generation of synchrotron light sources.

Induction accelerating devices (in which acceleration is achieved by changing the strength of a magnetic field in magnetic material encircling the beams) are being developed for high-intensity proton synchrotrons. Such devices could also create long super-bunches colliding with large crossing angles in future machines such as a very large hadron collider. They could also be used for a luminosity upgrade at the LHC.

Superconducting magnet developments based on the so-called wind-and-react technique for niobium-tin (Nb₃Sn) superconductor, or on new high critical temperature (Tc) materials, were reported from Fermilab, Berkeley and Brookhaven. The wind-and-react technique overcomes the intrinsic brittleness of Nb₃Sn by winding the coil before inducing the reaction that forms the superconducting compound.

Beam dynamics

cernepac3_12-02

Among the hot topics presented in beam dynamics, several speakers reported new theoretical and experimental studies of the beam-beam interaction, including long-range encounters. These covered multiparticle simulations, wavelet approaches and observations at the B-factories. The associated reduction of dynamic aperture and the possible loss of collisionless (Landau) damping for coherent beam-beam modes may limit the performance of present and future hadron colliders. Compensation schemes based on non-linear lenses have been proposed and are being tested at Fermilab and at CERN. Electron cloud effects have been observed and are being intensively studied at the B-factories, at CERN accelerators and recently also at RHIC. Weak solenoids can successfully cure the problem in the field-free regions. Sophisticated simulation codes for electron cloud build-up and associated beam instabilities have been developed in several labs, and comparisons of predictions and observations were reported in many contributions.

The EPAC conferences, which cover all aspects of accelerator physics, technology and applications, are organized by the Interdivisional Group on Accelerators of the European Physical Society (EPS-IGA), an active group currently with about 200 members. The next conference, EPAC’04, will be held in Lucerne, Switzerland, on 5-9 July 2004. Next year’s major accelerator conference, PAC 2003, will be held on 12-16 May in Portland, Oregon, US.
