Aging workshop provides a useful review of the long-term use of gaseous detectors

Detectors, like people, encounter problems as they get older. However, with foresight and experience, these problems can be minimized and even overcome. The recent international workshop on aging phenomena in gaseous detectors, which was held at DESY in Hamburg on 2-5 October, saw some 90 experts from 17 countries grappling with the dreaded “aging effects” that occur in gaseous particle detectors.

The workshop provided a long-overdue exchange of experiences concerning this specialized topic, which is of increasing importance in the high-rate era of today’s collider projects. The previous workshop on the subject had been held at Berkeley more than 15 years ago. Through 50 talks and posters the participants reviewed aging effects that can seriously impede the operation of gaseous detectors or even render them inoperable. Besides the well known, yet still poorly understood, aging due to gas polymerization, lesser-known anode swelling and etching effects in gas mixtures containing oxygen and fluorine compounds (CF4, for example) were widely discussed. Presentations on gases and materials for detectors and gas systems stressed that very careful selection and testing of all system components are imperative to ensure the survival of detectors.

Long-term experience with large drift chamber operation in experiments at various colliding-beam machines – HERA (DESY), LEP (CERN) and the Tevatron (Fermilab) – was reviewed, as were more recent high-rate results from HERA-B detectors and prototype detectors for LHC experiments at CERN. Aging in classical drift chambers and tubes, and in various forms of microstrip gas detectors, was discussed. Special sessions dealt with aging problems in photosensitive detectors and resistive plate chambers – in the BaBar and BELLE experiments at the new B-particle factories, for example.

Channelling produces new sources of positrons

For future electron-positron linear colliders, high-intensity electron and positron beams are needed. These must be sufficiently well defined (low-emittance) in order to reach a high luminosity at the collision point. While intense electron beams can be produced without major difficulty, the production of intense positron beams is more of a problem.

A significant R&D effort is under way in many laboratories to find a positron source that satisfies the requirements of intensity and emittance and is reliable over long periods of time. Recently an experiment on a special kind of positron source, carried out at CERN, yielded promising results.

The basic route to creating positrons is via hard photons (gamma rays) that produce electron-positron pairs in a material. In conventional sources, a powerful electron beam hits an amorphous target (one without any particular crystal orientation). In such a target the electrons are deflected by the nuclei and radiate photons (bremsstrahlung). These in turn produce electron-positron pairs in the target. The rates for these two successive steps increase as the square of the atomic number of the target, so materials with heavy nuclei, such as tungsten, are preferred.
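
As a rough, hypothetical illustration of this Z² scaling (the comparison below is not from the article, and real cross-sections also contain logarithmic factors), one can compare tungsten against a light target material such as aluminium:

```python
# Rough Z^2 scaling of bremsstrahlung and pair-production rates
# per nucleus (illustrative only; logarithmic corrections ignored).

def relative_rate(z_heavy: int, z_light: int) -> float:
    """Ratio of per-nucleus rates for one Z^2-scaling step."""
    return (z_heavy / z_light) ** 2

Z_TUNGSTEN = 74
Z_ALUMINIUM = 13

ratio = relative_rate(Z_TUNGSTEN, Z_ALUMINIUM)
print(f"Per-nucleus rate ratio W/Al for one step: {ratio:.0f}x")
```

Since both steps scale this way, the advantage of a heavy target compounds over the bremsstrahlung and pair-production stages.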

Another approach is to use a crystal, the atomic rows of which are aligned with the incident electron beam, instead of an amorphous target. Here the electrons will be attracted not only by individual nuclei but also by many successive nuclei of the same row, as though the atomic charge were multiplied by the number of successive nuclei. This gives more intense radiation (coherent bremsstrahlung).

An electron can even revolve around the atomic row many times. It is then “channelled” and radiates as though it were in a helicoidal undulator, the period of which would be of the order of 1 µm, with a field equivalent to thousands of teslas. This channelling radiation, which is even more intense than coherent bremsstrahlung, gives more electron-positron pairs.

Crystal targets are therefore thinner than amorphous ones that give the same number of positrons. This is useful for limiting the energy deposited in the target and, hence, the heating.

The aim of the WA103 experiment, carried out at CERN in 2000 and 2001 after similar experiments at Orsay (France) and KEK (Japan), was to observe and measure the enhancement of positron production by a crystalline source and to measure the energy and angular distributions of the emitted positrons.

An electron beam of 5-40 GeV was used in the West Hall of CERN’s SPS synchrotron. Two incident energies were selected – 6 GeV and 10 GeV – the former corresponding to the choice of the Next Linear Collider; the latter to the Japanese Linear Collider.

The experimental set-up accommodated different kinds of target – all-crystal (8 mm) or compound (4 mm of crystal followed by 4 mm of amorphous material). The emitted positrons were detected in a drift chamber lying partially in a magnetic field. The part of the chamber outside the field provided the positron emission angle, while the part inside it provided the positron momentum. The emitted photons were monitored in a preshower detector and in a “spaghetti” calorimeter (scintillating fibres in lead).

Several laboratories took part: LAL-Orsay (acquisition electronics and goniometer); IPN-Lyon (simulations and goniometer control); the Max-Planck Institute, Stuttgart (tungsten crystals); the Budker Institute, Novosibirsk (design and construction of the drift chamber, track-reconstruction programme and simulations); and the institutes of Kharkov and Tomsk (photon detectors). The spaghetti calorimeter was provided by LNF-Frascati. The tests on the detectors were carried out at LAL-Orsay with the participation of physicists from the Budker Institute and IPN-Lyon. Data-taking was done by Franco-Russian teams in which the physicists from the Budker Institute played an essential role. In this long collaboration, physicists from the Collège de France, Paris, participated in the initial simulations.

The first results showed the channelling enhancement, which was close to that predicted by simulations. Comparison of the positron energy spectra obtained for a tungsten crystal oriented on its <111> axis and for an amorphous target of the same thickness showed a positron yield boosted by a factor of 3-4 for a 4 mm target and a 10 GeV electron beam.

A boost slightly larger than 2 is seen for the 8 mm crystal target. A large number of soft positrons were also seen, which is very interesting for the accelerator acceptance downstream of the target. The observations made on the photon detectors confirmed the enhancement of the number of photons and of the radiated energy.

This appears to be a very promising avenue for future linear colliders.

Antares detector deployment progress is on target for 2004

The Antares undersea neutrino experiment, scheduled for deployment 2400 m down in the Mediterranean off the south coast of France, successfully laid its sea cable at the beginning of October. This is the first important milestone on the way to the deployment of a full detector by 2004.

Antares (Between neutrinos and the deep blue sea) will study neutrinos by detecting muons produced through neutrino interactions in the rock or seawater near the detector. In its first incarnation it will have 10 strings of photomultipliers anchored to the seafloor in a 0.1 km² array. These will all be powered from and read out through the standard underwater telecommunications cable that was laid in October. A sonar system will monitor the relative positions of the photomultipliers and, somewhat unusually for a particle physics experiment, bioluminescent fish will form an important source of background.

A proof-of-principle experiment was successfully carried out from 1996 to 1999 and, all being well, the Antares collaboration hopes to scale up to a full 1 km³ detector after completion of the first phase of the experiment.

Green light for massive increase in computing power for LHC data

The first phase of the impressive Computing Grid project for CERN’s future Large Hadron Collider (LHC) was approved at a special meeting of CERN’s Council, its governing body, on 20 September.

CERN is gearing up for an unprecedented avalanche of data from the large experiments at the LHC (CERN Courier October p31). After LHC commissioning in 2006, the collider’s four giant detectors will accumulate more than 10 million Gbytes of particle-collision data each year (equivalent to the contents of about 20 million CD-ROMs). Handling this will require a thousand times as much computing power as is available to CERN today.
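
The CD-ROM comparison follows from simple arithmetic; a capacity of roughly 500 Mbytes per disc (an assumed round figure, which the 20-million comparison implies) reproduces the quoted number:

```python
# Sanity check of the article's figures: 10 million Gbytes per year
# expressed as CD-ROMs, assuming ~500 MB (0.5 GB) per disc.
# A 650 MB disc would give roughly 15 million instead.

annual_data_gb = 10_000_000   # >10 million Gbytes per year
cd_capacity_gb = 0.5          # assumed CD-ROM capacity

cds_per_year = annual_data_gb / cd_capacity_gb
print(f"Equivalent CD-ROMs per year: {cds_per_year / 1e6:.0f} million")
```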

Nearly 10,000 scientists, at hundreds of universities round the world, will group in virtual communities to analyse this LHC data. The strategy relies on the coordinated deployment of communications technologies at hundreds of institutes via an intricately interconnected worldwide grid of tens of thousands of computers and storage devices.

The LHC Computing Grid project will proceed in two phases. The first, to be activated in 2002 and continuing in 2003 and 2004, will develop the prototype equipment and techniques necessary for the data-intensive scientific computing of the LHC era. In 2005, 2006 and 2007, Phase 2 of the project, which will build on the experience gained in the first phase, will construct the production version of the LHC Computing Grid.

Phase 1 will require an investment at CERN of SwFr 30 million (some €20 million), which will come from contributions from CERN’s member states and major involvement of industrial sponsors. More than 50 positions for young professionals will be created. Significant investments are also being made by participants in the LHC programme, particularly in the US and Japan, as well as in Europe.

This challenge of handling huge quantities of data now being confronted by CERN will be faced subsequently by governments, commerce and other organizations. The LHC will be a computing testbed for the world.

Openlab attracts big names

To push the LHC computing effort, CERN has set up the openlab for DataGrid applications. Already, three leading information technology firms – Enterasys Networks, Intel and KPNQwest – are collaborating on this project in advanced distributed computing. Each firm will invest SwFr 2.5 million (€1.6 million) over three years.

CERN already coordinates one major Grid computing effort – the EU-funded DataGrid project (CERN Courier March p5). An important aim of the CERN openlab is to take the results of these projects and apply them in the LHC Computing Grid.

The World Wide Web, which was developed at CERN during the run-up to research at the LEP collider, allows easy access to previously prepared information. Grid technologies will go further, searching out and analysing data from tens of thousands of interconnected computers and storage devices across the world.

This new capability will enable data stored anywhere to be exploited much more efficiently. Particle physics is blazing a scientific Grid trail for meteorologists, biologists and medical researchers.

See http://www.cern.ch/openlab.

International team breaks through to maximum light amplification

In a major boost for future plans, an international team based at DESY has achieved maximum light amplification from a free-electron laser (FEL) for ultraviolet radiation. The amplification of 10 million is the theoretically expected peak performance for such a device and is over a thousand times the brightness so far achieved in this region of the electromagnetic spectrum.

The FEL at DESY produces ultraviolet light with wavelengths of between 80 and 180 nm – the shortest wavelengths produced in this way. Maximum light amplification (“saturation”) was obtained at a wavelength of 98 nm.

The latest results were produced at DESY’s TESLA Test Facility (TTF) using the self-amplified spontaneous emission principle (SASE), first proposed and investigated elsewhere in the early 1980s. In SASE, electrons brought to high energies in a suitable accelerator subsequently traverse a slalom-like course of magnets, emitting laser-like bundles of radiation as they do so.

The electrons and the emitted radiation act on each other. The electron bunches develop density modulations matching the wavelength of the radiation, thereby becoming denser and radiating more intensely. This microbunching continues until all of the electrons oscillate in unison. Unlike traditional lasers, the SASE approach is not limited to specific wavelengths and it can be scaled appropriately.

The SASE FEL at DESY has shown for the first time that the self-amplifying effect does lead to the theoretically calculated amplification in the ultraviolet regime. Similar amplification factors were demonstrated last year at institutes in the US in the visible light range (around 500 nm).

Soon, the existing DESY TTF will be upgraded to a 300 m FEL to attain wavelengths of less than 6 nm – the regime of soft X-rays. As well as being a research facility in its own right, it will serve as a testbed in the international TESLA project for a superconducting linear electron-positron collider. As well as supplying beams for particle physics research, TESLA will also apply SASE technology to produce ever shorter-wavelength X-rays.

Table-top device makes crystalline beams

A model set up at Munich’s Ludwig-Maximilians University has for the first time achieved a high level of “crystallization” of particle beams in a ring.

The particles orbiting in conventional circular accelerators are controlled by carefully arranged electric and magnetic fields. One long-sought-after aim is to freeze out the relative motion of the particles in the ring so that the accelerator fields hold them firmly in place, similar to atoms in a crystal lattice, giving beams of unprecedented brilliance.

The first signs of such behaviour were seen some 20 years ago at Novosibirsk using the then-new electron cooling approach pioneered there. The behaviour has since been reproduced – notably at the ESR storage ring at Darmstadt’s GSI laboratory. The extremely precise momentum definition needed to achieve such conditions makes for very accurate mass spectrometry.

However, the beam ordering seen under these conditions is far from complete and can be understood as particles losing their ability to overtake each other. Three-dimensional arrangements of ions have been achieved in particle traps using additional techniques, such as laser cooling to damp the residual motion of particles.

Physicists at Ludwig-Maximilians University built a special table-top storage ring that uses the radiofrequency quadrupole (RFQ) technique normally used in linear accelerators. Their 12 cm diameter PALLAS ring can be likened to a ring-shaped quadrupole ion guide. The radial confinement of the ions, as well as the bending, is provided by the RFQ ring electrodes. The ring is additionally equipped with drift tubes uniformly distributed around the circumference. These generate static electric fields, which can be used to transport and position ions longitudinally and to bunch the beam.

The ion beam attained a velocity of 2800 m/s – equivalent to a beam energy of 1 eV. The number of ions involved, some 10⁵, is small by the standards of particle accelerators and storage rings, but the ideas could be scalable to larger machines.
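
As a kinematic check, and assuming singly charged magnesium-24 ions (the species laser-cooled in PALLAS; an assumption, since the article does not name the ion), the quoted velocity indeed corresponds to about 1 eV:

```python
# Check that 2800 m/s corresponds to ~1 eV of kinetic energy,
# assuming singly charged 24Mg+ ions (assumed species).

AMU = 1.660_539e-27   # atomic mass unit, kg
EV = 1.602_177e-19    # joules per electronvolt

mass_mg24 = 24 * AMU  # small electron-mass correction ignored
v = 2800.0            # m/s, the quoted beam velocity

kinetic_energy_ev = 0.5 * mass_mg24 * v**2 / EV
print(f"Kinetic energy: {kinetic_energy_ev:.2f} eV")
```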

Experts share news of radiofrequency progress

Radiofrequency (RF) electric fields provide the motive power for high-energy accelerators. In the continuing bid for higher energies, superconducting techniques are increasingly being used to obtain maximum electronvolts from the wall plug.

The traditional biennial meeting of experts in RF superconductivity reflects the continual and impressive progress being made. The tenth workshop was jointly organized by the Japanese KEK and JAERI laboratories and held in Tsukuba on 6-11 September under the chairmanship of Shuichi Noguchi.

The first day’s sessions were devoted to laboratory review talks in the traditional alphabetical order. First to deliver was Argonne National Laboratory, where R&D to design and finalize accelerating structures for the Rare Isotope Accelerator (RIA) is gaining momentum. The last of the reviews was from Wuppertal University, where high critical-temperature materials are being examined both for their RF properties and for possible applications, such as superconducting RF filters.

Evidently, all laboratories in the TESLA collaboration for a superconducting linear electron-positron collider had put in a major effort to finalize the technical details for their machine proposal, with its incorporated X-ray free-electron laser, presented earlier this year (CERN Courier June p20). The very ambitious goal of $2000 per superconducting MV, pronounced long ago by the late Bjørn Wiik, one of the driving forces behind TESLA, has nearly been achieved. Reliable but inexpensive fabrication, final surface treatment and assembly techniques are essential for building the required 20,000 cavities. Manufacturing techniques such as hydroforming and spinning several cells from a single tube were reviewed extensively. For surface treatment, electropolishing and high-pressure water rinsing are now standard.

Similar techniques will be applied to the improved cavities for the upgrade of the Jefferson Laboratory’s CEBAF machine to 12 GeV. CERN’s effort to produce the LHC 400 MHz superconducting modules in niobium-on-copper technology was covered. CERN also collaborates with several laboratories, making available the facilities built for the LEP2 energy upgrade; these remain operational for the preparation and future maintenance of cavities for CERN’s future LHC collider.

Proton machines

Another effort is focused on high-current superconducting proton (or light-ion) linacs. Beam energies (per nucleon) will greatly exceed those reliably delivered for many years by ATLAS (Argonne) or ALPI (INFN Legnaro), coming close to or even exceeding 1 GeV. This covers a large number of projects or studies: the Spallation Neutron Source (SNS) under construction at Oak Ridge, for which the Jefferson Laboratory is in charge of cavity production in close collaboration with Los Alamos; the “joint project” of JAERI and KEK; the European Spallation Source; the French 700 MHz proton linac; the Advanced Accelerator Applications project at Los Alamos; the Italian TRASCO; and CERN’s SPL.

These proton linacs and RIA at Argonne now unify the superconducting RF community, which until now has been split into “low-beta” (below, say, β = 0.2, where β is the ratio of the beam particle velocity to that of light) and “high-beta” (practically β = 1) applications, with nothing in between. To study fully superconducting options for these machines, the length of the typically “elliptical” β = 1 cavities – such as those used for LEP2 – is squeezed down to about β = 0.5, where mechanical stability problems start to arise.

At the other end of the range, the typical low-β spoke resonators or H-mode structures are being “stretched” to about the same β, hence supplying superconducting resonators suitable for all β. This includes even a very low-energy superconducting radiofrequency quadrupole under development at INFN-Legnaro in Italy.

Cornell makes plans to alter its course

The Cornell electron-positron storage ring (CESR) and the associated particle physics detector, CLEO, completed their latest very successful physics run in June. Running at collision energies on or near the Υ(4S) resonance at 10.6 GeV (a bound state of the fifth “b” quark and its antiquark), the accelerator achieved its highest-ever luminosity (a measure of the particle collision rate) of 1.3 × 10³³ cm⁻² s⁻¹. This figure has been surpassed by the new B-factories at SLAC, Stanford, and KEK, Japan, but for a long time CESR held the world record for electron-positron collision rate.

To accomplish this, CESR stored a total beam current of 740 mA, with each beam having nine trains of particles and five bunches per train. Crucial for obtaining these high currents was the use of superconducting radiofrequency cavities to provide power to the beams. Beginning in June 2000, CESR produced a total integrated luminosity of 13.3 fb⁻¹ for the run. At its best, the accelerator was delivering 1.5 fb⁻¹/month – twice that of any previous run.

In anticipation of the run, the CLEO detector (now designated CLEO III) had undergone extensive modifications. A completely new 47-layer central drift chamber was installed. This chamber has endplates with a stepped “wedding cake” profile to allow for the eventual insertion of superconducting quadrupole magnets close to the interaction region and an outer radius smaller than the previous CLEO drift chamber to allow room for a particle-identification detector.

This latter detector is a ring-imaging Cherenkov counter (RICH) consisting of a solid 1 cm thick lithium fluoride radiator, followed by a 15.7 cm expansion space to allow the Cherenkov cone to enlarge, and then a thin-gap multiwire proportional chamber filled with a mixture of TEA and methane gas as the photodetector. By detecting on average 12 photoelectrons from the Cherenkov ring of each charged particle, the RICH allows the identification of pions and kaons with an efficiency of roughly 85% and a fake rate of less than 1% for momenta below 2.0 GeV/c, rising to about 10% at 2.5 GeV/c.
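
The rising fake rate reflects the converging Cherenkov angles of pions and kaons at higher momenta. A minimal sketch of that kinematics, assuming a lithium fluoride refractive index of about 1.39 (an assumed value; the effective index depends on the detected wavelengths):

```python
import math

# Cherenkov angle: cos(theta) = 1 / (n * beta). As momentum rises,
# pion and kaon velocities both approach c, the ring angles converge,
# and pi/K separation degrades - hence the fake rate climbing
# between 2.0 and 2.5 GeV/c.

N_LIF = 1.39      # assumed refractive index of the LiF radiator
M_PION = 0.1396   # GeV
M_KAON = 0.4937   # GeV

def cherenkov_angle_deg(p_gev: float, mass_gev: float, n: float = N_LIF) -> float:
    beta = p_gev / math.hypot(p_gev, mass_gev)   # v/c from momentum and mass
    return math.degrees(math.acos(1.0 / (n * beta)))

for p in (2.0, 2.5):
    dtheta = cherenkov_angle_deg(p, M_PION) - cherenkov_angle_deg(p, M_KAON)
    print(f"p = {p} GeV/c: pi-K angle difference = {dtheta:.2f} deg")
```

The angular gap shrinks by roughly a third between the two momenta, so a fixed ring-angle resolution separates the two species less cleanly.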

A new four-layer, double-sided silicon vertex detector was installed directly around the beam pipe. Covering 93% of the solid angle, at radii of 2.5-10.2 cm from the beam, the silicon detector contains 125,000 channels of readout. Finally, the CLEO trigger and data acquisition systems were completely redesigned and rebuilt to handle the higher CESR luminosity.

Apart from some efficiency problems with the silicon detector, all of the CLEO III components, new and old, performed exceedingly well during the run, and the experiment accumulated a total integrated luminosity of 9.2 fb⁻¹. Of this, 6.9 fb⁻¹ was obtained at the Υ(4S) resonance, corresponding to more than 7 million decays into B-particle pairs.
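
The 7 million figure is consistent with the quoted integrated luminosity, assuming an e+e- production cross-section at the resonance of about 1.05 nb (an assumed value, not given in the article):

```python
# Event count = integrated luminosity x cross-section.
# 1 fb^-1 = 1e6 nb^-1, so the units cancel directly.

integrated_lumi_fb = 6.9     # fb^-1 on the resonance (from the article)
cross_section_nb = 1.05      # nb, assumed resonance cross-section

n_events = integrated_lumi_fb * 1e6 * cross_section_nb
print(f"Resonance decays: {n_events / 1e6:.1f} million")
```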

While the new detector was accumulating luminosity, analyses of data collected during previous incarnations of the experiment (CLEO II and CLEO II.V) were continuing. During the year 2000 the CLEO collaboration published 30 papers from these analyses and it has already published 22 more in 2001. These papers cover a broad range of topics, including the discovery of six new charmed-baryon states, the observation of 10 new B decay modes, new limits on neutral D particle mixing and B flavour-changing neutral-current decays, a precise measurement of the Λc lifetime, and improved measurements of the two-photon widths of several charmonium states.

Biology research becomes crystal clear

At the front line of medical research, molecular and cellular biologists engineer new molecular probes, including genes and proteins. Having produced them, the next task is to investigate what happens when they are inserted into living tissue.

In biology-speak, the researchers want to know how and where the new genes “express” themselves. In pharmaceutical research, the effects of potential new drugs have to be established as quickly and as cost effectively as possible.

In the past, results have been established in vitro either by sacrificing the animal and analysing samples, or by taking biopsies. Until recently there has been no easy way of studying the effects of genetic manipulation or drug administration in living animals and in real time.

Now researchers have found how imaging techniques used in medical diagnosis can be adapted for genetic or drug research, providing an immediate picture of how the modified tissue behaves while it is still alive – in vivo and in situ. The techniques being used are magnetic resonance imaging (MRI), and positron emission tomography (PET).

Creating new images

In 1952 the Nobel Prize for Physics was awarded to Felix Bloch, who in 1954 became CERN’s first director general, and Edward Purcell for their discovery of nuclear magnetic resonance (NMR). In this technique, nuclei gripped by a strong magnetic field are exposed to radiofrequency radiation.

NMR was used initially with uniform magnetic fields to study molecular structure. More than 25 years later, researchers began to realize how the use of non-uniform magnetic fields could produce position-dependent signals, from which a computer could reconstruct a three-dimensional structural image. The time resolution of MRI was soon improved, so that, for example, heart function could be monitored.

The other technique, PET, works by administering harmless but selective radioisotopes that emit positrons – the antiparticles of normal atomic electrons. These isotopes are taken up by molecules involved in the metabolic functions of cells or organs and they work their way into the part of the organism being studied (such as the brain). Eventually the emitted positrons annihilate with atomic electrons, each annihilation producing a characteristic back-to-back pair of 511 keV photons (gamma rays).
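
The 511 keV figure is simply the electron (and positron) rest energy, m_e c², as a quick check from standard constants shows:

```python
# Each annihilation photon carries the electron rest energy m_e * c^2.

M_E = 9.109_384e-31   # electron mass, kg
C = 2.997_925e8       # speed of light, m/s
EV = 1.602_177e-19    # joules per electronvolt

rest_energy_kev = M_E * C**2 / EV / 1e3
print(f"Electron rest energy: {rest_energy_kev:.1f} keV")
```

Momentum conservation for an (almost) at-rest pair is what forces the two photons to emerge back-to-back, the property PET cameras exploit.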

Monitoring the distribution of these gammas reveals the detailed structure of where the isotope becomes localized. For instance, cancer cells are known to have a more rapid metabolism than normal cells, consuming more energy in the form of glucose. After injection, a fluorine-18 positron emitter in the form of fluorodeoxyglucose molecules (FDG) is taken up by cancer cells and accurately reveals primary cancers and metastatic activity. PET also provides a “moving image” of the metabolic function of the tissue or organ. This is particularly useful for genetic and drug research, showing how the organism is being affected as well as where.

Since its inception, PET technology has continually benefited from new developments in radiation detection, first using sodium iodide crystals, then benefiting from the improved performance from bismuth germanate (BGO) and, more recently, exploiting superior materials such as lutetium orthosilicate or aluminate, which are faster and more effective than BGO.

Monitoring gamma rays is a vital task in any major physics detector, particularly for the component studying the absorption of energy via electromagnetic processes – the electromagnetic calorimeter. In this module of the detector, tiny scintillations of light produced by electromagnetic interactions are collected, amplified if necessary and analysed.

As part of the preparations for the experimental programme for CERN’s Large Hadron Collider (LHC), the Crystal Clear research and development collaboration was established in 1990 by Paul Lecoq from CERN to look into the development of new scintillating materials.

After input from different disciplines (crystallography and solid-state physics, as well as particle physics), early accomplishments led the CMS collaboration at the LHC to choose lead tungstate (PbWO4) for its electromagnetic calorimeter. The production of the 80,000 crystals for CMS is the subject of a major international collaboration.

In parallel, the collaboration, now led by Stefaan Tavernier from the Vrije Universiteit Brussel, established the use of cerium fluoride (CeF3) as another high-energy physics standard, and worked with specialist companies in the US, China and the Czech Republic to ensure its production on an industrial scale. Heavy scintillating fluoride glasses are another growth area, in which the collaboration works with a French company. The Crystal Clear collaboration also pioneered the use of new compounds based on lutetium.

Working with the Czech company Crytur (also involved in the work on cerium fluoride), the collaboration has developed scintillating crystals of yttrium aluminate perovskite (YAP) doped with increasing amounts of lutetium. YAP gives twice as much light as BGO, and over a broader range of wavelengths; it is extensively used in medical instrumentation and in screens for electron microscopes, but its density is marginal for high-sensitivity PET applications that have to stop 511 keV photons. Replacing up to 100% of the yttrium by lutetium ions gives a very bright and fast scintillator, LuAP, with the unprecedented density of 8.34 g/cm³.

Smaller instrumentation

Physics detectors are normally large – even a small one is several metres long. However, PET cameras for medical use are more compact. Even so, the spatial resolution of commercially available cameras meant that PET analysis was, until recently, limited to relatively large subjects, such as humans and other large animals. New PET techniques extend this approach to smaller specimens, and this is where the technique becomes interesting for genetic laboratory studies.

In the Crystal Clear collaboration, YAP, lead tungstate and lutetium aluminate are all being investigated for their possible use in small PET scanners. On the read-out side, additional amplification can complement the traditional photomultiplier read-out of scintillation light. This can be achieved by avalanche photodetectors using applied bias voltages or by imaging silicon pixel array (ISPA) tubes using internal photocathodes and semiconductor read-out. Italian groups have achieved submillimetre resolution using YAP-based cameras.

New PET scanners (ClearPET) for biological research are the subject of collaboration agreements between the Crystal Clear collaboration and specialist local centres in France, Belgium, Switzerland and Germany. A project for a mammography PET camera (ClearPEM) is under discussion with Portugal.

The work and achievements of the Crystal Clear collaboration are a good example of how particle physics expertise can seed important developments in other areas of science. Rather than being left to happen fortuitously, such symbiosis is now being encouraged and nurtured at CERN.

Getting up to speed in the year AD 2

After coming into operation last year, CERN’s Antiproton Decelerator (AD) has got up to speed for physics this year. Changes to the AD since its debut mean that the three experiments – ASACUSA, ATHENA and ATRAP – now enjoy more intense antiproton beams.

The AD is a unique machine. Its job is to decelerate, not accelerate, antiparticle beams, and it has to handle energies that decrease by an unprecedented factor of 35 from the injection ceiling at 3.57 GeV to the ejection floor at 100 MeV.
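
The quoted factor follows directly from the two figures in the article:

```python
# The AD's deceleration span: 3.57 GeV at injection
# down to 100 MeV at ejection (the article's figures).

injection_gev = 3.57
ejection_gev = 0.100

factor = injection_gev / ejection_gev
print(f"Deceleration factor: {factor:.1f}")
```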

In its first incarnation, from 1987, as a collector of antiprotons – precooling the particles before they passed to the accumulator ring for CERN’s proton-antiproton collider – the machine was designed to operate at fixed energy, so this factor of 35 presented a big challenge.

The AD team’s design goal was to hang onto a quarter of the injected antiprotons through their vertiginous fall in energy, and to repeat the deceleration cycle once a minute. Recent AD improvements have put the team well on the way to reaching this target.

One important new feature is in the electron cooling system, adapted from that used for CERN’s LEAR low energy antiproton ring, which closed in 1996. Electron cooling gives the antiproton beam a final “cold shower” after the initial stochastic cooling, keeping the antiprotons tightly bunched at the lowest energies. Improvements have also been made to the radiofrequency deceleration system. This summer the AD succeeded in decelerating an injected antiproton beam without losing a single precious particle.

Meanwhile the three AD experiments are getting to the heart of antimatter (see Weighing the antiproton).
