Thinking big: the next generation of detectors

This time last year, it became clear at the Neutrino 2004 conference that results from experiments on solar and atmospheric neutrinos are converging with those from accelerators (in particular, KEK to Kamioka, or K2K, in Japan) and reactors (as in KamLAND, also in Japan) in pointing to a definite neutrino deficit due to an oscillation mechanism. However, further understanding will require new experiments, aimed at making precision measurements of all the parameters of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS or MNSP) leptonic mixing matrix that describes the oscillation mechanism.
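
For orientation (this parametrization is standard but not spelled out in the article), the PMNS matrix is conventionally written as a product of three rotations, with sᵢⱼ = sin θᵢⱼ, cᵢⱼ = cos θᵢⱼ and a CP-violating phase δ:

```latex
% Standard parametrization of the PMNS leptonic mixing matrix
% (s_ij = sin(theta_ij), c_ij = cos(theta_ij); delta is the CP phase).
U_{\mathrm{PMNS}} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix}
\begin{pmatrix} c_{13} & 0 & s_{13}\,e^{-i\delta} \\ 0 & 1 & 0 \\ -s_{13}\,e^{i\delta} & 0 & c_{13} \end{pmatrix}
\begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix}
```

CP violation in oscillations is observable only if θ₁₃ is non-zero and δ differs from 0 and π, which is why the precision measurements discussed below concentrate on θ₁₃.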

The major challenge will be to detect a potential violation of charge-parity (CP) invariance in the leptonic sector, which might in turn make a crucial contribution to explaining the matter-antimatter asymmetry in our universe. Such experiments will require the use of huge “mega-detectors”.

The first generation of large-volume detectors was originally designed to measure proton decay. By pushing up the limits on the lifetime of the proton by two orders of magnitude, these experiments made it possible to exclude minimal SU(5) as the theory of grand unification. A new, second generation of experiments would increase the sensitivity to the proton lifetime by two further orders of magnitude, and test the validity of a significant number of supersymmetric theories. The kind of detector required would also be well suited to the study of those major events in the history of the universe that we know as supernovae.

It is therefore quite appropriate for the same conference to address the detection of neutrinos, the measurement of the proton lifetime and issues relating to cosmology, as in this year’s meeting on the Next Generation of Nucleon Decay and Neutrino Detectors (NNN), held near the Laboratoire Souterrain de Modane (LSM). Originally the site of a detector to study proton instability, the LSM is now a potential site for hosting a mega-detector, capable of receiving a low-energy neutrino beam from CERN, 130 km away. Thus, on 7-9 April 2005, around 100 participants, mainly from Europe, Japan and the US, came together for a conference at the CNRS’s Paul Langevin Centre at nearby Aussois organized by IN2P3 (CNRS) and Dapnia (DSM/CEA), with financial support from photomultiplier manufacturers Hamamatsu, Photonis and Electron Tubes Ltd (ETL).

The first day of the meeting was dedicated to theory, physics motivations and future experimental projects to be pursued at underground sites. John Ellis from CERN opened the conference with a striking plea in favour of this type of physics; he insisted that it complemented collider physics, and emphasized the potential discoveries to be made with a mega-detector.

Specialists in the field explained that proton decay, which has yet to be observed, remains the key to grand unified theories. After a reminder that the detectors built to measure the proton lifetime had made it possible to detect neutrinos from supernovae for the first time, subsequent presentations addressed potential approaches to supernova physics, about which little is known, through the high-statistics detection of the neutrinos from these stellar explosions. As Gianluigi Fogli of Bari and Sin’ichiro Ando of Tokyo explained, such a detector would make it possible to extend to neighbouring galaxies the study of these major events in the evolution of the universe, be they in the future or in the distant past.

The afternoon sessions moved on to consider future detectors that could be sited at locations where the detection of neutrino beams, at some distance from an accelerator, could be combined with the observation of proton decay and astrophysical neutrinos. These presentations took stock of the progress of large-scale detector projects in the US, Asia and Europe.

On the face of it, the most accessible technology (the best known and simplest to implement) uses the Cherenkov effect in water, as proposed for the Hyper-Kamiokande project in Japan and the Underground Nucleon Decay and Neutrino Observatory (UNO) project in the US. The most ambitious technology is without doubt that for a large liquid-argon time-projection chamber (100 kt), a bold derivative of the ICARUS detector currently under preparation in the Gran Sasso Laboratory. Further promising alternatives look to organic scintillating liquids, as in the Low Energy Neutrino Astronomy project (LENA), and even a magnetized iron calorimeter as in the India-based Neutrino Observatory (INO).

Precision measurements of the θ₁₃ mixing angle in the PMNS matrix, whose value determines whether a measurement of CP violation is possible at all, require high-intensity neutrino beams. The following day, the conference heard presentations on the worldwide status of experiments using a beam to verify the results obtained with solar or atmospheric neutrinos. For Japan – in addition to the K2K experiment, which has already successfully launched such a programme – the opportunities offered by a successor, Tokai to Kamioka (T2K), at the new Japan Proton Accelerator Research Complex (J-PARC) were reviewed. For the US, following the report of the first results after the successful launch of the Main Injector Neutrino Oscillation Search (MINOS), presentations highlighted the opportunities for measuring θ₁₃ at Fermilab with experiments using off-axis beams to the Soudan mine, as well as the very-long-baseline projects from Brookhaven towards several prospective sites.

Moving on to CERN, and Europe more generally, the opportunities for beams to the Gran Sasso Laboratory, which hosts the OPERA and ICARUS experiments, were set out. A series of contributions also demonstrated the validity and physics potential of longer-term projects that are likely to be of direct interest to the CERN community. These are based on superbeams and on neutrino beta-beams produced by the beta-decays of certain light nuclei, such as helium or neon – and possibly even by the decay of dysprosium, a possibility whose recent discovery has been greeted with enthusiasm.

The afternoon session of the second day was mainly devoted to the complex but encouraging R&D efforts in fields as varied as the study of the different physics and instrumental backgrounds, and photodetection. In particular, the presence and support of the principal actors in the field of photomultiplier manufacture led to a series of promising technical presentations, in addition to those by physicists on the efforts underway in laboratories in the field of photodetection. The clear objective is to build photomultipliers able to cover large surface areas. The synergies with other fields of research, such as geophysics and rock mechanics, were also underlined.

On one hand, the conference sought to follow in the footsteps of its predecessors; on the other it aimed to ensure that such meetings were held on a more regular basis, and to rationalize their agendas. With this in mind, the day concluded with a round-table discussion, where the participants included Alain Blondel (Geneva), Jacques Bouchez (Saclay), Gianluigi Fogli (Bari), Chang Kee Jung (Stony Brook), Kenzo Nakamura (Tsukuba), André Rubbia (Zurich) and Bernard Sadoulet (Berkeley). It was moderated by Michel Spiro (IN2P3), who proposed making the NNN an annual event and improving coordination of the community’s R&D efforts. This would be done by setting up an inter-regional committee, consisting of several members for each region (Europe, North America, Japan and so on), with a view to validating the construction of a very large detector in around 2010. The committee would also maintain contacts with the steering group for ECFA Studies of a European Neutrino Factory and Future Neutrino Beams, which is chaired by Blondel.

On the last day, before the organized visit to the LSM, an entire session was devoted to a series of presentations from Japan, the US and France. Taking an engineering point of view, this session examined the potential caverns for housing a megatonne detector. Several possible sites are being considered in the US, and the Japanese presented the results of their studies for the Kamioka sites. In Europe, the Fréjus site on the Franco-Italian border could host a megatonne detector, provided that the preliminary studies, which have already begun, yield positive results.

The various presentations given throughout the NNN05 conference clearly highlighted the possible areas for exchanges between the different regions and communities, which until now have tended to pursue distinct paths. The next NNN conference will be held in the US in 2006, and the following meeting has already been scheduled to take place on 2 October 2007 at Hamamatsu in Japan, the Japanese “shrine” for photomultipliers.

Telescope takes next step to high-energy frontier

On 9 April 2005, another sunny and bitterly cold day on the southwest shore of Lake Baikal in Siberia, NT200+ was commissioned as the successor to the neutrino telescope NT200. With an effective volume of 10 million tonnes, NT200+ forms one of a trio of large high-energy neutrino telescopes currently in operation, together with Super-Kamiokande in Japan and the Antarctic Muon and Neutrino Detector Array (AMANDA) at the South Pole.

Every year in February and March, the Baikal Neutrino Telescope is hauled up close to the surface of the thick layer of ice that covers the lake in winter for routine maintenance. Then, in early April, in a race against the steadily warming environment, the ice camp with all its containers and winches is dismantled and stored on shore. The telescope is re-deployed to its operational depth of 1.1 km below the surface and switched back on for another year of operation. With a stable ice cover on the lake lasting well into April, nature has been kind this year to the 50 physicists and technicians, who have struggled over two Siberian winters to accomplish their ambitious programme to upgrade NT200.

The existing NT200 telescope consists of 192 glass spheres, 40 cm in diameter, each housing a 37 cm phototube. The first, smaller stage of the telescope was commissioned in 1993, and became the first stationary underwater Cherenkov telescope for high-energy neutrinos in a natural environment (CERN Courier September 1996 p24). The full array was completed in 1998 and has been taking data ever since.

The glass spheres are arranged in pairs along eight vertical strings that are attached to an umbrella-like frame at a depth of 1.1 km. The phototubes record the Cherenkov light emitted by charged particles as they pass through the water. Three electrical cables, 5 km long with seven wires each, connect NT200 to the shore 3.5 km away and enable the array to be operated throughout the year. Two of these cables were changed in 2004 and 2005. The reliability and performance of the telescope were also improved during this period, with embedded high-performance PCs installed underwater. In addition, new modems operating at 1 Mbit/s have increased the transfer rate to shore by two orders of magnitude.

NT200 looks at the sky for sources of high-energy cosmic neutrinos. Galactic candidates for high-energy sources include supernova remnants and micro-quasars, while extragalactic sources include active galactic nuclei and gamma-ray bursts. If individual sources are too weak to produce an unambiguous directional signal, the integrated neutrino flux from all sources might still produce a detectable “diffuse signal”. This flux could be identified by an excess of particles at high energies above the background, which is dominated by muons produced in the atmosphere above the detector, with a small contribution from muons generated in the interactions of atmospheric neutrinos.

The most important result of the first four years of NT200 comes from a search for such a diffuse neutrino flux. It is based on a principle that works only in media with small light scattering, such as water. The idea is not only to watch the geometrical volume of the detector, but also to look for bright events in the large volume between the detector and the bottom of the lake. Because of the small light scattering, wave fronts are preserved over 100 m or more. This results in good pattern recognition for bright particle cascades occurring far outside the geometrical volume, and it enables distant high-energy cascades generated by neutrinos to be distinguished from bright bremsstrahlung showers along the much more frequent downward-going muons. No such events in excess of background have been found.

This result can be transformed into a limit on the flux of cosmic neutrinos, for a given spectral distribution. Assuming a reference spectrum that falls with the inverse square of the neutrino energy, four years of Baikal data yield the flux limit shown in figure 1. For comparison, the limits obtained in one year with the much larger AMANDA telescope are shown. Both experiments have entered new territory and exclude several models for sources of cosmic neutrinos.
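
A note on how such limits are conventionally quoted (the form below is the standard convention; the numerical value of the Baikal limit appears in figure 1 and is not reproduced here): for an assumed E⁻² spectrum, the bound is placed on the energy-independent prefactor.

```latex
% Diffuse-flux limits for an assumed E^-2 spectrum are quoted as a bound
% on the energy-independent combination E^2 dPhi/dE.
\frac{d\Phi}{dE} = A\,E^{-2}
\quad\Longrightarrow\quad
E^{2}\,\frac{d\Phi}{dE} = A \le A_{\mathrm{limit}}
```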

It is this success that motivated the upgrade to NT200+. In the new configuration, three 140 m strings with 12 photomultipliers each are arranged at a radius of 100 m from NT200, so that they surround most of the sensitive volume (figure 2). This enables a much better determination of the shower vertex and dramatically improves the energy resolution. As a result, the upgrade, which adds only 36 photomultipliers to the existing 192, yields a fourfold increase in sensitivity at 10 PeV – certainly a cost-effective way to do better physics.

The results from NT200 have demonstrated that a deep underwater detector with an instrumented volume of 80 kt can reach an effective volume of a few megatonnes at peta-electron-volt energies. NT200+, with its moderate but cleverly arranged additional instrumentation, will boost the effective volume to more than 10 Mt. If successful, this could become the prototype for an even larger, sparsely instrumented detector for high energies.

• The Baikal Telescope is a joint Russian-German project involving the Institute for Nuclear Research (INR) in Moscow, Moscow State University, the Joint Institute for Nuclear Research in Dubna and Irkutsk State University (all in Russia), together with DESY (Germany).

ClearPET offers improved insight into animal brains

Crystal Clear is an international collaboration of research institutes, working to develop new generations of scanners for positron emission tomography (PET). The members are CERN, Forschungszentrum Jülich, the Institute of Nuclear Problems in Minsk, the Institute for Physical Research in Ashtarak, the Laboratório de Instrumentação e Física Experimental de Partículas (LIP) in Lisbon, Sungkyunkwan University School of Medicine in Seoul, the Université Claude Bernard in Lyon, the Université de Lausanne and the Vrije Universiteit Brussel (VUB).

Together with a number of guest laboratories, the institutes provide expertise in different domains of physics instrumentation, biology and medicine. Their research activities have led to the design and construction of three prototypes of a new generation of PET scanners for small animals, which provide depth-of-interaction (DOI) information. The design has now been commercialized by the German company Raytest GmbH under the name ClearPET.

In PET, a molecule involved in a metabolic function of an organ or tumour is labelled by a positron-emitting radioisotope. Once injected, it is taken up by the cells or organs under study. The emitted positrons annihilate with electrons in the surrounding atoms to produce a back-to-back pair of gamma rays. Detecting this gamma radiation reveals the detailed distribution of the isotope.
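
The 511 keV energy of these gamma rays follows directly from energy and momentum conservation for a positron and an electron annihilating essentially at rest (a standard result, added here for orientation):

```latex
% Each annihilation photon carries the electron rest energy; momentum
% conservation makes the photon pair (nearly) back-to-back.
e^{+} + e^{-} \to \gamma\gamma, \qquad
E_{\gamma} = m_{e}c^{2} \approx 511~\mathrm{keV}
```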

In the prototype scanners developed by the Crystal Clear collaboration, the detector heads are based on an 8 × 8 matrix of scintillation crystal elements, read out by a multi-anode photomultiplier tube (figure 1). Each element consists of a phosphor sandwich, or phoswich, made up of two layers of crystals with different decay times. One layer is formed from cerium-doped lutetium yttrium orthosilicate (LYSO) scintillator material; the other contains cerium-doped lutetium yttrium aluminate perovskite (LuYAP) scintillator, specially developed by the Crystal Clear collaboration and now commercially available from several companies.

The phoswich arrangement yields DOI information that can be used to correct parallax errors, resulting in a more uniform spatial resolution across the field of view. The crystal elements have a cross-section of 2 × 2 mm and are 8 or 10 mm long; they are separated by 300 μm of Tyvek, a highly reflective material.

The detector modules, which are installed on a rotating gantry, consist of four detector heads mounted in line together with readout electronics. A complete ring system contains 20 detector modules. Because the gantry rotates during a scan, not all of the 20 need to be present. This allows the option of designing a cost-effective system based on a partial ring configuration. Two versions of the scanner are being produced, differing only in the mechanics of the gantry. ClearPET Neuro is optimized for small primates and features a gantry that can be tilted to allow the animal to be imaged in a sitting position, while ClearPET Rodent is optimized for rats and mice.

The performance of the ClearPET prototypes has been studied in various tests. The spatial resolution was measured by imaging a point source of the positron emitter ²²Na. Figure 3 shows that the spatial resolution is close to the predictions made in detailed Monte Carlo simulations using GATE, the Geant4 Application for Tomographic Emission (CERN Courier January/February 2005 p27). At the centre of the field of view the resolution is 1.35 mm FWHM, and it remains roughly constant at around 1.8 mm FWHM for objects within 20 mm of the scanner axis.

A general feeling for the ClearPET’s performance was obtained by imaging a phantom – a model that measures the characteristics of a medical imaging system. An ad hoc Derenzo phantom was used, consisting of capillary tubes with diameters varying between 1.0 and 2.0 mm, arranged like slices in a pie. Rods of the phantom were filled with 0.5 mCi of ¹⁸F, a positron emitter regularly used, for example, in PET scans of the brain. It was scanned for 6 min. Figure 4 shows a picture of the phantom and a reconstruction using the ordered-subsets expectation maximization (OSEM) method. Tubes with diameters as small as 1.6 mm are still clearly distinguishable.

The prototypes have also been tested with real subjects. Figure 5 (a) shows one of the rat images obtained with the ClearPET Neuro at the Forschungszentrum Jülich. A 400 g rat was injected with 0.5 mCi of ¹⁸F-labelled fluorodeoxyglucose ([¹⁸F]FDG), which can be used to observe sugar metabolism in the brain. A 24 min scan was started 30 min after the injection. The reconstructed image shows FDG uptake in the head of a rat. Figure 5 (b) depicts the anatomy of a rat brain. Note the good identification of the small olfactory bulb in front of the brain. These images were obtained using the library of Software for Tomographic Image Reconstruction (STIR) at Hammersmith Hospital, London.

These measurements meet the ClearPET design specifications, and the first images obtained with a rat support these encouraging results. The ClearPET Neuro at the Forschungszentrum Jülich and the ClearPET Rodent at the Vrije Universiteit Brussel are nearing completion, and will soon be used in several biomedical research projects.

• “ClearPET” has been registered as a trademark and the technology is licensed to Germany’s Raytest GmbH, which is commercializing a small animal PET system based on the ClearPET Rodent developed by Crystal Clear. See www.raytest.de/index2.html.

LHC cryogenic unit keeps its cool

The cryogenic system for the Large Hadron Collider (LHC) at CERN reached a major milestone on 7 April by achieving operation of the unit at Point 8 at its nominal temperature of 1.8 K. The LHC and its superconducting magnets are designed to operate at this very low temperature, making the 27 km accelerator the coldest large-scale installation in the world. Although acceptance tests performed on the surface had already reached the required temperature in 2002, this is the first time that the nominal temperature has been achieved in situ.

The LHC cryogenics system is hugely complex, with 31 kt of material (compressor stations, cold boxes with expansion turbines and heat exchangers, and interconnecting lines) and 700 kl of liquid helium passing through 40,000 pipe junctions.

Although normal liquid helium at 4.5 K would be able to cool the magnets so that they became superconducting, the LHC will use superfluid helium at the lower temperature of 1.8 K. Superfluid helium has unusually efficient heat-transfer properties, allowing kilowatts of refrigeration to be transported over more than 1 km with a temperature drop of less than 0.1 K.

Eight cryogenic installations distributed around the LHC ring, with a total power exceeding 140 kW, will cool the helium in two stages, first to 4.5 K and then to the final 1.8 K. Four units built by the Japanese-Swiss consortium IHI-Linde have already been installed; the other four units, made by the French company Air Liquide, are currently being installed and will be tested in 2006.

CMS VPT production reaches 10,000 mark

The CMS experiment, under construction for the Large Hadron Collider (LHC) at CERN, recently took delivery of its 10,000th vacuum phototriode (VPT), to be used in the Electromagnetic Endcap Calorimeter. The occasion was marked by a seminar organized in St Petersburg by the VPT manufacturer, National Research Institute Electron. The manufacturing programme is scheduled for completion in early 2006, when a total of 15,500 devices will have been delivered.

The VPT is a single-stage photomultiplier, developed for CMS by groups at the Rutherford Appleton Laboratory, Brunel University and the Petersburg Nuclear Physics Institute, Gatchina. In CMS, each VPT will be bonded to a scintillating lead-tungstate crystal supplied by the Bogoroditsk Techno-Chemical Plant, also in Russia. Each CMS endcap will contain 7324 such crystals and VPTs.

The LHC will provide a very demanding environment for the detectors: they must operate for 10 years under intense gamma and neutron irradiation, and in a magnetic field of 4 T.

In addition, the beam-crossing rate of 40 MHz means that the VPTs must respond to light signals on a timescale of a few nanoseconds. Only a few manufacturers in the world are able to meet the technical requirements of the CMS experiment.

The seminar in St Petersburg was attended by representatives of CERN, the CMS experiment, NRI Electron, and OJSC Russian Electronics, the holding company of both NRI Electron and Bogoroditsk Techno-Chemical Plant. At the end of the seminar, the Russian Academy of Engineering Science gave a special award to Hans Rykaczewski, the CMS-ECAL resources manager, to recognize his contribution to the collaboration between CERN and Russian industry.

PET and CT: a perfect fit

Image: video capture of David Townsend.

David Townsend is a professor in the Department of Medicine, University of Tennessee Medical Center in Knoxville, Tennessee (TN). The winner of the 2004 Clinical Scientist of the Year Award from the Academy of Molecular Imaging, he is an internationally renowned researcher with 30 years’ experience as a physicist working in the field of positron emission tomography (PET). Townsend spent eight years at CERN, beginning in 1970. While working at the Cantonal Hospital in Geneva from 1979 to 1993, he recognized the importance of combining the functionality of PET with that of computed tomography (CT). During that same period, Townsend also worked with Georges Charpak, CERN physicist and 1992 Nobel laureate in physics, on medical applications of Charpak’s multi-wire chambers.

After Townsend moved to Pittsburgh in 1993, his group in the US helped to develop the first combined PET/CT scanner; more than 1000 are now used worldwide to image human cancer. In 1999, Townsend received the Image of the Year Award from the Society of Nuclear Medicine in the US, for an image he produced using the first prototype scanner combining state-of-the-art PET with true diagnostic-quality CT.

Current research objectives in instrumentation for PET include advances in PET/CT methodology and the assessment of the role of combined PET/CT imaging for a range of different cancers. The PET/CT combination, pioneered by Townsend and Ron Nutt, CEO and president of CTI Molecular Imaging in Knoxville, TN, is a milestone in these developments, revealing in particular the role of the physicist and engineer in bringing such developments into clinical practice and exploring how they affect patient care.

The past 20 years have seen significant advances in the development of imaging instrumentation for PET. Current high-performance clinical PET scanners comprise more than 20,000 individual detector elements, with an axial coverage of 16 cm and around 15% energy resolution. Can you identify the most important factors that have contributed to this remarkable development in PET?

This impressive progress is due essentially to developments in detector construction, new scintillators, better scanner designs, improved reconstruction algorithms, high-performance electronics and, of course, the vast increase in computer power, all of which have been achieved without an appreciable increase in the selling price of the scanners.

The PET/CT image is one of the most exciting developments in nuclear medicine and radiology, its significance being the merging not simply of images but of the imaging technology. Why is the recent appearance of combined PET and CT scanners that can simultaneously image both anatomy and function of particular importance?

Image: the first mouse image, taken in 1977.

Initial diagnosis and staging of tumours are commonly based on morphological changes seen on CT scans. However, PET can differentiate malignant tissue from benign tissue and is a more effective tool than CT in the search for metastases. Clearly, valuable information can be found in both, and by merging the two it is possible now to view morphological and physiological information in one fused image. To acquire the PET/CT image, a patient passes through the CT portion of the scanner first and then through the PET scanner where the metabolic information is acquired. When the patient has passed through both portions, a merged or fused image can be created.

Let’s take a step back. The history of PET is rich, dynamic and marked by many significant technological achievements. Volumes of books would be required to record the history of PET developments and its birth still remains quite controversial. Could you identify the most important events that have shaped modern PET?

You are indeed correct that the birth of PET is somewhat controversial. One of the first suggestions to use positron-emitting tracers for medical applications was made in 1951 by W H Sweet and G Brownell at Massachusetts General Hospital, and the idea was explored further during the 1950s. Attempts to build a positron scanner in the late 1950s and 1960s were not very successful. After the invention of the CT scanner in 1972, tomography in nuclear medicine received more attention, and during the 1970s a number of different groups attempted to design and construct a positron scanner.

S Rankowitz and J S Robertson of Brookhaven National Laboratory built the first ring tomograph in 1962. In 1975, M Ter-Pogossian, M E Phelps and E Hoffman at Washington University in St Louis presented their first PET tomograph, known as the Positron Emission Transaxial Tomograph I (PETT I). The name was later shortened to PET, because the transaxial plane was not the only plane in which images could be reconstructed. In 1979, G N Hounsfield and A M Cormack were awarded the Nobel Prize in Physiology or Medicine in recognition of their development of X-ray CT.

Since the very early development of nuclear-medicine instrumentation, scintillators such as sodium iodide (NaI) have formed the basis for the detector systems. The detector material used in PET is the determining factor in the sensitivity, the image resolution and the count-rate capability.

The detector of choice in the mid-1970s was thallium-activated NaI – NaI(Tl) – which requires care in manufacture because of its hygroscopic nature. More importantly, it also has a low density and a low effective atomic number, which limit its stopping power and its efficiency for detecting the 511 keV gamma rays from positron annihilation. Which other scintillators have contributed to modern PET tomography?

Thanks to its characteristics, bismuth germanate, or BGO, is the crystal that has served the PET community well since the late 1970s, and it has been used in the fabrication of most PET tomographs for the past two decades. The first tomograph to employ BGO was designed and built by Chris Thompson and co-workers at the Montreal Neurological Institute in 1978.

Although the characteristics of BGO are good, a new scintillator, lutetium oxyorthosilicate (LSO) (discovered by C Melcher, now at CTI Molecular Imaging in Knoxville, TN), is a significant advance for PET imaging. BGO is very dense but has only 15% of the light output of NaI(Tl). LSO has a slightly greater density and a slightly lower effective atomic number, but has five times more light output and is seven times faster than BGO. The first LSO PET tomograph, the MicroPET for small animal imaging, was designed at the University of California in Los Angeles (UCLA) by Simon Cherry and co-workers. The first human LSO tomograph, designed for high-resolution brain imaging, was built by CPS Innovations in Knoxville, TN, and delivered to the Max Planck Institute in February 1999.

What were your key achievements in PET during your career at CERN? Did CERN play a role in its birth?

Image: a PET/CT scan revealing malignancy in two pericaval nodes.

In 1975, I was working at CERN when Alan Jeavons, a CERN physicist, asked me to look at the problem of reconstructing images from PET data acquired on the small high-density avalanche chambers (HIDACs) he had built for another application with the University of Geneva. We got the idea of using the HIDACs for PET because a group at Berkeley and the University of California, San Francisco (UCSF) was using wire chambers for PET. I developed some software to reconstruct the data from Jeavons’ detectors, and we took the first mouse image with the participation of radiobiologist Marilena Streit-Bianchi in 1977 at CERN.

The reconstruction methods I developed at CERN were further extended mathematically by Benno Schorr (a CERN mathematician), Rolf Clackdoyle and myself from 1980 to 1982. We used those, and other algorithms developed by Michel Defrise in Brussels and Paul Kinahan in Vancouver, in 1987 and 1988 to reconstruct PET data from the first CTI [Computer Technology and Imaging Inc, renamed CTI Molecular Imaging in June 2002] multi-ring PET scanner installed in London at Hammersmith Hospital. PET was not invented at CERN, but some essential and early work at CERN contributed significantly to the development of 3D PET, and then to a new scanner design, the Advanced Rotating Tomograph (ART).

The prototype of the ART scanner, the Partial Ring Tomograph (PRT), was developed at CERN from 1989 to 1990 by Martin Wensveen, Henri Tochon-Danguy and myself, and evaluated clinically at the Cantonal Hospital within the Department of Nuclear Medicine under Alfred Donath. The ART was a forerunner of the PET part of the combined PET/CT scanner, which has now had a major impact on medical imaging.

What has to happen for us to reach a more highly performing PET/CT combination?

The sensitivity of the PET components must be improved in order to acquire more photons in a given time. That is still a challenge, because the axial coverage of current scanners is only 16 cm, whereas after injection of the radiopharmaceutical, radiation is emitted from everywhere in the patient’s body where the radiopharmaceutical localizes. So, if the detector covered the whole body, the patient could be imaged in one step. However, building such a system would be very expensive.

Do you think it is still possible to have other combinations with other imaging techniques?

Yes, absolutely, but only if there is a medical reason to do it – such a development won’t be driven by advances in technology alone. When we looked at building a PET/CT scanner, we found that most whole-body anatomical imaging for oncology is still performed with CT, whereas in brain and spinal malignancies, anatomical imaging is performed with magnetic resonance (MR).

PET/CT is less technologically challenging than combining PET with MR. PET and CT modalities basically do not interfere with each other, except maybe when they are operated simultaneously within the same gantry. The combined PET/CT scanner provides physicians with a highly powerful tool to diagnose and stage disease, monitor the effects of treatment, and potentially design much better, patient-specific therapies.

What is the actual cost of a PET/CT scanner?

The cost of the highest-performing system is about $2.5 million [€1.98 million], but it may be significantly less if a lower-performance design is adequate for the envisaged application.

  • This article was adapted from text in CERN Courier vol. 45, June 2005, pp23–25

Energy-recovering linacs begin maturing

In March, 159 scientists from around the world gathered at the US Department of Energy’s (DOE’s) Jefferson Lab (JLab) in Newport News, Virginia, for ERL2005, the first international workshop dedicated to energy-recovering linear accelerators (ERLs). The workshop was conceived during accelerator discussions preceding the publication in 2003 of the DOE’s Facilities for the Future of Science: A Twenty-Year Outlook.

Those discussions initially focused on the need to develop high-brightness, high-current injectors, but soon expanded to include the ERLs then beginning to be implemented on three continents. Planning ensued for ERL2005, which was approved by the International Committee for Future Accelerators (ICFA) as an Advanced ICFA Beam Dynamics Workshop, and interest quickly grew. Other sponsors included three institutions building or planning to build superconducting radio-frequency (SRF) ERLs: Cornell University and Brookhaven National Laboratory in the US and the Council for the Central Laboratory of the Research Councils’ (CCLRC’s) Daresbury Laboratory in the UK.

The growth of ERLs

ERLs began to come of age in 1999 at a light source at JLab – the SRF ERL-driven free-electron laser (FEL). At present, several ERL projects around the world are under design or construction, and test facilities at several laboratories have been funded. Unlike the recycling of electrons in a synchrotron or a storage ring, an ERL uses a conceptually simple phasing technique to recycle the electrons’ energy. On a path measuring exactly an integer multiple of the linac RF wavelength plus a half-wavelength, an ERL’s accelerated beam travels through an experiment and re-enters the linac to yield back its energy, via the RF field, to the beam being accelerated. The decelerated beam is then dumped at low energy.
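
The phasing condition can be written in one line (a sketch of the mechanism just described): with a recirculation path of length L, the returning bunch arrives half an RF period out of phase and is therefore decelerated, returning its energy to the field.

```latex
% Energy-recovery condition: the return path is an integer number of RF
% wavelengths plus one half-wavelength, so the recirculated bunch sees
% the accelerating field 180 degrees out of phase and is decelerated.
L = \left(n + \tfrac{1}{2}\right)\lambda_{\mathrm{RF}}, \qquad n \in \mathbb{Z}
```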

An obvious advantage of ERLs is economic. Consider, for example, the ERL-driven 4th Generation Light Source (4GLS) facility planned for Daresbury, where a prototype ERL is under construction. In its May 2003 issue (p7), Physics World reported that without energy recovery, “4GLS would consume roughly the output of a large commercial power station”. Energy recovery also simplifies spent-beam disposal.

The overall promise of ERLs has been distilled in a paper by JLab’s Lia Merminga, who chaired ERL2005 with Swapan Chattopadhyay, also from JLab. Together with co-authors D R Douglas and G A Krafft, Merminga wrote: “At the most fundamental level, beam-energy recovery allows the construction of electron linear accelerators that can accelerate average beam currents similar to those provided by storage rings, but with the superior beam quality typical of linacs. Such an ability to simultaneously provide both high current and high beam quality can be broadly utilized in, for example, high-average-power free-electron laser sources designed to yield unprecedented optical beam power; light sources extending the available photon brilliance beyond the limits imposed by present-day synchrotron light sources; electron cooling devices which would benefit from both high average current and good beam quality to ensure a high cooling rate of the circulating particles in a storage ring collider; or, possibly, as the electron accelerator in an electron-ion collider intended to achieve operating luminosity beyond that provided by existing, storage-ring-based colliders” (Merminga et al. 2003).

Realizing these prospects will require overcoming the technical challenges that the workshop was convened to discuss. These include polarized and unpolarized photoinjectors with high average current and low emittance; optimized lattice design and longitudinal gymnastics; beam stability and multibunch/multipass instabilities; beam-halo formation and control of beam loss; SRF optimization for continuous-wave, high-current applications; higher-order-mode (HOM) damping and efficient extraction of HOM power; RF control and stability; synchronization; and high-current diagnostics and instrumentation.

Neither the energy-recovery idea nor its close association with SRF is new. In 1965, Cornell’s Maury Tigner suggested a possible collider combining the then-novel concept of the superconducting linear accelerator with what he called “energy recovery” – an “artifice”, he wrote, that “might also be useful in experiments other than the clashing-beam type” (Tigner 1965). Energy recovery was demonstrated as early as the mid-1970s, but the first ERL with high average current drove the first kilowatt-scale FEL from 1999 to 2001 at JLab.

That FEL, which was later substantially upgraded, gave users infrared light at 3-6 μm for 1800 hours – the most achievable with available funding – and led to publications by some 30 groups. Research topics included nanotube production, hydrogen-defect dynamics in silicon, and protein energy transport. The experimentation influenced thinking about linear and nonlinear dynamical processes. Moreover, the ERL itself directly produced broadband light in the terahertz region between electronics and photonics, at over four orders of magnitude higher average power than anywhere before. In Nature, Mark Sherwin of the University of California, Santa Barbara (UCSB) predicted “new investigations and applications in a wide range of disciplines” (Sherwin 2002).

At 5 mA and 42 MeV, JLab’s original SRF ERL was a small but much-higher-current cousin of CEBAF, the five-pass, 6 GeV recirculating linac that enables the laboratory’s main mission of research in nuclear physics. The ERL/FEL has now been upgraded to produce light at 10 kW in the infrared, with a 1 kW capability imminent in the ultraviolet (figure 1). For infrared operation, the average beam current has been doubled to 10 mA. In the further evolution of ERLs, high average current will be crucial. Optimal performance, in fact, is a trade-off between high current and beam-quality degradation. Envisaged ERL projects involve average currents about an order of magnitude higher than those demonstrated so far.

In his plenary speech at the workshop, Todd I Smith of Stanford University summarized the status and outlook for ERL-based FELs. After mentioning electrostatic machines at UCSB, the College of Judea and Samaria in Israel, the Korea Atomic Energy Research Institute (KAERI) in South Korea, and FOM Nieuwegein in the Netherlands, he moved on to JLab and the other two operational RF linac FELs – an SRF machine at the Japan Atomic Energy Research Institute (JAERI) and a room-temperature ERL at the Budker Institute for Nuclear Physics (BINP), Novosibirsk. Smith said that energy-recovering RF-linac-based FELs are proliferating at a rate both “astonishing” and “satisfying”. Among those being planned are machines at KAERI, at Saclay in France and 4GLS at Daresbury. In Florida, in partnership with JLab and UCSB, the National High-Field Magnetic Laboratory has proposed initial steps toward a 60 MeV SRF ERL to drive a kilowatt FEL spanning a wavelength range of 2-1000 μm.

Let there be light

All existing hard X-ray synchrotron radiation facilities are based on storage rings. A half-century ago, first-generation synchrotron-light devices tapped particle accelerators parasitically. Then came a second generation of light sources based on dedicated storage rings, followed, in the 1990s, by third-generation machines with high brightness. Third-generation facilities include short-wavelength hard X-ray sources (such as the European Synchrotron Radiation Facility in Grenoble, the Advanced Photon Source at Argonne, and SPring-8 in Japan) and long-wavelength soft X-ray sources (such as the Advanced Light Source at Berkeley, Sincrotrone Trieste in Italy, the Synchrotron Radiation Research Center in Taiwan, and the Pohang Light Source in South Korea). Fourth-generation X-ray light sources based on FELs driven by linacs are under development at DESY, SLAC and RIKEN’s Harima Institute in Japan. The idea of an X-ray synchrotron light source based on ERLs was advocated in 1998 by G Kulipanov, N Vinokurov and A N Skrinsky at BINP, with their pioneering MARS proposal, and later by JLab’s Geoffrey Krafft.

Serious pursuit of a design for an ERL light source by Cornell has recently yielded funding from the US National Science Foundation (NSF) to begin developing a major ERL-based upgrade of the Cornell High Energy Synchrotron Source at the Cornell Electron Storage Ring. ERLs also constitute “a natural and cost-effective upgrade path” for storage-ring light sources, according to Charles K Sinclair of Cornell. At the workshop, Sinclair characterized the potential improvements in ERLs in brightness, coherence and pulse brevity as “transformational”. In one of his examples of applications, he noted that on the timescale of hundreds of femtoseconds, an ERL can enable experimenters to follow the structure of ultrafast chemical reactions. With the NSF funding, Cornell is developing an injector to deliver low-emittance beams at 100 mA.

At JLab – Cornell’s partner in preparing the NSF proposal – collaborative experiments are being conducted concerning other issues in ERL development: beam break-up in the ERL/FEL and RF control in both the ERL/FEL and CEBAF. To complement the FEL’s demonstration of high average current, CEBAF was specially configured briefly during 2003 for a single-pass proof-of-principle study of energy recovery at the giga-electron-volt scale. JLab’s assets for developing SRF-driven ERLs also include the Center for Advanced Studies of Accelerators (CASA) and the Institute for SRF Science and Technology, housed in a test laboratory with a substantial complement of SRF R&D facilities.

As a first step in the 4GLS project, Daresbury is building a 50 MeV prototype ERL that will supply electron beams to a test FEL using an infrared wiggler on loan from JLab. Eventually, with a 600 MeV ERL, 4GLS would complement the UK’s higher-energy X-ray light source, Diamond, which is under construction at the CCLRC’s Rutherford Appleton Laboratory. The 4GLS facility is planned to exploit the sub-picosecond regime and to combine exceptionally high transverse and longitudinal brightness. Central to the plan are a variety of opportunities for pump-probe experiments and the combining of spontaneous and stimulated sources at a single centre. Two photocathode guns are planned, one for high average current, the other for high peak current (figure 2).

For physics research conducted at colliders, ERLs offer the promise of providing electron cooling for hadron storage rings and high-current, low-emittance electron beams for high-luminosity electron-ion colliders. In the cooling process, which brings higher luminosity to ion-beam collisions, an ion beam and an electron beam are merged. The electron beam’s energy is chosen to match the ion beam’s velocity, enabling the electron beam to remove thermal energy from the ion beam. An R&D ERL designed for 0.5 A average current is under construction at Brookhaven. It serves as a prototype for the electron cooler designed for RHIC II, the proposed upgrade that could increase the luminosity of the Relativistic Heavy Ion Collider (RHIC) by an order of magnitude. It is also a prototype for an envisaged RHIC upgrade called eRHIC, in which an ERL would provide electron beams for electron-ion collisions. A similar concept, ELIC, envisages the upgrade of CEBAF at JLab for energy-recovering acceleration of electrons to use in collisions with light ions from an electron-cooled ion-storage ring.
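
The velocity-matching condition fixes the electron energy (a standard kinematic relation, not given explicitly in the article): equal velocities mean equal Lorentz factors, so the electron beam energy is the ion beam energy scaled down by the mass ratio.

```latex
% Electron cooling: matching beam velocities, i.e. Lorentz factors.
\gamma_{e} = \gamma_{\mathrm{ion}}
\quad\Longrightarrow\quad
E_{e} = \frac{m_{e}}{m_{\mathrm{ion}}}\,E_{\mathrm{ion}}
```

For RHIC-like ion energies this puts the cooling beam at roughly a few tens of MeV, which is well matched to a compact ERL.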

A new ERL/FEL concept known as the “push-pull FEL” was presented at ERL2005 by Andrew Hutton of JLab. This proposal, which in some ways resembles a high-energy collider configuration mentioned in Tigner’s 1965 paper, calls for two sets of superconducting cavities with two identical electron beams travelling in opposite directions. Each set of cavities accelerates one electron beam and decelerates the other. This arrangement allows the energy used to accelerate one beam to be recovered and used again for the other. The difference compared with other energy-recovery proposals is that each electron beam is decelerated by a different structure from the one that accelerated it, so this is energy exchange rather than energy recovery. The push-pull approach can lead to a compact layout (figure 3).

The continued success of ERLs would most likely accelerate interest in Chattopadhyay’s call for “practical, affordable yet unique and exciting new accelerator facilities” at the “mezzo scale”. Such successes would also, as Merminga and colleagues concluded, “set the stage for high-energy machines at the gigawatt scale, providing intense, high-quality beams for investigation of fundamental processes as well as the generation of photon beams at wavelengths spanning large portions of the electromagnetic spectrum”. Toward such ends, said Chattopadhyay, “Jefferson Lab is advancing the ERL field at the fastest pace possible and is committed to working in partnership with the international community to promote the development of ERLs further as the next-generation instrument of science wherever it is feasible”. He added that “the successful emergence of the Cornell and Daresbury facilities, both collaborators with Jefferson Lab, signals a bright future ahead”.

HERA and LHC workshops help prepare for the future

After the major luminosity upgrade of DESY’s electron-proton collider HERA in 2001, experiments at the accelerator are now producing data for Run II, which will last until the end of HERA operation in 2007. The results obtained by the two collider experiments H1 and ZEUS will have a profound impact on the physics to be explored at CERN’s Large Hadron Collider (LHC). Since March 2004, members of the communities working at HERA and preparing for the LHC have been meeting regularly at CERN and DESY in a series of workshops intended to promote co-operation between the two communities. The aim of the six “HERA and the LHC” workshops, the final meeting of which was held at DESY during the week before Easter 2005, was to investigate the precise implications of HERA results for physics at the LHC.

The goals of the series of workshops, which had more than 200 registered participants, were as follows:
• to identify and prioritize those measurements to be made at HERA that have an impact on the physics reach of the LHC;
• to encourage and stimulate the transfer of knowledge between both communities and establish an ongoing interaction;
• to encourage and stimulate theoretical and phenomenological efforts;
• to examine and improve theoretical and experimental tools;
• to increase the quantitative understanding of the implications of HERA measurements for LHC physics.

At the final meeting of the series, the speakers summarized the results and presented the conclusions from studies and discussions carried out during the past year by working groups on parton density functions, multijet final states and energy flows, heavy quarks, diffraction and Monte Carlo tools. In general it was made very clear that there is a strong interest from the LHC physics community in detailed studies at HERA. Several general talks on physics at the LHC and HERA outlined the importance of the results obtained at HERA, with special emphasis on the measurements that have still to be done and that will have a significant impact on the physics reach of the LHC. “Clearly, to calculate properly the production rates of Higgs and supersymmetry we absolutely need to understand quantum chromodynamics [QCD] as well as possible,” said John Ellis from CERN. It also became evident that much more theoretical, phenomenological and experimental investigations would be desirable, and to this end several projects were launched during the workshop.

Speakers repeatedly stressed the importance for LHC physics of precise measurements of the parton densities, i.e. the densities of the various types of quarks and the gluons within the proton. In particular, the whole issue of parton density functions (PDFs), from the standard integrated ones to unintegrated and generalized PDFs and eventually to diffractive PDFs, is a rich field for theoretical and experimental studies. These studies include not only a precise experimental determination of the PDFs, but also the more fundamental question of the universality of the PDFs – in particular, whether those obtained at HERA can be applied to the LHC without further modification beyond evolution effects in QCD.
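
The directness of this dependence is captured by the collinear factorization formula of QCD (standard material, reproduced here for orientation): every LHC production rate is a convolution of two parton densities with a partonic cross-section, so PDF uncertainties propagate straight into the predictions.

```latex
% Collinear factorization for p p -> X: f_a, f_b are the parton density
% functions and sigma-hat the partonic cross-section; mu_F and mu_R are
% the factorization and renormalization scales.
\sigma_{pp \to X} = \sum_{a,b} \int_{0}^{1} \! dx_{1}\, dx_{2}\;
f_{a}(x_{1},\mu_{F}^{2})\, f_{b}(x_{2},\mu_{F}^{2})\;
\hat{\sigma}_{ab \to X}(x_{1},x_{2},\mu_{F}^{2},\mu_{R}^{2})
```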

In the multijet working group, one of the main topics was the issue of multiple scatterings and underlying events. The understanding of these effects has an impact on, for example, the Higgs cross-section measurements in the boson-fusion channel at the LHC. A major step towards a deeper understanding of multiple scatterings is their relation to diffractive scattering: they are simply different facets of the large density of partons at high energies. The dynamics of these high-density systems require extensions of the concept of parton densities from transverse-momentum dependent (unintegrated) to generalized and diffractive parton densities, which can be measured precisely at HERA. These parton densities will be essential for analysing diffractive Higgs production at the LHC, a very clean and promising channel. However, to study this process and also problems of parton dynamics at low x that are still unsolved, the forward region of the LHC detectors needs further instrumentation. This is a task for which the experiments at HERA have accumulated both technical and physics experience over recent years.

Heavy quark production at the LHC is also interesting in terms of QCD. The densities of heavy quarks will play an important role at the LHC, for example in Higgs production channels, and they will be accurately measured at HERA in the high-luminosity programme. In the forward production of heavy quarks, as will be the case in the LHCb experiment, effects coming from high parton densities and the saturation of the cross-section might be observed directly.

All of these studies require adequate tools and simulation programs. The working groups made measurements from HERA, the Tevatron at Fermilab and the SPS at CERN available in the form of easy-to-use computer codes. These will be useful for any tuning of Monte Carlo generators. New concepts were also investigated and user-friendly interfaces to simulation programs were developed.

A unique machine

During the year of the workshops, co-operation between experimenters at HERA and the LHC and the interest from the theoretical and phenomenological side have continuously increased. It has become clear that not only can the LHC profit from HERA (e.g. through precise measurements of parton densities), but also that HERA will profit from investigations carried out for the LHC, such as the application of next-to-leading-order (NLO) calculations in Monte Carlo event generators (MC@NLO).

HERA is a unique machine; it is the only high-energy electron-proton collider in the world. During the workshop meetings, it became obvious that for many topics, it is the only place today where many of the necessary measurements and studies can be performed. HERA is a machine for precision QCD measurements, just as the Large Electron-Positron collider was for the electroweak sector, with the difference that QCD is richer but also more difficult. Many questions are still unanswered, for example those concerned with the understanding of diffraction and issues in parton evolution with all its consequences for the LHC.

The workshops have critically assessed the physics programme of HERA and made suggestions for further measurements and investigations, in particular those that will be important for the physics reach of the LHC and that cannot be performed anywhere other than at HERA. One example is the precise measurement of the gluon density using the longitudinal structure function F_L, which is important for clarifying uncertainties in the present knowledge of the gluon density and the formulation of QCD at high parton densities.
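
The reason F_L pins down the gluon can be shown schematically (leading order, small x; the exact coefficients come from the Altarelli-Martinelli relation and are deliberately omitted here): F_L is generated radiatively and is dominated by the gluon contribution.

```latex
% Schematic small-x behaviour only: C and xi are order-one constants
% whose exact values follow from the LO Altarelli-Martinelli relation.
F_{L}(x,Q^{2}) \;\approx\; C\,\alpha_{s}(Q^{2})\; x g(\xi x, Q^{2})
```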

In view of the prospects for further progress emerging from the high-statistics HERA Run II data, a continuation of the workshop series is now planned on an annual basis. The next meeting will be held at CERN in March 2006.

Records fall at Cornell

Improvements in our understanding of the mechanisms that limit accelerating gradients or electric fields, together with technological advances from worldwide R&D, have steadily increased the performance of superconducting cavities over the past decade.

The TESLA collaboration is now achieving accelerating gradients of 35 MV/m in 1 m-long superconducting structures suitable for the proposed 500 GeV International Linear Collider. The best single-cell cavities at many laboratories reach 40-42 MV/m. At these gradients, energy losses from the superconducting microwave cavity resonators are still minuscule, with “intrinsic Q” values exceeding 10¹⁰, i.e. it takes some 10¹⁰ oscillations for the stored energy in the resonator to die out. If Galileo’s original pendulum oscillator had possessed a similar Q value, it would still be oscillating now, 400 years later.
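
“Intrinsic Q” has a precise meaning (standard resonator definitions, added here for orientation): it compares the stored energy with the power dissipated per unit time, and it sets the ring-down time of the field.

```latex
% Quality factor: stored energy U, dissipated power P_d, resonant
% angular frequency omega_0. The energy rings down with time constant
% Q/omega_0, i.e. over roughly Q/(2 pi) oscillation periods.
Q = \frac{\omega_{0}\,U}{P_{\mathrm{d}}}, \qquad
U(t) = U_{0}\,e^{-\omega_{0} t / Q}
```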

One goal of future R&D programmes is to push accelerating gradients and Q values even higher either to reach tera-electron-volt energies or to save on costs. However, above 40 MV/m the magnetic field at the surface of the resonator approaches the fundamental limit where superconductivity breaks down. One way to circumvent this limit is to modify the shape of the cavity to reduce the ratio between the peak magnetic field and the accelerating field.

About two years ago, Valery Shemelin, Rongli Geng and Hasan Padamsee at the Cornell University Laboratory for Elementary-Particle Physics (LEPP) introduced a “re-entrant” shape, which lowers the surface magnetic field by 10%. Figure 1 compares the re-entrant cavity shape and the shape of the TESLA cavity. The downside of the new shape is the higher accompanying surface electric field, which enhances “field emission” of electrons from the regions of high electric field. Field emission does not present a “brick wall” limit, however, because techniques such as high-pressure rinsing with high-purity water at pressures of about 100 bar eliminate the microparticle contaminants that cause field emission.

Another important aspect of cavity shape is the beam aperture. When a bunch of charge passes through an accelerating cavity it leaves behind a wakefield, which disrupts oncoming bunches. Smaller apertures produce stronger wakefields. The re-entrant shape has the same aperture as the TESLA shape; nevertheless, reducing the aperture, say from 70 to 60 mm, would yield higher accelerating gradients because it would allow a 16% lower surface magnetic field. Further studies are in progress to evaluate the trade-off between higher wakefields and higher potential accelerating gradients.

New ideas are usually proved in single-cell cavities before the technical challenges of multi-cell accelerating units are addressed. The first 70 mm-aperture re-entrant single-cell cavity fabricated at Cornell reached a world-record accelerating field of 46 MV/m at a Q value of 10¹⁰, and 47 MV/m in the pulsed mode suitable for a linear collider. Figure 2 shows how Q varies with accelerating field for this cavity. To reach these record performance levels, the cavity was made from high-purity, high-thermal-conductivity niobium (with a residual resistivity ratio of 500) to avoid thermal breakdown of the superconductivity. Electropolishing provided an ultra-smooth surface.

High-pressure rinsing at 100 bar thoroughly scrubbed the surface free of the microparticles that cause field emission. Final assembly took place in a Class-100 clean-room environment. All these are now standard techniques for the best superconducting cavity preparation. In addition, baking at 100 °C for 50 h promoted a redistribution of the oxygen in the radio-frequency (RF) layer, which is known to prevent premature RF losses.

Record operating Qs

When operating an accelerating cavity with beam, another important Q value is the “operating” or “loaded” Q. This is determined by the power lost to the beam, whereas the “intrinsic Q” is determined by the ohmic power loss in the cavity walls. Intrinsic Q values are 10¹⁰ or higher, as discussed above. For applications with minimal beam loading, the closer the loaded Q is to the intrinsic Q, the smaller the overall RF power investment and operating costs.
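The two Qs are related through the input coupler: in the usual convention, 1/Q_L = 1/Q_0 + 1/Q_ext, where Q_ext describes the power extracted through the coupler. A minimal sketch with assumed values (not figures quoted from the experiments described here):

```python
def loaded_q(q0, q_ext):
    """Loaded Q combines intrinsic Q0 and external (coupler) Q_ext like
    parallel resistors: 1/Q_L = 1/Q0 + 1/Q_ext."""
    return 1.0 / (1.0 / q0 + 1.0 / q_ext)

# With Q0 = 1e10, the coupler setting entirely dominates the loaded Q:
print(f"{loaded_q(1e10, 2.0e7):.2e}")   # ~2.00e+07
print(f"{loaded_q(1e10, 1.4e8):.2e}")   # ~1.38e+08
```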

CCEcor2_06-05

The state of the art for structures designed to accelerate velocity-of-light particles is operation at a loaded Q of 2 × 10⁷. Higher loaded Qs are extremely challenging because the resulting bandwidth of the cavity resonance is only of the order of 10 Hz (out of a typical 1.5 GHz), making the field in the cavity extremely sensitive to any perturbation of the resonance frequency from microphonics or Lorentz-force detuning. However, Qs above 10⁸ are highly desirable for future applications, in particular for the energy-recovery linacs (ERLs) proposed for future high-flux, high-brilliance light sources, which are being pursued by many laboratories around the world, including Cornell. Until now, no control system had met the amplitude- and phase-stability requirements for the RF field at a loaded Q of 10⁸.
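The sensitivity follows directly from the resonance width, Δf = f0/Q_L. A quick sketch (the 1.5 GHz frequency comes from the text; the loaded-Q values bracket the cases discussed):

```python
def bandwidth_hz(f0_hz, q_loaded):
    """Full width of the cavity resonance: delta_f = f0 / Q_L."""
    return f0_hz / q_loaded

f0 = 1.5e9  # typical operating frequency quoted in the text
for ql in (2.0e7, 1.0e8, 1.4e8):
    print(f"Q_L = {ql:.1e}: bandwidth ~ {bandwidth_hz(f0, ql):.0f} Hz")
# Q_L = 2.0e+07: bandwidth ~ 75 Hz
# Q_L = 1.0e+08: bandwidth ~ 15 Hz
# Q_L = 1.4e+08: bandwidth ~ 11 Hz
```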

Building on techniques developed at DESY for the TESLA Test Facility, researchers at Cornell, under the direction of Matthias Liepe, have developed a new digital RF control system that provides great flexibility, high computational power and low latency for a wide range of control and data-acquisition applications. Cornell recently tested this system in two extreme regimes of loaded Q. First, in the Cornell Electron Storage Ring (CESR), the system stabilized the vector-sum field of two of the ring’s superconducting 500 MHz cavities at a loaded Q of 2 × 10⁵ with a beam current of several hundred milliamps. Several months of continuous operation proved the system’s high reliability, and the field stability surpassed design requirements.

In a more crucial and demanding test, a team from Cornell and Jefferson Laboratory (JLab) connected the system to a cavity with a loaded Q greater than 10⁸ at JLab’s infrared free-electron laser and tested it with beam in the energy-recovery mode, in which the effective beam current is practically zero. In continuous operation, excellent field stability – about 2 × 10⁻⁴ rms in relative amplitude and 0.03° rms in phase – was achieved at a loaded Q of 1.4 × 10⁸ in full energy-recovery mode. This sets a new record for loaded-Q operation of linac cavities. At the highest loaded Q, less than 500 W of klystron power was required to operate the cavity at a field of 12 MV/m in energy-recovery mode with a beam current of 5 mA. At the more usual loaded Q of 2 × 10⁷, about 2 kW is required.
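These power levels can be roughed out from the textbook expression for the generator power needed to hold a beam-off cavity on voltage against detuning, P = V²/(4(R/Q)Q_L) × [1 + (2 Q_L δf/f0)²]. Everything numerical below is an assumption for illustration (the 0.5 m active length, the R/Q of 500 Ω, the few hertz of microphonic detuning); the formula is the standard one, not a figure from the JLab test itself:

```python
def generator_power(v_volts, r_over_q, q_loaded, f0_hz, detuning_hz):
    """Generator power to hold voltage v on a detuned, beam-off cavity:
    P = V^2 / (4*(R/Q)*Q_L) * (1 + (2*Q_L*df/f0)^2)."""
    base = v_volts**2 / (4.0 * r_over_q * q_loaded)
    return base * (1.0 + (2.0 * q_loaded * detuning_hz / f0_hz) ** 2)

v = 12e6 * 0.5  # 12 MV/m over an assumed 0.5 m active length
for ql in (2.0e7, 1.4e8):
    print(f"Q_L = {ql:.1e}: ~{generator_power(v, 500.0, ql, 1.5e9, 5.0):.0f} W")
# Q_L = 2.0e+07: ~916 W  (same order as the ~2 kW quoted)
# Q_L = 1.4e+08: ~241 W  (comfortably below the <500 W quoted)
```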

The control system includes digital and RF hardware developed in-house; very fast feedback and feed-forward controls; automatic start-up and trip recovery; continuous and pulsed-mode operation; fast quench detection; and cavity-frequency control. The frequency control relied on a fast tuner based on a piezoelectric element, which proved effective in keeping the cavity on resonance. As an added bonus, the ramp-up time to high gradients was less than 1 s, instead of the more usual minutes.

MICE project gets the green light

On 21 March, the UK’s science and innovation minister announced the approval and funding of the Muon Ionisation Cooling Experiment, MICE, at the Rutherford Appleton Laboratory (RAL). MICE will use a new, dedicated muon beam line at the laboratory’s pulsed neutron and muon source, ISIS.

CCEnew1_05-05

MICE is an essential step in accelerator R&D towards the realization of a neutrino factory, in which an intense neutrino beam is obtained from the decay of muons in a storage ring. The unique feature of such a facility is that it can produce intense, well-defined beams of electron-(anti)neutrinos at high energies, well above the production threshold for tau particles. This should allow measurements of the “appearance” of both muon- and tau-neutrinos from electron-neutrinos. Neutrino factories are therefore the ultimate tool for precision studies of neutrino oscillations and of leptonic charge-parity (CP) violation, a measurement that might prove decisive in understanding the matter-antimatter asymmetry of the universe.

The greatest novelty of a neutrino factory in terms of accelerator physics is probably muon ionization cooling, which improves performance by a factor of four to ten, depending on the design; it also represents a large fraction of the neutrino factory’s estimated cost. Although it was proposed more than 20 years ago and is generally considered sound, the ionization cooling of muons has never been demonstrated.

CCEnew2_05-05

Muons are born in a rather undisciplined state, with energies of a few hundred million electron-volts, from the interactions of proton beams, and need to be cooled before they can be accelerated – to about 20 GeV – and stored to produce neutrinos. Known beam-cooling techniques (electron, stochastic or laser cooling) are much too slow, given that muons live only a few microseconds before they decay.
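A back-of-the-envelope estimate shows how little time there is. In the sketch below, only the muon mass and rest-frame lifetime are physical constants; the two beam energies are illustrative:

```python
M_MU = 105.66e-3   # muon mass, GeV
TAU0 = 2.197e-6    # muon lifetime at rest, seconds

def lab_lifetime(e_gev):
    """Time-dilated lifetime of a muon with total energy e_gev (gamma = E/m)."""
    return (e_gev / M_MU) * TAU0

print(f"~{lab_lifetime(0.3) * 1e6:.0f} us at 300 MeV")  # ~6 us: cooling must be fast
print(f"~{lab_lifetime(20.0) * 1e3:.2f} ms at 20 GeV")  # ~0.42 ms once accelerated
```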

A method that is expected to work instead is to cool the transverse phase-space of the beam by passing it through energy-absorbing material and accelerating structures embedded within a focusing magnetic lattice. The muons lose momentum in both the transverse and longitudinal directions when they pass through the absorbers, while the acceleration increases only their longitudinal momentum. This technique, based on a principle first described by the Russian pioneers Gersh Budker and Alexander Skrinsky in the early 1970s, is known as ionization cooling.
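The balance that MICE must demonstrate is usually summarized by the textbook transverse cooling equation, quoted here for orientation (it is not taken from the MICE proposal). The first term is the cooling from ionization energy loss; the second is the heating from multiple scattering, which is why liquid hydrogen, with its long radiation length, is the favoured absorber:

```latex
% eps_N: normalized transverse emittance; beta, E_mu: muon velocity (units of c)
% and total energy; dE_mu/ds: ionization energy loss in the absorber;
% beta_perp: betatron function at the absorber; L_R: radiation length.
\frac{d\varepsilon_N}{ds} \simeq
  -\frac{1}{\beta^{2}}\,\frac{dE_\mu}{ds}\,\frac{\varepsilon_N}{E_\mu}
  \;+\;
  \frac{\beta_\perp\,(13.6\,\mathrm{MeV})^{2}}{2\,\beta^{3}\,E_\mu\,m_\mu c^{2}\,L_R}
```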

Unfortunately, although its mathematics is simple on paper, ionization cooling is in practice a delicate mix of technologies involving liquid hydrogen (the best absorber material), strong radio-frequency (RF) electric fields (to re-accelerate the muons in an orderly fashion) and magnetic fields for containment. This combination is extremely challenging. The windows of the liquid-hydrogen vessel need to be as thin as possible to minimize multiple scattering, while ensuring safety in a confined space containing potential ignition sources for the highly flammable hydrogen. The operation of RF cavities at high gradient in strong magnetic fields is still unproven. Finally, the precise study of cooling requires measuring the beam properties with unprecedented accuracy; each muon will be measured individually, using techniques from high-energy physics rather than standard beam diagnostics.

The size and complexity of this undertaking require the close collaboration of the accelerator and experimental particle-physics communities. MICE comprises some 140 physicists and engineers from Belgium, Italy, the Netherlands, Japan, Russia, Switzerland, the US and the UK. The proposed schedule for MICE envisages that the technical feasibility of muon ionization cooling will be established by 2008/9. The path will then be clear for a detailed proposal for a neutrino factory.
