Most of the world’s computing power is no longer concentrated in supercomputer centres and machine rooms. Instead it is distributed around the world in hundreds of millions of PCs and game consoles, of which a growing fraction are connected to the Internet.
A new computing paradigm, “public-resource computing”, uses these resources to perform scientific supercomputing. This enables previously unfeasible research and has social implications as well: it catalyses global communities centred on common interests and goals; it encourages public awareness of current scientific research; and it gives the public a measure of control over the direction of scientific progress.
The number of Internet-connected PCs is growing and is projected to reach 1 billion by 2015. Together they could provide 10¹⁵ floating-point operations per second (FLOPS) of power. The potential for distributed disk storage is also huge.
Public-resource computing emerged in the mid-1990s with two projects: GIMPS (searching for large prime numbers) and Distributed.net (solving cryptographic codes). In 1999 our group launched a third project, SETI@home, which searches radiotelescope data for signs of extraterrestrial intelligence. The appeal of this challenge extended beyond hobbyists; it attracted millions of participants worldwide and inspired a number of other academic projects as well as efforts to commercialize the paradigm. SETI@home currently runs on about 1 million PCs, providing a processing rate of more than 60 teraFLOPS. In contrast, the largest conventional supercomputer, the NEC Earth Simulator, offers in the region of 35 teraFLOPS.
Public-resource computing is effective only if many participate. This relies on publicity. For example, SETI@home has received coverage in the mass-media and in Internet news forums like Slashdot. This, together with its screensaver graphics, seeded a large-scale “viral marketing” effect.
Retaining participants requires an understanding of their motivations. A poll of SETI@home users showed that many are interested in the science, so we developed Web-based educational material and regular scientific news. Another key factor is “credit” – a numerical measure of work accomplished. SETI@home provides website “leader boards” where users are listed in order of their credit.
SETI@home participants contribute more than just CPU time. Some have translated the SETI@home website into 30 languages, and developed add-on software and ancillary websites. It is important to provide channels for these contributions. Various communities have formed around SETI@home. A single, worldwide community interacts through the website and its message boards. Meanwhile, national and language-specific communities have their own websites and message boards. These have been particularly effective in recruiting new participants.
All the world’s a computer
We are developing software called BOINC (Berkeley Open Infrastructure for Network Computing), which facilitates creating and operating public-resource computing projects. Several BOINC-based projects are in progress, including SETI@home, Folding@home and Climateprediction.net. BOINC participants can register with multiple projects and can control how their resources are shared. For example, a user might devote 60% of their CPU time to studying global warming and 40% to SETI.
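The share-based splitting of donated CPU time can be sketched as below. This is only an illustration of the proportional idea; BOINC's real client scheduler also accounts for work deadlines, accumulated "debt" to each project and server availability.

```python
# Minimal sketch of share-weighted CPU allocation across projects.
# Project names and numbers are illustrative only.

def allocate_cpu(shares, total_hours):
    """Split a budget of CPU hours in proportion to resource shares."""
    total_share = sum(shares.values())
    return {name: total_hours * share / total_share
            for name, share in shares.items()}

# A user devoting 60% to climate modelling and 40% to SETI:
alloc = allocate_cpu({"climateprediction.net": 60, "SETI@home": 40},
                     total_hours=100)
print(alloc)  # {'climateprediction.net': 60.0, 'SETI@home': 40.0}
```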
We hope that BOINC will stimulate public interest in scientific research. Computer owners can donate their resources to any of a number of projects, so they will study and evaluate them, learning about their goals, methods and chances of success. Further, control over resource allocation for scientific research will shift slightly from government funding agencies to the public. This offers a uniquely direct and democratic influence on the directions of scientific research.
What other computational projects are amenable to public-resource computing? The task must be divisible into independent parts whose ratio of computation to data is fairly high (or the cost of Internet data transfer may exceed the cost of doing the computation centrally). Also, the code needed to run the task should be stable over time and require a minimal computational environment.
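The computation-to-data criterion amounts to a simple feasibility test, sketched below. The threshold and the example numbers are illustrative assumptions, not figures from any particular project, though SETI@home-style work units (a fraction of a megabyte needing hours of CPU) pass it comfortably.

```python
# Rough feasibility check for a public-computing task: the work per
# byte shipped must be high enough that donated CPU time outweighs
# the cost of transferring the data. The threshold is illustrative.

def worth_distributing(cpu_hours_per_task, mb_per_task,
                       min_hours_per_mb=0.1):
    """True if the computation-to-data ratio clears a chosen threshold."""
    return cpu_hours_per_task / mb_per_task >= min_hours_per_mb

# A SETI@home-style work unit: ~10 h of CPU on a ~0.35 MB chunk.
print(worth_distributing(10, 0.35))   # True
# A data-heavy task: 0.1 h of CPU on 1000 MB would not qualify.
print(worth_distributing(0.1, 1000))  # False
```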
Climateprediction.net is a recent example of such an effort in the public-resource computing field. Models of complex physical systems, such as global climate, are often chaotic. Studying their statistics requires large numbers of independent simulations with different boundary conditions.
CPU-intensive data-processing applications include analysis of radiotelescope data, and some applications stemming from high-energy physics are also amenable to public computing: CERN has been testing BOINC in house to simulate particle orbits in the LHC. Other possibilities include biomedical applications, such as virtual drug design and gene-sequence analysis. Early pioneers in this field include Folding@home from Stanford University.
In the long run, the inexorable march of Moore's law, together with the growing storage capacity of PCs and the increasing bandwidth of home Internet connections, means that public-resource computing should improve both quantitatively and qualitatively, opening an ever-widening range of opportunities for this new paradigm in scientific computing.
With a peak luminosity that now exceeds 1.3 × 10³⁴ cm⁻² s⁻¹, KEKB, the KEK B-factory, is delivering more than 1 fb⁻¹ per day to the Belle experiment. This peak luminosity is equivalent to the production of 14 B-meson – anti-B-meson pairs every second, and Belle is now accumulating approximately one million B pairs every day.
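The two quoted rates are consistent with each other given a B-pair production cross-section of about 1.1 nb at the Υ(4S) resonance, a standard value that is assumed here rather than stated in the text:

```python
# Consistency check of the quoted Belle rates. The e+e- -> B Bbar
# cross-section at the Upsilon(4S), ~1.1 nb, is an assumed standard
# value, not a number from the text.

sigma_bb = 1.1e-33          # 1.1 nb in cm^2

peak_lumi = 1.3e34          # cm^-2 s^-1
rate = peak_lumi * sigma_bb
print(round(rate, 1))       # 14.3 B pairs per second, matching the text

daily_lumi = 1e39           # 1 fb^-1 per day, expressed in cm^-2
pairs_per_day = daily_lumi * sigma_bb
print(f"{pairs_per_day:.2e}")   # 1.10e+06: about a million B pairs/day
```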
KEKB, which consists of an 8 GeV electron ring and a 3.5 GeV positron ring, started operation in 1999. Since then the performance of the facility has been steadily improved by increasing the currents of the electron and positron beams that are stored in the rings, and by using solenoidal coils wound over the entire positron ring to suppress the photoelectron cloud that was previously producing an instability. Since January this year the machine has been operating in a “continuous injection mode”, where beam particle losses are compensated by injecting beam from the linac injector without interrupting data taking at Belle. This new mode of operation has successfully enabled KEKB to deliver 30% more integrated luminosity to Belle, and led to the new record of 1 fb⁻¹ per day. Belle has already accumulated a total of more than 260 fb⁻¹ since the beginning of the experiment.
Thanks to this huge data sample, Belle reported charge-parity (CP) violation in the B-meson system in 2001, at the same time as the BaBar experiment at SLAC, and has continued to improve the precision of sin 2φ₁ (sin 2β), the fundamental CP-violation parameter of the Standard Model. In addition, last year Belle measured a value for the CP-violation parameter in B → φK_S decay that differs from the Standard Model prediction by 3.4 standard deviations. A more precise measurement based on more data will be reported this summer. Since a deviation from the Standard Model prediction for this parameter would be an unambiguous indication of new physics, Belle's new result is eagerly awaited by the particle-physics community.
Accelerators and light sources rely increasingly on superconducting magnets, superconducting accelerating structures and other very-low-temperature equipment. At the same time there has also been progress in understanding the design, operation and optimization of the refrigeration plants needed to provide the cooling. To speed the evolution of this understanding and to foster its practical application, cryogenics engineers have instituted a new biennial meeting. The first Workshop on Cryogenics Operations was held on 30 March – 2 April at the US Department of Energy’s Thomas Jefferson National Accelerator Facility (Jefferson Lab) in Newport News, Virginia, with participants attending from Europe and America.
A cryogenics plant’s operating expenses include manpower, electricity, helium refrigerant gas and liquid nitrogen. At Jefferson Lab, for example, the 2 K plant supporting the Continuous Electron Beam Accelerator Facility (CEBAF) and a free-electron laser user-facility requires several staff and $3.5 million (€2.8 million) per year for electricity, plus another $850,000 (€690,000) for liquid nitrogen and $400,000 (€325,000) for helium.
Recent cryogenics upgrades at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) illustrate how such operating costs can be controlled and how system reliability can be raised. RHIC’s system maintains the superconducting magnets in two collider rings at or below 4.6 K. At the workshop Ahmed Sidi-Yekhlef from the RHIC project described how a new approach to process control, coupled with hardware modifications, yielded a power reduction of about 20% (1.8 MW) while boosting system reliability, stability and flexibility, with less human intervention. In RHIC’s new process control system the refrigerant charge pressure of the cryogenic system is continuously varied automatically to match the imposed cryogenic load of the superconducting magnets. The lower system charge pressures reduce the mechanical loading and wear on the refrigerator’s main helium gas compressors, which are electrically driven. The result is savings in electrical power, maintenance and repairs.
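The floating-pressure idea can be caricatured as a control loop that continuously relaxes the charge pressure toward the minimum the instantaneous load requires, instead of holding a fixed worst-case setting. Everything numerical below (the load model, gain and units) is invented for illustration; it is not RHIC's actual control law.

```python
# Cartoon of floating charge-pressure control: each step nudges the
# helium charge pressure toward a load-dependent target. All numbers
# and the linear load model are purely illustrative.

def update_charge_pressure(p_now, load_kw, kw_per_bar=2.0,
                           p_min=3.0, gain=0.2):
    """One control step: relax the pressure (bar) toward the load target."""
    p_target = max(p_min, load_kw / kw_per_bar)
    return p_now + gain * (p_target - p_now)

p = 18.0                           # conservative fixed setting, bar
for load in [30, 24, 20, 20, 20]:  # cryogenic load (kW) easing off
    p = update_charge_pressure(p, load)
print(round(p, 2))                 # 13.24: drifting toward the 10 bar target
```

Lower charge pressure means less work for the electrically driven compressors, which is where the power and maintenance savings described above come from.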
The workshop participants took special note of how increasingly sophisticated cryogenics operations of this kind are being applied to the coming generations of machines. As discussed at the meeting, the first stage of the refrigeration system for the Large Hadron Collider is now installed and ready for cooling tests in 2005. Presentations and discussions during the workshop also described progress with the cryogenics systems at ISAC-II, the superconducting-linac-based upgrade of the radioactive-beam facility at TRIUMF in British Columbia, and at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory in Tennessee.
The SNS will serve as the next-generation neutron-scattering facility for the US, providing the most intense pulsed neutron beams in the world for scientific research and industrial development. With a total cost of $1.4 billion (€1.1 billion), construction of the SNS began in 1999 and will be completed in 2006. At the heart of the new facility is a superconducting accelerator cooled by a 2400 W, 2.1 K helium cryogenic system, with a shield load of 8300 W at 38 K. The system was designed and built in partnership with Jefferson Lab for unattended operation with greater than 99% reliability. At the workshop, Dana Arenius of Jefferson Lab and Donald Richied of SNS reported on how the design phase had focused on this new optimization approach, which uses automatic pressure-reduction control to match the variations in the facility’s cryomodule load. The efficiency of the plant is maintained with lower operating costs while extending the service lifetimes of major components. Comparable technology for unattended operation has been proven in operation at Michigan State University’s National Superconducting Cyclotron Laboratory.
Arenius chaired the workshop, and there were substantial contributions from Ganni Rao, also from Jefferson Lab. Because the laboratory expects to participate in future projects much as it has in the construction of the SNS, the Accelerator Division has been especially motivated to initiate and stimulate critical thinking concerning cryogenics optimization. Raymond L Orbach, who directs the US Department of Energy’s Office of Science, recently announced a prioritized list of more than two dozen major future scientific facilities and upgrades for the next 20 years. High-field superconducting magnets or superconducting microwave technology will figure in around a third of these, including the Rare-Isotope Accelerator and the Linac Coherent Light Source, for which the prospects for cryogenics operation were discussed during the meeting.
John Weisend of SLAC will chair the second Workshop on Cryogenics Operations, to be held in 2006, probably with Japanese participation, and CERN is contemplating hosting it in 2008. The workshop complements the biennial Cryogenic Engineering Conference, which occurs in odd-numbered years and focuses on the broad field of breakthroughs in cryogenic technology and cryogenic industrial products.
Over the past decade micropattern gas detectors (MPGDs) have become increasingly important, not only in high-energy physics but also in other applications where high spatial resolution is required together with operation at high rates. MPGDs are position-sensitive proportional counters, descendants of the multiwire chambers that amplify and collect charge released as ionizing particles pass through a volume of gas. The difference with MPGDs is that the electrodes that sense the avalanche of charge are constructed using microelectronics, thin-film or advanced printed circuit board (PCB) techniques. With such methods, feature sizes of just a few microns can be achieved, leading to detectors that have excellent spatial resolution and fast charge collection.
One attractive class of MPGD is the gas electron multiplier (GEM) detector, which can fully decouple the charge-amplification structure from the read-out structure. The GEM concept uses a thin sheet of metalized plastic pierced by a regular array of tiny, closely spaced holes. When a voltage is applied across the device, the high electric field at the holes causes an avalanche of charge, which can be collected by a read-out electrode. In this way, the charge amplification and read-out can be independently optimized. For example, by organizing the read-out plane in a multipixel pattern it is possible to obtain true 2D imaging capability (see figure 1). The high granularity of the pixelated read-out plane preserves the intrinsic resolving power of the device and its high rate capability, which are otherwise unavoidably lost if a conventional projective read-out approach is used – for example, with a read-out on x and y axes (Bellazzini and Spandre 2003).
However, when the pixel size is small (less than 100 µm) and the number of pixels is large (more than 1000), it is virtually impossible to bring the signal charge from individual pixels in the GEM read-out to a chain of external read-out electronics, even if advanced, fine-line, multilayer PCB technology is used. The fan-out connecting the segmented anodes that collect the charge to the front-end electronics is the real bottleneck: technological constraints limit the number of independent electronics channels that can be brought to the peripheral electronics. Furthermore, the cross-talk between adjacent channels and the noise that is caused by the high-input capacitance to the pre-amplifiers become significant.
The solution is that, rather than take the signal from the pixel to the read-out electronics, the electronics chain has to be brought to the individual pixel. This concept has been developed recently at INFN Pisa, where deep, submicron VLSI technology was used to build an application-specific integrated circuit (ASIC) to perform both charge collection and read-out. The top metal layer of the ASIC consists of a CMOS array of 2101 active pixels with an 80 µm pitch, which is used directly as the charge-collecting anode of a GEM. Each charge-collecting pad of the array is connected to a full electronics chain (pre-amplifier, shaping amplifier, sample-and-hold, multiplexer) built immediately below the pad using the five remaining active layers of the VLSI structure. With this approach, gas detectors have for the first time reached the level of integration and resolution typical of solid-state pixel detectors (Bellazzini et al. 2004).
The ASIC was created using 0.35 µm, 3.3 V CMOS technology. Figure 2 shows the device layout as seen from the top metal layer. The active matrix, in pink, is surrounded by a passive guard ring of 3-4 pixels, which are set to the same potential as the active pixels. Figure 3 shows the actual chip bonded to its ceramic package.
To build the complete detector, a single GEM MPGD with an active gas volume of less than 1 cm3 is assembled directly over the chip containing the ASIC, which forms both the charge-collecting anode and the pixelized read-out of the MPGD, so that the detector and the read-out electronics become a single unit. This enables the full electronics chain and the detector to be completely integrated without the need for complicated bump bonding.
In the prototype there is a drift region (absorption gap) of 6 mm above the GEM foil, while a 1 mm spacer defines the collection gap between the lower surface of the GEM and the pixel matrix of the read-out chip. The GEM has a standard thickness of 50 µm and holes of 50 µm diameter at 90 µm pitch on a triangular pattern. The entrance window is made from 25 µm thick Mylar foil, aluminized on one side. Typical applied voltages are -1000 V (drift electrode), -500 V (top of GEM) and -100 V (bottom of GEM), with the collecting electrodes at around 0 V so that electrons drift towards the read-out plane. In these conditions the detector operates at a typical gain of 1000.
Thanks to the very low pixel capacitance at the preamplifier input, a noise level of 1.8 mV was measured, which corresponds to around 100 electrons. This means that with the gas gain of 1000, the detector has significant sensitivity to a single primary electron.
The first application of this new MPGD concept is for an X-ray polarimeter, for use in astronomy, operating in the low-energy band (1-10 keV). Information on the degree and angle of polarization of astronomical sources can be derived from the angular distribution of the initial part of the photoelectron tracks when they are projected onto a finely segmented 2D imaging detector.
The algorithm for reconstructing the photoelectron path begins by evaluating the first moment (M1, the barycentre) of the charge distribution on the read-out pixels, and by finding the axis that maximizes the second moment (M2) of the charge distribution, which defines the principal axis of the track. In a further step, the asymmetry of the charge release along the principal axis (third moment, M3) is computed, and the conversion point is derived by moving from the barycentre along this axis, in the direction of negative M3 where the released charge is smaller, by a length of around √M2. The direction of emission is then reconstructed by taking into account only the pixels in a region weighted according to the distance from the estimated conversion point.
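The moments bookkeeping described above can be sketched on a toy list of hit pixels. This shows only the first pass (barycentre, principal axis, skewness, conversion-point estimate); the published algorithm adds a second, distance-weighted pass around the estimated conversion point, and the toy track here is invented for illustration.

```python
import numpy as np

# First-pass moments analysis on toy hit pixels (x, y, charge).

def track_moments(x, y, q):
    w = q / q.sum()
    bary = np.array([np.sum(w * x), np.sum(w * y)])          # M1
    dx, dy = x - bary[0], y - bary[1]
    cov = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                    [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    vals, vecs = np.linalg.eigh(cov)
    axis = vecs[:, np.argmax(vals)]     # principal axis maximizes M2
    s = dx * axis[0] + dy * axis[1]     # signed position along the axis
    m2 = np.sum(w * s ** 2)
    m3 = np.sum(w * s ** 3)             # asymmetry of the charge release
    # Step from the barycentre toward the low-charge tail (the side of
    # negative third-moment contribution) by a length of order sqrt(M2).
    conv = bary + np.sign(m3) * np.sqrt(m2) * axis
    return bary, axis, m3, conv

x = np.array([0., 1., 2., 3., 4.])
y = np.zeros(5)
q = np.array([1., 1., 2., 4., 8.])      # Bragg peak at the track's end
bary, axis, m3, conv = track_moments(x, y, q)
# conv lies on the low-charge side of the barycentre, near the
# photon's conversion point
```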
The morphology of a real track obtained by illuminating the device with a low-energy radioactive source (5.9 keV X-ray from 55Fe) is shown in figure 4. The small cluster owing to the Auger electron and the initial part of the track can be distinguished from the larger Bragg peak. The plot of the raw signals of all of the channels for the same event shows the optimal signal-to-noise ratio obtained using this detector (figure 5). Around 50,000 electrons from the gas-amplified primary photoelectrons are divided between 53 pixels.
The final design for our X-ray polarimeter application (Costa et al. 2001) will have 16,000-32,000 channels with a pixel size of 60-70 µm and an active area of around 1 cm2. However, many other applications can be foreseen, depending on various factors, such as the size of the pixels and the die, the electronics shaping time, the analogue versus digital read-out, counting versus integrating mode and gas filling. Such developments would surely open new directions in gas proportional detectors, and bring the field to the same level of integration as that of solid-state detectors.
Research on the physical phenomena induced when high-energy beams of charged and neutral particles interact with ordered matter, such as crystal lattices and nanostructures, has seen considerable progress in recent years, from both theoretical and experimental points of view. When charged particles, especially those moving at relativistic speeds, pass through a crystal they feel a strong coherent electric field due to the nuclear charges. Particles can be channelled by the arrangement of atoms in crystals, and specially bent crystals are used in accelerator laboratories to steer high-energy beams.
On 23-26 March this year, INFN’s Laboratori Nazionali di Frascati (LNF) hosted the International Workshop on Relativistic Channelling and Related Coherent Phenomena. Several successful meetings in this field have previously taken place, notably at Maratea in 1986, Protvino (Serpukhov) in 1991 and Aarhus in 1995. This year’s workshop was held at the home of the DAFNE-LIGHT synchrotron radiation facility, which is particularly suitable for experimental work at infrared wavelengths and in X-ray diffraction. The LNF has also been supported by the European Union (EU) as one of the major research infrastructures in Europe to give free access to researchers during the period 2000-2004, and the EU has recently approved a new access to research infrastructure programme at the LNF for 2004-2008.
The main purpose of the workshop was to assess the current state of the art of this fast-growing field and to stimulate research collaboration among the different groups involved, with the aim of prompting the organization and presentation of joint projects in the near future. The success of the workshop was reflected in the attendance, with around 40 specialists from 12 countries, including Japan, the US and most of the former USSR, and in the high quality of the technical presentations.
Erik Uggerhoj from Aarhus launched a new initiative towards research on strong field effects in ordered matter at multi-TeV energies, entering the territory far above the critical Schwinger field. This could lead to a new multi-TeV electron (positron) beam facility at CERN’s Large Hadron Collider (LHC), providing new opportunities for fixed-target physics and applications. Such research at the LHC would require crystal-assisted extraction of a parasitic beam, a technique based on strong crystal fields. Simulations at the workshop showed that a tiny crystal installed into the collimation system could enhance the efficiency of the LHC collimation by an order of magnitude. The channelling for collimation purposes can then easily be turned into an instrument for beam extraction when needed.
A team from Brookhaven National Laboratory reported on crystal collimation experiments with beams of gold ions at the Relativistic Heavy Ion Collider, which were performed jointly with the Institute for High Energy Physics (IHEP) in Protvino. Channelling efficiencies of about 30% were measured, in good agreement with Monte Carlo simulation. This is a significant step forward, but more work is needed to incorporate a strong crystal field into an accelerator lattice and to benefit from it fully.
Channelling in “bent” crystals is routinely used in experiments at Protvino, where many crystals are installed at six locations around the main ring. Some channelling crystals have been used for extraction for more than 10 years without replacement. Extraction efficiencies of 85% for a 70 GeV beam of 10¹² protons have been measured for 2 mm crystals, in excellent agreement with predictions.
In addition to the well known use of bent crystals and focusing crystals demonstrated at IHEP more than a decade ago, crystal undulators are now being introduced to experiments, as Yuri Chesnokov of IHEP reported. Channelling undulators offer sub-millimetre periods and fields of the order of 1000 tesla. Figure 1 shows scanning electron microscope images of an undulator surface that was produced by micromachining a silicon crystal at IHEP. The images, obtained by a team from LNF in collaboration with CNR-IFN (Rome), reveal the undulator’s 50 µm grooves spaced by 200 µm, which produce periodic deformations that propagate in the bulk of the crystal. Samples of this kind, which have been characterized with X-rays and tested with protons, are now ready for the positron beam tests planned for the Beam Test Facility at the LNF, as well as at IHEP and at the Super Proton Synchrotron at CERN.
Positron sources are another application of strong coherent fields. Teams from KEK and Yerevan presented the progress and new ideas in this direction, and a number of talks reported on the theories of coherent radiation and electron-positron pair production in ordered matter.
Nanostructures also offer a new line of research for particle interactions with ordered matter. Several talks were devoted to particle channelling in nanotubes, radiation in periodic nanostructures and the growth of aligned nanostructured arrays. Possibilities for experiments with channelling nanostructures were outlined by teams from LNF and IHEP, where such activities are already underway.
Fixed-field alternating-gradient (FFAG) accelerators, which were intensively studied in the 1950s and 1960s but never progressed beyond the model stage, have in recent years become the focus of renewed attention. Two proton machines have already been built and three more, plus an electron FFAG and a muon phase rotator, are under construction. A variety of designs are also under study for the acceleration of protons, heavy ions, electrons and muons, with applications as diverse as cancer therapy, industrial irradiation, driving subcritical reactors, boosting high-energy proton intensity and neutrino production. These advances have been underpinned by a series of international workshops, the first being held at CERN in 2000, with subsequent meetings at KEK (twice), LBNL, BNL and TRIUMF. The next workshop will again be held at KEK, in October 2004.
With their fixed magnetic fields, modulated radiofrequency (RF) and pulsed beams, FFAGs operate just like synchrocyclotrons – in fact they bear the same relation to classic synchrocyclotrons as isochronous ring cyclotrons (such as those at PSI, IUCF and RIKEN) do to the classic Lawrence cyclotron: the central region has been removed and the magnet broken into radial or spiral sectors to provide edge and strong focusing.
The fixed magnetic field leads to a spiral orbit, so the vacuum chamber and magnets tend to be larger than for a synchrotron, but the repetition rate (and hence beam intensity) can be much higher, as it is set purely by RF considerations. High repetition rate and large momentum acceptance are the two features where FFAGs offer advantages over synchrotrons, and it is applications needing one or both of these features that have driven the current surge of interest.
Following the discovery of alternating gradient (AG) focusing in 1952, FFAGs were proposed independently by Tihiro Ohkawa in Japan, Keith Symon in the US and Andrei Kolomensky in the USSR. The most intensive studies were carried out by Symon, Donald Kerst and others at MURA (the Mid-western Universities Research Association) in Wisconsin, and culminated in the construction and successful testing of electron models of radial-sector and spiral-sector designs. But the proposals for proton FFAGs were not funded at that time, nor in the 1980s when 1.5 GeV machines were proposed by the Argonne and Jülich laboratories as spallation neutron sources.
Scaling FFAGs
In order to avoid the slow crossing of betatron resonances associated with conventional low energy-gain per turn, all the FFAGs constructed so far have been based on the “scaling” principle. This means that the orbit shape, optics and betatron tunes are kept fixed, independent of energy, just as in synchrotrons. One implication of this is that the magnets must be built with constant field index (logarithmic gradient). In the case of spiral-sector designs it also implies a constant spiral angle. The need for both strong gradient and high spiral angle (to achieve adequate focusing) makes the spiral magnet design very challenging.
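The scaling condition can be stated compactly. The median-plane field law below is the standard textbook form that the article assumes rather than writes out; k is the constant field index, ζ the spiral angle (zero for radial sectors) and F the azimuthal flutter profile:

```latex
% Scaling-FFAG median-plane field: a constant field index k and
% constant spiral angle \zeta keep the orbit shape, optics and
% betatron tunes independent of energy.
B_z(r,\theta) = B_0 \left(\frac{r}{r_0}\right)^{k}
                F\!\bigl(\theta - \tan\zeta \, \ln(r/r_0)\bigr),
\qquad k = \frac{r}{B}\frac{\partial B}{\partial r} = \text{const.}
```

The combination of a large k (strong gradient) and a large ζ is exactly what makes the spiral magnets mentioned above so hard to design.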
For radial-sector designs the focusing (F) and defocusing (D) gradient magnets must have equal and opposite fields (but different lengths) to give enhanced “magnetic flutter” (rms field variation) and strong edge as well as AG focusing. The two proton FFAGs recently built by Yoshiharu Mori’s group at KEK are of this type. The 1 MeV POP (proof of principle) FFAG has eight sectors, each consisting of a “DFD” radial-sector triplet. It came into operation in 2000 and measurements on the beam have provided valuable confirmation of its predicted behaviour.
The larger 150 MeV FFAG at KEK is a prototype for proton therapy and neutron production. It has 12 sectors, also DFD, with the orbit radius increasing from 4.4 to 5.3 m. Beam from the 12 MeV cyclotron injector has been accelerated to full energy and the extraction system is currently being commissioned.
A key technical innovation by Mori’s KEK group has been the use of Finemet metallic alloy for rapid modulation of the RF frequency at repetition rates higher than are practical with ferrite (250 Hz in the larger machine, with a 1.5 to 4.6 MHz sweep). This material has two advantageous properties: high permeability, permitting short cavities and a high effective accelerating field; and lossiness, making the quality factor very low (Q ~ 1) and so allowing acceleration over a wide frequency range without any need for active tuning.
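The role of the very low Q can be seen from the generic resonance estimate that a cavity responds usefully over a band of roughly f0/Q around its centre frequency. Applying this to the sweep quoted above (this is a back-of-envelope check, not a Finemet-specific calculation):

```python
# A resonator covers a band of roughly f0/Q without retuning; for the
# quoted 1.5-4.6 MHz sweep, only a Q of order unity suffices.

f_lo, f_hi = 1.5e6, 4.6e6        # required RF sweep, Hz
f0 = (f_lo * f_hi) ** 0.5        # geometric-mean centre frequency, ~2.6 MHz
q_needed = f0 / (f_hi - f_lo)    # Q at which the band ~f0/Q spans the sweep
print(round(q_needed, 2))        # 0.85: hence Q ~ 1, as the text notes
```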
A 150 MeV FFAG of the same design is also being installed at the Kyoto University reactor, in collaboration with Mitsubishi, to test accelerator-driven sub-critical reactor operation. Two further FFAGs act as injector (a 2.5 MeV betatron with eight spiral sectors) and booster (20 MeV with eight radial sectors). Initially the repetition rate will be 120 Hz, yielding a 1 µA beam, and then later 1 kHz, providing 100 µA.
For heavy-ion therapy the National Institute of Radiological Sciences at Chiba in Japan, home of the Heavy Ion Medical Accelerator (HIMAC), has designed a three-stage FFAG, comprising rings operating at 6, 100 and 400 MeV/u. The largest, with 12 radial sectors, has a circumference of 70 m. The complex is designed to operate at 200 Hz and provide a beam of 2 × 10⁹ C⁺ ions/s. Mitsubishi is also designing heavy-ion FFAGs for therapy, but of spiral-sector design. In this case a 12-sector booster (3.5-4.0 m radius) would accelerate C⁺ ions to 62 MeV/u, or protons to 230 MeV. The 16-sector main ring (6.6-7.2 m radius) would take the C⁺ ions to 400 MeV/u.
Another Mitsubishi project is the construction of a 1 MeV electron FFAG betatron as a scaled-down prototype for industrial irradiation, CT scanning and radiation therapy. This machine, appropriately named “Laptop”, has five spiral sectors, an overall diameter of 10 cm and a magnet weighing all of 2.8 kg!
Muon FFAGs
FFAGs are also of interest for muons at both low and high energies. PRISM (Phase-Rotated Intense Slow Muon Source), based on a 10-cell “DFD” radial-sector FFAG of 6.5 m radius, is under construction at RCNP Osaka for eventual installation at J-PARC. It will collect 5 ns wide bunches of muons at 68 MeV/c ± 30% and use a sawtooth RF field to rotate them in phase space, reducing the momentum spread to ± 3%. With a repetition rate of 100-1000 Hz the muon intensity will be 10¹¹-10¹²/s, making possible ultra-sensitive studies of rare muon decays. It is also planned to use PRISM for ionization cooling of muons. Another proposal for ionization cooling comes from Al Garren at UCLA and Harold Kirk and Stephen Kahn at Brookhaven, who suggest using a small 12-sector gas-filled FFAG with superconducting magnets (96 cm radius) for cooling muons of 250 MeV/c ± 30%.
The KEK group’s most ambitious plan is to build a neutrino factory at J-PARC based on a sequence of four muon FFAGs with top energies of 1, 3, 10 and 20 GeV. The largest would have a radius of 200 m (with a total orbit spread of 50 cm) and consist of 120 cells, each containing a superconducting DFD triplet. Most of the cells would also contain RF cavities to provide an overall energy gain of around 1 GeV per turn, restricting the losses through muon decay to 50% overall. The use of low-frequency RF (24 MHz) keeps the buckets wide enough to contain the phase drift occurring as the orbit expands. A major advantage of FFAGs over linacs – either single or recirculating – is that their large acceptance obviates the need for muon cooling or phase rotation. There also turn out to be significant cost savings.
Non-scaling FFAGs
The rapid acceleration that is essential for muons allows betatron resonances no time to damage beam quality. The scaling principle can therefore be relaxed, the betatron tunes allowed to vary and lattices explored for properties that are favourable to muons. In particular, in 1999 Carol Johnstone at Fermilab showed that it would be very advantageous to make the positive bend D and the negative F (so that their fields decrease outwards rather than increasing as demanded by scaling), with the Fs weaker as well as shorter than the Ds. The circumference could be shortened; the radial orbit spread reduced, allowing the use of smaller vacuum chambers and magnets; and the orbit length made to pass through a minimum at mid-energy (instead of rising monotonically), thus reducing the variation in orbit time with energy – a vital consideration since there is no time for RF frequency modulation. Moreover, constant field gradients could be used (rather than constant field index), simplifying the magnet design and rendering non-linear resonances harmless.
Lattices along these lines have been developed by Johnstone at Fermilab, by Scott Berg, Ernest Courant, Dejan Trbojevic and Robert Palmer at Brookhaven, by Eberhard Keil at CERN and Andy Sessler at LBNL, and by Shane Koscielniak at TRIUMF.
The latest results from an ongoing cost-optimization study by Berg and colleagues favour the use of linacs up to 2.5 GeV, followed by 2.5-5.0, 5-10 and 10-20 GeV FFAGs. The main ring, to be composed of around 100 doublet or FDF triplet cells, would have a circumference of about 700 m, with orbit lengths varying by only 20 cm. With the orbit time first falling and then rising, Koscielniak and Berg have shown that by exceeding a critical RF voltage an acceleration path can be created that stays close to the voltage peak (crossing it three times), snaking between neighbouring buckets (rather than circulating inside them). By using high-field superconducting 200 MHz cavities it should be possible to accelerate from 10 to 20 GeV in 16 turns, with a decay loss of 10% (25% in the three rings). In order to demonstrate the novel features of such a design – particularly acceleration outside buckets and the crossing of many integer and half-integer resonances – the construction of a 10-20 MeV/c electron model is being considered.
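The critical-voltage behaviour can be illustrated with a toy model in arbitrary units (the normalization is ours, not Koscielniak and Berg's): the phase slip per turn follows a parabola in energy, mimicking an orbit time that first falls and then rises, and the RF kick is v·sin(phi). Above a critical voltage a channel opens that runs from bottom to top energy close to the crest, outside any bucket; below it the beam is turned back.

```python
import math

def u(e, a):
    """'Potential' whose derivative is the phase slip per turn."""
    return a * ((e - 0.5) ** 3 / 3.0 - e / 12.0)

def u_prime(e, a):
    """Phase slip per turn: parabolic in energy, zero mean over e in [0, 1]."""
    return a * ((e - 0.5) ** 2 - 1.0 / 12.0)

def track(v, a=1.0, dn=0.01, max_turns=100.0):
    """Integrate dphi/dn = u'(e), de/dn = v*sin(phi); return the peak energy."""
    h = -0.04 * a                          # level chosen inside the channel
    phi = math.acos((h - u(0.0, a)) / v)   # start on that level at e = 0
    e, e_peak, n = 0.0, 0.0, 0.0
    while n < max_turns and e < 1.0:
        phi += u_prime(e, a) * dn          # semi-implicit Euler step
        e += v * math.sin(phi) * dn
        e_peak = max(e_peak, e)
        n += dn
    return e_peak

high = track(v=0.05)   # above the critical voltage: traverses to top energy
low = track(v=0.01)    # below it: the beam stalls far short
```

In the model the phase wiggles back and forth around the crest as the energy climbs monotonically, which is the serpentine path described above; halving the voltage below the critical value closes the channel entirely.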
Grahame Rees from the Rutherford Appleton Laboratory is proposing a very different scheme for an 8-20 GeV muon ring using 0BFDFB0 cells in which the F and D fields are profiled to make each of the 16 orbits exactly isochronous, allowing acceleration at peak RF voltage throughout. The price paid is a somewhat larger circumference of 1255 m. This is one of several European FFAG studies that are being coordinated through Beams for European Neutrino Experiments (BENE).
Yet another non-scaling approach has been taken by Alessandro Ruggiero at Brookhaven in a design for a 1.5 GeV proton FFAG to replace the laboratory’s Alternating Gradient Synchrotron (AGS) Booster, or for high-power applications up to 4 MW. Here acceleration is relatively slow (> 1000 turns) so that resonances must be avoided. The tune is therefore kept essentially constant by using a non-linear field profile for which the changes in gradient balance those in flutter, while the non-scaling virtue of low dispersion is retained by using FDF cells with stronger D magnets than F magnets. The 136-cell FFAG, to be located in the AGS tunnel, would accelerate 10¹⁴ protons per pulse at 2.5 Hz, providing a 40 µA beam.
With this wide variety of new ideas and projects, it seems that FFAGs have at last come into their own. Rather than merely a historical curiosity from the mid-20th century, they are now revealed as a vital answer to the needs of the 21st.
With the blast of a whistle at 8 a.m. on 30 April, the Beijing Electron Positron Collider (BEPC) finished running and the installation of its major upgrade, BEPCII, began. By the end of October the first stage, including the upgrade of the linac injector and the removal of the Beijing Spectrometer (BES) from the interaction region, will have been carried out. The upgrade will be finished by the end of 2006 and physics running should be resumed by the spring of 2007. To minimize interruption to the users of the Beijing Synchrotron Radiation Facility, the upgrade is planned in three stages, with synchrotron radiation runs in between.
BEPC has been running in the energy region of the tau and charm for more than 15 years, with many notable experimental results. However, to meet the challenges in the precision measurements of the charm energy region, a thorough upgrade is necessary if the facility is to continue productive studies and to lead the world in this research. The Chinese government approved the BEPCII programme, which has a budget of 640 million Chinese yuan ($77 million) and a construction period of three years.
To meet the challenging goal of continuing world-leading studies of charm physics, a double-ring design has been chosen. A storage ring will be added in the existing tunnel so that the electrons and positrons can travel separately in their own rings. The number of positron and electron bunches will be increased from 1 to 93 in each ring, with a large horizontal collision angle of ±11 mrad. In addition, other new technologies have been adopted – such as a superconducting radiofrequency system, superconducting micro-beta quadrupoles and a low-impedance vacuum chamber – so that the performance of BEPCII will be improved by a factor of 100, for a design luminosity of 10³³ cm⁻²s⁻¹ at a centre-of-mass energy of 3.77 GeV. As the circumference of BEPCII is only 240 m and the straight section of the interaction region is rather short, many technical challenges will have to be overcome to meet the design goals.
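The scale of the design luminosity follows from the standard geometric luminosity formula for colliding Gaussian beams. The bunch population and spot sizes below are illustrative assumptions chosen to show the arithmetic, not the published BEPCII optics, and the reduction from the crossing angle is ignored.

```python
import math

C_LIGHT = 2.998e8   # speed of light [m/s]

def luminosity(n_b, circumference_m, n_plus, n_minus, sig_x_m, sig_y_m):
    """Geometric luminosity of head-on Gaussian beams, in cm^-2 s^-1."""
    f_rev = C_LIGHT / circumference_m   # revolution frequency [Hz]
    l_m2 = n_b * f_rev * n_plus * n_minus / (4 * math.pi * sig_x_m * sig_y_m)
    return l_m2 * 1e-4                  # convert m^-2 s^-1 to cm^-2 s^-1

# illustrative parameters (assumed): 93 bunches of ~5e10 particles per beam
# in the 240 m ring, colliding at a ~380 um x 5.7 um spot
l = luminosity(93, 240.0, 5e10, 5e10, 380e-6, 5.7e-6)   # of order 1e33
```

With a single bunch per beam the same formula gives roughly a factor of 93 less, which is why multiplying the bunch number, together with the micro-beta focusing, dominates the factor-of-100 gain over BEPC.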
At the same time, the Beijing Spectrometer is being upgraded to improve its measurement precision and reduce systematic errors, as well as to adapt to the high event rate of BEPCII. The upgraded BESIII includes a CsI calorimeter, a superconducting solenoid magnet and a main drift chamber with small cells and helium-based gas.
The removal of BES and the upgrade of the linac mark the beginning of the BEPCII installation. After the upgrade, the positron injection rate of the linac will reach 50 mA per minute, an improvement of a factor of 10, with full energy injection up to 1.89 GeV. Further milestones will involve the dismantling of the old storage ring and the installation of the new double ring, from April 2005 to January 2006, followed by the moving of BESIII into the interaction region in October 2006.
The upgraded BEPC should be able to maintain its leading role in charm-physics research, with new results in the search for glueballs, quark-gluon hybrids and exotic particles, precision measurements of the R-value, precision measurements of the Cabibbo-Kobayashi-Maskawa matrix element, the study of the charmonium spectrum and charmonium decay properties, and so on. The hope is that BEPCII will provide a new platform for productive and fruitful physics, not only for Chinese physicists but also those from around the world.
SweGrid, the first national Grid test-bed in Sweden, was inaugurated on 18 March in Uppsala. The Grid nodes, each consisting of a cluster of 100 PCs and 2 Tbyte of disk storage, are located at the six national computer centres in Umeå, Uppsala, Stockholm, Linköping, Göteborg and Lund, and are linked together through the 10 Gbit/s national network SUNET. An additional 60 Tbyte disk storage will be delivered in May and eventually the test-bed will comprise 120 Tbyte disk storage plus 120 Tbyte robotic tape storage in total.
The initiative for this national Grid has come from the Swedish high-energy physics community and was driven by the future requirements for large computing capacity to analyse data from the Large Hadron Collider (LHC). One-third of SweGrid’s full computer resources are currently being used for the execution of the “ATLAS Data Challenge 2” in May and June 2004. In addition, many other applications in other branches of science, such as genome research, climate research, solid-state physics, quantum chemistry and space science, are also being launched on SweGrid.
The equipment for SweGrid has been financed by the Wallenberg Foundation in Sweden. The personnel costs for seven SweGrid technicians and three doctoral students are being covered by the Swedish Research Council through its Swedish National Infrastructure for Computing (SNIC). The Strategic Technical Advisory Committee in SNIC, composed of the directors of Sweden’s six national computer centres, is acting as SweGrid’s executive board.
A Nordic Grid development project, NorduGrid, began in 2000 as a collaboration between high-energy physicists. It set up the first small Nordic Grid test-bed in 2001 and used this to develop the NorduGrid middleware, which has become one of the first Grid middlewares to be used in production internationally, as during the “ATLAS Data Challenge 1” in 2003.
Stimulated by this progress, the Nordic Science Research Councils (NOS-N) took a common initiative to study how the computer resources in the Nordic countries could be organized in a common Grid facility, called the Nordic Data Grid Facility (NDGF). SweGrid constitutes a Swedish contribution to this common effort. The NDGF study group is scheduled to forward a detailed proposal for such a facility to the NOS-N committee within a year from now.
Several interesting presentations were given at the SweGrid inauguration seminar. Mario Campolargo, head of the Information Society Research Infrastructure Unit of the European Commission, described the pan-European GEANT computer network and the potential this represents for Grid development in Europe. He also discussed the significance of the current European Grid development initiatives sponsored by the EC 6th Framework Programme, such as Enabling Grids for e-Science in Europe, a CERN-led initiative in which Sweden has an active role.
Erik Elmroth from the Swedish National Computer Center in Umeå discussed current activities for making Grid services more accessible, such as developing tools for resource brokering and Grid-wide accounting, and establishing Grid portals as common easy-to-use interfaces to the Grid. Niclas Andersson, the leader of the six technicians who have set up and are now running SweGrid, described the deployment and operations of the test-bed and presented its technical specifications.
John Ellis from CERN gave an overview of the physics at the LHC and illustrated the large computer resources required if the new physics phenomena were to be discovered at the LHC. He demonstrated that finding a heavy particle of mass 1 TeV/c² at the LHC would be the equivalent of finding a needle in all of Sweden’s haystacks, which he estimated to be 100 m³ each in volume and to total 100,000. Gilbert Poulard, also from CERN, described the reconstruction and analysis of events in ATLAS and how the software and access to data will be exercised with Grid tools during the forthcoming Data Challenge 2.
There were also reports on Grid applications in other disciplines. Gunnar Norstedt from the Karolinska Institutet in Stockholm described the use of SweGrid for the analysis of gene promoters; a gene promoter is a portion of DNA that regulates the genes and their expression. A general computer code for such analysis has been set up and will be made available at SweGrid through a Grid portal. Roland Lindh from the quantum chemistry group at Lund University described MOLCAS, which is a code for electronic structure calculations in large molecules and which will be accessible on SweGrid.
The final part of the ceremony was conducted by Anders Ynnerman, the leader of SNIC. After Janne Carlsson from the Wallenberg Foundation and Jan Martinsson from the Swedish Research Council had expressed their great satisfaction with the project, Sverker Holmgren, head of the Uppsala National Computer Center, gave a successful first demonstration of how to operate SweGrid.
Future possibilities for research at the neutron complex of the Institute for Nuclear Research (INR) of the Russian Academy of Sciences (RAS) were the topic of a workshop that took place in Moscow on 12 March.
The operation of the INR RAS neutron complex located in Troitsk, Moscow, is based on a beam provided by the high-current proton linac of the Moscow Meson Facility. The complex includes a spallation or “impulse” neutron source (IN-0.6) with neutron guides and installations for condensed-matter investigations, a 100 tonne spectrometer for neutrons slowing down in lead (LNS-100) and an irradiation facility, called RADEX, at the beam-stop of the experimental area.
LNS-100 started operation in 2000, and is now to be joined by a time-of-flight facility that will allow complementary experiments in nuclear physics. The beam-stop is being modified to house a time-of-flight neutron spectrometer, which should open up additional opportunities for neutron-nucleus studies by the beginning of 2005. The first measurements of neutron fluxes with a working model of the spectrometer were taken in 2003.
Approximately 60 representatives from research groups in leading institutes in St Petersburg and the Moscow region participated in the workshop, where discussions revealed a strong interest among the nuclear physicists for co-operation in experiments at the neutron complex. It may also prove to be a suitable place for international experiments on accelerator-driven systems and studies of nuclear transmutation problems.
An accelerator-driven system facility developed around the linac in the INR RAS neutron complex, including a target and a 5 MW subcritical core, could be suitable for studies of the nuclear transmutation of minor actinides and long-lived fission products. Discussions between specialists from the Russian Research Centre Kurchatov Institute, based in Moscow, the Research and Development Institute of Power Engineering, also in Moscow, the Pôle Universitaire Léonard de Vinci La Défense, in Paris, the Institute for Nuclear Research and a number of other centres are now underway.
On 19 December 2003 the Laboratoire de l’Accélérateur Linéaire at Orsay marked the final shutdown of its linear accelerator. The event, which was a highly nostalgic occasion, was commemorated by an official ceremony attended by many scientists, engineers and technicians. On 24 December 1958, almost 45 years earlier to the day, the linac had delivered its very first beam, a 3 MeV electron beam. One year later, the first section of the machine achieved an energy of 165 MeV, thus enabling the experiments to begin.
The decision to build a large linear electron accelerator at Orsay was taken in 1955, the year in which land in the Vallée de Chevreuse had been acquired with a view to extending the Faculty of Science in Paris. The specification for the linac and the responsibility for its construction were entrusted to Yves Rocard, head of the physics laboratory of the Ecole Normale Supérieure, and to Hans Halban, head of research in nuclear physics at the laboratory. The CSF (Compagnie générale de télégraphie sans fil) played a major role in the construction. Right from the start, the laboratory housing the machine was named the Laboratoire de l’Accélérateur Linéaire, or LAL for short.
The advantage that a linear accelerator has over circular machines is that its energy can be pushed very high by adding additional sections. The Orsay linac’s energy was gradually increased in this way, reaching 1.3 GeV in 1964 – which for a short time was the world energy record for electron linacs – and 2.3 GeV in 1968. As of 1963, the accelerator was also equipped to deliver a positron beam, whose initial energy of 250 MeV was later increased to 1 GeV.
From the outset the accelerator was also equipped with spectrometers, which became more and more powerful in order to keep up with the linac’s increasing energy. Initially used in electron scattering experiments, whose purpose was to explore the internal structure of many nuclei and of nucleons themselves, they were later refined to allow the study of π-meson photoproduction and electroproduction, and then the photoproduction of K and η mesons. Tests of quantum electrodynamics were also carried out.
The initial goal of the positron beam was to compare these particles’ scattering cross-sections with those of electrons, in order to measure interference between amplitudes involving the exchange of a single and two virtual photons. The positron beam later became a valuable tool in the implementation of collider and storage rings. The Orsay collider ring, ACO, came first, with the first experiment performed in 1967; then the DCI (Dispositif de collisions dans l’igloo), commissioned in 1977; and finally the Super-ACO ring, fully dedicated to the production of synchrotron radiation, with its first experiment in 1988. From 1985 onwards the linac was used exclusively as an injector for the latter two rings. In the case of the DCI, the quality of vacuum achieved over the machine’s 26 years of operation – first for particle physics and then as an X-ray source – was such that it was enough to inject particles on Monday mornings for a sufficiently intense beam to be available throughout the following week.
Thus, during its 45 years of operation, the Orsay linac was used, in turn, for nuclear physics, particle physics and as an injector for the synchrotron light sources for LURE (Laboratoire pour l’Utilisation du Rayonnement Electromagnétique). The ceremony on 19 December 2003 marked the shutdown of the beams that were circulating in the DCI and Super ACO, and were being used by scientists at LURE.
Although the Orsay linac has not been used by the particle-physics community at LAL for almost 20 years now, the laboratory is still carrying out R&D and construction in the field of electron linear accelerators. In partnership with CERN, it designed and built the electron and positron injector, LIL, for LEP in the 1980s. Today, LAL is heavily involved in a study and test programme concerning the power couplers that will supply the superconducting cavities in the framework of the TESLA collaboration; it is also responsible for the electron gun and the pre-buncher cavities of the CTF3 (CLIC test facility) prototype at CERN. Thanks to this activity and the laboratory’s ambition to take part in the construction of a future linear collider, which it is hoped will be undertaken by a worldwide collaboration, LAL continues to be an appropriate name for the laboratory, even if the accelerator from which it derives has now been closed down.