When the electron-proton storage ring HERA at DESY began its summer shutdown in mid-August, it had broken several records. It had delivered a luminosity of 3.8 × 10³¹ cm⁻²s⁻¹, exceeding its previous record of 2.0 × 10³¹ cm⁻²s⁻¹; its integrated luminosity of 87 pb⁻¹ beat the previous record, set in 2000; and it had become the first storage ring to provide longitudinally polarized high-energy positrons in colliding-beam mode.
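For readers who want to relate the two figures, the integrated luminosity is simply the instantaneous luminosity summed over running time. As a rough, purely illustrative estimate (not a quoted HERA number), one day of steady running at the record peak value would correspond to

\[ 3.8\times10^{31}\ \mathrm{cm^{-2}\,s^{-1}} \times 86\,400\ \mathrm{s} \approx 3.3\times10^{36}\ \mathrm{cm^{-2}} \approx 3.3\ \mathrm{pb^{-1}}, \]

since 1 pb⁻¹ = 10³⁶ cm⁻²; the 87 pb⁻¹ total thus represents the equivalent of several weeks of running at peak luminosity.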
It has been a long and hard struggle to get HERA back into successful operation after a challenging upgrade in 2000 and 2001. Unexpectedly severe backgrounds prevented the collider experiments H1 and ZEUS from taking data when HERA restarted in 2001. The main causes were found to be the strong heating of the beam pipe due to the short positron bunches and the intense synchrotron radiation from the positrons close to the experiments. These resulted in a degradation of the vacuum, and the spray of particles from the interaction of the proton beam with the residual gas produced the unacceptable backgrounds.
Close collaboration between the HERA machine crew and the experiments, aided by external and internal advisory committees, allowed one problem after the other to be identified, understood and solved. Major changes to the beam collimation system, the vacuum system and the detectors were required. Finally, early in 2004, the improvements were such that H1 and ZEUS were able to take data at the nominal HERA beam currents (100 mA of protons and 50 mA of positrons). From then on, the HERA machine crew could concentrate on steadily increasing the HERA currents while the experimenters could focus on taking data efficiently. In parallel, the positron polarization was improved steadily and values in excess of 50% were reached. However, work still remains to be done to achieve high polarization reliably at high luminosities.
All three experiments at HERA – H1 and ZEUS as well as the HERMES experiment with a polarized gas target – have successfully taken data in 2004, with results already presented at ICHEP’04, the International Conference on High Energy Physics held in Beijing in August. Examples include the first, long-awaited measurement of the polarization dependence of the weak interaction cross-section by H1 and ZEUS, and the world’s first determination of the structure of the proton by measuring the scattered positron and the hadronic final state using a target polarized transversely to the positron beam direction, by the HERMES experiment. While the results are interesting, they demonstrate that about 10 times more data, taken with both electrons and positrons, are required to exploit the scientific potential of the upgraded HERA collider.
During the two months of the summer shutdown, the HERA crew continued to improve the vacuum system, exchanged components that had caused inefficiencies in running, and carried out the regular safety checks that are legally required. When HERA comes back into operation this month (October), the challenge will be to demonstrate that the machine and its experiments can also run and take data efficiently with electrons – as they have now proved they can do with positrons.
A key development for the future of high-energy radioactive ion-beam facilities is the efficient and fast “charge-state breeding” from singly charged ions at low energies (10-60 keV). Recently, a major breakthrough has been made with the first charge breeding with an electron cyclotron resonance (ECR) source at the ISOLDE facility at CERN.
Two processes for the charge breeding of 1+ states to multi-ionized ones are currently available. At ISOLDE the 1+ beam is cooled into a Penning trap before injection into an electron-beam ion source (EBIS), which performs the multi-ionization. At the Laboratory for Subatomic Physics and Cosmology (LPSC), Grenoble (CNRS/IN2P3-UJF-ENSPG), the 1+ beam is captured directly into the dense plasma of a dedicated ECR ion source, the PHOENIX Booster, which ensures the cooling and multi-ionization.
At the end of an EU contract on charge breeding (HPRI-CT-1999-50003), which set up a collaboration of European laboratories to study this important topic, the IS397 experiment at ISOLDE was agreed upon to compare the characteristics of the two techniques. The concept of the PHOENIX Booster has already been chosen for two future facilities, ISAC-II at TRIUMF in Vancouver, Canada, and SPIRAL-II at GANIL in Caen, France.
Charge breeding of stable ions with an ECR ion source (ECRIS) has been conceived, studied and improved over the past 10 years at LPSC. The first ECRIS used was MINIMAFIOS, developed by Richard Geller, and since 2000 a dedicated one, the PHOENIX Booster, has been developed together with its associated injection optics by Pascal Sortais and Thierry Lamy. For the primary beams more than 20 elements have been produced by various ion sources (thermo-ionization, glow discharge, ECR). The parameters measured are the efficiency yields, the charge-breeding times and, for pulsed modes, the time for which the multicharged ions remain trapped in the device.
The IS397 set-up at ISOLDE duplicates the experiment at LPSC to assure a fully comparable operation. The CLRC Daresbury Laboratory made its PHOENIX source available to the collaboration, together with all its power supplies; the analysing magnet for the multi-ionized (n+) beams came from the test bench at LPSC; and the detection device was provided by ISOLDE. Several other institutes have also contributed with equipment, including Ludwig-Maximilians-Universität with a double Einzel lens and GSI with power supplies.
In the experiment a 50 nA ²³⁸U⁺ beam from the REX-ISOLDE target was injected into the PHOENIX source. The extracted spectrum (see figure) shows a 2% efficiency for ²³⁸U²⁶⁺. The ⁹⁶Sr¹⁵⁺ and ⁹⁴Rb¹⁵⁺ charge states have also been produced, as well as stable and radioactive ions of the noble gases argon, xenon and krypton.
The main advantages of the technique are the simple technology, the fast tuning and the reliability of the system. For the future development of high-intensity accelerators, it is important to note that there is no limitation on the 1+ primary beam current accepted by the process. This is due to the high density of the ECR plasma, which varies as the square of the radiofrequency used to produce the ionizing plasma. Currently 14 GHz is used, giving a density of a few 10¹² ions per cm³. The current limitation is the ECR breeding of light elements such as lithium, sodium and neon, which is less efficient than in the Penning trap-EBIS system. However, one of the goals of IS397 is to establish in which context each technique is more suitable.
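The quadratic scaling quoted above follows from the cut-off (critical) density of the microwaves that heat the plasma: the ECR plasma can only be sustained up to the electron density at which the heating frequency equals the plasma frequency. As a back-of-the-envelope check (the actual operating density of a given source will differ somewhat),

\[ n_{c} = \frac{\varepsilon_0\, m_e\, (2\pi f)^2}{e^2} \approx 1.2\times10^{10} \left(\frac{f}{\mathrm{GHz}}\right)^{2}\ \mathrm{cm^{-3}} \approx 2.4\times10^{12}\ \mathrm{cm^{-3}}\ \ \text{at}\ f = 14\ \mathrm{GHz}, \]

in line with the "few 10¹² per cm³" figure quoted for the PHOENIX Booster.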
• The charge-breeding collaboration is Ludwig-Maximilians-Universität, Munich, Germany; LPSC Grenoble, France; CLRC Daresbury Lab, UK; ISOLDE, CERN, Switzerland; GANIL, Caen, France; INFN, Legnaro, Italy; Kungliga Tekniska Högskolan, Stockholm, Sweden; and the University of Jyväskylä, Finland.
When CERN’s Super Proton Synchrotron (SPS) starts up its scheduled run for 2006 it will be continuing the laboratory’s long tradition of experiments with neutrino beams, exemplified by the discovery of weak neutral currents in Gargamelle. In this case, however, the neutrinos will not be destined for detectors at CERN; instead they will be travelling 730 km to the Gran Sasso underground laboratory in Italy, 120 km east of Rome. There is now unambiguous evidence from solar and atmospheric neutrino experiments that neutrinos can “oscillate” – change from one type to another. The aim of the CERN Neutrinos to Gran Sasso (CNGS) project is to investigate this phenomenon further, over a completely different range of energies and distances. Its muon-neutrino beam will be tuned to produce a maximum number of neutrinos per year with a neutrino energy spectrum best suited to the search for muon- to tau-neutrino oscillations.
The CNGS neutrino beam will originate when 400 GeV protons, extracted from the SPS, strike a graphite target to create pions and kaons. The muon-neutrinos produced in the decays of these particles will form the beam directed towards Gran Sasso. A key component of the beam line is therefore a vacuum tube, with a diameter of 2.45 m and length of 1 km, in which the particles decay. The tube passed its vacuum tests at the end of April this year, an important milestone for the overall project.
The decay process has a natural angular spread; even a perfectly aimed pion beam would still produce a neutrino beam with a large angular divergence. In the case of CNGS the neutrino beam arriving at Gran Sasso will have a radius of about 750 m (1σ). Although this is very large compared with the detector size, it is still important to aim the beam at the detectors at Gran Sasso as accurately as possible. Using the most advanced geodetic techniques, including GPS positioning, the CERN survey team wants to “hit” the target with an error of less than 50 m. Since the decay tube acts like a collimator for the neutrino beam, the accuracy with which this tube is put in place is crucial.
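These figures can be put in perspective with simple geometry (an illustrative estimate rather than an official CNGS specification): a beam radius of 750 m after 730 km corresponds to a 1σ divergence of

\[ \theta_{1\sigma} \approx \frac{750\ \mathrm{m}}{730\ \mathrm{km}} \approx 1\ \mathrm{mrad}, \]

whereas the 50 m aiming tolerance corresponds to roughly 70 µrad – more than an order of magnitude tighter than the beam's intrinsic spread, which is why the positioning of the decay tube matters so much.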
In order to aim at the Gran Sasso laboratory, the CNGS facility at CERN – the last section of the proton beam line, the production target, etc – is built on a vertical slope of 5.6% and the decay tunnel passes some 12 m below the tunnel of the Large Hadron Collider (LHC). It was therefore decided in December 2001 to drill a vertical hole down from the LHC tunnel at exactly the position at which the decay tunnel should be located. The first qualitative success occurred on 4 March 2002, when the machine boring the decay tunnel indeed passed below this point. Later measurements showed that the tunnel was accurately located to within a few centimetres. The remaining errors were corrected during the installation of the decay tube inside the tunnel.
Why did the project opt for a decay vacuum? The aim is to have as intense a muon-neutrino beam as possible, and if a maximum number of pions and kaons are to be left “free” to decay, without interaction, it is important to have a minimum amount of material in their path. A decay tunnel with air would have resulted in a 28% loss of particles as compared with a vacuum tube; a tube filled with helium would have led to a 7% loss and would still have required a vacuum chamber to evacuate the air first and contain the helium.
The decay tube itself is a 992 m long steel tube, 2.45 m in diameter and 18 mm thick: it is allegedly the largest standard diameter used in the oleoduct (oil pipeline) industry, and a larger size would have produced little gain in neutrino intensity. The chosen length of 992 m conveniently takes the decay tube beyond the LHC tunnel. A much longer decay path would have produced 20-30% more neutrinos but with a significantly higher fraction of unwanted neutrino species in the beam, and the extra cost would have been prohibitive. Studies showed that with a wall thickness of 18 mm throughout and no anchoring points the tube has enough rigidity to withstand the external air and hydraulic pressures and the effects of beam heating.
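The modest gain from a longer tunnel reflects simple decay kinematics. The laboratory-frame decay length of a charged pion is γβcτ ≈ 56 m per GeV of pion energy, so, taking as an assumption a typical CNGS pion energy of a few tens of GeV,

\[ L_{\mathrm{decay}} \approx 56\ \mathrm{m} \times \frac{E_\pi}{\mathrm{GeV}} \sim 1\text{–}2\ \mathrm{km} \quad (E_\pi \approx 20\text{–}40\ \mathrm{GeV}), \]

a length comparable to or longer than the 992 m tube itself; extending the tunnel therefore yields only the quoted 20-30% more neutrinos while adding considerably to the cost.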
At CERN the preparations for CNGS have involved the excavation of some 3 km of new tunnels plus several caverns. The first phase of civil engineering was completed in 2003, with the handover of part of the tunnels from the contractors on 1 August. Excavation work had finished in December 2002, but concreting the various caverns and the 1.7 km of tunnels, which began in 2002, took a further six months – requiring some 12,000 cubic metres of concrete.
The decay tunnel was excavated by a tunnel boring machine at a diameter of 3.7 m. As the tube itself is 2.45 m in diameter there were two options for installation – leaving the tube in air, which would require steel thick enough to withstand pressure and temperature differences, or embedding it in concrete, in which case the concrete prevents buckling and provides some thermal contact between the “hot” steel tube and the “cold” rock surrounding the tunnel. The option of embedding the decay tube in concrete was chosen as it allows the use of a thinner walled steel tube and gives the best cost/quality ratio overall.
Before the decay tube could be installed, the components of the beam dump had to be transported to the end of the tunnel and assembled. The installation of the “hadron stop”, or beam dump, was completed in September 2003. Resembling a three-dimensional “puzzle”, the hadron stop consists of more than 400 iron blocks, 76 graphite blocks and 12 aluminium cooling plates. With a total weight of about 2000 tonnes, it is intended to absorb all particles other than the neutrinos and the muons produced with them. The iron blocks were recovered from the former West Area Neutrino Facility on CERN’s Meyrin site.
The decay tube was constructed from metal sheets that were rolled and welded into 6 m long sleeves by a contractor in Italy, before being transported to CERN. With two sleeves per lorry, there were 85 lorry loads altogether. Once at CERN the sleeves were lowered via a 55 m deep shaft into an access gallery and transported 650 m to the “target chamber”, where they were welded into sections 18 m long. The welds were then tested visually, with dye penetrant, and with ultrasound and radiography, before each 18 m section was transported on rails into position in the decay tunnel. Finally the section was aligned and welded to the existing tube, and after further tests of the welding, concrete was injected to seal the tube to the rock of the tunnel. A total of 3.6 km of welds, including those made at the factory, was needed to complete the decay tube. The results of the quality checks are impressive: faults found (and repaired) were at the level of less than 1% of all the welds.
Installation of the decay vacuum tube began on 18 November 2003. Once the initial difficulties had been overcome, it took 24 hours to complete each 18 m section. The full installation was finished on 16 March 2004, and on 1 April the tube was closed ready for pumping down and vacuum testing. The tube sleeves had arrived rusty and humid, but with the aid of a ventilation system, by the time of pumping down the tube was dry, if still rusty. It was then evacuated to less than 1 mbar, sealed and monitored continuously for 10 days. Throughout the test period the pressure remained stable, never exceeding 1.3 mbar. Such a good result was a pleasant surprise: the tube walls must be smoother than expected with very little absorption of water or gas as virtually no outgassing was observed.
Completing the vacuum volume are the entrance and exit windows, designed and built at CERN. For the central part of the entrance window, where beam interactions must be kept to a minimum, a 3 mm thick titanium window with a diameter of 1.45 m will be used, somewhat larger than the useful beam size at this point. This is currently being built, for installation in January 2006, so a temporary window of 50 mm steel has been installed for the vacuum tests. At the far end of the decay volume, where losses due to interactions are not so critical, a 50 mm thick steel window has been chosen and installed.
All in all the CNGS project is well on course for the scheduled first delivery of neutrinos to Gran Sasso in spring 2006. Meanwhile, progress is being made on the two major experiments, OPERA and ICARUS, that will intercept the neutrinos.
At Gran Sasso installation of the magnetic spectrometer of the first of two “supermodules” for the OPERA experiment was successfully completed in June, with installation of the target section starting in September. The magnet and target section of the second supermodule will be completed in June 2005 and December 2005, respectively. The first supermodule will be fully filled with emulsion bricks by the time of the first beam in May 2006, while the filling of the second one will be completed during the 2006 run.
At the same time, after several years of R&D, the ICARUS experiment, which acts as an “observatory” for the study of neutrinos and the instability of matter, is coming together. In the summer of 2001 the first module of the ICARUS T600 detector successfully passed a series of tests on the surface. This module should be installed at the Gran Sasso laboratory this autumn. An increase in detector mass through the addition of further modules is foreseen, and the detector should be ready to receive the CNGS neutrino beam when it starts up.
About 35 years ago the physics community of the Institute of Electrical and Electronics Engineers (IEEE), currently the world’s largest professional science and engineering organization with more than 360,000 members worldwide, established the Nuclear Science Symposium (NSS) as one of its major annual events. The NSS had originally evolved from IEEE meetings on scintillation (and then semiconductor) detectors before being formally established in 1969. In the following years interest in the NSS grew substantially with additional subjects being added, including a session on medical imaging, which soon developed into a major parallel conference at the week-long meeting.
The Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC) has since grown into a well known, highly respected, and important event for scientists worldwide working in the fields of particle and nuclear physics instrumentation, radiation detection, the associated hardware, electronics and software, and on applications in fields such as medicine. With the numbers of conference contributions (shared roughly equally between NSS and MIC) almost doubling in recent years, and a record number of more than 1600 scientific contributions received for this year’s meeting in Rome, the combined NSS/MIC, together with its specialized workshops, educational courses and industrial programme, has developed into a truly global forum for scientists and engineers active in these fields (see figure 1).
Early history and mission
The NSS was formally established as a broad nuclear instrumentation conference under the sponsorship of the IEEE Professional Group on Nuclear Science (PGNS). The visionary behind the formation of both the IEEE Nuclear Sciences Society and the NSS was Louis Costrell of the National Bureau of Standards in Washington DC, who later pioneered many nuclear instrumentation standards (such as NIM, CAMAC, FASTBUS and VME), working with the European Standards on Nuclear Electronics organization and many international collaborators. The PGNS became a full society of the IEEE in 1970, elected its first administrative committee (AdCom) in 1971, and soon combined with the Professional Group on Plasma Sciences to become the Nuclear and Plasma Sciences Society (NPSS). In response to high interest among medical physicists, the Medical Imaging Technical Committee was formed in 1975, chaired by Leon Kaufman of the University of California at San Francisco Medical Center. Others prominent in its formation were Bertrand A Brill of Emory University and Glenn Knoll of the University of Michigan. As an elected member of this first NPSS AdCom, Ray Larsen of SLAC assisted in its organization. By 1990 the medical imaging sessions had grown to the point where the NSS became the joint Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC).
The early NSS/MIC meetings included special sessions on various aspects of nuclear instrumentation, nuclear-reactor controls, detector instrumentation and instrumentation for medical imaging sciences. This was the heyday of the development of new imaging CAT scanners in industry, exciting new modular electronics for accelerator instrumentation and detectors, as well as applications in many areas of nuclear-physics research. With the advent of proportional wire chambers and later custom-built silicon detectors, increasingly sophisticated generations of integrated electronics readout systems for applications in both physics and medicine were developed.
The main mission of the NSS/MIC is to serve the scientific community by providing an annual forum and meeting point to present and discuss work, problems and the latest advances in the relevant fields, to the benefit of all participants. The rapid publication of conference proceedings, including on CD, is invaluable, as is access to the authors at the conference. Papers can also be submitted for publication in peer-reviewed archival journals; this takes much longer but carries more academic prestige. All publications are distributed worldwide.
Recently, the organizers of NSS/MIC have emphasized their desire for feedback from the scientific community for shaping future meetings. Young scientists doing outstanding work in the various fields covered by the NSS/MIC are particularly encouraged to contribute to the conference. In addition, NSS and MIC technical achievement, fellow, best paper and graduate student awards are presented at the conference to both younger and more established scientists and engineers nominated by their peers or advisors. (Nominees do not have to be members of the IEEE or the NPSS.)
Typical NSS/MIC programmes consist of plenary, parallel and poster sessions, workshops on specialized topics, educational short courses and an industrial programme with an exhibition and seminars. Special events and an extensive social and companion programme round out the week. This provides an ideal environment for fruitful and increasingly important interdisciplinary communication between the various fields, the contributing experts and communities, and their industrial partners. To facilitate and emphasize this important aspect of the meeting, the organizers include as much time and as many opportunities as possible for participants, including those from different fields, to get together. Moreover, seeking constant improvement from year to year, the organizers are careful to request feedback from all participants.
The joint conference has produced a unique synergy among many disciplines, reaping benefits for a wide range of engineers and scientists. In addition to regular sessions in the various areas of interest, the conference has pioneered the use of posters, continuing education short courses, and special plenary sessions that feature world-class presenters of the latest research on accelerators, nuclear medical imaging, detectors and instrumentation for high-energy physics, space physics, environmental detection, reactor controls and nuclear security systems.
Pushing boundaries in Portland
The 2003 meeting in Portland, Oregon, attracted the largest number of participants to date (see figure 2). Chaired by Uwe Bratzler of CERN and NTU Athens (now at Tokyo Metropolitan University) and Maxim Titov of Freiburg University and ITEP Moscow for the NSS, and Mike King and Stephen Glick of the University of Massachusetts for the MIC, with Ralph James of Brookhaven National Laboratory as the overall conference general chair, it provided a comprehensive review of the progress and latest developments in technology and instrumentation and their implementation in experiments for space, accelerators and other radiation environments. The scientific programme underlined the interdisciplinary and synergetic combinations of the NSS and MIC topics, and covered a wide range of applications from radiation instrumentation and new detector materials, to complex radiation detector systems for physical sciences and advanced imaging systems for biological and medical research. Special emphasis was also put on the transfer of technology between fundamental physics research, medical and biological imaging science and industrial applications. The 572 accepted submissions for the NSS were accommodated in 323 oral presentations and 249 poster contributions.
The field of high-energy particle physics featured widely in Portland, with presentations of innovative research on semiconductor devices by groups working on experiments being prepared for high-luminosity hadron colliders, in particular the Large Hadron Collider (LHC) at CERN, including many impressive front-end electronics systems that are either in or close to being in production. A large number of talks on pixel systems, with comprehensive contributions on deep sub-micron front-ends, radiation-hard detectors, bump bonding and new integrated approaches, gave good insight into the interplay between the progress in technology and requirements for tracking close to the interaction point in the high-intensity collider experiments. Many developments, motivated by the prospects of a linear collider where higher resolution, less material and power are mandatory, are pushing the boundaries of semiconductor detector technology.
Progress in micro-pattern gas detectors was represented by more than 30 contributions. The evolution of gaseous detectors to novel 2D and pixel readout electrodes, together with their intrinsic excellent radiation tolerance, rate capability and spatial resolution, extends their applicability to precision tracking at high counting rates in hostile environments, an area that is currently accessible only to silicon detectors. The many sessions on large systems and more specific topics under development underlined that finding the optimum balance between power, cooling capacity, granularity and performance is the key for further progress in the field of detectors.
In Portland, an International Workshop on Room-Temperature Semiconductor X-Ray and Gamma-Ray Detectors, chaired by Ralph James of Brookhaven and Paul Siffert of Laboratoire Phase, Strasbourg, was held in conjunction with the NSS/MIC meeting, and five topical “satellite” workshops covered areas of specific interest. Workshops on the Compton camera and hadron therapy were combined with full-day workshops on topics of major importance for high-energy physicists involved in the current construction or future upgrades of experiments. The workshop “Problems with Detector Fabrication, Testing, Quality Control and Long Term Operation” provided a thorough review of the problems encountered during production, quality control, and medium- and long-term operation of large systems. The “Detector Aging Workshop”, a follow-up of the workshop that was held at DESY in 2001, once again gathered many experts on this critical matter. Fruitful discussions took place between the groups running the experiments and others currently building detectors for the BTeV experiment at Fermilab and the experiments at the LHC. Trends were discussed and almost agreed upon! The major concerns for all of the groups seem to be the selection of gas mixture, chamber materials and the associated gas system components, and the problems related to CF4-induced etching on materials. Radiation hardness studies should also be performed for the muon detectors in high-luminosity experiments – systems that are so far comfortably coping with low instantaneous rates and negligible radiation doses.
The IEEE NSS/MIC conference is the instrumentation “Mecca” and is unique in the world; no other conference has such a broad synergistic combination of topics. While its roots are clearly in the US, the IEEE has repeatedly stated its objective to be a truly transnational organization, and its largest growth is currently outside the US. The 2004 NSS/MIC conference is taking place in Rome on 16-22 October at the Ergife Palace Hotel, one of the largest exhibition and congress areas in Europe. The 2004 IEEE NSS/MIC committee includes Alberto Del Guerra of Pisa as general chair; Fabio Sauli and Archana Sharma of CERN as programme chair and deputy chair for NSS, respectively; Sibylle Ziegler of the Technical University, Munich, and Michel Defrise of the Free University, Brussels, as programme chair and deputy chair for MIC, respectively. As it has become an IEEE-wide decision to encourage more conferences outside the US, the NSS/MIC will be held abroad every three to four years. After Lyon in 2000 and Rome in 2004, the next overseas conference is being considered for 2008.
One of the main goals of the IEEE, according to its constitution, is as follows: “The IEEE shall strive to enhance the quality of life for all people throughout the world through the constructive application of technology in its fields of competence. It shall endeavor to promote understanding of the influence of such technology on the public welfare.” As our scientific fields, including industry, benefit more and more by transnational interdisciplinary exchange, the IEEE NSS/MIC will continue to help fulfill this mandate by providing a unique forum for communication and open sharing of experience among a broad range of scientists and engineers worldwide.
The Accelerator Test Facility (ATF) at the Brookhaven National Laboratory is the first advanced accelerator facility designed and built to serve the community active in advanced accelerator research. A proposal-driven user facility, it is dedicated to long-term R&D in the physics of particle and laser beams. The users, who come from universities, national laboratories and industry, carry out R&D on advanced accelerator physics, studying in particular the interactions of high-power electromagnetic radiation and high-brightness electron beams, including laser acceleration of electrons and free-electron lasers (FELs). Other topics include the development of electron beams with extremely high brightness, photoinjectors, electron beam and radiation diagnostics, and computer controls.
The core of the ATF consists of a high-brightness photoinjector electron gun, a 75 MeV linac, high-power lasers synchronized to the electron beam to a picosecond level, four beam lines (most equipped with energy spectrometers) and a sophisticated computer-control system. The facility, which has been in operation since 1992, provides the best high-brightness electron beams up to an energy of 75 MeV, with, for example, a normalized rms emittance of 0.8 µm at a charge of 0.5 nC. The bunch length is variable from 1 to 8 ps, with a bunch compressor to extend the range down to 100 fs.
The users enjoy an extensive support infrastructure, representing a few tens of millions of dollars of investment and embedded in a large and highly capable national laboratory. ATF staff provide the users with close support and expertise in electron-beam dynamics, lasers and optics, advanced diagnostics, energy spectrometers and computer control. This support is free of charge, while the use of other resources at Brookhaven, as well as the dedicated equipment for experiments, is the responsibility of the users. The users’ activities are reviewed by the ATF Programme Advisory Committee, which includes members from various universities and national laboratories. The committee keeps the number of users relatively steady.
The publication rate from experiments at the ATF is high, with an average of more than three papers in Physical Review per year. The facility is also an excellent training ground for graduate students in accelerator physics and the physics of beams, with on average more than two graduations a year. While a large number of students come from nearby Stony Brook University, the majority come from universities across the US and throughout the rest of the world. The ATF staff is proud of its contribution to graduate education in accelerator and beam physics, through education and support of the students.
The ATF receives steady support from the US Department of Energy, which has enabled the facility to evolve not only in terms of hardware and staff expertise but also in terms of stability and the superb performance of the electron and laser beams. This environment is beneficial to the difficult, cutting-edge experiments in advanced accelerator and coherent source physics that are carried out by the users.
The ATF has pioneered metallic photocathodes such as copper, magnesium and, most recently, niobium, for robust operation with good quantum efficiency. These photocathodes are now found throughout the world and are also produced industrially. The same holds true for the radiofrequency (RF) guns, with the celebrated Brookhaven one-and-a-half-cell S-band series of guns. The series now stands at Gun IV (see figure 1), while a new superconducting continuous-wave RF gun is being developed. Examples of advanced diagnostics undertaken at the ATF include the first slice-emittance measurement, the first pulse-length measurement using shot-noise-driven fluctuations in incoherent radiation, high-resolution phase-space tomography and more. The ATF is also developing high-performance plasma capillary channels that guide the carbon-dioxide laser beam and provide a convenient source of plasma for a variety of experiments. Most recently, R&D is being carried out on optical stochastic cooling of hadron beams.
By far the most important aspect of the ATF is the research carried out by its users. Milestone experiments in laser acceleration include the work on inverse Cherenkov acceleration and the inverse free-electron laser (IFEL). STELLA, the Staged Electron Laser Acceleration experiment, has successfully used two laser accelerators (both IFELs), demonstrating the steady production of 3 fs electron-beam bunches (figure 2). With this configuration STELLA II has shown monoenergetic laser acceleration for the first time.
Experiments on the development of laser-photocathode RF guns include the “Next Generation Photoinjector”, or Gun III in the ATF series. Others concern the generation of unique radiation sources, including the pioneering high-gain harmonic-generation FEL that set a new trend towards coherent, ultrashort-pulse X-ray FELs. The VISA experiment at the ATF, which served as a proof-of-principle experiment for the Linac Coherent Light Source project at SLAC, reached saturation at visible wavelengths and demonstrated the generation of harmonics, their growth and saturation properties and the relationship to microbunching. An experiment investigating Compton scattering between energetic electrons and laser beams produces a record of about 10⁸ hard X-ray photons per pulse of a few picoseconds.
Most recently, a plasma wake-field experiment demonstrated the phase relationship between the accelerating and focusing component of the plasma wake. This showed a 90° phase difference, thus allowing plasma wake accelerators to accelerate and focus the beam at the same phase.
On 6 July the vehicle with the first nine of 84 muon chambers made at the Dzhelepov Laboratory of Nuclear Problems (JINR DLNP) for the ATLAS experiment left Building 5 of DLNP and set out on its long journey to Geneva.
Having begun in 2000 within the framework of JINR’s participation in the construction of the muon system for the ATLAS detector at the Large Hadron Collider, this work is now entering its final stage. In the coming year all the remaining muon chambers will be assembled and tested at JINR and transported to CERN.
Once at CERN the muon chambers will be tested again and put together with the “trigger” resistive plate chambers, which are being manufactured in Italian scientific centres. The complete assembly will then be given a final test before being installed in the ATLAS pit.
The free-electron laser (FEL) located at the US Department of Energy’s Thomas Jefferson National Accelerator Facility and supported by the Office of Naval Research achieved 10 kW of infrared laser light in late July, making it the most powerful tunable laser in the world. Several experiments are about to begin at the new facility, including a study of laser propagation through the atmosphere for the Naval Research Laboratory, the fabrication of carbon nanotubes by NASA scientists, and photochemistry and photobiology investigations.
The FEL programme at Jefferson Lab began as the One-Kilowatt Demonstration FEL, which broke power records and made its mark as the world’s brightest high-average-power laser. It delivered 2.1 kW of infrared light, more than twice what it was initially designed to achieve, before it was taken offline in November 2001 for an upgrade to 10 kW. During the upgrade process FEL staff installed new optics, more accelerating components, new power supplies in the injector and a new wiggler that enables the electron beam to produce laser light. These improvements increased the linear accelerator energy 300% (from 40 to 160 MeV), doubled the machine’s achievable current and made it possible for the optics to take a 10-fold increase in power.
A detector built by an Indian team has begun running this year in the STAR experiment at the Relativistic Heavy Ion Collider at the Brookhaven National Laboratory. The Photon Multiplicity Detector (PMD) has been provided by a collaboration led by a group from the Variable Energy Cyclotron Centre in Kolkata, with colleagues from the Institute of Physics in Bhubaneswar and university groups from Chandigarh, Jaipur and Jammu.
The PMD is a preshower detector designed to study photon production in the forward region of relativistic heavy-ion collisions, where particle densities are high. This is an environment where calorimeters cannot be used owing to the large overlap between particle showers. By measuring the spatial distribution of photons in a region of phase space common with charged-particle detection, the PMD allows the study of event shapes and of fluctuations in photon multiplicity and the charged-to-neutral ratio, thus throwing light on the deconfinement phase transition and chiral symmetry restoration. A PMD using plastic scintillators as the sensitive medium formed part of the WA93 and WA98 experiments at the CERN Super Proton Synchrotron (CERN Courier September 1991 p16 and January 1995 p14), and made a significant contribution to the study of relativistic heavy-ion collisions in these experiments.
The PMD built for STAR consists of a lead converter 15 mm thick, sandwiched between two planes of detectors with high granularity. The detector plane behind the lead detects photons through the electromagnetic showers they produce in the lead; the one in front helps to reject charged-particle hits. The detectors, which are based on the gas proportional counter, are formed from a honeycomb structure of cells with a copper cathode and tungsten wire anode, and a mixture of argon and carbon dioxide as the sensitive medium. The cells are 8 mm deep with a cross-sectional area of about 1 cm². Copper walls separate the cells in order to prevent cross-talk by confining low-energy δ-electrons to a single cell. A special feature of the design is its unusual aspect ratio, with the cell size and depth being of similar dimensions. In addition, the cathode extends on to the printed circuit boards covering the copper honeycomb so that the anode-cathode distance is less than 2 mm even though the physical separation between the copper cathode and anode wire is 5 mm. This extended cathode ensures uniform sensitivity of the detector throughout the cell volume.
The complete detector is constructed from rhombus-shaped unit modules of 24 × 24 cells. A number of these units, varying from four to nine, are housed in gas-tight enclosures called super-modules, and each plane of the PMD has 12 super-modules. The detector is assembled in two halves, which can be separated vertically and moved independently, the final shape being hexagonal.
The signal from the cells is processed using GASSIPLEX chips, developed at CERN, which provide 16-channel analogue multiplexed information. The analogue signals are digitized and read out using the C-RAMS ADC board. The front-end electronics board, consisting of four GASSIPLEX chips, is a 70 mm rhombus serving 8 × 8 cells; mounted directly on the unit modules, these boards cover almost the entire area of the detector.
The full PMD in STAR has about 83,000 cells in the two planes and covers an area of about 4 m². It is located near the east wall of the Wide Angle Hall at Brookhaven, 550 cm from the interaction point and behind the forward time-projection chamber.
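The quoted cell count is consistent with the modular layout described above. As a rough consistency check, assuming an average of about six unit modules per super-module (within the four-to-nine range given earlier),

\[ (24\times24)\ \text{cells} \times 6\ \text{units} \times 12\ \text{super-modules} \times 2\ \text{planes} \approx 83\,000\ \text{cells}. \]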
The research and development for the design of the PMD was done in conjunction with a similar detector for the ALICE experiment at the Large Hadron Collider at CERN. The PMD has been funded by the Department of Atomic Energy and the Department of Science and Technology of the government of India, and is financially supported by the STAR collaboration.
There are two main frontiers for particle accelerators – high energies and high intensities – and it is the latter that attracted participants to the “Physics with a multi-megawatt proton source” workshop held at CERN on 25-27 May. The meeting was organized by the ECFA Muon Study Group and the European Commission network on “Beams for European Neutrino Experiments” (BENE), in close collaboration with the community involved with the “next-generation” European isotope separation on-line radioactive ion-beam facility, EURISOL. The focus was on physics at the high-intensity frontier and the main aim was to explore the short- and long-term opportunities in Europe for particle and nuclear physics at a multi-MW, few-GeV proton accelerator.
CERN’s director-general Robert Aymar opened the meeting by recalling that CERN has a history and a mission of building and operating accelerators at the high-energy frontier. The latest is the Large Hadron Collider (LHC) and a compact linear collider (CLIC) may follow. CERN also has a successful tradition of exploiting its accelerator complex, so as to address diverse issues in particle physics simultaneously – for example, with fixed-target experiments, neutrinos, radioactive ions and antiprotons. Although the LHC is currently the absolute priority, plans should be made now for future investments. Aymar concluded by asking the workshop to contribute to the optimal evolution of the accelerators at CERN, so as to permit the most ambitious and promising spectrum of physics experiments in future.
In an inspired talk, John Ellis of CERN then summarized the most compelling motivations for physics programmes at the high-intensity frontier. The discovery of neutrino oscillations has opened a new window of exploration, which is unique in several ways. Measured mass splittings and mixings for neutrinos are the first experimental data we have on physics at higher energy scales. The potential discovery of leptonic charge-parity (CP) violation promises an insight into the origin of the most fundamental asymmetries of the universe. The accelerator neutrino community in Europe, in particular the ECFA/BENE network, will do its utmost to maintain a leading role in accelerator neutrino experiments beyond the CERN Neutrinos to Gran Sasso (CNGS) project. In another direction, the manifestation of the strong and weak interaction in the atomic nucleus can be rigorously investigated by means of radioactive ion beams. Under the leadership of the Nuclear Physics European Collaboration Committee (NuPECC), a large community of European Union (EU) nuclear physicists is advocating the physics potential of the new world-leading facility, EURISOL, for nuclear, astro- and fundamental physics, which will be some 1000 times more intense than present facilities.
Presenting Japanese plans, Shoji Nagamiya, director of the Japan Proton Accelerator Research Complex, J-PARC, described the progress and physics potential at this new facility. In terms of both its physics programme – a joint venture between particle and nuclear physics – and its push for a higher power of 0.75 MW, possibly evolving to a few MW, J-PARC is the natural benchmark for any future high-power facility. Steve Holmes of Fermilab then described plans in the US for new high-power proton drivers. Studies are centred on the Brookhaven National Laboratory, with a 1.2 GeV superconducting linac and an important upgrade of the Alternating Gradient Synchrotron, and Fermilab, with two scenarios: an 8 GeV synchrotron with a 600 MeV linac or an 8 GeV superconducting linac.
Super-beams, beta-beams and factories
In the first of several presentations on current ideas in Europe, CERN’s Roland Garoby introduced the 2.2 GeV Superconducting Proton Linac (SPL), under consideration at CERN, in which cycling at 50 Hz results in a mean beam power of 4 MW. For neutrino physics an accumulator and compressor ring would be added to reduce the beam pulse to 3 microseconds (the so-called “super-beam”) and the length of the bunches to 1 ns rms. For a radioactive ion-beam facility the linac beam would be used directly. The advantage to CERN of the SPL would be as a replacement for the Proton Synchrotron Booster (PSB).
The SPL study benefits from a collaboration between CERN and the Injecteur de Protons de Haute Intensité project (IPHI) and the support of the EU’s Sixth Framework Programme (FP6) and the International Science and Technology Centre for projects in Russia. If a positive decision on the SPL is taken in 2006-2007, the low-energy section could be operational in 2010-2011 and could replace Linac2 to increase the performance of the PSB and the PS; the SPL itself could be ready in 2014-2015.
Rapid cycling synchrotrons (RCSs) also offer interesting possibilities, especially for beam energies beyond a few GeV. Chris Prior from the Rutherford Appleton Laboratory (RAL) illustrated the potential of this alternative by describing RAL’s machine ISIS (~0.2 MW at 800 MeV) in detail. He also presented the plans for future proton drivers with multi-MW beam power envisaged at RAL, FNAL, J-PARC and CERN. Although the experience with existing machines is encouraging, these new projects represent a significant step forward in beam power and there are technical challenges on numerous issues, such as beam loss and the stripping and capture of ions.
Exploiting the beam delivered by such an accelerator is a similarly ambitious goal. Helmut Haseroth of CERN highlighted what this means for neutrino physics. Three different production techniques are envisaged: the “super-beam”, the “beta-beam” and the “neutrino factory”. For the super-beam, neutrinos result from the decay of pions immediately after the target. In the beta-beam case, beta-radioactive ions are generated and accelerated to a γ factor of around 100 and stored in a few bunches inside a ring with long straight sections pointing at remote experiments. Neutrino bursts are generated by the beta decays. In a neutrino factory, muons from the pion decays are collected behind the target, “cooled” and accelerated to 20-50 GeV, and then stored in a ring with long straight sections pointing at remote experiments, where neutrinos result from the muon decays.
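In all three schemes the collimation of the neutrino beam comes from the relativistic boost of the decaying parent: decay products emitted roughly isotropically in the parent's rest frame are folded forward in the laboratory into a characteristic opening angle of order 1/γ (a standard kinematic estimate). For the beta-beam's γ of around 100, for instance,

\[ \theta \sim \frac{1}{\gamma} \approx 10\ \mathrm{mrad}, \]

which is why the stored beams need long straight sections pointing directly at the distant experiments.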
Extensive R&D is required if any of these ambitious neutrino plans are to be realized during the next decade. The work already undertaken by the American and Japanese teams should be complemented by a similar effort in Europe, resulting possibly in a joint target experiment at CERN. The technology of the target and the focusing devices is challenging, but largely common to the super-beam and the neutrino factory. R&D is also still needed for the muon phase rotation and cooling stages of a neutrino factory. Complementary resources are eagerly expected for the international Muon Ionization Cooling Experiment (MICE), which has been approved “scientifically” at RAL. Although the development of fixed-field alternating-gradient machines for muon acceleration may render cooling superfluous, the highest energy with lepton colliders is obtained with circular muon colliders, which require cooled beams to achieve the desired luminosities. For the beta-beam the main technological issue is the generation of the radioactive ion beam and its acceleration without excessive irradiation of the machine components.
The energy of the primary proton beam is a crucial parameter in the optimization of a neutrino beam. Marco Apollonio of Trieste described the Hadron Production Experiment (HARP) at CERN, which will provide decisive data in that respect. The data necessary for selecting the optimum energy for a proton driver is expected to be available later this year.
The SPL would be an excellent proton driver for a future nuclear-physics facility. The additional installations that would be required were presented by Alex Mueller of IPN Orsay, based on the study carried out for EURISOL. This European project for an accelerated radioactive ion-beam facility uses the isotope separation on-line method for ion generation, with the goal of attaining beam intensities thousands of times higher than at current facilities such as REX-ISOLDE at CERN and SPIRAL at GANIL in France.
CERN’s Mats Lindroos concluded the accelerator session by describing the concept of a beta-beam facility based on the original idea of Piero Zucchelli of CERN. The key feature is that “slow” accelerators can be used because the radioactive ions have a lifetime that is three orders of magnitude longer than muons. Although it will stretch the techniques mastered for nuclear physics well beyond today’s performance, experts are confident that solutions can be found for the production of the required ion beams. The promises of this scheme together with the remarkable synergy between nuclear and neutrino physics justify the necessary R&D, and a feasibility study is included in the EURISOL Design Study that was submitted to the EU in March this year.
From neutrinos to exotic atoms
Opening the particle-physics session, Pilar Hernandez of Valencia presented an in-depth review of the prospects for neutrino-oscillation physics at a megawatt neutrino complex. She reviewed the excitement of recent discoveries and the relative merits of super-beams, beta-beams and neutrino factories. Luigi Mosca of Saclay outlined the status and plans for a new very large European underground laboratory at the Frejus site. It could host detectors of unprecedented size – up to one megatonne – for the study of proton decay and astrophysical neutrinos (supernovae), as well as of the low-energy neutrinos from super-beams and beta-beams. Chan Kee Jung of Stony Brook reviewed the potential of a megatonne or half-megatonne water Cherenkov detector, as envisaged for the proposed Underground Nucleon Decay and Neutrino Observatory (UNO). This is a proven and well established technique and its extrapolation to larger mass seems feasible. An exciting alternative using liquid-argon time-projection chambers was also described by Antonio Ereditato of Naples. In this case the lower detector mass of about 0.1 megatonnes is acceptable, thanks to the superior granularity and pattern-recognition capability.
Steve Geer of Fermilab described the merits of physics at a neutrino factory – the most promising, ultimate neutrino facility, and the natural tool for the final and complete exploration of neutrino mixing and mass splittings, and leptonic CP violation. The higher event rates would allow smaller detectors (around 50 kilotonnes) that would need charge identification, but which could be located in existing labs. Systematic uncertainties in this case are less severe. Much accelerator R&D in this field is in progress or being planned by enthusiastic worldwide collaborations, specifically for the phase rotation and cooling of large muon “clouds” and for the acceleration and storage stages further downstream. Geer stressed that timely R&D is essential, in particular for MICE and the proposed target experiment at CERN.
Pasquale Migliozzi of Naples discussed the absolute necessity of near-detector stations for the study of neutrino oscillations. Only with the precise measurement of neutrino fluxes, interaction cross-sections and detection efficiencies, will we be able to predict reliably the interaction rate in the far-neutrino detectors, prove the existence of oscillation effects and eventually measure their CP (neutrino-antineutrino) asymmetries. Alessandro Baldini of Pisa discussed the potential for discovering leptonic-flavour violation using unprecedented fluxes of slow muons. With the SPL, sensitivity to muon-to-electron conversion may indeed test the rates predicted from supersymmetric loops. Equally fertile and promising is the opportunity at higher energies to study rare kaon decays, as outlined by Augusto Ceccucci of CERN.
The topics introduced by John Ellis on nuclear structure and nuclear astrophysics, in particular understanding nucleosynthesis via the rp- and r-process paths, were expanded by William Gelletly of Surrey and Karl-Ludwig Kratz of Mainz, while Klaus Jungmann of the Kernfysisch Versneller Instituut (KVI), Groningen, presented a menu of different experiments to investigate fundamental symmetries – for example, CP violation, forbidden decays, non V-A terms in beta-decay and unitarity of the Cabibbo-Kobayashi-Maskawa matrix – that are possible with a multi-MW facility. Francesca Gulminelli of LPC Caen explained how the nuclear liquid-gas phase transition could be investigated, and Juha Äystö of Jyväskylä made the case for combining antiprotons or muons with radioactive ions in colliding or trapping experiments so as to provide an unsurpassed probe of the charge and mass distribution of these exotic nuclei.
Yorick Blumenfeld of IPN Orsay described the technical challenges remaining for the development of EURISOL and showed the importance of the EU FP6 Design Study, while Jürgen Kluge of GSI described the laboratory’s proposal for the Facility for Antiprotons and Ion Research (FAIR). From these presentations it became clear that the physics reach of FAIR (nuclear physics and astrophysics, low-temperature quantum chromodynamics, the charmed sector, high-density plasmas, etc.) is complementary to that of EURISOL (nuclear physics and astrophysics, fundamental interactions, solid-state physics, radiobiology, etc.).
Looking to the future
The final session turned to the outlook for the future and began with Wu-Tsung Weng of Brookhaven, who underlined that the idea of using a linac driver like the SPL is realistic and feasible, although work is still needed on a number of issues such as control of beam loss and cost reduction. Development should be vigorously pursued on many technological items, such as the target and pion/muon focusing devices. A broad range of physics is possible at a multi-MW driver; however, intensive discussions among accelerator experts and physicists still have to take place to select the proper accelerator configuration (SPL or RCS). CERN should play an important, if not the leading, role in the international collaboration of R&D efforts and encourage participation from its staff, provided CERN’s core mission is not compromised.
Michel Spiro, director of IN2P3, reviewed the outlook for particle physics, with an emphasis on neutrino oscillations following the “Venice road map”, as formulated at the Neutrino Oscillations in Venice (NO-VE) workshop last December. After the round of experiments at CNGS and European participation in the next round at J-PARC in Japan, a European initiative could resume with a low-energy super-beam/beta-beam complex serving large detectors in a new underground laboratory, and then proceed to the final and complete mapping of neutrino phenomena with a neutrino factory and the smaller magnetic detectors that best match its potential.
The nuclear-physics outlook was provided by Muhsin Harakeh of KVI Groningen. He presented the NuPECC perspective for the future of nuclear physics in Europe. NuPECC, as a representative of the European nuclear-physics community, has declared its highest priority to be the construction of both the FAIR and EURISOL facilities, to serve the need of the estimated 1000 European nuclear scientists.
The concluding remarks came from Jos Engelen, CERN’s chief scientific officer. He acknowledged the unique and compelling nature of the physics programmes proposed by the workshop, encouraged the continuation of the efforts to define them and promised to give them careful attention. While reminding the audience of the limited resources, he explicitly voiced European pride in some of the most novel ideas and encouraged international collaboration in R&D.
In June 1996 computing staff at CERN turned off the IBM 3090 for the last time, so marking the end of an era that had lasted 40 years. In May 1956 CERN had signed the purchasing contract for its first mainframe computer – a Ferranti Mercury with a clock cycle 200,000 times slower than modern PCs. Now, the age of the mainframe is gone, replaced by “scalable solutions” based on Unix “boxes” and PCs, and CERN and its collaborating institutes are in the process of installing several tens of thousands of PCs to help satisfy computing requirements for the Large Hadron Collider.
The Mercury was a first-generation vacuum tube (valve) machine with a 60 microsecond clock cycle. It took five cycles – 300 microseconds – to multiply 40-bit words and had no hardware division, a function that had to be programmed. The machine took two years to build, arriving at CERN in 1958, which was a year later than originally foreseen. Programming by users was possible from the end of 1958 with a language called Autocode. Input and output (I/O) was by paper tape, although magnetic tape units were added in 1962. Indeed, the I/O proved something of a limitation, for example when the Mercury was put to use in the analysis of paper tape produced by the instruments used to scan and measure bubble-chamber film. The work of the fast and powerful central processing unit (CPU) was held up by the sluggish I/O. By 1959 it was already clear that a more powerful system was needed to deal with the streams of data coming from the experiments at CERN.
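The “200,000 times slower” comparison made earlier is easy to verify (an illustrative estimate, taking a circa-2004 PC clock of about 3 GHz, i.e. a cycle of roughly 0.3 ns):

\[ \frac{60\ \mu\mathrm{s}}{0.3\ \mathrm{ns}} = 2\times10^{5}, \]

and the five-cycle multiplication of 40-bit words quoted above indeed corresponds to 5 × 60 µs = 300 µs.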
The 1960s arrived at the computing centre initially in the form of an IBM 709 in January 1961. Although it was still based on valves, it could be programmed in FORTRAN, read instructions written on cards, and read and write magnetic tape. Its CPU was four to five times faster than that of the Mercury, but it came with a price tag of 10 million Swiss francs (in 1960 prices!). Only two years later it was replaced by an IBM 7090, a transistorized version of the same machine with a 2.18 microsecond clock cycle. This marked the end for the valve machines, and after a period in which it was dedicated to a single experiment at CERN (the Missing Mass Spectrometer), the Mercury was given to the Academy of Mining and Metallurgy in Krakow. With the 7090 the physicists could really take advantage of all the developments that had begun on the 709, such as on-line connection to devices including the flying-spot digitizers used to measure film from bubble and spark chambers. More than 300,000 frames of spark-chamber film were automatically scanned and measured in record time with the 7090. This period also saw the first on-line connection to film-less detectors, recording data on magnetic tape.
In 1965 the first CDC machine arrived at CERN – the 6600 designed by computer pioneer Seymour Cray, with a CPU clock cycle of 100 ns and a processing power 10 times that of the IBM 7090. With serial number 3, it was a pre-production series machine. It had disks more than 1 m in diameter – which could hold 500 million bits (about 60 megabytes) and subsequently made neat coffee tables – as well as tape units and a high-speed card reader. However, as Paolo Zanella, division leader from 1976 until 1988, recalled, “The introduction of such a complex system was by no means trivial and CERN experienced one of the most painful periods in its computing history. The coupling of unstable hardware to shaky software resulted in a long traumatic effort to offer a reliable service.” Eventually the 6600 was able to realise its potential, but only after less-powerful machines had been brought in to cope with the increasing demands of the users. Then in 1972 it was joined by a still more powerful sibling, the CDC 7600, the most powerful computer of the time and five times faster than the 6600, but again there were similar painful “teething problems”.
With a speed of just over 10 Mips (millions of instructions per second) and superb floating-point performance, the 7600 was, for its time, a veritable “Ferrari” of computing. But it was a Ferrari with a very difficult running-in period. The system software was again late and inadequate, and in the first months the machine had a bad ground-loop problem that caused intermittent faults and eventually required all modules to be fitted with sheathed rubber bands. It was a magnificent engine for its time, but its reliability and tape handling simply did not reach the levels needed, in particular by the electronic experiments. Its superior floating-point capabilities were valuable for processing data from bubble-chamber experiments with their relatively low data rates, but for the fast electronic experiments the “log jam” of the tape drives was a major problem.
So a second revolution occurred with the reintroduction of an IBM system, the 370/168, in 1976, which was able to meet a wider range of users’ requirements. Not only did this machine bring dependable modern tape drives, it also demonstrated that computer hardware could work reliably and it ushered in the heyday of the mainframe, with its robotic mass storage system and a laser printer operating at 19,000 lines per minute. With a CPU cycle of 80 ns, 4 megabytes (later 5) of semiconductor memory and a high-speed multiply unit, it became the “CERN unit” of physics data-processing power, corresponding to 3-4 Mips. Moreover, the advent of the laser printer, with its ability to print bitmaps rather than simple mono-spaced characters, heralded the beginning of scientific text processing and the end for the plotters with their coloured pens (to say nothing of typewriters).
The IBM also brought with it the MVS (Multiple Virtual Storage) operating system, with its pedantic Job Control Language, and it provided the opportunity for CERN to introduce WYLBUR, the well-loved, cleverly designed and friendly time-sharing system developed at SLAC, together with its beautifully handwritten and illustrated manual by John Ehrman. WYLBUR was a masterpiece of design, achieving miracles with little power (at the time) shared amongst many simultaneous users. It won friends with its accommodating character and began the exit of punch-card machinery as computer terminals were introduced across the lab. It was also well interfaced with the IBM Mass Store, a unique file storage device, and this provided great convenience for file handling and physics data sample processing. At its peak WYLBUR served around 1200 users per week.
The IBM 370/168 was the starting point for the IBM-based services in the computer centre and was followed by a series of more powerful machines: the 3032, the 3081, several 3090s and finally the ES/9000. In addition, a sister line of compatible machines from Siemens/Fujitsu was introduced, and together they provided a single system in a manner transparent to the users. This service carried the bulk of the computer users, more than 6000 per week, and most of the data handling right up to the end of its life in 1996. At its peak around 1995 the IBM service provided a central processing power of around a quarter of that of a top PC today, but its data-handling capacity was outstanding.
During this period CERN’s project for the Large Electron Positron (LEP) collider brought its own challenges, together with a planning review in 1983 of the computing requirements for the LEP era. Attractive alternatives to the mainframe began to appear over the horizon, presenting computing services with some difficult choices. The DEC VAX machines, used by many physics groups – and subsequently introduced as a successful central facility – were well liked for the excellent VMS operating system. On another scale, the technical jump in functionality appearing on the new personal workstations, for example from Apollo – such as a fully bit-mapped screen and a “whole half a megabyte of memory” for a single user – was an obvious major attraction for serious computer-code developers, albeit at a cost that was not yet within the reach of many. It is perhaps worth reflecting that in 1983 the PC used the DOS operating system and a character-based screen, whilst the Macintosh had not yet been announced, so bit-mapped screens were a major step forward. (To put that in context, another recommendation of the planning review was that CERN should install a local-area network and that Ethernet was the best candidate for this.)
The future clearly held exciting times, but some pragmatic decisions about finances, functionality, processing capacity and tape handling had to be made. It was agreed that for the LEP era the IBM-based services would move to the truly interactive VM/CMS operating system as used at SLAC. (WYLBUR was really a clever editor submitting jobs to batch processing.) This led to a most important development, the HEPVM collaboration. It was possible, and indeed desirable, to modify the VM/CMS operating system to suit the needs of the user community. All the high-energy physics (HEP) sites running VM/CMS were setting out to do exactly this, as indeed they had done with many previous operating systems. To some extent each site started off as if it were its sovereign right to do this better than the others. In order to defend the rights of the itinerant physicist, in 1983 Norman McCubbin from the Rutherford Appleton Laboratory made the radical but irresistible proposal: “don’t do it better, do it the same!”
The HEPVM collaboration comprised most of the sites that ran VM/CMS as an operating system and had LEP physicists as clients. These ranged from large dedicated sites such as SLAC, CERN and IN2P3 to university sites where the physicists were far from being the only clients. It was of course impossible to impose solutions upon the diverse managements involved, so it was a question of discussion, explanation and working through the issues. Two important products resulted from this collaboration: a HEPVM tape, distributed to more than 30 sites and containing all the code necessary for producing a unified HEP environment, and the “concept of collaboration between sites”, established as a normal way to proceed. The subsequent offspring, HEPiX and HEPNT, have continued the tradition of collaboration, and it goes without saying that such collaboration will have to reach a still higher level in order to make Grid computing successful.
The era of the supercomputer
The 1980s also saw the advent of the supercomputer. The CRAY X-MP, which arrived at CERN in January 1988, was the logical successor to Seymour Cray’s CDC 7600 at CERN, and a triumph of price negotiation. The combined scalar performance of its four processors was about a quarter of that of the largest IBM installed at CERN, but it had strong vector floating-point performance. Its colourful presence settled the question of whether the physics codes could really profit from vector capabilities, and probably the greatest benefit to CERN from the CRAY was to the engineers, whose applications, for example in finite-element analysis and accelerator design, excelled on this machine. The decision was also taken to work together with CRAY to pioneer Unix as the operating system, and this work was no doubt of use to later generations of machines running Unix at CERN.
Throughout most of the mainframe period the power delivered to users had doubled approximately every 3.5 years – the CDC 7600 lasted an astonishing 12 years. The arrival of the complete processor on a CMOS chip, conforming to Moore’s law of doubling speed every 18 months, was an irresistible force that signalled the eventual replacement of mainframe systems, although a number of other issues had to be solved first, notably the provision of reliable tape-handling facilities. The heyday of the mainframe thus eventually came to an inevitable end.
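As a back-of-the-envelope comparison of those two doubling rates (our illustrative arithmetic, using the article’s approximate figures), the short sketch below shows that over the CDC 7600’s roughly 12-year lifetime a 3.5-year doubling time yields only about a tenfold gain, whereas an 18-month doubling time yields a factor of about 250.

```python
# Illustrative arithmetic only, based on the approximate figures quoted above.
years = 12.0  # roughly the service life of the CDC 7600

mainframe_gain = 2 ** (years / 3.5)   # power doubling every 3.5 years
moores_law_gain = 2 ** (years / 1.5)  # speed doubling every 18 months

print(f"Doubling every 3.5 years over {years:.0f} years: ~{mainframe_gain:.0f}x")
print(f"Doubling every 18 months over {years:.0f} years: ~{moores_law_gain:.0f}x")
```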
One very positive feature of the mainframe era at CERN was the joint project teams with the major manufacturers, in particular IBM and DEC. The presence on-site of, say, around 20 engineers from such a company led to extremely good service, not only from the local staff but also through direct contacts with the development teams in America. It was not unknown for a critical bug, discovered during the evening at CERN, to be fixed overnight by the development team in America and installed for the CERN service the next morning, a sharp contrast to the service available in these days of commodity computing. The manufacturers, on their side, saw the physicists’ use of their computers as pushing the limits of what was possible and pointing the way to the needs of other, more straightforward customers in several years’ time. Hence their willingness to install completely new products, sometimes from their research laboratories, and often free of charge, as a way of getting them used, appraised and debugged. The requirements from the physicists made their way back into products and into the operating systems. This was one particular and successful way for particle physics to transfer its technology and expertise to the world at large. In addition, the joint projects provided a framework for excellent pricing, allowing particle physics to receive much more computer equipment than it could otherwise have paid for.