Imagine trying to record a symphony in a second. That is effectively what CERN’s ALICE collaboration will have to do when the laboratory’s forthcoming Large Hadron Collider (LHC) starts up in 2005. Furthermore, that rate will have to be sustained for a full month each year.
ALICE is the LHC’s dedicated heavy-ion experiment. Although heavy-ion running will occupy just one month per year, the huge number of particles produced in ion collisions means that ALICE will record as much data in that month as the ATLAS and CMS experiments plan to do during the whole of the LHC annual run. The target is to store one petabyte (10¹⁵ bytes) per year, recorded at the rate of more than 1 Gbyte/s. This is the ALICE data challenge, and it dwarfs existing data acquisition (DAQ) applications. At CERN’s current flagship accelerator LEP, for example, data rates are counted in fractions of 1 Mbyte/s. Even NASA’s Earth Observing System, which will monitor the Earth day and night, will take years to produce a petabyte of data.
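To put the scale in perspective, a back-of-the-envelope sketch (in Python; the figures simply restate those quoted above, and the LEP-like rate of 0.5 Mbyte/s is an assumed value for comparison, not an official number) shows what storing a petabyte in a single month of running implies:

```python
# Rough scale check of the ALICE data challenge figures (illustrative only).
PB_IN_GBYTE = 1e6                 # 1 petabyte = 10^15 bytes = 10^6 Gbyte
MONTH_S = 30 * 24 * 3600          # roughly 2.6 million seconds of heavy-ion running

# Average recording rate needed to store one petabyte within the ion month
avg_rate_gbyte_s = PB_IN_GBYTE / MONTH_S
print(f"Average rate for 1 PB in one month: {avg_rate_gbyte_s:.2f} Gbyte/s")

# Time to accumulate the same petabyte at an assumed LEP-like 0.5 Mbyte/s
lep_rate_gbyte_s = 0.5e-3
years = PB_IN_GBYTE / lep_rate_gbyte_s / (365 * 24 * 3600)
print(f"Years to reach 1 PB at 0.5 Mbyte/s: about {years:.0f}")
```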
Meeting the challenge is a long-term project, and work has already begun. People from the ALICE collaboration have been working with members of CERN’s Information Technology Division to develop the experiment’s data acquisition and recording systems. Matters are further complicated by the fact that the ALICE experiment will be situated several kilometres from CERN’s computer centre, where the data will be recorded – an added complexity that makes it all the more important to start work now.
Standard components – such as CERN’s network backbone and farms of PCs running the Linux operating system – will be used to minimize capital outlay. They will, however, be reconfigured for the task in order to extract the maximum performance from the system. Data will be recorded by StorageTek tape robots installed as part of the laboratory’s tape-automation project to pave the way for handling the large number of tapes that will be required by LHC experiments.
The first goal for the ALICE data challenge was to run the full system at a data transfer rate of 100 Mbyte/s – 10% of the final number. This was scheduled for March and April 2000 so as not to interfere with CERN’s experimental programme, which will get up to speed in the summer.
Data sources for the test were simulated ALICE events from a variety of locations at CERN. After being handled by the ALICE DAQ system (DATE) they were formatted by the ROOT software, developed by the global high energy physics community. The data were then sent through the CERN network to the computer centre, where two mass storage systems were put through their paces for two weeks each. The first, HPSS, is the fruit of a collaboration between industry and several US laboratories. The second, CASTOR, has been developed at CERN.
Although each component of the system had been tested individually and shown to work with high data rates, this year’s tests have demonstrated the old adage that the whole is frequently greater than the sum of its parts: problems only arose when all of the component systems were integrated.
The tests initially achieved a data rate of 60 Mbyte/s with the whole chain running smoothly. However, then problems started to appear in the Linux operating system used in the DAQ system’s PC farms. Because Linux is not a commercial product, the standard way of getting bugs fixed is to post a message on the Linux newsgroups. However, no-one has previously pushed Linux so hard, so solutions were not readily forthcoming and the team had to work with the Linux community to find their own.
That done, the rate was cranked up and failures started to occur in one of the CERN network’s many data switches. These were soon overcome – thanks this time to an upgrade provided by the company that built the switches – and the rate was taken up again. Finally the storage systems had trouble absorbing all of the data. When these problems were ironed out, the target peak rate of 100 Mbyte/s was achieved for short periods.
At the end of April the ALICE data challenge team had to put their tests on hold, leaving the CERN network and StorageTek robots at the disposal of ongoing experiments and test beams. During the tests, more than 20 Tbyte of data – equivalent to some 2000 standard PC hard disks – had been stored. The next milestone, scheduled for 2001, is to run the system at 100 Mbyte/s in a sustained way before increasing the rate, step by step, towards the final goal of 1 Gbyte/s by 2005. The ALICE data challenge team may not yet have made a symphony, but the overture is already complete.
On 22 February an international team working at the superconducting TESLA Test Facility (TTF) at DESY, Hamburg, set a new record for the shortest wavelength of radiation ever achieved with a free electron laser (FEL). The collaboration, involving around 140 scientists from 38 institutes in 9 countries, succeeded in generating ultraviolet radiation with a wavelength of 109 nm. The previous best using this type of self-amplified spontaneous emission (SASE) FEL was 530 nm (by a group at Argonne). Within a few weeks the group pushed the wavelength down to 80 nm and tuned the FEL to various wavelengths up to 180 nm, thus demonstrating for the first time the free wavelength tunability of SASE FELs over a large range.
To achieve this decisive step towards a new range of laser wavelengths, the TESLA team used the electron beam from the superconducting test linac of the TTF, set up for the development and testing of new superconducting niobium cavities for DESY’s planned 33 km TESLA electron-positron linear collider.
Having demonstrated that SASE is working at such small wavelengths, the collaboration is now extending the short TTF into a 300 m device that could operate in the soft X-ray range around 6 nm. This user facility should be available for experiments by 2003.
The ultimate goal is to produce X-rays with a wavelength of 0.1 nm. Since these short wavelengths require an electron beam of much higher energy, this X-ray laser facility is planned to be included as an integral part of the proposed TESLA linear collider.
Synchrotron radiation
Since its discovery in the mid-1940s, synchrotron radiation has evolved into an invaluable tool for experiments in a range of fields – from surface physics, materials sciences and chemistry, to geophysics, molecular biology and medicine. Nowadays, third-generation synchrotron radiation sources are producing nearly every wavelength from infrared to hard X-rays. However, one thing has so far remained a dream: beams of coherent high-intensity X-rays – a high-intensity X-ray laser.
How to reach this “ultimate” X-ray source has been the subject of much effort, both theoretical and experimental, over the last 20 years. During the past six to eight years a consensus has developed that fourth-generation sources – offering a higher degree of coherence, higher power and brilliance, and ultrashort pulses, possibly at very short wavelengths such as the hard X-ray region – would probably involve a linear accelerator driving a FEL.
This shift from storage rings to linear devices is because the quality of the electron beam – short bunch lengths and small beam emittance – is limited at storage rings but is crucially important for the FEL process. Especially when aiming at X-ray wavelengths with a SASE FEL, the limitations of storage rings become noticeable.
Free electron lasers
The main ingredients of a FEL are a high-energy electron beam with very high brightness and a periodic transverse magnetic field, such as that produced by an undulator magnet. As the electron bunches zigzag through the magnetic field, they emit synchrotron radiation around their direction of motion. For small undulations the radiation is quasimonochromatic. For every undulator period the radiation phase moves ahead of the electrons by a distance equal to this specific resonant wavelength, keeping each electron in phase with the radiation field.
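For orientation, the standard on-axis resonance condition for a planar undulator (a textbook relation rather than one given in the article) is

\lambda = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right),

where \lambda_u is the undulator period, \gamma the electron Lorentz factor and K the dimensionless undulator strength parameter. For illustration, electron energies of a few hundred MeV and an undulator period of a few centimetres give wavelengths of the order of 100 nm, the regime reported above.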
Depending on the relative phase between radiation and electron oscillation, electrons experience either a retardation or an acceleration with respect to the mean electron velocity.
If the electron beam is of sufficient quality and the undulator long enough, the longitudinal density of the electron bunch becomes modulated, with “microbunching” at the resonant wavelength. This electron density modulation reduces phase cancellation in the emission process, increasing the intensity of the emitted light. This light interacts again with the electron beam and again enhances the bunch density modulation, thereby further increasing the intensity. The net result is an exponential increase in radiated power – ultimately about six orders of magnitude more brilliant than conventional undulator radiation.
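In the usual high-gain treatment (again a standard result, quoted here only to make the growth explicit), the radiated power along the undulator rises roughly as

P(z) \approx \frac{P_0}{9}\, e^{z/L_g},

where P_0 is the effective start-up power from shot noise and L_g is the power gain length; saturation after roughly 20 gain lengths is what delivers the many orders of magnitude mentioned above.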
Like conventional lasers, most present FELs use an optical cavity formed by mirrors to store the light from many successive electron bunches. Many of these FELs work in the infrared range, and some even reach ultraviolet wavelengths. However, extending them towards the X-ray regime is difficult, owing to the lack of good reflecting surfaces at wavelengths below 150 nm.
An alternative path to shorter wavelengths was found with the development of SASE FELs. These achieve lasing in a single pass of a high-brightness electron bunch through a very long undulator by the SASE process, without any mirrors.
The concept of SASE FELs was introduced in the early 1980s (Kondratenko and Saldin 1980) and further explored in 1984 (Bonifacio et al. 1984), soon leading to the first experimental tests (Scharlemann et al. 1986). During 1997-8 a Los Alamos/UCLA experiment (Hogan et al. 1998) produced a gain of 3 × 10⁵ for the first time and established the proof-of-principle of SASE theory at a wavelength of 12 μm. Recently, SASE at 530 nm was demonstrated at Argonne (Milton et al., submitted).
Testbed
The TTF was set up at DESY in 1993, with major contributions from the US, Italy and France, to provide a testbed for the TESLA linear collider project, especially the superconducting niobium cavities for particle acceleration. In 1994 work began on the test accelerator to extend it into a 300 m FEL comprising all of the basic elements that will subsequently be employed in full-size TESLA X-ray lasers.
In a first phase, now brought to laser operation, the TTF was equipped with a 15 m undulator, a bunch compressor (reducing the bunch length, thus increasing the bunch peak current) and a radiofrequency photocathode electron gun.
There are essentially two technical challenges to be met by an X-ray FEL. First, it is crucial to generate and accelerate a low emittance and high-peak-current electron beam. This can be achieved using a high-brightness radiofrequency photocathode gun as an electron source. The electron gun currently used at the TTF-FEL is a joint contribution of Fermilab, INFN/Milan, Rochester, the Max Born Institute in Berlin and DESY.
It has meanwhile been demonstrated that such a particle source can drive a facility 24 h a day for weeks and even months. Because the radiofrequency gun performance is so critical for further development, DESY is building up a standalone gun test facility at its institute in Zeuthen, near Berlin.
The peak current inside the bunches produced by the low-emittance gun is still not high enough to reach laser saturation within an undulator of reasonable length. The solution is to compress the bunches longitudinally to increase the peak current. This can be achieved using a “bunch compressor chicane” – a sequence of deflecting magnets.
The principle is not new, but aiming at a few kiloamperes of peak current means achieving bunch lengths of less than 0.1 mm, which is a challenge. Accelerating the beam off the crest of the radiofrequency waveform in the linac creates an energy-phase correlation that can be used to shorten the bunch. When passing the chicane, electrons with different momenta travel different path lengths. The TTF-FEL currently uses a bunch compressor at 140 MeV, which compresses the bunch length below 0.5 mm rms.
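In a simplified linear-optics picture (an illustrative relation, not a figure from the TTF team), an electron with relative momentum deviation \delta picks up a path-length change \Delta z = R_{56}\,\delta in the chicane, so a bunch carrying a linear correlation \delta = hz imprinted by off-crest acceleration is compressed from length \sigma_{z,i} to roughly

\sigma_{z,f} \approx |1 + hR_{56}|\,\sigma_{z,i},

with full compression approached as hR_{56} tends to -1; in practice the uncorrelated energy spread sets the limit on how short the bunch can become.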
The second important technical challenge is to keep the electron beam (focused to a transverse beam size of about 0.1 mm) in essentially complete overlap with the photon beam as it passes through the undulator. This sets new standards for undulator alignment procedures and beam orbit control.
Interleaved operation
Combining the machine expertise at a high-energy physics facility with operation of a radiation source continues a long and fruitful tradition at DESY. Technically, both X-ray SASE FELs and linear colliders depend fundamentally on the generation of low-emittance, short electron bunches and on accelerating long bunch trains without loss of this quality. This is best achieved with a superconducting linac, combining high accelerating gradients and low wakefield effects with long bunch trains at high duty cycle, owing to low power losses.
For power cost reasons, a superconducting linear collider has a radiofrequency-on-time fraction of only 1%. Consequently there is room for further radiofrequency pulses to accelerate an interleaved electron beam for FEL operation. In this way the most expensive component of an X-ray laser – the linac – is shared with the high-energy physics community.
All TTF findings are consistent with existing models for SASE FELs. So far, a laser gain of more than 1000 has been observed, while laser saturation is expected well beyond 10⁶. Thus the next steps will be focused on achieving even higher laser gain by improving orbit control and electron beam quality. Operation with long trains of several thousand electron bunches will also be tested.
Having accomplished the proof-of-principle experiment (Andruszkow et al.), the TESLA collaboration will then upgrade the superconducting linac to 1000 MeV (Åberg et al. 1995), bringing the FEL wavelength down to 6 nm. The new user facility should be ready for experiments by 2003. As for the TESLA Linear Collider with Integrated X-ray Lasers, a conceptual design was published in 1997 (Brinkmann et al. 1997) and a Technical Design Report, including schedule and costs, will be presented in 2001 for evaluation by the German Science Council (Wissenschaftsrat), the German Federal Government’s scientific advisory board. As a first step towards formal planning permission, an agreement was signed in 1998 by the relevant German federal states.
Further reading
A M Kondratenko and E L Saldin 1980 Generation of coherent radiation by a relativistic electron beam in an ondulator Part. Accelerators 10 207-216.
R Bonifacio, C Pellegrini and L Narducci 1984 Collective instabilities and high gain regime in a free electron laser Opt. Commun. 50(6) 373-378.
E T Scharlemann et al. 1986 Comparison of the Livermore microwave FEL results at ELF with 2D numerical simulations Proceedings of the Seventh FEL Conference (FEL86) A250 150-158.
M J Hogan et al. 1998 Measurements of gain larger than 10⁵ at 12 μm in a self-amplified spontaneous-emission free-electron laser Phys. Rev. Lett. 81 4867-4870.
S V Milton et al. (submitted) Observation of self-amplified spontaneous emission and exponential growth at 530 nm.
J Andruszkow et al. First observation of self-amplified spontaneous emission in a free-electron laser at 109 nm wavelength DESY 00-066.
T Åberg et al. 1995 A VUV FEL at the TESLA Test Facility at DESY Conceptual Design Report DESY Print TESLA-FEL 95-03.
R Brinkmann, G Materlik, J Rossbach and A Wagner (eds) 1997 Conceptual Design of a 500 GeV e+e– Linear Collider with Integrated X-ray Laser Facility (DESY 1997-048 and ECFA 1997-182).
A Lawrence Berkeley National Laboratory team has succeeded in generating 300 fs pulses of synchrotron radiation at the ALS synchrotron radiation machine. The team’s members come from the Materials Sciences Division (MSD), the Center for Beam Physics in the Accelerator and Fusion Research Division and the Advanced Light Source (ALS).
Although this proof-of-principle experiment made use of visible light on a borrowed beamline, the laser “time-slicing” technique at the heart of the demonstration will soon be applied in a new bend-magnet beamline that was designed specially for the production of femtosecond pulses of X-rays to study long-range and local order in condensed matter with ultrafast time resolution. An undulator beamline based on the same technique has been proposed that will dramatically increase the flux and brightness.
The use of X-rays to study the course of solid-state phase transitions, the kinetic pathways of chemical reactions and the efficiency and function of biological processes on the fundamental timescale of a molecular vibration (about 100 fs) is an emerging field of research.
Ahmed H Zewail of Caltech was awarded the 1999 Nobel Prize for Chemistry for demonstrating how rapid laser techniques can reveal how atoms move during chemical reactions. Pump-probe methods, in which a pump pulse stimulates the process and a probe pulse examines it at intervals thereafter, are a common way of following the dynamics of ultrafast processes with infrared and visible lasers. However, there is a dearth of ultrafast X-ray sources to provide structural data on this timescale. The pulse length of synchrotron radiation, for example, is limited by the bunch length of the electron beam – about 30 ps at the ALS.
Ultrashort pulses
A solution to the bunch-length problem was described four years ago by Alexander Zholents and Max Zolotorev of the Center for Beam Physics. In short, a high-power femtosecond laser synchronized with the electron bunches passes collinearly with an electron bunch through an insertion device (undulator or wiggler) as in a free electron laser. The high electric field of the shorter laser pulse modulates a portion of the longer electron bunch, with some electrons gaining energy and some losing energy.
The condition for optimum energy modulation occurs when the laser wavelength matches the wavelength of the fundamental emission from the insertion device. Subsequently, when the energy-modulated electron bunch reaches a section of the storage ring with a non-zero dispersion, a transverse separation occurs, resulting in slices of the bunch roughly as long as the laser pulse. A collimator or aperture selects the synchrotron radiation from the displaced bunch slices.
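Schematically (standard beam-optics notation; the numerical scale is an assumption for illustration, not a figure quoted by the team), in a region of horizontal dispersion D_x an electron whose energy has been shifted by \Delta E is displaced transversely by

\Delta x = D_x\,\frac{\Delta E}{E},

so a laser-induced energy modulation of a few parts in a thousand is enough to push the modulated slice out of the core of the beam, which is what allows a collimator or aperture to select its light.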
The team led by MSD’s Robert Schoenlein implemented the time-slicing scheme by using a high-power titanium sapphire laser to modulate the electron beam in a 16 cm period wiggler already in straight section 5 of the 12-fold symmetric storage ring. Bend magnets between the wiggler and the beamline provide horizontal dispersion and the synchrotron radiation, and a test chamber on an existing bend-magnet beamline in the curved sector after straight section 6 records the femtosecond pulses (figure 1).
Schoenlein’s group verified the femtosecond time structure by imaging visible light from the beamline onto a nonlinear optical crystal along with a delayed 50 fs cross-correlation pulse from the laser system and then counting photons at the sum frequency as a function of delay between the modulating and the cross-correlation laser pulses. An adjustable knife edge located in the beamline at an intermediate image plane provided a means of selecting radiation from different transverse regions of the electron beam. In this way the team measured a dark 300 fs hole in the central cone of the synchrotron radiation and a bright 300 fs peak in the wing of the synchrotron radiation (figure 2).
This success was the result of a synergistic collaboration between two complementary groups at Berkeley working at the ultrafast science frontier – the Center for Beam Physics, headed by Swapan Chattopadhyay, and the Femtosecond Spectroscopy Group, led by Berkeley lab director Charles Shank. As part of a growing femtosecond X-ray science programme at the ALS, new beamlines are under construction and proposed under the leadership of Schoenlein and Roger Falcone of the University of California, Berkeley. A bend-magnet beamline, with an anticipated completion date of June 2000, has a performance goal of 100 fs pulses at a repetition rate of 5 kHz with a flux of about 10⁵ photons/s/0.1% bandwidth and a brightness of about 10⁸ photons/s/mm²/mrad²/0.1% bandwidth for photon energies up to 10 keV. A proposed undulator beamline would increase the flux and brightness by factors of about 100 and 10 000 respectively. An in-vacuum device, the planned undulator has a 5 mm gap, almost a factor of three smaller than the current smallest magnetic gap (14 mm) and nearly a factor of two smaller than the narrowest vacuum chamber (9 mm) in the ring. A vertical rather than horizontal dispersion would also be used. A complete mini-beta lattice with large vertical dispersion bumps is being designed to accommodate these features.
When CERN’s LHC collider begins operation in 2005, it will be the most powerful machine of its type in the world, providing research facilities for thousands of researchers from all over the globe.
The computing capacity required for analysing the data generated by these big LHC experiments will be several orders of magnitude greater than that used by current experiments at CERN, itself already substantial. Satisfying this vast data-processing appetite will require the integrated use of computing facilities installed at several research centres across Europe, the US and Asia.
During the last two years the Models of Networked Analysis at Regional Centres for LHC Experiments (MONARC) project, supported by a number of institutes participating in the LHC programme, has been developing and evaluating models for LHC computing. MONARC has also developed tools for simulating the behaviour of such models when implemented in a wide-area distributed computing environment.
This requirement arrived on the scene at the same time as a growing awareness that major new projects in science and technology need matching computer support and access to resources worldwide.
In the 1970s and 1980s the Internet grew up as a network of computer networks, each established to service specific communities and each with a heavy commitment to data processing.
In the late 1980s the World Wide Web was invented at CERN to enable particle physicists scattered all over the globe to access information and participate actively in their research projects directly from their home institutes. The amazing synergy of the Internet, the boom in personal computing and the growth of the Web grips the whole world in today’s dot.com lifestyle.
Internet, Web, what next?
However, the Web is not the end of the line. New thinking for the millennium, summarized in a milestone book entitled The Grid by Ian Foster of Argonne and Carl Kesselman of the Information Sciences Institute of the University of Southern California, aims to develop new software (“middleware”) to handle computations spanning widely distributed computational and information resources – from supercomputers to individual PCs.
In the same way that the World Wide Web makes information stored on a remote site immediately accessible anywhere on the planet without the end user having to worry unduly where the information is held and how it arrives, so the Grid would extend this power to large computational problems.
Just as a grid for electric power supply brings watts to the wallplug in a way that is completely transparent to the end user, so the new data Grid will do the same for information.
Each of the major LHC experiments – ATLAS, CMS and ALICE – is estimated to require computer power equivalent to 40 000 of today’s PCs. Adding LHCb to the equation gives a total equivalent of 140 000 PCs, and this is only for day 1 of the LHC.
Within about a year this demand will have grown by 30%. The demand for data storage is equally impressive, calling for several thousand terabytes – more information than is contained in the combined telephone directories for the populations of millions of planets. With users across the globe, this represents a new challenge in distributed computing.
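The quoted figures can be tallied in a few lines (purely illustrative arithmetic on the numbers stated above):

```python
# Tally of the LHC computing-capacity figures quoted above (illustrative).
pcs_per_big_experiment = 40_000       # ATLAS, CMS and ALICE each
day1_total = 140_000                  # quoted total once LHCb is included

lhcb_share = day1_total - 3 * pcs_per_big_experiment
print(f"Implied LHCb share: {lhcb_share:,} PC-equivalents")

after_one_year = day1_total * 1.3     # roughly 30% growth within about a year
print(f"Demand after one year: about {after_one_year:,.0f} PC-equivalents")
```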
For the LHC, each experiment will have its own central computer and data storage facilities at CERN, but these have to be integrated with regional computing centres accessed by the researchers from their home institutes.
CERN serves as Grid testbed
As a milestone en route to this panorama, an interim solution is being developed, with a central facility at CERN complemented by five or six regional centres and several smaller ones, so that computing can ultimately be carried out on a cluster in the user’s research department. To see whether this proposed model is on the right track, a testbed is to be implemented using realistic data.
Grid-oriented initiatives have been launched in several nations – in the US by NASA and the National Science Foundation, while in Europe particle physics provides a natural focus for work in the UK, France, Italy and Holland, among others. Other areas of science, such as Earth observation and bioinformatics, are also on board.
In Europe, European Commission funding is being sought to underwrite this major new effort to propel computing into a new orbit.
During the past 50 years, high-energy accelerators have not only become major research tools for nuclear and particle physics, but also influenced many other fields of science and industry by providing a powerful source of synchrotron radiation and other beams. New accelerator concepts have been the key to both an increased understanding of nature via fundamental research and the growing application of accelerators and accelerator techniques in other fields. It is therefore important to continue to develop new accelerators and to maintain accelerator expertise worldwide.
However, the size and cost of future large accelerators will most likely outstrip the resources of a single region, and building them will require a new approach. One way is via the framework of an international collaboration. A collaboration for a major accelerator facility must meet the following challenges:
* maintain and nurture the scientific culture of the participating laboratories;
* maintain the visibility and vitality of each partner.
Furthermore, all participating countries must be willing to invest and to commit themselves through long-term agreements. The proposed solution is a Global Accelerator Network (GAN).
Scientists and engineers from laboratories and research centres around the world could form a network to integrate their scientific and technical knowledge, ideas and resources, and focus them on a common project – a merger of worldwide competence.
The GAN would allow participating institutes to continue important activities at home while being actively engaged in a common project elsewhere. All of the participants could demonstrate a visible level of activity, thus maintaining a vital community of scientists and engineers, and attracting students to the field of accelerator research and development. Last but not least, the network approach could ease the thorny problem of site selection for new large accelerator facilities.
The approach is based on the substantial experience gained in the construction and operation of large particle physics experiments at the LHC and LEP (CERN), HERA (DESY) and Fermilab’s Tevatron. In these projects, multinational teams, motivated and united by a common research goal, share the responsibilities of a large experiment. In this way, many groups, mostly from universities, become technically and financially responsible for the design, construction, operation and understanding of parts of the detector, which may be small but are nevertheless vital to the success of the experiment. Much of this work is done at the home institutes.
These experiments have continually grown, with those for CERN’s LHC collider being comparable in manpower terms with major laboratories, and in complexity with large accelerators.
On the other hand, most accelerators so far have been built and are operated by only one laboratory. An important exception is the HERA electron-proton collider at DESY, where major accelerator components were designed and built in laboratories in other countries. However, once installed, responsibility for their operation and maintenance was handed over to the host laboratory, DESY. The LHC at CERN has evolved along similar lines.
In the GAN framework, new accelerator facilities, as well as experiments and beamlines for synchrotron radiation, would be designed, built and operated by an international collaboration of “partner” laboratories and institutes.
The machine would be built at an existing laboratory – the “host” – to capitalize on available experience, manpower and infrastructure. The host state would have to underwrite a major part of the finance and to make a clear commitment to support the project throughout its duration. (In the case of CERN, the host states are not the principal sponsors of international facilities built on their territory – the organization as a whole is responsible.)
Each partner would take responsibility for certain components of the project, designed, built and tested at home before being delivered to the host site. This responsibility would be maintained even after delivery. Component maintenance, operation and development would be carried out as much as possible from the home institutes, using modern communications technology. For this the partners would need to maintain duplicates of accelerator components for testing, checking and development. In some institutions, “copies” of the accelerator control room could even provide for highly efficient round-the-clock operation. At the host site, a core team, under guidance from all partners, would provide the necessary on-site technical support.
Sharing the cost
With a GAN, major capital investment and operation funding would be taken up within the partner states. Operational costs (mainly electricity), excluding manpower, would be shared by all partners according to a predefined arrangement. Most manpower would remain in the partner institutions, except during periods of installation and overhaul, and during collaboration meetings.
Details of the collaboration and management structures, together with the exact sharing of responsibilities between partners and the host, have yet to be worked out, but examples can surely be found within existing arrangements.
Remote control and diagnostics, allowing off-site partners to take an active part in on-site operation, are the key GAN features. While this would be an innovation for accelerators, substantial experience already exists worldwide in the remote operation of large technical installations.
In major particle physics experiments, sub-detectors are frequently monitored and run remotely. A synchrotron radiation facility in Hiroshima, Japan, is operated under remote control from Tokyo. Large telescopes for astronomy are operated remotely – experiments on satellites and on distant planets are routinely operated from control centres on Earth. In industry, remote diagnostics and operation have become standard, even in nuclear power plants.
Many technical issues, including hardware- and software-related items, such as multiple control rooms, modular components and spare parts, standardization of systems and software, common data bases, common documentation, optimal communication and adequate protection against unauthorized access, are examined in an initial proposal (Willeke et al. 1999).
The financial implications of a GAN now need to be appraised, especially to understand the additional costs resulting from remote operation. Several human aspects are also involved. How can the desired “corporate identity” be attained? How much manpower is needed at the host site and at the partner institutes? What scientific sociology will emerge? Many of these issues resemble those that have already been encountered in large experiments, which will serve as useful role models.
Whatever the challenges, a GAN could provide the framework for the construction and operation of future large accelerators, which would otherwise be impossible to realize. As a first step, the International Committee for Future Accelerators (ICFA) has set up a task force to study the model and its implications.
Some 40 years since it was first recognized that the positively charged muon could be used as a local microscopic probe of condensed matter, the application of muons in this field has grown from the exotic hobby of some particle physicists in the late 1950s and early 1960s into an established and mature technique.
Polarized positive muons, brought to a stop inside materials, precess in the local magnetic fields. This muon spin rotation (μSR) method competes with and complements approaches like neutron scattering, Mössbauer spectroscopy, nuclear magnetic resonance (NMR) and electron paramagnetic resonance, to form the arsenal of modern experimental tools in condensed matter research.
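The quantity at the heart of the method is easy to state (a standard μSR relation, added here for orientation): a muon stopped in a local magnetic field B precesses at the Larmor frequency

\omega_\mu = \gamma_\mu B, \qquad \frac{\gamma_\mu}{2\pi} \approx 135.5\ \mathrm{MHz\,T^{-1}},

and because the decay positron is emitted preferentially along the muon spin, the time spectrum of the positrons maps out the magnitude and distribution of the internal fields at the muon site.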
With interest continually growing, the μSR community has become the largest user group at the meson factories of PSI in Switzerland and TRIUMF in Canada, and it now shares equal status with the neutron scatterers at the ISIS facility of the UK’s Rutherford Appleton Laboratory.
Continuing in the tradition that was established in 1978 in Switzerland, and followed up in Vancouver (1980), Shimoda (1983), Uppsala (1986), Oxford (1990), Maui (1993) and Nikko (1996), the 8th International Conference on Muon Spin Rotation, Relaxation and Resonance (μSR 1999) was held in its country of origin. Close to 180 physicists and physical chemists gathered in the beautiful alpine setting of Les Diablerets to discuss and learn of the latest applications of positive and negative muons in condensed matter research and physical chemistry.
Condensed matter contributions
More than 200 original contributions were presented (orally and in poster form). Each session was opened by one of eight invited plenary speakers from outside the μSR community, thereby providing a link to the condensed matter community as a whole.
Superconductivity, and in particular high-temperature superconductors and their discovery, was a key feature of a talk given by Nobel laureate K A Müller (Zürich), who emphasized that their discovery was the result not of pure luck but of well-focused, years-long research into material properties and their understanding.
Techniques such as NMR, closely related to μSR, are now being employed to study high-Tc superconductors. These exciting studies were outlined by C P Slichter (Urbana-Champaign).
New results from μSR studies of non-superconducting materials, such as the cuprate La1-xSrxCuO4, revealed a huge isotope effect. This oxygen isotope effect manifests itself by dramatically increasing, by up to 80%, the spin-glass transition temperature (a phase transition) when oxygen-16 is replaced by oxygen-18, thus pointing to a strong electron-phonon coupling in cuprates that in turn is also expected to govern the Cooper-pairing mechanism in superconducting compounds.
A second new result was the first unambiguous discovery of a spontaneous magnetic field signalling the onset of superconductivity in Sr2RuO4. This result implies that time-reversal invariance is broken, as in the case of a ferromagnet, and that the Cooper pairs are in a different spin state compared with conventional or high-Tc superconductors.
Magnetic moments
Magnetism ranked high in popularity, with nearly half of the contributed papers focusing on the subject, undoubtedly reflecting the fact that the positive muon is itself a magnetic probe. X-rays and neutrons are also now used as complementary probes to muons in magnetism, as demonstrated in a talk by G H Lander (EC-JRS-ITE, Karlsruhe).
A new class of magnetic materials – molecular magnets, made up of either purely organic or inorganic molecules or a mixture of both – together with molecular clusters and magnetic nanoparticles, is currently under study with muons. Molecular clusters exhibit large spins and their magnetization relaxation may be governed by magnetic quantum tunnelling. A first observation of this, in CrNi6 and CrMn6, was presented at the conference and was seen as a highlight in this field. New opportunities for μSR in studying such systems were presented by D Gatteschi (Florence).
Another subject of intense study concerns lower-dimensional magnetic systems. Some materials, such as SrCu2O3, have their magnetic moments aligned in parallel-running chains, forming a ladder-like structure, where the spins combine into non-magnetic spin singlets. By creating couplings between the so-called ladders, or doping these materials with non-magnetic species, one can drive these systems into a long-range magnetically ordered state, and this has indeed been verified. In one example it even appears as if the presence of the muon can break up the spin-singlet pairs.
The muon can also be thought of as a light proton isotope, and it has been used as such to study the structural and dynamic features of materials, including the determination of the site of the implanted muon. Muonium, which is a hydrogen-like quasi-atom (μ+e–), has been used extensively as a probe in μSR studies of the behaviour of hydrogen in semiconductors, hydrogen being a common impurity affecting the electronic properties (e.g. the passivation of donors or acceptors) in these substances. A session devoted to muons in semiconductors was opened with a talk by B Bech-Nielsen (Aarhus) on vacancy-hydrogen defects in silicon.
As far as the role of muons as light proton isotopes in metal hosts is concerned, current investigations are part of the extensive and technologically relevant research on hydrogen-metal systems, as pointed out in a talk by P Vajda (Palaiseau) on hydrogen ordering and magnetic phenomena in metal-hydrogen systems.
Two new highlights in this area, both concerning the quantum nature of muon or muonium diffusion in matter, were reported. The first, found independently by two groups, was the observation of a local muon tunnelling state in two different materials. The second showed that muonium can travel in the form of a Bloch wave, propagating in a band-like state at very low temperatures (below 10 mK) – something that many had considered impossible.
New results on the formation of muonium in liquids and solids seem to suggest that the formation happens mainly after the thermalization of the implanted muon, followed by capture of a free electron created by ionization near the end of the muon track, also termed delayed muonium formation. The liberated electrons were found to be located downstream of the stopped muon, indicating that the initial momentum direction is largely conserved during the slowing-down phase.
The step from muonium formation to muonium chemistry is only a short one. Here the muon can, for example, be used as a polarized spin label in physical chemistry. Muonic radicals (muon-containing molecules, each with an unpaired electron) have also been used to investigate materials, and studies have been made of molecular dynamics and intermolecular electron transfer in systems of biological interest.
A session on instrumentation and techniques, devoted to new ways of using muons, showed that the development of low-energy muon sources in the 10 eV – 20 keV range, and in particular the successful implementation of such a source at PSI together with the first applications, opens up new possibilities for the study of thin films and multilayers.
The complementary use of spin-polarized beta-radioactive nuclei, as produced at ISOLDE (CERN) and soon to be produced at ISAC (TRIUMF), was described by R F Kiefl (Vancouver). Positron spin relaxation, inspired by μSR, was presented by J Major (Stuttgart).
Future facilities
New developments also bring a push for new facilities. An evening session devoted to further developments and new facilities gave an overview as well as allowing discussion. In particular, plans for muon sources at the Japanese KEK-JAERI accelerator complex (most likely to be realized), the neutron spallation source at Oak Ridge and the second planned spallation target of the Rutherford Appleton Laboratory ISIS facility were presented.
Three devil’s advocates – Yuri Kagan (Moscow), John Mydosch (Leiden) and Roderick Wasylishen (Halifax) – provided friendly and sometimes critical feedback throughout the sessions, culminating in their summaries at the end of the meeting.
In recognition of his important role in the development of μSR at the venerable 184 inch cyclotron at the Lawrence Berkeley Laboratory in around 1970, Kenneth M Crowe was guest of honour. Many groups can trace their roots back to those exciting days in Berkeley.
The conference, under the patronage of the European Physical Society, was organized by A Schenck, conference chairman (ETH Zürich); E Roduner, chairman of the programme committee (Stuttgart); G Solt, conference secretary (PSI); and a local organizing committee. It was supported by PSI, the Swiss National Fund, ETH-Zürich, the University of Zürich, the European Science Foundation through the FERLIN Programme and industrial and private sponsors.
μSR 2002 will take place in Richmond, Virginia, with C Stronach (Virginia State) as conference chairman and Y J Uemura (Columbia) as programme committee chairman.
Where do cosmic rays come from? What mechanisms can accelerate particles to ultrahigh energies? What happens around a supermassive black hole? What kind of phenomenon is responsible for the mysterious gamma ray bursts? Have dark matter particles gathered in the centre of the Earth or of the galaxy? These questions and many others now motivate ambitious experiments in high-energy astrophysics. Astronomy with high-energy neutrinos may soon be able to answer some of these questions and drastically change our point of view on the universe.
If one wants to understand the sources of high-energy cosmic rays, one faces a fundamental difficulty: at high energy the traditional messenger, the photon, is absorbed by the radiation and matter that it encounters along its path from the sources to the detectors – the universe is actually opaque to high-energy gamma rays. Primary cosmic rays are electrically charged so they are deflected by galactic and intergalactic magnetic fields, and it is hard to identify their sources. To look at the high-energy sky, one is thus forced to use another messenger that is electrically neutral and basically insensitive to obstacles. The neutrino is the ideal messenger.
However, because of these essential qualities, neutrinos are also difficult to detect: one needs a big target to catch them – bigger than the already huge existing underground detectors. One also needs efficient shielding against background due to penetrating cosmic rays. The adopted solution is to equip large volumes of a natural medium that is transparent to visible light with an array of photodetectors. These electronic eyes can then detect the wake of Cerenkov light emitted by the charged particles induced by neutrino interactions in or near the detector. This almost 40-year-old idea has led to projects to construct cosmic neutrino observatories under the sea.
One of these – the ANTARES project, a deep-sea neutrino telescope in the Mediterranean Sea – is expected to play a pioneering role in this new area of astronomy. (The original Antares is the star Alpha Scorpii, the brightest star in Scorpio and so named because it was believed to be an opponent of Mars/Ares.)
The DUMAND project to mount a major detector in the Pacific near Hawaii, long a feature of physics conferences, was unfortunately abandoned. However, the baton has also been taken up by the AMANDA and RICE projects in the Antarctic ice, the RAND Antarctic scheme, the Lake Baikal detector in Siberia and the Greek-led NESTOR project.
The Italian NEMO project is evaluating deep-sea sites and carrying out environmental studies off the coasts of Italy and Sicily.
Since 1996 the physicists and engineers of the ANTARES project have assessed the feasibility of a deep-sea telescope. Three years of intense R&D went into understanding the deep-sea environment and solving the technical challenge of deploying a complex and large piece of equipment in the Mediterranean Sea. The collaboration has now begun the construction of a 300 m² detector off the French Mediterranean shore.
ANTARES is now a CERN “Recognized experiment”. The collaboration consists of 16 particle physics institutes, 2 marine science institutes and an astronomy institute from France, Italy, the Netherlands, Russia, Spain and the UK.
Neutrinos as cosmic messengers
Until now, apart from a handful of events induced by a nearby supernova in 1987, no neutrino of cosmic origin has been identified. High-energy source candidates like cores or jets of active galactic nuclei, or the extremely violent cosmic events that are thought to be responsible for frequent gamma-ray bursts, are increasingly being observed and becoming better understood, but their activity at high energy remains mysterious: do they produce high-energy cosmic rays or are they simply gamma-ray emitters?
If high-energy protons are accelerated in these sources, they must also emit neutrinos. On the other hand, if these high-energy cosmic accelerators cannot fulfil the role of producing the observed cosmic-ray spectrum, then very-high-energy cosmic rays (and neutrinos) must be created by non-acceleration mechanisms, such as the decay of massive relic particles from the Big Bang.
Furthermore, neutrinos could reveal the presence of dark matter in the form of neutralinos, the lightest of the yet undiscovered particles predicted by supersymmetry theories. These would have been formed in the early universe and would accumulate in the cores of stars, galaxies and planets, from where they would “shine” as neutrinos.
Last but not least, as recent history has shown, astrophysics experiments permit the exploration of domains beyond the reach of accelerator experiments: studying the fluxes of atmospheric neutrinos, the Super-Kamiokande and MACRO experiments have been able to test very small values of the mass difference between two types of neutrinos. The observation of the same phenomenon is also possible with large neutrino telescopes using the Earth’s diameter as a baseline and at higher energy, thus with very different systematics. It would thus be possible to confirm or rule out the recent evidence for neutrino masses and to measure neutrino oscillation parameters.
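In the usual two-flavour approximation (a textbook expression, quoted here only to indicate the sensitivity), the oscillation probability is

P(\nu_\mu \rightarrow \nu_\tau) = \sin^2 2\theta\,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),

so with the Earth’s diameter (L of about 1.3 × 10⁴ km) as the baseline, the small Δm² values suggested by the atmospheric-neutrino results imprint themselves on upgoing muon neutrinos at energies of tens of GeV – the lower end of a large telescope’s range, with systematics very different from those of underground detectors.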
Neutrinos can only be detected through their interactions with matter. As they interact very weakly, massive targets are necessary. High-energy muons produced via such interactions are electrically charged and very penetrating, with a trajectory that is almost aligned with that of the parent neutrino.
When they traverse a transparent medium such as seawater, these relativistic particles produce a wake of Cerenkov light. The time development of this luminous wake allows physicists to reconstruct the direction of the muon, and thus that of the neutrino, with a precision better than a fraction of a degree, and thus pointing back to the celestial object that emitted the neutrino.
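The geometry underlying this reconstruction is the Cerenkov relation (standard optics; the refractive index below is an approximate value for seawater, assumed for illustration):

\cos\theta_c = \frac{1}{n\beta} \;\Rightarrow\; \theta_c \approx 42^\circ \quad (n \approx 1.35,\ \beta \approx 1),

so each photomultiplier hit constrains the muon track to a cone of known opening angle, and the arrival times of the light across the three-dimensional array pin down the track direction.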
High-energy cosmic rays that bombard the upper atmosphere generate a flux of high-energy muons, which constitute a significant background to other terrestrial measurements, even under several thousand metres of water. Only neutrinos are capable of traversing the Earth, and a chance interaction immediately afterwards produces an upgoing muon beneath the detector. The atmospheric muon background is removed by selecting only these upgoing tracks. Thus the Earth itself is used as a neutrino target as well as providing a shield to filter background muons. The detectors will therefore be looking towards the bottom of the sea.
A cosmic neutrino telescope will consist of a three-dimensional array of light sensors (photomultipliers) covering an effective volume of the order of a cubic kilometre at a depth of a few thousand metres. In a first stage, a smaller telescope will give the first indications of cosmic neutrino sources and fluxes and will be used to validate the methods of this new type of astronomy.
An initial requirement is to explore the unusual environment and study the design and deployment of a large deep-sea detector. A three-year step-by-step R&D programme was initiated in 1996 to measure the deep seawater properties and find a suitable site.
Another goal was to bring together and master the marine technologies required to deploy, connect and operate large and complex pieces of offshore apparatus in a deep-sea environment. In parallel, studies were carried out to optimize detector performance.
Autonomous instrumented mooring lines were used to study the properties of the seawater – transparency and light scattering, optical background due to natural radioactivity and living organisms, and the bio-fouling of optical surfaces. The results were very encouraging – light can travel over more than 50 m without noticeable alteration. In particular, the small amount of large-angle scattering promises good angular accuracy.
To study mechanical solutions and to test deployment and recovery procedures, a full-size detector line was built and was immersed and recovered several times during 1998. This line was 350 m high and consisted of 32 optical modules (glass spheres designed to protect the photomultipliers against water pressure). It was also instrumented with acoustic and electronic sensors to measure its position to within 10 cm.
In a second stage the line was equipped with eight photomultipliers and deployed at a depth of 1100 m off Marseille in November 1999. It was linked to the shore by a 37 km submarine electro-optical cable. Atmospheric muon data analysis is under way. About one muon every 10 s is recorded and reconstructed. Position monitor analysis indicates an accuracy of a few centimetres on the reconstructed position of individual photomultipliers.
The future telescope will need to connect individual lines to a central node in situ. Deep-sea connection tests have therefore been carried out using the Nautile, a deep-sea submarine belonging to IFREMER. Connections were successfully made and electrically validated at a depth of 2400 m in December 1998.
With site quality and technical feasibility established, the project is entering a new phase. Within three years a detector will be installed consisting of 1000 photodetectors on 13 lines, each 400 m high, distributed over an area 150 m in radius. It will be deployed at a depth of 2400 m off Toulon and should provide an effective area after reconstruction of 0.1 km² for 10 TeV muons and an angular resolution of 0.2° for muons above 10 TeV, and it will enable the direction and energy of neutrino-induced muons to be reconstructed over a wide range.
This detector will open a new era of cosmic neutrino astronomy and will also provide practical experience and expertise which will be invaluable for the realization of a future large-scale detector capable of conducting a search for astronomical sources.
Some accelerators come and go; others come to stay. One of the stalwarts is the MIT-Bates electron linear accelerator. A symposium at MIT last November marked the 25th anniversary of the first publishable data on high-resolution electron scattering and photonuclear reactions at this remarkable machine.
Representatives from the Bates community and from universities and sister laboratories where electronuclear research is being undertaken took part. Speakers summarized where the field stands now, how it got there and where it might lead.
Talks from experimentalists covered the current state-of-the-art research at Bates and other laboratories, while theorists focused on some of the issues that arise in electroweak studies of nuclear and hadronic structure. The symposium aimed to underline the relationships between nuclear structure studied with electrons, the structure of few-body nuclei, nucleon structure, hadronic structure and parity-violating electron scattering.
From the excitement conveyed in the talks, it is clear that each of these areas is receiving a lot of attention at all of the active laboratories and that the near future will continue to reflect intense activity. A notable feature at Bates has been the central position of the users, many of whom were present at the symposium, and in particular the enthusiasm and talent of generations of graduate students.
The decisive roles played by technological developments in the evolution of this field, and in particular at Bates, were highlighted.
In the earliest days, the development of dispersion-matching and energy-loss spectroscopy produced spectroscopic data with resolutions of better than 10⁻⁴ using the entire accelerator beam, the spectrum of which was spread over 10⁻². In this era a group from the University of Massachusetts designed and built a magnetic chicane that extended the angular range to 180°, giving better access to magnetic transitions in complex nuclei.
Shortly after the pioneering experiments at SLAC in 1978 on parity violation in deep inelastic scattering, a Yale group proposed a similar experiment at Bates for which they designed and built a polarized electron injector. This ultimately produced a polarized electron beam of 60 mA and measured a parity-violating asymmetry in elastic scattering from carbon-12 of less than 1 ppm.
A succession of experiments using polarized electrons has followed, so that today most of the experimental programme uses spin-oriented electrons. Of note in the past few years have been the very successful data-taking runs on parity-violating electron scattering from hydrogen and deuterium (SAMPLE), the goal of which has been to determine the magnetic strangeness content of the nucleon.
Spin physics in general has been a growing theme at Bates. Quasielastic scattering from polarized helium-3 has been carried out on targets based on spin exchange with optically pumped rubidium (Harvard and Michigan) and/or optically pumped helium-3 with metastability exchange (Caltech).
Spin transfer and induced polarization have been studied in hydrogen and deuterium using either a proton polarimeter (Virginia, William & Mary, MIT) or a neutron polarimeter (Kent State). Tensor recoil polarization in elastic scattering off deuterons used polarimeters designed and built at Argonne, Alberta and Saclay.
These developments have been complemented by a challenging target development programme. Notable cryogenic targets were about 50 l of tritium for elastic and quasielastic scattering (MIT and Saskatchewan), 50 bar of helium-3 at 24 K (Massachusetts) and a 40 cm liquid-hydrogen target dissipating 500 W of beam power (Caltech). Many of these projects have had implications that are continuing not only at Bates and other medium energy labs but also at SLAC, DESY (HERMES) and the Jefferson Laboratory.
Bates and the Out of Plane Spectrometer (OOPS) collaboration (Illinois, MIT, Arizona and Athens) have recently completed the construction of an array of four proton spectrometers that can be moved with great precision out of plane, which, when combined with polarized electrons, will provide access to electronuclear response functions that are otherwise difficult or impossible to obtain.
Commissioning and data-taking starts this spring and a time-stretcher ring will be used for part of this run to deliver high-duty factor extracted beams. In addition, looking to the near future and new initiatives, the lab is actively constructing a large acceptance detector (BLAST) to be used with polarized internal targets in the electron storage ring. Commissioning is expected in 2001 and the community is anticipating an exciting programme of nuclear/hadronic structure studies with electrons at Bates.
These and other technical developments provide a powerful handle on the electronuclear S matrix. Experimental work has just begun on a selection of the more promising observables, and some important new results were presented at the symposium, with many more expected in the near future.
The symposium proceedings will soon be published by the American Institute of Physics.
CERN’s flagship accelerator, the 27 km Large Electron Positron collider (LEP), began its final year in fine style in April, colliding beams at a record 104 GeV per beam just three weeks after start-up. With the search for the elusive Higgs particle at the top of the LEP physics agenda for 2000, high-energy running is receiving maximum attention.
LEP’s full complement of superconducting accelerating cavities, all running at their maximum design gradient, gives the machine an energy reach of 96 GeV per beam. To reach the magic figure of 100 GeV in 1999, LEP’s engineers had to push most of the cavities to 7 MV/m, more than 16% beyond the design gradient.
To extract yet higher energies this year has required some ingenuity on the part of the LEP team. Eight of the old (normal conducting) copper cavities, which provided LEP’s energy for the years that it ran at around 50 GeV per beam, have been reinstalled. The superconducting cavities are being pushed still further. Magnets designed to provide small corrections to beam orbits will also be used to reduce the amount of energy lost by the beams as they travel round the ring.
All this adds up to a high-risk strategy – there is literally no spare capacity left in the machine – but the rewards could be high. Results from LEP have already tied down the mass of the Higgs particle to the 108-190 GeV range. With collisions at 104 GeV per beam, the experiments are sensitive to a Higgs mass of up to about 115 GeV. Before the machine is switched off to make room for the LHC proton collider, the least that can be expected from LEP is a smaller mass range for future physicists to aim for. The best might be a major discovery to crown LEP’s already illustrious career.
LEP is treading on fertile physics ground. According to orthodox physics, the Higgs should be lighter than about 200 GeV (see figure). The minimal supersymmetric model requires at least one Higgs particle lighter than 1.5 times the mass of the Z – less than about 135 GeV.
The new Isotope Separator and Accelerator (ISAC) is now operational at the Canadian TRIUMF laboratory, producing intense beams of short-lived, exotic nuclei. A major component in the laboratory’s programme over the next five years will be the upgrade of ISAC to ISAC-II, raising the energy from 1.5 to 6.5 MeV/nucleon and extending the mass range, which will enable many more exotic isotopes and nuclear reactions to be studied.
At the laboratory on 18 April for the formal opening of ISAC, Canadian industry minister John Manley announced that the Canadian federal government will give $200 million to TRIUMF over the next five years. The money – $40 million per year over five years – represents a 20% increase from the federal government. As well as allowing TRIUMF to build ISAC-II, the funding will also enable a second phase of the Canadian contribution to CERN’s LHC project, including the provision of the warm twin-aperture quadrupoles for the two beam-cleaning insertions, and the resonant charging power supplies and the pulse-forming networks for the injection kickers.
Further funding for ISAC-II civil construction is expected from the province of British Columbia. The promise of ISAC-II has already drawn young Canadians back to Canadian universities and brought equipment, previously built in Canada, back into the country from US facilities. It has also attracted experiments from the US and Europe.