The four detector groups conducting research at the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory have announced results indicating that they have observed a state of hot, dense matter that is more remarkable than had been predicted. In papers summarizing the first three years of RHIC findings, to be published simultaneously by the journal Nuclear Physics A, the four collaborations (BRAHMS, PHENIX, PHOBOS and STAR) say that instead of behaving like a gas of free quarks and gluons, as was expected, the matter created in RHIC’s heavy-ion collisions appears to be more like a liquid.
The evidence comes from measurements of unexpected patterns in the trajectories of the thousands of particles produced in individual collisions. The primordial particles produced tend to move collectively in response to variations of pressure across the volume formed by the colliding nuclei – an effect known as “flow”, since it is analogous to the properties of fluid motion.
However, unlike ordinary liquids, in which individual molecules move about randomly, the hot matter at RHIC seems to move in a pattern exhibiting a high degree of coordination among the particles.
This flow is consistent with that of a theoretically “perfect” fluid with extremely low viscosity and the ability to reach thermal equilibrium very rapidly because of the high degree of interaction among the particles. The physicists at RHIC do not have a direct measure of the viscosity, but they can infer from the flow pattern that, qualitatively, the viscosity is very low, approaching the quantum mechanical limit.
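The "quantum mechanical limit" mentioned here is usually expressed as a conjectured lower bound on the ratio of shear viscosity to entropy density, derived by Kovtun, Son and Starinets from string-theory arguments; a fluid that saturated it would be "perfect" in this sense:

```latex
% Conjectured lower bound on shear viscosity per unit entropy density
\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_{B}}
```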
Technology transfer promotes the injection of science into all levels of daily life in many different ways. For example, nobody would ever have thought that a phenomenon based on quantum theory – quantum entanglement – would find practical applications in cryptography, computing and teleportation, and lead to the creation of companies to safeguard the sharing of information. High-energy particle physics stimulates innovative technological developments. In the quest to find out what matter is made of and how its different components interact, high-energy physics needs highly sophisticated instruments in which the technology and required performance often exceed the available industrial know-how. Thanks to the technologies developed for its research activities, CERN has produced improvements in a variety of fields, many of which are described in a new publication that illustrates the effectiveness of technology transfer between the organization and industry.
Since its creation in 1954 CERN has had a tradition of partnership with industry and making its technologies available to third parties. Many of CERN’s users come from distant locations and would like as much as possible to analyse data from their experiments in their home institutions. This led to the development of data networks between CERN and these institutes. As a result CERN became one of the major hubs of the European scientific data network, and with hindsight it is in a way natural that it was the birthplace of the World Wide Web. Furthermore, the major technology conferences and exhibitions that CERN has often organized – the first took place in 1974 – have been important occasions for establishing relationships between CERN and industry. However, up to the 1980s, except for the protection of computer software through a copyright statement, there was no structure in the laboratory to support an innovation policy.
During the first 30 years of its life, CERN did not use intellectual-property protection, such as patents. Its policy was “publish or perish”, rather than “protect, publish and flourish”. Furthermore, the conventional model of technology transfer was via purchasing contracts, which required frequent interaction between industry and CERN owing to the highly innovative equipment concerned. The contracts and the financial rules required competitive bidding, with the award going to the lowest offer – a process that is not well adapted to collaborative agreements aimed at technology transfer. Then in 1984, when planning for the Large Hadron Collider (LHC) began, CERN recognized the need for strong involvement of industry even at the initial R&D stage, given the magnitude and technical complexity of the project.
In 1986 the relations between CERN and industry were analysed and two years later its member states encouraged the organization to take a more proactive attitude towards technology transfer. This was formalized with the establishment of the Industrial Technology Liaison Office – the beginning of a technology-transfer strategy at CERN. The call for technology for the development of the LHC detectors, launched in 1991, was another opportunity to reinforce the relationships between CERN and industry. At the same time, greater value was placed on protecting the intellectual property generated by the laboratory’s activities, a commitment endorsed by the creation of a Technology Transfer Group.
This means that CERN now has another way to fuel technical innovations in the industries of its member states, beyond the conventional method of procurement. The proactive model, facilitated by the endorsement of a technology-transfer policy in 2000, enables CERN to identify, protect, promote, transfer and disseminate innovative technologies in the European scientific and industrial environment. Once the technology and intellectual property have been properly identified and adequately channelled (that is to say, protected by the appropriate means), they enter a promotional step intended to attract external interest and to prepare the ground for targeted dissemination and implementation.
The dissemination and exploitation of CERN’s technologies are at the heart of the technology-transfer process. In addition to the conventional licensing model for transferring the technology, there is a policy of R&D partnership, which aims to promote CERN’s technology more quickly and to further its dissemination outside particle physics. This type of transfer requires a large investment for the development of a specific product, so tangible financial results are uncertain.
A key technology for the next generation of heavy-ion accelerators will be a powerful, high-charge-state heavy-ion injector that provides an ion-beam intensity an order of magnitude higher than is currently achievable. In addition, future facilities – such as the proposed Rare Isotope Accelerator (RIA) in the US, the Radioactive Ion Beam Factory at RIKEN in Japan, and the project to upgrade the facility at the Gesellschaft für Schwerionenforschung (GSI) in Germany – will demand a high flexibility in the species of ions available for experiments that may last several weeks. High-performance electron cyclotron resonance (ECR) ion sources routinely produce beams of ions ranging from hydrogen to uranium, thereby providing the necessary reliability and flexibility. However, to meet the requirements for high currents, a new generation of ECR ion source will be needed.
The Versatile ECR Ion Source for Nuclear Science (VENUS), designed and built at the Lawrence Berkeley National Laboratory (LBNL), is the most advanced superconducting ECR ion source and the first “next-generation” source in operation (figure 1). It is the first fully superconducting ECR ion source that reaches magnetic-confinement fields sufficient for optimum operation at 28 GHz. Recently the project passed a major milestone with the successful coupling of 28 GHz microwaves into the plasma of the ion source. Preliminary tests at this frequency have already resulted in record intensities for beams of medium and highly charged ions. The results indicate for the first time that the high demands of the next generation of heavy-ion accelerators can be met.
The development of ECR ion sources has its roots in fusion plasma research in the late 1960s. The principle is to use magnetic confinement and ECR heating to produce a plasma made up of energetic electrons and relatively cold ions. Figure 2 shows the main ingredients of an ECR ion source: magnets for plasma confinement, microwaves for ECR heating, and gas to create and sustain the plasma. For high-charge-state sources the magnetic confinement consists of an axial magnetic-mirror field superimposed by a radially increasing sextupole (also called hexapole) or other multipole magnetic field. The combination of the axial mirror field and the radial multipole field produces a “minimum-B” configuration, in which the magnetic field is at a minimum at the centre of the device and increases in every direction away from the centre, providing stable plasma confinement.
The electrons, which are heated resonantly by microwaves, produce the high-charge-state ions primarily by sequential impact ionization of atoms and ions in the plasma. The ions and electrons must be confined for long enough for this sequential ionization to take place. In a typical ECR ion source, ions need to be confined for about 10⁻² s to produce high-charge-state ions. The ionization rate depends on the plasma density, which typically ranges from about 10¹¹ cm⁻³ for low-frequency sources to more than 10¹² cm⁻³ for the highest-frequency sources. Charge exchange with neutral atoms must be minimized, so operating pressures are typically 10⁻⁶ mbar or less. The plasma chamber is biased positively so that the ions can be extracted from the plasma and accelerated into the beam-transport system.
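As a back-of-the-envelope illustration (using only the representative values quoted above, not measurements from any particular source), the product of plasma density and confinement time – a common figure of merit for reaching high charge states by stepwise ionization – and the neutral-gas density at the quoted operating pressure come out as follows:

```python
# Rough figures of merit for an ECR ion source, using the representative
# values quoted in the text (illustrative only).

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Plasma density and ion confinement time for a high-frequency source
n_e = 1e12    # electrons per cm^3
tau = 1e-2    # ion confinement time, s

# n_e * tau is a standard figure of merit for stepwise ionization
ne_tau = n_e * tau
print(f"n_e * tau ~ {ne_tau:.0e} cm^-3 s")

# Neutral-gas density at the quoted operating pressure (ideal-gas law,
# assuming room-temperature neutrals)
pressure_mbar = 1e-6
pressure_pa = pressure_mbar * 100.0      # 1 mbar = 100 Pa
T = 300.0                                # K, assumed
n_neutral_m3 = pressure_pa / (k_B * T)   # per m^3
n_neutral_cm3 = n_neutral_m3 * 1e-6      # per cm^3
print(f"neutral density ~ {n_neutral_cm3:.1e} cm^-3")
```

The neutral density (a few times 10¹⁰ cm⁻³) sits well below the plasma density, which is the point of the low operating pressure: it keeps charge exchange from undoing the sequential ionization.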
The first sources using ECR heating to produce multiply-charged ions were reported in 1972 in France by Richard Geller. Since then the development and refinement of ECR ion sources have improved performance dramatically. For example, in 1980 the Micromafios source at the Centre d’Etudes Nucléaires de Grenoble produced 20 eμA of Ar⁸⁺ and 10 eμA of Ar⁹⁺. Later the 18 GHz source at RIKEN produced 2000 eμA of Ar⁸⁺ and 1000 eμA of Ar⁹⁺ in 2003.
The main drivers for improving the performance of ECR ion sources were formulated in Geller’s famous ECR scaling laws, which predicted that higher magnetic fields and higher frequencies would increase both plasma density and ion-confinement times, which would improve performance. Following these guidelines and using other experimental data, the ambitious design for the VENUS ECR ion source was developed with magnetic-confinement fields much greater than those of previous sources. The strong forces between the superconducting sextupole and the solenoid coils were the main technical challenges of building such a source. Indeed, VENUS was the first source built with such a strong confinement structure, and it holds the world record for the highest ECR confinement field ever achieved, with an axial field of 4 T at injection, 3 T at extraction and a radial field at the plasma chamber wall of 2.4 T.
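The connection between heating frequency and magnetic field follows from the electron-cyclotron resonance condition f = eB/(2πmₑ); a short sketch using standard constants shows why 28 GHz operation demands such high fields:

```python
import math

# Electron-cyclotron resonance: microwaves at frequency f heat electrons
# where the magnetic field satisfies f = e*B / (2*pi*m_e).
e = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg

def resonance_field(f_hz):
    """Magnetic field (tesla) at which electrons resonate at f_hz."""
    return 2 * math.pi * f_hz * m_e / e

B_res = resonance_field(28e9)   # 28 GHz operation
print(f"B_ECR at 28 GHz ~ {B_res:.2f} T")

# Geller's scaling laws call for confinement fields well above the
# resonance field; VENUS's 4 T axial field at injection gives a ratio of
mirror_ratio = 4.0 / B_res
print(f"injection field / resonance field ~ {mirror_ratio:.1f}")
```

At 28 GHz the resonance sits near 1 T, so the 4 T injection, 3 T extraction and 2.4 T radial fields quoted above provide the margin above resonance that the scaling laws favour.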
The technology that made this high field-strength possible was the careful design of the magnet-clamping structure, utilizing bladders filled with liquid metal and a split-pole structure made from iron and aluminium for the sextupole coils. The iron increases the radial field-strength by about 10%, and the aluminium pieces were used to match the thermal expansion of the superconducting wire and the pole. Figure 3 shows the conceptual design of the magnet structure.
Originally VENUS was designed to produce high-current, high-charge-state ions for the 88 inch cyclotron at LBNL, but it has evolved to serve also as the prototype injector ion source for the driver linac of the proposed RIA facility. In the latter application VENUS has become a highly visible project. Similar injector sources are proposed or under construction in RIKEN, GSI and the Laboratori Nazionali del Sud, Catania, in Italy.
Testing, testing…
The operational experience with VENUS has been excellent in terms of stability, reproducibility and reliability during the commissioning period with power at 18 and 28 GHz. During initial operation at 28 GHz, record intensities of medium-charge-state beams such as 245 eμA of Bi²⁹⁺, and high-charge-state beams such as 16 eμA of Bi⁴¹⁺, were extracted easily. The testing programme has initially focused on bismuth, since its mass is close to that of uranium, which will be the most challenging ion beam for RIA and also for the radioactive ion-beam factory in RIKEN. Bismuth is an ideal test beam since it is less reactive than uranium, not radioactive and evaporates at modest temperatures. However, the processes of extraction and ion-beam formation, as well as the transport characteristics, are very similar to those for uranium. Moreover, bismuth is also very similar to lead, so the results could also be of interest for a future intensity upgrade for the Large Hadron Collider at CERN.
The preliminary performance data measured at 28 GHz in 2004 with VENUS confirmed the scaling laws for intensity, and were the first evidence that meeting the high-intensity requirements is feasible. Nevertheless, these high-intensity beams present a new challenge for the beam-transport lines of ECR ion sources, which are traditionally designed for low-current ion beams. In addition, the high magnetic field at the extraction region greatly affects ion-beam formation and quality (i.e. emittance). This might appear to limit further increases in the confinement fields and heating frequencies. However, experiments have found that higher-charge-state beams have much higher beam quality than lower-charge-state beams.
This can be explained by a model where the high-charge-state ions are extracted closer to the magnetic-field axis than the low-charge-state ions, leading to less angular momentum and a smaller transverse beam emittance. VENUS’s widely variable magnetic field at extraction will enable us to explore this model experimentally. Later this year the VENUS source will be tested with uranium ions – one of the key ion beams for RIA and for the RIKEN radioactive ion-beam factory – which will be a major milestone for the project.
The well established ring imaging Cherenkov (RICH) technique measures the Cherenkov angle via direct imaging of photons emitted through the Cherenkov effect. It is mainly used in high-energy and astroparticle physics experiments to identify charged particles over an impressive range in momentum, from a few hundred mega-electron-volts/c up to several hundred giga-electron-volts/c. The performance of the technique has yet to be matched by competing technologies, especially when the physics objectives require excellent particle identification.
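The principle behind the technique can be sketched numerically: a particle of momentum p and mass m emits Cherenkov light at cos θ = 1/(nβ), so measuring θ at known momentum separates particle species. The radiator index below (silica aerogel, n = 1.03) is chosen purely for illustration:

```python
import math

def cherenkov_angle(p_gev, m_gev, n):
    """Cherenkov angle (rad) for momentum p and mass m in a radiator of
    refractive index n; returns None below threshold (n*beta <= 1)."""
    beta = p_gev / math.sqrt(p_gev**2 + m_gev**2)
    if n * beta <= 1.0:
        return None  # below Cherenkov threshold: no light emitted
    return math.acos(1.0 / (n * beta))

n_aerogel = 1.03            # illustrative silica-aerogel index
p = 4.0                     # GeV/c
m_pi, m_K = 0.1396, 0.4937  # pion and kaon masses, GeV/c^2

theta_pi = cherenkov_angle(p, m_pi, n_aerogel)
theta_K = cherenkov_angle(p, m_K, n_aerogel)
print(f"pion: {math.degrees(theta_pi):.1f} deg, "
      f"kaon: {math.degrees(theta_K):.1f} deg")

# Threshold momentum p_th = m / sqrt(n^2 - 1): below this a species
# emits no light at all, the basis of threshold-mode operation
p_th_K = m_K / math.sqrt(n_aerogel**2 - 1)
print(f"kaon threshold ~ {p_th_K:.1f} GeV/c")
```

At 4 GeV/c in this radiator the pion and kaon rings differ by well over a degree, which is comfortably resolvable by the detectors discussed below.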
In 1993, Eugenio Nappi of Bari and Tom Ypsilantis of Collège de France began a series of international workshops to provide a forum for reviewing the most significant developments and new perspectives on this powerful technique. From 30 November to 5 December 2004, the beautiful resort of Playa del Carmen on the Yucatan Peninsula in Mexico hosted the fifth in the series. Following on from the first workshop in Bari, Italy, and subsequent meetings in Uppsala, Ein Gedi and Pylos, this was the first foray to the other side of the Atlantic.
RICH2004 was dedicated to the centenary of the birth, in July 1904, of Pavel Cherenkov, who discovered the effect through which charged particles travelling faster than light in a medium emit a characteristic radiation. To honour him, the local organizing committee – headed by two seasoned RICH practitioners in Mexico, Jurgen Engelfried from the University in San Luis Potosi (UASLP) and Guy Paic from the Instituto de Ciencias Nucleares of the National University in Mexico City (UNAM) – invited Cherenkov’s daughter, Elena Cherenkova, to the meeting. A physicist herself, she captured the attention of the audience by recollecting her father through unpublished photographs and anecdotes. Boris Govorkov, a long-time collaborator of Cherenkov, was also invited to talk about the history of the discovery of Cherenkov radiation.
The workshop itself consisted of topical sessions on Cherenkov-light imaging applications and related technological issues. It attracted some 100 participants from around the world and the large number of abstracts received proved that this field is still very fertile and open to innovative ideas, about both the basic configuration of detectors and the technology used in their construction. While all the submitted contributions were very interesting, the organizers had to make a selection to allow time for extensive discussions between talks. In addition to the 10 invited talks, 55 other papers were accepted for oral presentations, while the remainder were conveyed in the poster session.
The main advances of the past few years played a central role in the workshop. These include the imaging of Cherenkov photons totally reflected in quartz bars (the basic principle of the Detection of Internally Reflected Cherenkov light [DIRC] technique adopted in the BaBar experiment at the Stanford Linear Accelerator Center [SLAC]); the development and applications of photocathodes made of thin films of caesium iodide (CsI); and the current availability of multi-anode photomultipliers (MAPMTs) and large-area hybrid photon detectors (HPDs).
Jochen Schwiening of SLAC gave an overview of the current DIRC layout for BaBar, while in a contribution to the poster session Jerry Va’vra, also from SLAC, described the possibility of enhancing the detector’s performance by adding a focusing system. The design and construction of gaseous photodetectors based on large-area CsI photocathodes, which work in reflective mode with electron extraction in CH4 at atmospheric pressure, have been mastered. This was shown by Abraham Gallas of CERN, Silvia Dalla Torre of Trieste and Mauro Iodice of Rome, who reported on applications in the ALICE and COMPASS detectors at CERN and in experiments in Hall A at the Jefferson Laboratory, respectively. Herbert Hoedlmoser of CERN also reviewed preliminary results from irradiation tests on CsI photocathodes.
Although gaseous photon detectors remain the most effective solution for large detector areas in relatively low-rate experiments, the improvements in the technology of multichannel vacuum-based photon detectors have created the possibility of using the Cherenkov-light imaging technique in applications that were unthinkable only a few years ago. One example is measuring how long Cherenkov photons take to propagate in long quartz bars (the time-of-propagation or TOP counter), as Toru Iijima from Nagoya discussed. The two RICHs being constructed for the LHCb experiment at CERN are the most exacting examples of this “new generation” and several talks covered their challenging design.
In parallel with the industrial production of HPDs and MAPMTs, custom designs have recently evolved considerably. A partnership with one major industrial manufacturer has been established to develop multi-anode hybrid avalanche photodiodes and photodevices based on the combination of a micro-channel plate and micromegas, as reported by Takayuki Sumiyoshi of KEK and Va’vra, respectively.
Besides the CsI-based RICHs mentioned above, Vladimir Peskov from Paris also discussed novel gaseous detectors. In the same vein, Fabio Sauli of CERN reviewed advances in the gas electron-multiplier (GEM) technique, which enables high-performance detectors to be built that are essentially discharge-free and have very high gains. Amos Breskin of the Weizmann Institute reviewed the important results obtained in detecting single photons with a multi-GEM counter, filled with CF4 or Ar/CH4, which operates smoothly up to a gain as high as 10⁵. These studies were the basis for the development of the “hadron-blind” Cherenkov detector under construction for the Pioneering High-Energy Nuclear Interaction Experiment (PHENIX) at the Brookhaven National Laboratory (BNL), as reported by Itzak Tserruya, also of the Weizmann Institute.
The trend of operating RICH detectors in the visible range improves performance because the chromatic aberration is less than with detectors working in the far-ultraviolet region; it also implies a larger choice of materials for the radiator, such as silica aerogel. Several speakers discussed the outstanding improvement of the optical characteristics of this amazing material, made possible by the work of the group of Alexander Danyliuk and Alexei Onuchin in Novosibirsk and of the Japanese company Matsushita.
The high transparency of aerogel nowadays and the possibility of producing tiles made of layers with different refractive indices enable more compact detector designs based on proximity focusing geometry. Such a design is envisaged for the upgrade of the Belle experiment at KEK and, in threshold mode, for heavy-ion experiments, as reported by Peter Krizan of Ljubljana and Paic, respectively. On technological aspects, Veljko Radeka of BNL reported on perspectives for front-end and read-out electronics, and Olav Ullaland of CERN discussed the design of fluid systems.
A discussion about the performance of operating detection systems included overviews of the RICH for the HERA-B experiment at DESY, from Marko Staric of Ljubljana; the triethylamine RICH in the CLEO III experiment at Cornell, from Radia Sia of Syracuse; and the dual-radiator RICH of the HERMES detector at DESY, from Harold Jackson of Argonne. Forthcoming RICH detectors in fixed-target and collider experiments took centre stage halfway through the workshop with reports on RICH2 for COMPASS at CERN from Fulvio Tessarotto of Trieste, and the RICHs for the Charged Kaons at the Main Injector (CKM) and B Physics at the Tevatron (BTeV) experiments at Fermilab from Peter Cooper of Fermilab and Tomasz Skwarnicki of Syracuse, respectively. The impressive results obtained with RICH detectors, especially in charge-parity violation in B-physics experiments, were reviewed by Blair Ratcliff of SLAC.
A full day of the workshop was devoted to astroparticle physics applications, beginning with overviews from Greg Hallewell of Marseille and Alan Watson of Leeds. They made it clear that the new generation of experiments under construction in astrophysics will be the most challenging designs ever attempted.
The consensus of the workshop is that nowadays, with the exception of the next generation of experiments at linear colliders, all experiments need and plan for particle identification at ever-higher momenta and therefore, for the most part, rely on RICH detectors. This was the key message of the talk by Nappi at the end of the meeting. The high quality of the talks and the enjoyable location made this event a great success and RICH practitioners are very much looking forward to the sixth in the series, which will be held in Trieste in autumn 2007.
• RICH2004 was sponsored by several Mexican institutions including the Centro de Investigación y de Estudios Avanzados (CINVESTAV), the Consejo Nacional de Ciencia y Tecnología (CONACyT), UNAM and UASLP; CERN; the National Science Foundation (NSF); the Centro LatinoAmericano de Fisica (CLAF); SLAC; and Fermilab.
A dynamic technology-transfer department is only one of many factors determining how often and how successfully start-up companies will be created. This belief is not based on any solid evidence, but on a single example with, at best, anecdotal value. The example is from my own experience over the past three years in helping to create SpinX Technologies, a start-up company developing instruments for pharmaceutical research and clinical diagnostics.
The SpinX story started around Christmas 2001 in a branch of IKEA. Following a casual conversation with a stranger sharing his table at the cafeteria, Piero Zucchelli from CERN could not let go of one thought. Of the myriad technologies used or developed at CERN, there had to be some just waiting to be exploited outside of particle physics. Since my summer-student days, when Piero had been my supervisor, we had worked together frequently, and I had become interested in business as well as in molecular biology. Over the next few weeks, we spent countless hours brainstorming how technology at CERN could be applied to biotechnology.
Gradually, we zoomed in on a field where many advances had been the work of physicists: microfluidics. The idea behind microfluidic devices, often called lab-on-a-chip systems, is to perform biochemical experiments using sub-microlitre volumes rather than millilitre volumes. Liquids are manipulated by running them through channels no wider than a human hair laid out on a silicon or plastic substrate. The applications are mostly biochemical and range from food testing to drug discovery.
What struck us most is that all microfluidic devices were specialized for a specific type of biochemical experiment characterized by a well defined but fixed sequence of operations. There were none where different protocols could be “programmed” on the same chip by setting a series of valves. Before long, Piero had come up with a valve implementation using an infrared laser. With this Virtual Laser Valve, everything started to fall into place and we soon had all the elements for a programmable microfluidics platform.
At some point a friend at CERN suggested that, if we were serious about this idea and believed it had commercial value, we should take it to serious investors. He would make the introductions. With nothing but an idea, we crossed the doorstep at Index Ventures, a venture-capital fund managing €750 million. It took us half a year to build their interest. For us, this was a turning point. Leaving the comfort of stable, well paid positions in research, we had to devote all our time and energy to SpinX with no guarantee of success.
Our case illustrates where a technology-transfer department can make a very real contribution, but also where it is essentially powerless. Contacts with venture-capital funds, law firms, business consultants and so on are obviously helpful to any aspiring entrepreneur, and a technology-transfer department is ideally placed to build this kind of network. But the key factor in attracting high-quality capital is commitment. Francesco de Rubertis, a partner at Index Ventures, says, “We do not invest in technologies, but in the teams that can make them happen.” Venture capitalists expect the people behind an idea to be entirely devoted to their start-up. Anything less is a sure deal-breaker.
Throughout the process, the interaction with CERN in general and technology transfer at CERN in particular could not have been more productive. Both of us were offered leave from the moment we started to work full time until the moment the investors committed. During the due-diligence process, the technology-transfer department provided us in record time with a legal statement clarifying the ownership of the intellectual property.
We did convince Index Ventures of our business concept, SpinX was created and the experience has been extremely rewarding. Today we are a company of 10 people, we have identified the first application of our technology, we have built a working system for that application and we are talking with several pharmaceutical companies interested in evaluating the system.
Would SpinX exist if it hadn’t been for CERN? I doubt it. We may not have started from any specific technology at CERN, but we could not have done it without the experience built there. It was at CERN that we learned the importance and benefits of having international, multi-disciplinary teams “try the impossible”. With 10 people, SpinX has seven nationalities and six doctorates, with backgrounds ranging from physics and engineering to biochemistry and enzymology. Like particle detectors at CERN, the instrument we developed uses a host of off-the-shelf components from widely varying industries. Finally, there is the undeniable value of the CERN brand. Recently, following a conference presentation of the technology to a senior executive at Eli Lilly, he commented that if former CERN physicists could not get it to work, then nobody could!
On 7 March the first of the superconducting dipole magnets for the Large Hadron Collider (LHC), under construction at CERN, was lowered into the accelerator tunnel.
The 15 m-long dipoles, each weighing 35 t, are the most complex components of the machine. In total, 1232 dipoles will be lowered 50 m below the surface via a special oval shaft. They will then be taken through a transfer tunnel to their final destination in the LHC tunnel, carried by a specially designed vehicle travelling at 3 km per hour.
In addition to the dipole magnets, the LHC will be equipped with hundreds of smaller magnets. More than 1800 magnet assemblies will have to be installed. Once in position, the magnets will be connected to the cryogenic system to form a large string operating with superfluid helium, which will maintain the accelerator at a temperature of 1.9 K.
The lowering of this first magnet into the tunnel coincided with another milestone: the delivery of half of the superconducting dipole magnets. The remaining 616 dipoles are due to arrive by autumn 2006. The construction of these superconducting magnets represents a huge challenge both for CERN and for European industry; for example, some 7000 km of niobium-titanium superconducting cable has had to be produced to wind the magnet coils.
Altogether some 100 companies in Europe are involved in manufacturing the magnet components. The greatest task was the move from the prototyping and pre-series phase to large-scale production. This has been met successfully and three industrial sites, in France, Germany and Italy, are manufacturing about 10 magnets each week.
On 19 February the Belle experiment running at Japan’s KEKB accelerator, the KEK B-factory, accumulated a record integrated luminosity of 1 fb⁻¹ in a single day, corresponding to roughly 1 million BBbar meson pairs.
KEKB’s design luminosity of 1 × 10³⁴ cm⁻² s⁻¹ was first reached in May 2003. Since then the record has regularly been broken and on 15 February a new peak of 1.516 × 10³⁴ cm⁻² s⁻¹ was achieved. On average the KEKB luminosity is about 20% higher than it was a year ago. During operation of the TRISTAN accelerator at KEK from 1987 to 1995, the total integrated luminosity seen by the VENUS detector was 400 pb⁻¹. Belle is now collecting the same amount of data in less than half a day.
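These numbers hang together arithmetically: integrating the quoted peak luminosity over a day gives roughly the record 1 fb⁻¹, and folding in a BBbar production cross-section of about 1 nb at the Υ(4S) (a commonly quoted approximate value, assumed here) yields roughly the million pairs quoted:

```python
# Sanity-check the KEKB numbers quoted above (illustrative arithmetic).

peak_lumi = 1.516e34       # cm^-2 s^-1, record peak of 15 February
seconds_per_day = 86400.0

# Integrated luminosity if the peak were sustained for a full day
int_lumi_cm2 = peak_lumi * seconds_per_day   # cm^-2
int_lumi_fb = int_lumi_cm2 * 1e-39           # 1 fb^-1 = 1e39 cm^-2
print(f"peak sustained for a day ~ {int_lumi_fb:.2f} fb^-1")

# BBbar pairs per fb^-1, assuming a ~1.1 nb production cross-section
# at the Upsilon(4S) resonance (approximate value assumed here)
sigma_bb_cm2 = 1.1e-33     # cm^2
n_pairs = 1e39 * sigma_bb_cm2
print(f"BBbar pairs per fb^-1 ~ {n_pairs:.1e}")
```

A day at peak would give about 1.3 fb⁻¹, so the achieved 1 fb⁻¹ in a real day of running, with fills and refills, is consistent with the continuous-injection scheme described below.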
Most of the performance increase is due to the novel scheme of continuous beam injection used at KEKB in which the detector keeps taking data while the electron and positron beams are being injected into the accelerator. This was previously thought to be almost impossible owing to the large noise introduced by the injected beams. However, the KEK accelerator group has developed a sophisticated scheme of continuous beam injection, while the detector group has also developed an electronics system that is more tolerant to noise.
The Institute for High Energy Physics (IHEP) in Protvino, Russia, is producing some of the largest and most challenging chambers for the muon spectrometer of the ATLAS detector at CERN. Monitored drift-tube (MDT) chambers come in a variety of sizes, but the 192 chambers now being produced at IHEP include 16 with a length of 6.3 m and between them incorporate 60,000 precision MDT tubes.
MDTs have been constructed in many institutes in Europe, the US, Russia and China, but this is the first time that chambers of this size have been successfully produced. Despite the huge size of the chambers, the 50 μm thick anode wires are positioned to better than 20 μm. Production in Protvino is expected to finish by mid-April.
During the Christmas break, the hydrostatic level sensors (HLSs) in the ATLAS cavern revealed a new facet of their capabilities. Installed by the CERN survey group to monitor any deformation or movement of the structure on which the detector feet rest, these sensors with submicrometre resolution coupled to the heavy ATLAS mechanical infrastructure took on the function of a seismograph.
The signals recorded by the sensors are shown in the figure, which reveals two perturbations, one on 23 December starting at 15:45 GMT and the other on 26 December at 01:23 GMT. These unusual readings raised the question of whether they were connected with the earthquake off the Indonesian coast that gave rise to the devastating tsunami.
The Geneva Centre for the Study of Geological Risks was duly contacted and it confirmed that the earthquake off the coast of Sumatra, which measured 9.0 on the Richter scale, was indeed responsible for the large peak recorded at CERN. When a seismic event occurs, the resulting vibrations spread out in all directions and two types of wave can be distinguished: primary waves, which propagate through the Earth at speeds of 6–8 km s⁻¹, and the slower waves that are confined to the surface of the Earth (such as the horizontal Love wave, which can cause structural damage to buildings).
The epicentre of the Sumatra earthquake was some 9000 km from CERN, and the quake happened at 00:59 GMT (07:59 local time). The primary waves need about 20 minutes to reach the ATLAS cavern, which is consistent with the first perturbations recorded by the sensors at 01:23 GMT on 26 December.
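The timing can be verified with a few lines of arithmetic (a sketch using only the distance and wave speeds quoted above):

```python
# Travel time of primary (P) waves from the Sumatra epicentre to CERN,
# using the ~9000 km distance and 6-8 km/s speed range quoted in the text.
distance_km = 9000.0

for speed_km_s in (6.0, 8.0):
    travel_min = distance_km / speed_km_s / 60.0
    print(f"at {speed_km_s} km/s: {travel_min:.0f} min")

# The quake at 00:59 GMT and the first signal at 01:23 GMT imply a
# 24-minute delay, which falls inside the ~19-25 minute window above.
```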
The earlier, smaller perturbation is linked to another earthquake, measuring 8.1 on the Richter scale, which is thought to have been correlated with the earthquake of 26 December. It happened at 14:59 GMT on 23 December north of Macquarie Island (between Australia and Antarctica), much further from CERN.
The Wide Angle Shower Apparatus (WASA) detector, currently at the CELSIUS facility of The Svedberg Laboratory (TSL) in Uppsala, Sweden, is to find a new home. CELSIUS was commissioned in 1983, using the hardware of CERN’s ICE ring, and its experimental programme will end in summer 2005. The WASA detector, built in the 1990s by a collaboration between Sweden, Poland, Germany, Russia and Japan, will then be relocated to the Cooler Synchrotron (COSY) ring at the Forschungszentrum Jülich (FZJ) in Germany.
WASA is a fixed-target 4π detector comprising a central part and a forward part. The central detector, built around the interaction point, is designed mainly for the detection of the decay products of π⁰ and η mesons: photons, electrons and charged pions. It consists of an inner drift chamber, a superconducting solenoid and a caesium-iodide calorimeter. The forward detector, designed to detect target recoil and scattered beam particles, consists of 11 planes of plastic scintillator counters and proportional counter drift tubes. The target consists of a beam of frozen hydrogen or deuterium pellets about 25 μm in diameter, which will allow luminosities of up to 10³² cm⁻² s⁻¹ in interactions with the circulating beam at COSY.
The transfer of WASA to COSY will be mutually beneficial. Photon detection is important for understanding the physics of hadronic reactions, since many of the produced mesons and excited baryonic states have a significant number of decay branches into multi-photon final states. This calls for a detector with a wide-acceptance electromagnetic calorimeter. Until now, such a detector has been missing from COSY, and WASA fits the bill nicely.
WASA will also benefit from the higher energy of the COSY beam compared with CELSIUS, which takes it well above the threshold for η’ production in proton-proton interactions. (COSY offers beam momenta of up to 3.7 GeV c⁻¹ with polarized and cooled proton and deuteron beams, whereas CELSIUS can only go up to 2.1 GeV c⁻¹.) WASA will be shipped to the Forschungszentrum Jülich this autumn and the experimental programme is expected to start at the beginning of 2007. Once at COSY, the WASA detector will offer an opportunity to deepen our understanding of non-perturbative quantum chromodynamics (QCD) through a precise study of symmetry breaking and very specific investigations of hadron structure.
For example, the η and η’ decays that vanish in the limit of equal light-quark masses (such as η’→ηππ) allow the exploration of explicit isospin symmetry-breaking in QCD. Furthermore, precision measurements of rare η and η’ decays can be used to obtain new limits on the breaking of the charge, parity and time symmetries, or their combinations. Last but not least, WASA at COSY can contribute significantly to testing the various models offered to explain exotic and crypto-exotic hadrons – such as the light scalar mesons a₀/f₀(980), pentaquarks like the Θ⁺ or hyperon resonances like the Λ(1405) – through precise measurements of decay chains and couplings to other hadrons.
Another promising process where precise measurements can confront theoretical predictions is the isospin-violating reaction dd→απ⁰. Pioneering measurements have already been performed at the Indiana Cooler. At COSY such studies can be extended to higher energies and, in particular, to the reaction dd→απ⁰η, which should be driven by the isospin-violating a₀–f₀ mixing.
COSY can produce more than 10⁶ η’ mesons per day, and their subsequent hadronic, radiative, leptonic and forbidden decays can be detected by WASA. The expected event rates will substantially increase the world statistics.
• The WASA-at-COSY project is a collaborative effort between many institutions, in particular TSL and FZJ. The project currently comprises 137 members from 24 institutes in seven countries.