A team of researchers at SLAC has shown that plasma acceleration can dramatically boost the energy of particles over a short distance. The breakthrough is the culmination of almost a decade of work, led by Chan Joshi from the University of California, Los Angeles, Thomas Katsouleas from the University of Southern California and Robert Siemann from SLAC.
The technique uses the plasma-wakefield effect – the high electric fields generated in the wake of an intense beam of either photons or charged particles passing through a plasma. In 2006, Wim Leemans and colleagues from the Lawrence Berkeley National Laboratory and Oxford University accelerated electrons to 1 GeV in laser-driven wakefields over 3.3 cm. Now Ian Blumenfeld and colleagues have used the intense, ultrarelativistic electron beam from the 3 km linac at SLAC to create the wakefields.
In the experiment at SLAC, the team directed the 42 GeV beam from the linac into lithium gas in an 85 cm long plasma chamber. The electrons ionize the gas at the front of the beam pulse, creating a plasma, and also push out the plasma electrons to leave a column of ions. The plasma electrons are attracted back to the ions, but overshoot, setting up space–charge oscillations at the rear of the pulse, forming the wake. While most of the electrons in the beam pulse lose energy as they create the wakefield, those near the back of each pulse are accelerated in the high field created there. The measurements showed that some electrons more than doubled their energy, up to a maximum of 85 ± 7 GeV (see figure), implying a peak accelerating field of around 53 GV/m. In 800 events, 30% showed an energy gain of more than 30 GeV.
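As a quick consistency check on these figures, the average accelerating gradient implied by the measured energy gain over the plasma length can be estimated directly. The sketch below uses only the numbers quoted above; the quoted 53 GV/m is the peak field inferred from the measurement, so the naive average comes out slightly lower:

```python
# Back-of-envelope average gradient from the SLAC plasma-wakefield result.
e_in = 42.0     # GeV, energy of the incoming linac beam
e_max = 85.0    # GeV, maximum measured electron energy
length = 0.85   # m, length of the lithium plasma column

avg_gradient = (e_max - e_in) / length  # GeV/m, numerically equal to GV/m
print(f"average gradient ~ {avg_gradient:.1f} GV/m")  # ~50.6 GV/m
```

For comparison, conventional RF cavities sustain gradients of tens of MV/m, so the plasma wake delivers roughly a thousand times more acceleration per metre.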
In tests with a 113 cm lithium-gas column, the team measured a maximum energy of just 71 ± 11 GeV, and only 3% of 8000 consecutive events showed an energy gain of more than 30 GeV. This apparent saturation in the energy gain appears to be due to an expansion of the front of the beam, which could be reduced with a lower-emittance beam.
At 6.00 a.m. on 28 February the heaviest section of the Compact Muon Solenoid (CMS) detector began its momentous journey into the experiment’s cavern, 100 m below ground. Using a huge gantry crane, custom-built by the Vorspann System Losinger Group, the pre-assembled central piece, weighing 1920 tonnes – or as much as five jumbo jets – was gently lowered into place, descending at a rate of about 10 m an hour. It finally touched down smoothly at 6.00 p.m., under the eyes and cameras of assembled press, hundreds of CMS collaboration sightseers and TV viewers around the world.
The giant element, 16 m tall, 17 m wide and 13 m long, consisted of the complete superconducting solenoid, together with the central section of the magnet return yoke. Its descent was a challenging feat of engineering, as there was only 20 cm leeway between the detector and the walls of the shaft. To make the journey, the piece was suspended by four massive cables, each with 55 strands and attached to a step-by-step hydraulic jacking system. Sophisticated monitoring and control ensured that it did not sway or tilt.
The CMS collaboration broke with tradition by starting assembly of the detector before completion of the underground cavern, taking advantage of a spacious surface assembly hall to pre-assemble and pre-test the solenoid magnet and the various detectors. There are 15 pieces altogether, and the descent of the central section marks the halfway point in the lowering process, with the last piece scheduled to go underground in the summer.
The Diamond Light Source, the UK’s new synchrotron facility in Oxfordshire, has welcomed its first scientific users after opening its doors for business in February. The projects, selected from 127 proposals received last year, cover a broad range of research, from cancer studies, to advancing data-storage techniques, to unravelling the mysteries of the solar system. They will provide the teams at Diamond with real projects to assist in the six-month period of fine-tuning the first experimental stations.
These first research projects will be carried out on beamlines that are part of Phase I of Diamond’s development – comprising the buildings, the synchrotron itself and the first seven beamlines. The Phase I investment of £260 million – 86% from the UK government via the Council for the Central Laboratory of the Research Councils and 14% from the Wellcome Trust – was used to deliver the facility on time, on budget and to specification. Funding for Phase II of the project – a further £120 million – was confirmed in October 2004 and will be used to build 15 additional beamlines to expand the available range of research applications. Construction has already started on the Phase II beamlines, and beyond this, on average four to five new beamlines will become available each year until 2011.
Nuclear science is one of many branches of physics that daily disprove the musings of Lord Kelvin. Sometime around 1900, before quantum mechanics and special relativity, the pioneer of thermodynamics and the creator of the absolute-temperature scale reportedly declared, “There is nothing new to be discovered in physics now.” More than a century later, nuclear physicists remain energized by a host of pursuits, including exploring the science of atomic nuclei, understanding processes in nature’s most powerful explosion, the supernova, and addressing open questions about fundamental symmetries of nature. The compelling questions that drive this research are creating a push for new facilities in various parts of the world, including a rare-isotope science facility for the US.
Rare-isotope research today explores the limits of nuclear stability and determines nuclear properties in the uncharted domain of nuclides with very unusual proton-to-neutron composition. The nuclei farthest from stability are especially important and provide the best vehicle for understanding the interplay of internal structure, reactions with other objects of a similar nature, and decays to the continuum. Such nuclei enable myriad experimental possibilities that collectively will advance and possibly even transform nuclear theory. Some of the most likely experiments will address wide-ranging themes in nuclear physics, ranging from magic numbers to dynamical symmetries to the limits of stability. Others will home in on specific measurements of nuclear-shell structure aimed at elucidating the most important degrees of freedom.
Attempts at fine-grained analysis of nuclear structure invariably lead to innovation in the tools that underpin the workaday world of nuclear physics. One contemporary example is intermediate-energy Coulomb excitation, which allows critical information to be extracted from experiments with beam intensities of only a few tens of thousands of atoms a day. Another example is the precision determination of nuclear binding energies with Penning traps. Current experimental frontiers include studies of nuclear sizes, wave functions, half-lives and decay modes of exotic nuclei.
Beyond the relevance to basic nuclear-structure physics, rare-isotope research is increasingly vibrant at its edges, where the field connects to other intellectual pursuits such as mesoscopic quantum systems – which can be averaged over many atomic-scale systems – and astrophysics. To a very good approximation, we can describe nuclei as self-sustaining finite droplets of a two-component – neutron and proton – Fermi-liquid, the detailed properties of which depend on the delicate interplay of the strong, electromagnetic and weak interactions.
Advances in computation techniques have allowed accurate microscopic calculations of the properties of very light (A < 16) nuclei. For heavier nuclei, full microscopic treatments rapidly become unfeasible and additional approximations must be introduced to solve the underlying many-body quantum problem. This is the mesoscopic regime, lying between the microscopic and the macroscopic. Many exotic nuclei are systems of marginal stability for which coupling to the continuum is important. They are “open” mesoscopic quantum systems in which interactions among finite numbers of particles can be described by effective forces. Describing the interplay of internal structure and external interactions is relevant to other research areas in physics, including information processing, quantum chaos, decoherence and phase transformations.
Understanding mesoscopic quantum systems is important to progress in nanotechnology and quantum computing, which are areas of high interest in condensed-matter physics and quantum optics. In nanotechnology, the basic quantum many-body problem raises fundamental issues about the design and engineering of artificial mesoscopic systems in which complexity emerges from the elementary interactions of a relatively small number of constituents. Nuclear science addresses similar questions, though at femtometre rather than nanometre scale.
Nuclear processes shape much of the visible universe – one reason why astrophysics and nuclear physics have long been closely connected. This link will strengthen in the future. Research with rare isotopes together with progress in observational astronomy will help address several areas of inquiry in astrophysics, including the chemical history of the universe, the conditions and sites where the elements were created, and the nature of exotic objects such as neutron stars, and explosive events such as novae and supernovae.
With more than 2000 active scientists worldwide, rare-isotope research is vibrant and international. New and planned facilities at RIKEN in Japan, GSI in Germany and GANIL in France complement existing facilities such as Louvain-la-Neuve in Belgium, HRIBF in Tennessee and the TRIUMF ISAC facility in Canada.
Inspired by such promise in this field, the US National Academies have recently published a report making the case for a new isotope science facility in the US. The report, nearly a year in the making and released online in unedited prepublication form on 8 December, concluded that the science goals were compelling and that “the science addressed by a rare-isotope science facility… should be a high priority for the United States”. The report adds that, provided the new facility is based on a heavy-ion linac, it will complement existing and planned nuclear-science activities worldwide.
A January town meeting in Chicago provided additional momentum. The aim of the meeting, part of the US nuclear-science community’s current five-year strategic planning exercise, was to identify top priorities in nuclear-structure and nuclear-astrophysics research. Attendees at the meeting concluded that one such priority is a more powerful means for producing rare isotopes for research with stopped, reaccelerated and in-flight (or fast) beams.
The scientific questions and a set of possible options for the technical implementation of such a new facility are laid out in some detail in a recent whitepaper released by the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University (MSU). This document proposes building a high-power superconducting heavy-ion linac at MSU. The new Isotope Science Facility (ISF), the working name of the proposed MSU facility, would be based on a linac able to deliver beams of all stable elements with variable energies up to at least 200 MeV/nucleon and beam power up to 400 kW. A team at Argonne National Laboratory has presented similar ideas.
The ISF would combine the possibility of measurements with post-accelerated radioactive beams with the ability to conduct experiments using fast radioactive beams. In many cases, fast beams would provide 10,000 times higher sensitivity than is possible with reaccelerated beams, and make possible experiments with single ions of the rarest isotopes. This is an important consideration, the NSCL whitepaper points out, given that many interesting isotopes that are potential objects of study are produced at levels below a hundred or so per second, where the reaccelerated beam technique starts to become difficult. For example, fast beams would allow researchers to probe how nuclear structure evolves in nickel isotopes moving from mass numbers 48 to 83. In addition, fast beams would enable the study of key benchmark nuclei near ⁴⁸Ni, ⁶⁰Ca, ⁷⁸Ni and ¹⁰⁰Sn.
Complementary to the fast-beam approach, the proposed facility will allow isotope separation online (ISOL) techniques, in which isotopes are produced at rest in a thick target. CERN’s ISOLDE facility pioneered the field, and the forefront research using reaccelerated beam produced from ISOL continues in Geneva. Stopped beams are important for precision measurements with ion or atom traps or for collinear laser spectroscopy. Reaccelerated beams provide the opportunity to measure important nuclear-reaction rates relevant to nuclear astrophysics and to employ the well-proven techniques of nuclear-structure physics to a host of new nuclei. In addition, reaccelerated beams allow the investigation of fusion reactions, which will lead to the production of new neutron-rich isotopes of very heavy elements.
The use of a heavy-ion linac allows in-flight separation of ions and provides a path to reaccelerated beams that overcomes some of the chemical limitations of traditional ISOL techniques. Stopping, extracting and reaccelerating rare-isotope beams leads to intensity losses, the full extent of which is not yet known, although NSCL, Argonne National Laboratory, GSI and RIKEN are making significant progress. The time has come, however, for full performance tests of the concept. NSCL has launched a project to test comprehensively the production, gas-stopping and reacceleration sequence.
A new US facility would complement international efforts in this field and would be relevant far beyond basic nuclear-structure research, especially given the links to other physics-related disciplines, such as astrophysics and mesoscopic science. However, the most important reason to proceed with rare-isotope research is to address questions at the core of nuclear physics. What are the limits of nuclear existence? How do we develop a predictive theory of nuclei? What is the origin of simple patterns in complex nuclei? What is the nature of neutron stars?
Such big questions represent the barest fraction of the unknown in nuclear science, which demonstrates that there is much compelling knowledge to be generated in a next-generation isotope-science facility – and also that Lord Kelvin is as wrong today as he was more than a century ago.
Megawatt-class beam targets are nowadays attracting attention from a wide variety of users, for investigations that span the spectrum from the transmutation of long-lived radioactive waste, through material research, to radioactive beams and neutrino factories. At the Paul Scherrer Institute (PSI), the Megawatt Pilot Experiment (MEGAPIE) has recently demonstrated the feasibility of safely running a liquid heavy-metal target in the world’s most powerful DC proton beam. The experiment is particularly important for the development of an accelerator-driven system (ADS) for the transmutation of long-lived radioactive waste. It serves to demonstrate the feasibility, the potential for licensing and the long-term operation under realistic conditions of a high-power spallation target, which could later provide the high-energy neutrons required to induce fission in waste atoms.
Spallation neutrons are produced efficiently by firing a proton beam at a heavy-metal target such as lead, where the reactions of the protons with nuclei literally knock out or “spallate” neutrons, while further neutrons are evaporated. On average each proton produces about 11 neutrons. Until now spallation targets have always been solid, but MEGAPIE has demonstrated the advantages of a liquid target, namely an increase in neutron flux and convective cooling of the target window. The second advantage gives the liquid target potential for higher power, in contrast to a solid target, which cannot be cooled sufficiently. In MEGAPIE, the use of a liquid target with the 1 MW beam at the Swiss Spallation Neutron Source (SINQ) increased the neutron flux by about 80% compared with the previous solid-lead target.
A powerful alliance
MEGAPIE is a collaboration of nine research institutes in Europe, Japan, Korea and the US, which have agreed to design and build a liquid-metal spallation target suitable for 1 MW beam power, and to license and operate it at PSI, where SINQ is the world’s only spallation neutron facility with a sufficiently powerful proton driver. The present 1.1 MW proton beam from PSI’s 590 MeV ring cyclotron delivers, after passing two secondary-beam production targets, a continuous proton-beam current of up to 1.4 mA (about 800 kW) at an energy of 575 MeV to the SINQ spallation source. For MEGAPIE, the collaboration decided that the liquid-metal target must be irradiated for a minimum of three months, both to achieve a sufficiently high irradiation dose on the component materials and to demonstrate that the system could operate reliably. During its operation, the target served as the source for the neutron-scattering programme at PSI, which involves some 260 experiments.
The MEGAPIE target consists of 920 kg of liquid lead-bismuth eutectic (LBE), contained in a steel casing. On impact, the 800 kW proton beam deposits about 580 kW of heat in the target material. The heat is removed by circulating the lead-bismuth in forced convection through a heat exchanger. The proton beam penetrates the lead-bismuth to a depth of 27 cm and generates an integrated flux of 10¹⁷ neutrons a second.
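These quoted figures hang together, as a little elementary arithmetic shows. The sketch below converts the beam current given earlier into a proton rate and applies the ~11 neutrons per proton mentioned above:

```python
# Consistency check of the MEGAPIE/SINQ beam and neutron numbers.
E_CHARGE = 1.602176634e-19  # elementary charge, C

current = 1.4e-3   # A, proton current delivered to SINQ
energy = 575e6     # eV, proton kinetic energy

beam_power = current * energy          # W: ~0.8 MW, as stated
protons_per_s = current / E_CHARGE     # ~8.7e15 protons per second
neutrons_per_s = 11 * protons_per_s    # ~1e17 neutrons per second
```

The neutron rate comes out at just under 10¹⁷ per second, consistent with the integrated flux quoted for the target.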
During the four months of operation, the target operated very satisfactorily and according to predictions. It triggered only a small number of unscheduled beam shutdowns and experienced more than 8000 beam interrupts of different durations without damage. Its availability reached 95%, with an accumulated proton charge amounting to 2.8 Ah.
Earlier Monte Carlo simulations had indicated that the liquid-metal target should provide a 40% increase in neutron flux (at identical current) compared with a solid target. However, initial measurements at selected instruments pointed to an even larger increase, which the collaboration at first met with some scepticism: instruments at the cold guide gave a flux increase as high as 70–80%. Gold-foil activation measurements have since confirmed flux increases of 80–90% at both a thermal and a cold beam port, and new calculations with more detailed target and moderator geometry now reproduce these results.
The higher flux means that it will be possible to carry out more experiments within the same time frame, a definite benefit for the over-booked beam lines. With a flux gain of this magnitude, operation with a permanent liquid-metal target at SINQ has become a priority and PSI has launched a new project to pursue this goal.
From nuclear waste to beta-beams
The success of MEGAPIE is particularly important for research into ADS transmutation of radioactive waste. The long-lived minor actinides (neptunium, americium and curium) are the main contributors to the long-term radiotoxicity of nuclear wastes. However, it should be possible to transmute them into short-lived or stable elements using a sub-critical ADS equipped with an internal neutron source and driven by a high-energy proton beam. CERN has made major contributions to this concept with the experiments FEAT and TARC (CERN Courier April 1997 p8). In 1998, a technical working group headed by Carlo Rubbia established a roadmap to achieve ADS transmutation. The group considered the development of a high-power spallation target and the demonstration of its reliable operation to be vital steps en route.
Researchers are now also considering ADS scenarios based on megawatt spallation neutron targets for the next-generation European Radioactive Ion Beam Facility, EURISOL. Here a 1 GeV superconducting linear proton driver with separate post-acceleration capabilities will allow low-, intermediate- and high-energy, very intense radioactive ion-beams to probe fundamental questions in nuclear structure, nuclear astrophysics and fundamental symmetries and interactions. Another use of an ADS based on a spallation source would be the production of a neutrino beam in a “beta beam”. In this case, radioactive ions circulating in a storage ring beta-decay to produce a pure beam of electron-neutrinos/antineutrinos; the ions themselves are produced in a two-step process from the interaction of spallation neutrons in a suitable secondary target.
The pie is opened
The accelerator shutdown at the end of 2006 marked the end of the irradiation phase for MEGAPIE. The final phase of the experiment – the post-irradiation examination of the target components – will start after the target, which is now solidified, has been stored for two years. The analysis will provide information about corrosion effects on structural materials and allow the validation of various models. The state of the beam window will allow the combined effect of LBE and proton irradiation to be assessed and provide information on the potential lifetime of such a beam window. The analysis of the LBE will also furnish information on the spallation products and their chemistry, so validating neutronic and radiochemical models. This information will feed back into the design and operation of new spallation sources. New versions of ADS will also benefit enormously from the experience gained from MEGAPIE, which has also proved to be a key experiment for future industrial projects involving the transmutation of nuclear waste.
The origin of ultra-high-energy cosmic rays (UHECR) observed at energies above 10¹⁹ eV is a mystery that has stimulated much experimental and theoretical activity in astrophysics. When cosmic rays penetrate the atmosphere, they produce showers of secondary particles and corresponding radiation, which in principle yield information on the particle tracks, energy and origin of the primary cosmic rays. The CODALEMA experiment in Nançay has recently measured the radio-electric-field profiles associated with these showers on an event-by-event basis. These novel observations are directly connected to the shower’s longitudinal development, which is related to the nature and energy of the incident cosmic ray.
There have been previous partial studies of radio emission from showers, so why is this result so promising? The UHECR flux is very low (a few per square kilometre per century), so detector arrays covering many square kilometres are needed to detect secondaries at ground level, usually combined with other techniques, such as detection of fluorescence emission. However, the latter method is limited to the optical domain, where the need for moonless skies and appropriate environmental conditions results in a maximum duty cycle of only about 10%. Radio detection therefore offers an interesting, if challenging, alternative (or complementary) method, in which one antenna array can provide a very large acceptance and a sensitive volume adequate to characterize rare events such as UHECR.
There is also another important argument for the radio technique: because the distance between radiating particles is several times smaller than typical radio wavelengths, the individual particles radiate in phase. This will result in a coherent type of radio emission dominating all other forms of radiation, with a corresponding electromagnetic radiated power proportional to the square of the deposited energy. In air, the coherent radiation will build up at frequencies up to several tens of megahertz, while in dense materials, more compact showers can result in coherent radiation up to several gigahertz. Gurgen Askar’yan first suggested the production of radio emission in air showers in 1962, and some observations were reported in the 1960s and 70s (Allan 1971, Gorham and Saltzberg 2001). The electronics available at the time made the measurements unreliable, however, and researchers abandoned the technique in favour of direct ground-particle or fluorescence measurements.
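The gain from coherence is easy to quantify: when the radiating charges emit in phase, their field amplitudes add linearly, so the radiated power scales as the square of the number of emitters rather than linearly. A toy comparison, with a hypothetical excess charge of 10⁸ electrons:

```python
# Coherent vs incoherent radio emission from a shower (toy numbers).
N = 1e8           # hypothetical number of in-phase radiating electrons
p_single = 1.0    # power radiated by one electron, arbitrary units

p_incoherent = N * p_single     # random phases: powers add
p_coherent = N**2 * p_single    # in phase: amplitudes add, power ~ N^2
enhancement = p_coherent / p_incoherent  # = N, an enormous factor

# The coherence condition: wavelength larger than the emitting region.
C = 299792458.0                # speed of light, m/s
wavelength_50mhz = C / 50e6    # ~6 m, comfortably above typical particle spacing
```

This N-fold enhancement is what makes the weak radio signal of an air shower detectable at all with a modest antenna array.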
High-performance digital signal-processing devices have now made it feasible to sample radio-frequency (RF) waveforms with large frequency bandwidth and high time resolution. Exploiting these new possibilities, the SUBATECH Laboratory, Nantes, and the Paris Observatory have developed the CODALEMA (Cosmic ray Detection Array with Logarithmic Electro-Magnetic Antennas) experiment on the site of the Nançay Radio Observatory.
For its first phase, CODALEMA has used some of the 144 log-periodic antennas of the decametric array of the Nançay Observatory distributed along a 600 m baseline. All the antennas are band-pass filtered (24–82 MHz) and linked, after RF signal wide-band amplification, to fast-sampling digital oscilloscopes (figure 1).
In its first running period, CODALEMA has established the appropriate conditions for the analysis of the antenna data, either in stand-alone mode or in coincidence with a set of particle detectors acting as a trigger. Figure 2 shows four cosmic-ray events identified at different zenith angles. It illustrates how the results reveal the dependence of the electric field on the distance of the antenna to the shower impact (in metres) at energies around 1017 eV. They show for the first time the richness of the information contained in the longitudinal shower development measured by radio detection.
First, the device is sensitive to field amplitudes down to 1 μV/m/MHz, a measurement free from the fluctuations in particle number encountered in particle detectors. This allows detailed analysis of the field amplitude and its dependence on the energy and nature of the incident cosmic ray. Remarkably, this sensitivity is maintained at distances presumably greater than 600 m from where the shower hits the ground. The clear zenith-angle dependence of the field profile illustrates the large angular acceptance of the array and demonstrates for the first time the sensitivity of this detection method to the development of the shower, related to the sequence of charge generation. Additionally, a Fourier-transform analysis revealed a possible dependence of the signal frequency content on the impact parameter, a quantity strongly connected to the physical characteristics of the air shower.
Finally, the study of the stand-alone antenna mode has clearly established that the transient character of the radio signal can be used safely to determine the arrival direction (with an accuracy of around 0.7°) and to reconstruct full event waveforms.
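The arrival-direction reconstruction amounts to fitting a plane wavefront to the signal arrival times across the array. A minimal sketch of such a fit, using hypothetical antenna positions (not the actual Nançay layout):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

# Hypothetical antenna positions (x, y) in metres on a flat site.
ants = np.array([[0, 0], [100, 0], [0, 100], [150, 120], [60, 200]], float)

def arrival_times(theta, phi):
    """Arrival times of a plane wavefront with zenith angle theta and
    azimuth phi (radians), plus an arbitrary common time offset."""
    sx = np.sin(theta) * np.cos(phi) / C  # horizontal slowness, s/m
    sy = np.sin(theta) * np.sin(phi) / C
    return 1e-6 + ants @ np.array([sx, sy])

def fit_direction(times):
    """Least-squares plane-wave fit: t_i = t0 + sx*x_i + sy*y_i."""
    A = np.column_stack([ants, np.ones(len(ants))])
    (sx, sy, _), *_ = np.linalg.lstsq(A, times, rcond=None)
    theta = np.arcsin(C * np.hypot(sx, sy))
    phi = np.arctan2(sy, sx)
    return theta, phi

theta_fit, phi_fit = fit_direction(arrival_times(np.radians(30), np.radians(45)))
```

Nanosecond-level timing over a few-hundred-metre baseline is what yields the sub-degree pointing accuracy quoted above.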
These results make a clear case for a complete re-investigation of the radio detection of UHECR, which Askar’yan first proposed in the 1960s. Only two experiments in the world have undertaken this type of analysis with atmospheric showers: CODALEMA and the LOPES experiment, which is studying the same RF domain (Isar et al. 2006). The latter works as an extension of the triggering multi-detector set-up KASCADE-Grande in Karlsruhe. Other active experiments use dense materials such as ice, salt or lunar regolith, and therefore study higher frequency domains; these include RICE and ANITA (Antarctic), the FORTE satellite (Greenland), SALSA (Mississippi) and GLUE (NASA-Goldstone).
In addition, the CODALEMA results indicate specific features that encourage the use of this technique as a complementary method to experiments based on large ground detector arrays, such as the Pierre Auger Observatory. The CODALEMA collaboration is currently investigating this possibility. In addition, it is considering the exploitation of the large zenithal acceptance for the challenging study of very inclined showers, which correspond to large slant depths in the atmosphere (or Earth). For example, while suppressed by the Earth’s opacity and barely accessible to other techniques, high-energy neutrinos can interact at any point along such trajectories, producing τ particles and subsequent detectable “young” showers in the atmosphere. Another interesting application is the characterization of distant storms and very energetic atmospheric radiation events of currently unknown origin. An upgraded experimental set-up has been running since November 2006, consisting of 16 antennas with new active wide-bandwidth dipoles together with 13 particle detectors to allow shower-energy determinations for calibration purposes.
• The CODALEMA collaboration comprises three laboratories from the Institut National de Physique Nucléaire et de Physique des Particules (IN2P3): SUBATECH/Nantes, LPSC/Grenoble and LAL/Orsay; two laboratories from the Institut National des Sciences de l’Univers (INSU): Observatoire de Paris/LESIA and LAOB/Besançon; the LPCE/Orleans CNRS laboratory; and the private ESEO Angers Institute.
On 24 January the linac for the Japan Proton Accelerator Research Complex (J-PARC) successfully accelerated a beam of negative hydrogen ions up to 181 MeV, the design energy for Phase I of the project. The acceleration to the full energy is three months earlier than scheduled.
J-PARC, which is a joint project between the High Energy Accelerator Research Organization (KEK) and the Japan Atomic Energy Agency (JAEA), is being built at Tokai, approximately 120 km north of Tokyo. The accelerator system will comprise a 400 MeV proton linac (181 MeV at the first stage of Phase I), a 3 GeV, 25 Hz Rapid-Cycling Synchrotron (RCS), and a 50 GeV Main Ring Synchrotron. The RCS provides the Materials and Life Science Experimental Facility with a 1 MW beam to generate pulsed muons and pulsed spallation neutrons. Every 3 s, the beam from the RCS is injected four times into the Main Ring, where it is ramped up to 50 GeV (40 GeV at Phase I). Fast extraction then provides a beam for neutrino production and slow extraction sends beam to the Hadron Experimental Facility (HDF). The neutrinos travel to the Super-Kamiokande detector located 295 km to the west, while the slowly extracted beam will produce secondary beams for hyper-nuclei experiments, rare-decay experiments with kaons, hadron-spectroscopy experiments and so on, or will serve primary beam experiments in the HDF.
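To put the quoted 1 MW RCS beam in context, a rough estimate (illustrative arithmetic only, using the figures above) of the number of protons delivered per pulse:

```python
# Protons per RCS pulse implied by a 1 MW beam at 3 GeV and 25 Hz.
EV_TO_J = 1.602176634e-19  # J per eV

power = 1.0e6     # W, design beam power of the RCS
energy = 3.0e9    # eV, proton kinetic energy at extraction
rep_rate = 25.0   # Hz, RCS repetition rate

joules_per_pulse = power / rep_rate                        # 40 kJ per pulse
protons_per_pulse = joules_per_pulse / (energy * EV_TO_J)  # ~8.3e13 protons
```

Each pulse thus carries of the order of 10¹⁴ protons, which is what makes the RCS an intense source of pulsed muons and spallation neutrons.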
Construction of J-PARC started in April 2001, and beam commissioning began some five years later. The radio-frequency quadrupole (RFQ) linac accelerated beam up to 3 MeV on 20 November 2006, the very day that the beam commissioning started. A month later, on 19 December, the team accelerated the beam up to 20 MeV using the first tank of the drift-tube linac (DTL), and up to 50 MeV using the second and third tanks the next day. Then at midnight on 19 January all of the 30 separated-type DTL (SDTL) cavities, which are driven by 15 klystrons, were ready for acceleration up to 181 MeV. First, the commissioning team performed a phase scan with each pair of the SDTL cavities driven by one klystron before finally completing the scan through 15 pairs on 24 January.
In each scan the team measured the beam energy by the time-of-flight method. For the initial beam study, the peak current, the beam pulse length and the repetition rate were set at 5 mA, 20 μs and 2.5 Hz, respectively, to avoid possible damage to accelerator components should something go wrong at high beam power. During the commissioning, both the RF power source and the cavity system proved to be very stable. This stability and the very accurate alignment of all of the magnets, especially in the drift-tube linac, were the two major factors that allowed rapid tuning of the linac, three months ahead of schedule.
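The time-of-flight energy determination rests on straightforward relativistic kinematics: the measured flight time over a known path gives β, hence γ and the kinetic energy. A minimal sketch, with an assumed 10 m flight path (the actual path lengths between J-PARC beam monitors are not given here):

```python
import math

C = 299792458.0  # speed of light, m/s
M_P = 938.272    # proton rest energy, MeV (H- ion ~ proton for this estimate)

def time_of_flight(kinetic_mev, path_m):
    """Flight time of a particle with the given kinetic energy."""
    gamma = 1.0 + kinetic_mev / M_P
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return path_m / (beta * C)

def kinetic_from_tof(t_s, path_m):
    """Invert a measured time of flight to a kinetic energy in MeV."""
    beta = path_m / (t_s * C)
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * M_P

t = time_of_flight(181.0, 10.0)  # ~61 ns for the 181 MeV beam
```

At 181 MeV the ions travel at roughly 0.55c, so sub-nanosecond timing resolution translates into a sub-percent energy measurement over such a baseline.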
Klystrons drive the J-PARC DTL, but it also has quadrupole electromagnets, which are variable-focusing elements. To meet the conflicting requirements of these two systems, the researchers chose an RF acceleration frequency of 324 MHz rather than the widely used 350 MHz; the compact quadrupole electromagnets were developed with the full use of electroforming and wire-cutting techniques; and industry developed the 324 MHz klystrons with a pulsed power of 3 MW (500 μs and 50 Hz) in close collaboration with the J-PARC linac team.
The combination of a 3 MeV RFQ linac and a 324 MHz RF source is now considered the best choice for the front end of a proton linac by many future projects: at Fermilab, at the ISIS spallation neutron source in the UK, at the Facility for Antiproton and Ion Research at GSI and at the Chinese Spallation Neutron Source. This is partly because 324 or 325 MHz, 3 MW pulsed klystrons are now available, and partly because the frequency is a quarter of the 1.3 GHz L-band frequency that would be used in a future superconducting International Linear Collider.
The RCS beam commissioning will start in autumn 2007, while the beam commissioning of the Main Ring and the muon and neutron production targets together with their beam transports will start in May 2008. By the end of 2008, the complex will be ready for the J-PARC users. The success of the linac beam commissioning earlier than scheduled is encouraging news.
In early January ALICE’s time projection chamber (TPC) moved 300 m from the assembly hall to the experiment cavern, taking four days to complete the journey. The cylinder, 5 m long and 5 m in diameter, weighs 8 tonnes and is extremely fragile.
The first steps included lifting the TPC with an overhead crane from the cleanroom in the assembly hall and positioning it onto four hydraulic jacks, which raised the TPC to a height of 80 cm. A flatbed truck then slid gently under the structure and carried it carefully to the entrance of the cavern, never tilting it by more than 2°. The next step was to lower the cylinder 50 m into the ALICE cavern. This proved challenging, with just 10 cm of leeway between the delicate TPC and the shaft walls. Finally, a gantry crane moved the TPC close to its final position within the solenoid magnet, where work will begin on installing the internal tracking system.
The TPC is built of very light, fragile carbon-fibre. Its surface structure, or field cage, is covered with 30,000 Mylar strips secured with the utmost precision. The two endcaps carry the electronic read-out channels, connected by several thousand flat cables to two service support wheels, which support the electrical, electronic and gas-supply systems. In May, the TPC will be tested in its underground location using cosmic rays.
The first End-Cap Toroid (ECT) for the ATLAS experiment at the LHC has begun the last stage of its journey to the underground cavern. Now that the assembly of the cold mass, integration in the vacuum vessel and connection to the vacuum pumps, cryogenic lines and current leads are all complete, the toroid will undergo a cooling test on the surface before being lowered into the cavern.
The 5 m long, 11 m diameter, 240 tonne structure is one of two similar ECTs, the last large magnets to be installed inside ATLAS. Moving at about 1 km/h on a special transport trailer, it left the assembly hall on the Meyrin site in preparation for cold-testing at a nearby outdoor location. The transport operation was extremely delicate: the slightest wrong turn or movement could have caused the tall structure to sway at an angle that would cause serious damage to the fragile parts inside. The toroid cold mass is suspended inside the vacuum vessel by four gravity rods, and tipping the ECT at too large an angle could have damaged these rods.
During the surface test, the ECT is being cooled to 80 K using the cryogenic plant in a nearby building. Tests will check for cold leaks in the cooling circuits and verify the electrical insulation of the coils under thermal stress. During the 300–80–300 K thermal cycling, all of the crucial components as well as the magnet’s instrumentation will be thoroughly checked to make sure that it will function properly once installed underground.
The test will last until mid-March. The toroid will then be lowered into the ATLAS cavern in early June for final commissioning, when it will be cooled to 4.5 K using the ATLAS cryogenic plant and charged up to the nominal current of 20.5 kA. The second ECT is scheduled for lowering in early July, just in time for closure of the LHC beam pipe in August.
The strategy for building the CMS detector is unique among the four major experiments for the LHC at CERN. The collaboration decided from the beginning that assembling the large units of the detector would take place in a surface hall before lowering complete sections into the underground cavern. At the time, the main driving factor was the need to cope with late delivery of the underground cavern, a consequence of running the previous accelerator, LEP, combined with civil-engineering works complicated by the geology of the terrain. Another goal was to minimize the large underground assembly operations, which would inevitably take more time and be more complex and risky in the confined space of the cavern. As construction and assembly progressed above ground, however, it became clear that there would be a valuable opportunity for system integration and commissioning on the surface.
The complexity of CMS and the other LHC experiments is unprecedented. For this reason, the collaboration believed that the early combined operation of the various subsystems would be an important step towards a working experiment capable of taking data as soon as the LHC provides colliding beams. Initial plans focused on testing the state-of-the-art 4 T solenoid. This would require closing the yoke, already substantially instrumented with muon chambers. Since final elements of other subsystems would also be available by this stage, installed in their final locations, the idea of staging a combined system test in the surface hall became an attractive possibility.
Such a test also required the presence of the full magnet control system and scaled-down versions of the detector control, data-acquisition (DAQ) and safety systems. After much brainstorming and pragmatic criticism, the idea developed into the “cosmic challenge”, for which the overall benchmark of success was the recording, and ultimate reconstruction, of cosmic-muon tracks passing through all sub-detectors simultaneously. This objective alone placed a big demand on the compatibility and interoperability of the sub-detectors, the magnet, the central DAQ, the control and monitoring systems and the offline software. The groups working on the Electromagnetic Calorimeter (ECAL) and the Tracker decided to find the resources to contribute active elements, rather than passive mechanical structures. This was a major factor in the positive feedback that eventually led virtually all of the systems needed to operate CMS in the LHC pilot run to participate in the Magnet Test and Cosmic Challenge (MTCC).
In more detail, the objectives of the cosmic challenge were to: check closure tolerances, movement under a magnetic field, and the muon alignment system; check the field tolerance of yoke-mounted components; check the installation and cabling of the ECAL, the Hadron Calorimeter (HCAL), and Tracker inside the coil; test combined sub-detectors in 20° slice(s) of CMS with the magnet, using as near as possible final readout and auxiliary systems to check noise and interoperability; and last but not least, trigger and record cosmic muons and try out operational procedures.
In addition, the cosmic tests had to make no significant impact on progress in assembling the detector, and had to take place in the shadow of the work on commissioning and field-mapping the magnet. The tests also had to complement the trigger-system (high rate) tests taking place in the electronics-integration centre. Moreover, the aim was to use final systems as far as possible, that is with no (or very few) specific developments for the cosmic test. Another important aspect was to build a fully functional commissioning and operations team of experts from a collaboration that brings together more than 2000 people from laboratories worldwide, transcending linguistic and cultural backgrounds.
In order not to interfere with assembly work, electronics racks and control rooms for the tests were installed just outside the surface-assembly building in a large control barrack recovered from the OPAL experiment at LEP. Substantial investments were nonetheless needed in the surface hall, general and sub-system infrastructure, the triggering system, some temporary power supply systems, and in the tracker “slice” that was specially made for the cosmic challenge within a full replica of the final containment tube.
As the project progressed, the collaboration began to recognize its importance as a first test of intra-collaboration communication and remote participation, and the original scope expanded to include more substantial objectives for offline as well as online systems. A series of Run Workshops, culminating in a readiness review in June 2006, established the final objectives of the project. Weekly Run Meetings, open to all of CMS and eventually becoming daily, also ensured coordination. Ultimately, the diligent work of hundreds of people, aided by a little good fortune, transformed the cosmic challenge into a cosmic success for CMS.
Four sub-detectors took part in the challenge. The silicon tracker system comprised 75 modules of the Tracker Inner Barrel in two partially populated layers, 24 modules of the Tracker Outer Barrel, and 34 modules of the Tracker Endcap system in two partially populated “petals”. By normal standards these 133 modules were a substantial system, comparable to any silicon detector used at LEP. It is worth remarking that this represents only 1% of the final CMS system, by far the largest ever built using silicon detectors. In addition, there were two barrel supermodules comprising 3400 lead tungstate crystals of the ECAL, or about 5% of the total; eight barrel sectors (22%), four endcap sectors (11%), and four sectors of the outer barrel section of the HCAL. For muon detection there were three (out of 60) muon barrel sectors, consisting of drift tube (DT) and resistive plate chambers (RPC), together with cathode strip chambers (CSCs) forming endcap muon chambers – in all, 8% of the total system.
As was the case for the sub-detectors, all common support systems were tested in close to final versions, using in most cases production hardware and software. The first priority was the definition and implementation of elements of the Detector Safety System. The teams had also to integrate sub-detectors with the central Detector Control System and a scaled-down version of the trigger system. The tests used the central DAQ with its final architecture and approximately 1% of the final computing power, and successfully operated the integrated run control, event builder, event filter, data storage and transfer to the CERN Advanced Storage manager (CASTOR). Throughout the whole exercise a fully functional event display enabled a simple and quick feedback on the status of different sub-detectors.
Other important organizational components of the operations were the consistent use of an electronic logbook, webcams, video-conferencing tools and Wiki-based documentation, as well as web-based monitoring, which was extensively tested. The challenge involved data transfers from the Tier-0 centre (at CERN) to several Tier-1 centres (at CNAF/Bologna, PIC/Madrid and Fermilab) through the Physics Experiment Data Export (PhEDEx) system, exercising fast offline analysis and remote monitoring at CERN's Meyrin site as well as at the Fermilab Remote Operations Center.
There were two distinct phases of the cosmic challenge. The first phase, in July and August 2006, ran parasitically with the commissioning of the magnet. During this phase around 25 million “good” events were recorded with at least DT triggers and with the ECAL and Tracker slice in the readout. Of these, 15 million events were taken at a stable magnetic field of at least 3.8 T – close to the maximum field of 4 T. A few thousand of the events met the benchmark: a cosmic ray recorded in all four CMS sub-detector systems – Tracker, ECAL, HCAL and muon system – at nominal magnetic field. The image of the first of these events rapidly became a symbol of the success of the cosmic challenge and a demonstration of the CMS detector “as built”. During the challenge, data-taking efficiency exceeded 90% for extended periods. Data transfer to several Tier-1 centres, the online event display, quasi-online analysis on the Meyrin site and fast offline data-checking at Fermilab were some of the highlights of Phase I, which in this way offered a first taste of the full running experience. One encouraging result was the good agreement between the predicted and measured cosmic-muon spectra, in both momentum and angular distributions, using the new CMS software, CMSSW.
Phase II took place during October and November, after an efficiently executed “cosmic shutdown”, during which the tracker slice and the ECAL were removed and replaced with a field-mapper. While not as glamorous as the first phase, Phase II provided a wealth of solid information relevant to commissioning and operating CMS as an instrument for physics. For this phase, the team corrected and tested several minor faults found in Phase I in the magnet, detectors and central systems, and took more data with the HCAL and muon systems. Phase II recorded about 250 million events for studies of calibration, alignment and efficiency. The measurements made of the effect of the magnetic field on the response of the HCAL and on the drift paths in the muon barrel DTs were particularly crucial. Integration work on aspects of the trigger also allowed data to be recorded with some final systems.
Less than two weeks after the end of the magnet tests, the CMS detector was fully re-opened so the major elements could begin to be lowered into the experiment cavern. Meanwhile, work on analysing the millions of cosmic-ray events recorded in the cosmic challenge continues in many of the institutes in the collaboration. Now, as attention turns to completing the remaining assembly and installation of the muon, tracking and ECAL systems, the whole collaboration is looking forward eagerly, and with confidence, to re-assembling the detector underground and repeating the exciting and successful accomplishments of 2006, but this time with tracks from collisions of LHC beams.