
Rewriting the rules on proton acceleration

For half a century, the synchrotron has been the workhorse of high-energy particle physics, from its first use with external beams to the modern particle colliders. The basic principle is to use the electric field of a radio-frequency (RF) wave to accelerate charged particles, with the frequency varied to stay in step with the particles as they follow a constant trajectory through a ring of guiding magnets.


Now a team has demonstrated a different way of accelerating the protons in tests at the proton synchrotron (PS) at KEK, the Japanese High Energy Accelerator Research Organization in Tsukuba. For the first time, a bunch of protons in the synchrotron has been accelerated by an induction method (K Takayama et al. 2004). The technique may overcome certain effects that normally limit the intensity achieved in a synchrotron beam, and could prove to be an important advance for future proton colliders.

The concept of an “induction synchrotron” was first proposed about five years ago by the author and Jun-ichi Kishiro of KEK and the Japan Atomic Energy Research Institute (Takayama and Kishiro 2000). The idea was to overcome shortcomings of the RF synchrotron, in particular the limited longitudinal phase-space available for the acceleration of charged particles – in other words the distribution in energy and position around the ring of the particles being accelerated. In a conventional synchrotron, the particles are accelerated when they pass through an RF cavity, a device that contains the oscillating radio wave. The electric field naturally concentrates the particles into bunches in the direction of motion (i.e. longitudinally).

In the induction synchrotron, however, the accelerating devices are replaced with induction devices, in which a changing magnetic field produces the electric field to accelerate the particles. The basic device is a ferromagnetic ring, or core, through which the particles pass. A pulsed voltage sets up a magnetic field, and the changing magnetic flux in turn induces an electric field along the axis of the core. The induction-acceleration technique was first developed in the late 1960s and has a range of applications in linear accelerators, but the recent KEK experiment was the first time it was applied in a circular machine.


The system consists basically of an induction cavity with three cells driven by a pulse modulator as shown in figure 1. The cells developed for the experiment, which are rather like one-to-one transformers, use a nanocrystalline alloy as the magnetic-core material. The pulse modulator is connected to the acceleration cavity through a 40 m long transmission cable to keep the modulator far from the accelerator, where its solid-state switching elements would be exposed to high radiation. A matching resistance at the driver end reduces reflections. The pulse modulator can be operated in various modes from burst to 1 MHz continuous-wave via a system controlled by a digital signal processor (DSP).

In July 2004 the system was demonstrated to be capable of generating a step-pulse of 2 kV and a peak current of 18 A at 1 MHz with a duty cycle of 50%. It was then installed in the KEK PS in September, ready to test induction acceleration.
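A rough Faraday's-law estimate shows what the quoted pulse parameters imply for the magnetic cores. The sketch below assumes (these are not stated in the article) that the 2 kV step is shared equally by the three cells and that the usable flat-top is half of one 1 MHz period, i.e. the quoted 50% duty cycle.

```python
# Rough Faraday's-law estimate of the flux swing each induction cell
# must deliver, using only the figures quoted in the text.
# Assumptions (not from the article): the 2 kV step is shared equally
# by the three cells; the flat-top is half of one 1 MHz period.

n_cells = 3
v_total = 2e3            # total step-pulse voltage [V]
f_rep = 1e6              # repetition rate [Hz]
duty = 0.5               # duty cycle

v_cell = v_total / n_cells          # voltage per cell [V]
t_pulse = duty / f_rep              # flat-top length [s]

# Faraday's law: V = dPhi/dt, so the flux swing per pulse is
delta_phi = v_cell * t_pulse        # [Wb] = [V s]

print(f"voltage per cell: {v_cell:.0f} V")
print(f"pulse length:     {t_pulse * 1e6:.2f} us")
print(f"flux swing:       {delta_phi * 1e6:.0f} uWb per cell")
```

A few hundred microwebers per pulse is comfortably within reach of a nanocrystalline core of a few square centimetres cross-section, which is consistent with the compact transformer-like cells described above.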

For the experiment a single bunch of 6 × 10¹¹ protons was injected into the main ring at 500 MeV, trapped in an RF bucket and accelerated up to 8 GeV. The aim was that the RF would simply capture the beam bunch while the induction voltage provided the acceleration. The timing of the master trigger for the pulse modulator was adjusted via the DSP so that the signal from the bunch stayed around the centre of the induction voltage pulse for the entire accelerating period. Figure 2 shows typical waveforms of the induction voltage signals for the three cells, plus the bunch signals.


To confirm the induction acceleration, the relative phase difference Δφ between the RF and the bunch centre was measured for three cases: with the RF voltage alone; with both the RF and positive induction voltages for acceleration; and with the RF and negative induction voltages. With both the RF and induction voltages, the centre of the bunch receives an effective voltage per turn of V = Vrf sinφ + Vind, where Vrf and Vind are the RF and the induction voltages respectively, and φ is the position of the bunch in the RF phase. A value for V of 4.8 kV is required for the RF bunch to follow the linearly ramping bending field of the synchrotron magnets.

Figure 3 shows the temporal evolution of the measured phase for the three cases. The results agree closely with the values predicted from the voltage-per-turn equation: φ = 5.7°, -1.0° and 12.4° for the three cases, with Vrf = 48 kV and Vind = 5.6 kV. The position of the proton bunch relative to the RF voltage for each case is also shown schematically on the right in the same figure. The plots indicate the successful acceleration of the bunch beyond the transition energy. This is the critical energy, characteristic of a strong-focusing synchrotron, at which the particles’ revolution period becomes almost independent of energy and the stable phase position switches from one side of the RF pulse to the other, as indicated in figure 3.
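The predicted phases can be reproduced by solving the voltage-per-turn relation for φ. The sketch below uses only the voltages quoted in the text; small differences from the published angles (12.5° here versus the quoted 12.4°) presumably come from rounding.

```python
import math

# Solve V = Vrf*sin(phi) + Vind for the bunch phase phi,
# using the voltages quoted in the article.

V_turn = 4.8e3   # required accelerating voltage per turn [V]
V_rf = 48e3      # RF voltage [V]

def phase_deg(v_ind):
    """Bunch phase phi such that Vrf*sin(phi) + Vind = V_turn."""
    return math.degrees(math.asin((V_turn - v_ind) / V_rf))

for label, v_ind in [("RF alone", 0.0),
                     ("RF + positive induction", +5.6e3),
                     ("RF + negative induction", -5.6e3)]:
    print(f"{label:25s}: phi = {phase_deg(v_ind):+.1f} deg")
```

With the positive induction voltage the phase sits near zero: the induction cells supply slightly more than the required 4.8 kV, so the RF contributes almost nothing to the acceleration and serves only as a longitudinal trap, exactly the division of labour the experiment set out to demonstrate.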


These results are the first step in demonstrating the feasibility of an induction synchrotron, which could have important implications for future machines. A significant advantage of the induction technique is that the functions of acceleration and longitudinal focusing are achieved separately. This is not the case in the RF synchrotron where the gradient in the electric field provides the longitudinal confinement. In an induction machine voltage pulses of opposite sign separated by some time period can provide the longitudinal focusing forces. A pair of barrier-voltage pulses should work in a similar way to the RF barrier, which has been demonstrated at Fermilab, Brookhaven National Laboratory and CERN.

Separating the acceleration and focusing functions in the longitudinal direction brings significant freedom in beam handling compared with conventional RF synchrotrons. In particular, it offers a means of forming a “superbunch”: an extremely long beam bunch with a uniform density that would be most attractive in future hadron colliders and proton drivers for neutrino physics. In addition, crossing the transition energy without any longitudinal focusing seems to be feasible, and this could substantially mitigate undesired phenomena such as bunch-shortening from non-adiabatic motion and microwave instabilities. The next step at KEK will be to test the barrier-voltage concept, proceeding further towards the formation of a superbunch in an induction synchrotron.

LHC upgrade takes shape with CARE and attention

CERN’s Large Hadron Collider (LHC), first seriously discussed more than 20 years ago, is scheduled to begin operating in 2007. The possibility of upgrading the machine is, however, already being seriously studied. By about 2014, the quadrupole magnets in the interaction regions will be nearing the end of their expected radiation lifetime, having absorbed much of the power of the debris from the collisions. There will also be a need to reduce the statistical errors in the experimental data, which will require higher collision rates and hence an increase in the intensity of the colliding beams – in other words, in the machine’s luminosity.


This twofold motivation for an upgrade in luminosity is illustrated in figure 1, which shows two possible scenarios compatible with the baseline design: one in which the luminosity stays constant from 2011 and one in which it reaches its ultimate value in 2016. An improved luminosity will also increase the physics potential, extending the reach of electroweak physics as well as the search for new modes in supersymmetric theories and new massive particles, some of which could be manifestations of extra dimensions.

An upgrade timescale of about 10 years from now turns out to be just right for the development, prototyping and production of new superconducting magnets for the interaction regions and of other equipment, provided that an adequate R&D effort starts now. It is against this background that the European Community has supported the High-Energy High-Intensity Hadron-Beams (HHH) Networking Activity, which started in March 2004 as part of the Coordinated Accelerator Research in Europe (CARE) project. HHH has three objectives:
• to establish a roadmap for upgrading the European hadron accelerator infrastructure (at CERN with the LHC and also at Gesellschaft für Schwerionenforschung [GSI], the heavy-ion laboratory in Darmstadt);
• to assemble a community capable of sustaining the technical realization and scientific exploitation of these facilities;
• to propose the necessary accelerator R&D and experimental studies to achieve these goals.
The HHH activity is structured into three work packages. These are named Advancements in Accelerator Magnet Technology, Novel Methods for Accelerator Beam Instrumentation, and Accelerator Physics and Synchrotron Design.


The first workshop of the Accelerator Physics and Synchrotron Design work package, HHH-2004, was held at CERN on 8-11 November 2004. Entitled “Beam Dynamics in Future Hadron Colliders and Rapidly Cycling High-Intensity Synchrotrons”, it was attended by around 100 accelerator and particle physicists, mostly from Europe, but also from the US and Japan. With the subjects covered and the range of participants, the workshop was also able to reinforce vital links and co-operative approaches between high-energy and nuclear physicists and between accelerator-designers and experimenters.

The first session provided overviews of the main goals. Robert Aymar, director-general of CERN, reviewed the priorities of the laboratory until 2010, mentioning among them the development of technical solutions for a luminosity upgrade for the LHC to be commissioned around 2012-2015. The upgrade would be based on a new linac, Linac 4, to provide more intense proton beams, together with new high-field quadrupole magnets in the LHC interaction regions to allow for smaller beam sizes at the collision-points – even with the higher-intensity circulating beams. It would also include rebuilt tracking detectors for the ATLAS and CMS experiments. Jos Engelen, CERN’s chief scientific officer, encouraged the audience to consider the upgrade of the LHC and its injector chain as a unique opportunity for extending the physics reach of the laboratory in the areas of neutrino studies and rare hadron decays, without forgetting the requirements of future neutrino factories.


For the GSI laboratory, the director Walter Henning described the status of the Facility for Antiproton and Ion Research project (FAIR), and its scientific goals for nuclear physics. He pointed to the need for wide international collaboration to launch this accelerator project and to complete the required R&D.

Further talks in the session looked in more detail at the issues involved in an upgrade of the LHC. Frank Zimmermann and Walter Scandale from CERN presented overviews of the accelerator physics and the technological challenges, addressing possible new insertion layouts and scenarios for upgrading the injector-complex. The role of the US community through the LHC Accelerator Research Program (LARP) was described by Steve Peggs from the Brookhaven National Laboratory, who proposed closer coordination with the HHH activity. Finally, Daniel Denegri of CERN and the CMS experiment addressed the challenges to be faced if the LHC detectors are to make full use of a substantial increase in luminosity. He also reviewed the benefits expected for the various physics studies.

The five subsequent sessions were devoted to technical presentations and panel discussions on more specific topics, ranging from the challenges of high-intensity beam dynamics and fast-cycling injectors, to the development of simulation software. A poster session with a wide range of contributions provided a welcome opportunity to find out about further details, and a summary session closed the workshop.

The luminosity challenge

The basic proposal for the LHC upgrade is, after seven years of operation, to increase the luminosity by up to a factor of 10, from the current nominal value of 10³⁴ cm⁻² s⁻¹ to 10³⁵ cm⁻² s⁻¹. The table compares nominal and ultimate LHC parameters with those for three upgrade paths examined at the workshop.
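As a back-of-the-envelope check of the nominal figure, the luminosity of round Gaussian beams follows from the standard formula L = Nb² nb frev γ / (4π εn β*) × F. The parameter values below are the nominal LHC design values, assumed here for illustration since the article's table is not reproduced.

```python
import math

# Standard luminosity formula for round Gaussian beams:
#   L = (Nb^2 * nb * frev * gamma) / (4*pi * eps_n * beta*) * F
# Parameter values are the nominal LHC design values (assumptions here).

Nb = 1.15e11        # protons per bunch
nb = 2808           # bunches per beam
f_rev = 11245.0     # revolution frequency [Hz]
gamma = 7461.0      # relativistic gamma at 7 TeV
eps_n = 3.75e-6     # normalized transverse emittance [m rad]
beta_star = 0.55    # beta function at the interaction point [m]
F = 0.836           # geometric reduction from the crossing angle

L = Nb**2 * nb * f_rev * gamma / (4 * math.pi * eps_n * beta_star) * F
print(f"L = {L * 1e-4:.2e} cm^-2 s^-1")   # convert m^-2 to cm^-2
```

The result lands at about 10³⁴ cm⁻² s⁻¹, and the formula makes the upgrade levers explicit: more protons per bunch, more bunches, or a smaller β* via the new low-beta quadrupoles discussed below.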

The upgrade currently under discussion will include building essentially new interaction regions, with stronger or larger-aperture “low-beta” quadrupoles in order to reduce the spot size at the collision-point and to provide space for greater crossing angles. Moderate modifications of several subsystems, such as the beam dump, machine protection or collimation, will also be required because of the higher beam current. The choice between possible layouts for the new interaction regions is closely linked to both magnet design and beam dynamics; different approaches could accommodate smaller or larger crossing angles, combined respectively with electromagnetic compensation of long-range beam-beam collisions or with “crab” cavities (as described below). A more challenging possibility also envisions the upgrade of the LHC injector chain, employing concepts similar to those being developed for the FAIR project at GSI.

The workshop addressed a broad range of accelerator-physics issues. These included the generation of long and short bunches, the effects of space charge and the electron cloud, beam-beam effects, vacuum stability and conventional beam instabilities.

A key outcome is the elimination of the “superbunch” scheme for the LHC upgrade, in which each proton beam is concentrated into only one or a few long bunches, with much larger local charge density. Speakers at the workshop underlined that this option would pose unsolvable problems for the detectors, the beam dump and the collimator system.

For the other upgrade schemes considered, straightforward methods exist to decrease or increase the bunch length in the LHC by a factor of two or more, possibly with a larger bunch intensity. Joachim Tuckmantel and Heiko Damerau of CERN proposed adding conventional radio-frequency (RF) systems operating at higher-harmonic frequencies to vary the bunch length and in some cases also the longitudinal emittance of the beam, while Ken Takayama promoted a novel scheme based on induction acceleration.

Experiments at CERN and GSI, reported by Giuliano Franchetti of GSI, have clarified mechanisms of beam loss and beam-halo generation, both of which occur as a result of synchrotron motion and space-charge effects due to the natural electrical repulsion between the beam particles. These mechanisms have been confirmed in computer simulations. Studies of the beam-beam interaction – the electromagnetic force on a particle in a beam of all the particles in the other beam – are in progress for the Tevatron at Fermilab, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven, and the LHC.

Tanaji Sen of Fermilab showed that sophisticated simulations can reproduce the lifetimes observed for beam in the Tevatron, and Werner Herr of CERN presented self-consistent 3D simulations for beam-beam interactions in the LHC. Kazuhito Ohmi of KEK conjectured on the origin in hadron colliders of the beam-beam limit, the current threshold above which the size of colliding beams increases with increasing beam intensity. If the limit in the LHC arises from diffusion related to the crossing angle, then RF “crab” cavities, which tilt the particle bunches so that they collide effectively head-on despite the crossing angle of the bunch centroids, could raise the luminosity beyond the purely geometrical gain of making the beams collide head-on.
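The purely geometric part of that gain is easy to quantify. For Gaussian bunches the crossing-angle reduction factor is F = 1/√(1 + (θc σz / 2σ)²), where θc σz / 2σ is the Piwinski angle. The numbers below are nominal LHC design values, assumed here for illustration.

```python
import math

# Geometric luminosity reduction from a crossing angle, which crab
# cavities would recover. Nominal LHC design values (assumptions).

theta_c = 285e-6    # full crossing angle [rad]
sigma_z = 7.55e-2   # rms bunch length [m]
sigma_xy = 16.7e-6  # rms transverse beam size at the IP [m]

piwinski = theta_c * sigma_z / (2 * sigma_xy)   # Piwinski angle
F = 1.0 / math.sqrt(1.0 + piwinski**2)

print(f"Piwinski angle: {piwinski:.2f}")
print(f"geometric factor F = {F:.2f}, crab-crossing gain ~ {1 / F:.2f}x")
```

With nominal parameters the geometric factor is roughly 0.84, so fully compensating the crossing angle buys a luminosity gain of order 20%; the point made at the workshop is that crab crossing could gain more than this if the beam-beam limit itself is set by crossing-angle diffusion.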

Another effect to consider in the LHC is the electron cloud, created initially when synchrotron radiation from the protons releases photoelectrons at the beam-screen wall. The photoelectrons are pulled toward the positively charged proton bunch and in turn generate secondary electrons when they hit the opposite wall. Jie Wei of Brookhaven presented observations made at RHIC, which demonstrate that the electron cloud becomes more severe for shorter intervals between bunches. This may complicate an LHC upgrade based on shorter bunch-spacing. Oswald Gröbner of CERN also pointed out that secondary ionization of the residual gas by electrons from the electron cloud could compromise the stability of the vacuum.

The wake field generated by an electron cloud requires a modified description compared with a conventional wake field from a vacuum chamber, as Giovanni Rumolo of GSI discussed. His simulations for FAIR suggest that instabilities in a barrier RF system, with a flat bunch profile, qualitatively differ from those for a standard Gaussian bunch with sinusoidal RF. Elias Metral of CERN surveyed conventional beam instabilities and presented a number of countermeasures.

The simulation challenge

In the sessions on simulation tools, a combination of overview talks and panel discussions revisited existing tools and determined the future direction for software codes in the different areas of simulation. The tools available range from well-established commercial impedance-calculation packages to the rapidly evolving codes being developed to simulate the effects of the electron cloud. Benchmarking of codes to increase confidence in their predictive power is essential. Examples discussed included beam-beam simulations and experiments at the Tevatron, RHIC and the Super Proton Synchrotron at CERN; impedance calculations and bench measurements (e.g. for the LHC kicker magnets and collimators); observed and predicted impedance effects (at the Accelerator Test Facility at Brookhaven, DAFNE at Frascati and the Stanford Linear Collider at SLAC); single-particle optics calculations for HERA at DESY, SPEAR-3 at the Stanford Linear Accelerator Center, and the Advanced Light Source at Berkeley; and electron-cloud simulations.

Giulia Bellodi of the Rutherford Appleton Laboratory, Miguel Furman of Lawrence Berkeley National Laboratory, and Daniel Schulte of CERN suggested creating an experimental data bank and a set of standard models, for example for vacuum-chamber surface properties, which would ease future comparisons of different codes. New computing issues, such as parallelization, modern algorithms, numerical collisions, round-off errors and dispersion on a computing Grid were also discussed.

The simulation codes being developed should support all stages of an accelerator project, whose requirements shift as the project evolves; communication with other specialized codes is also often required. The workshop therefore recommended that codes should have toolkits and a modular structure as well as a standard input format, for example in the style of the Methodical Accelerator Design (MAD) software developed at CERN.

Frank Schmidt and Oliver Bruning of CERN stressed that the MAD-X program features a modular structure and a new style of code management. For most applications, complete, self-consistent, 3D descriptions of systems have to co-exist with the tendency towards fast, simplified, few-parameter models – conflicting aspects that can in fact be reconciled by a modular code structure.

The workshop established a list of priorities and future tasks for the various simulation needs and, in view of the rapidly growing computing power available, participants sketched the prospect of an ultimate universal code, as illustrated in figure 2.

• The HHH Networking Activity is supported by the European Community-Research Infrastructure Activity under the European Union’s Sixth Framework Programme “Structuring the European Research Area” (CARE, contract number RII3-CT-2003-506395).

STAR has silicon at its core

STAR silicon vertex tracker.

An important milestone has been reached in the STAR experiment at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) with the integration of the silicon strip detector (SSD). The installation completes the STAR ensemble, which is dedicated to tracking the thousands of charged particles that emerge at large angles to the colliding beams. The SSD has been fully commissioned and is now collecting data.

The completion of the STAR SSD is the result of a multi-year French research and development effort that began shortly after STAR started taking data in 2000, and which was led by the Laboratoire de Physique Subatomique et des Technologies Associées (Subatech) in Nantes and the Institut de Recherche Subatomique (IReS) in Strasbourg. The detector incorporates state-of-the-art bonding technology as well as front-end electronics and control chips designed and developed by the Laboratoire d’Electronique et de Physique des Systèmes Instrumentaux (LEPSI) in Strasbourg.

The SSD makes extensive use of double-sided silicon microstrip sensors and has a total sensitive area of about 1 square metre. The detector consists of 320 detector modules arranged on 20 ladders (see figures 1 and 2). These form a barrel at a radius of 23 cm from the beam, inserted between the silicon vertex tracker (SVT) and the time projection chamber (TPC).

A section of the silicon strip detector.

The detector enhances the tracking capabilities of the STAR experiment in this region by providing information on the positions of hits and on the ionization energy loss of charged particles. Specifically, the SSD improves the extrapolation of tracks in the TPC to the hits found in the SVT. This increases the average number of space points measured near the collision vertex, significantly improving the detection efficiency for long-lived meta-stable particles such as those found in hyperon decays. Moreover, the SSD will further enhance the SVT tracking capabilities for particles with very low momentum, which do not reach the TPC.

The SSD was based on an early proposal for the Inner Tracking System of the ALICE experiment at CERN’s Large Hadron Collider; however, the design of the detector has evolved and matured considerably after several years of research, development and prototyping. To fulfil the constraints of the STAR environment, innovative solutions were required for electronics, connections and mechanics.

The detector module comprises one double-sided silicon microstrip sensor and floating electronics on two hybrid circuits of very low mass (see figure 3). The silicon sensor contains 1536 analogue channels (768 x 2) and has a resolution of 17 μm in azimuth (Rφ) and 700 μm in the beam direction (z). Each of the hybrid circuits is dedicated to one side of the sensor and hosts six A128C front-end chips and a Costar chip for control purposes. The hybrids are connected to the outer boards via a low-mass Kapton-aluminium bus, manufactured at CERN.

Double-sided silicon microstrip sensor.

The A128C front-end chip, developed in a collaboration between LEPSI and IReS, shows an extended input range corresponding to ±13 MIPs (minimum ionizing particles) and an extra-low power consumption of less than 350 μW per channel. A dedicated multipurpose application-specific integrated circuit is in charge of the hybrid controls and temperature measurements.

Kapton-copper microcables and state-of-the-art tape automated bonding (TAB) technology connect the silicon readout strips to the input channels of the front-end electronics chip, and the chips to their hybrids. TAB enables a flexible connection, which acts as an adapter between the different pitches of the detectors and the chips. The technology is also testable and it provided a good yield during production. It was essential to make the detector modules small enough to be integrated into STAR.

Another unique feature of the SSD is its air-cooling system. The carbon-fibre-based structure on the ladder that supports the detector modules, analogue-to-digital converters, and control boards is wrapped with a Mylar foil and defines a path to guide the flow of air induced by transvector airflow amplifiers. This design avoids the use of liquid coolant, cooling pipes and heat bridges, and provides a material budget with a total radiation length very close to 1%.

A high level of serialization has been reached by incorporating the analogue-to-digital converters and control boards close to the detector modules. This enables the data from the half million channels of the SSD to be transported to the STAR data acquisition system using only four giga-link optical fibres. In the future, additional parallelization of the readout will enable the readout speed of the SSD to be increased to match the trigger and data-taking rates foreseen for STAR in the high-luminosity era of RHIC II.
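The "half million channels" figure follows directly from the module counts given earlier, as the short check below shows.

```python
# Checking the "half million channels" figure against the module
# counts quoted earlier in the article.

modules = 320               # detector modules on 20 ladders
channels_per_module = 1536  # 768 strips on each side of the sensor
fibres = 4                  # giga-link optical fibres to the DAQ

total = modules * channels_per_module
print(f"total channels:     {total}")          # 491,520, about half a million
print(f"channels per fibre: {total // fibres}")
```

Serializing roughly 123,000 channels onto each optical fibre is what keeps the cabling out of the low-mass tracking volume; parallelizing that readout is the planned route to the higher rates of RHIC II.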

The SSD project has been funded by the IN2P3/CNRS, the Ecole des Mines de Nantes, the metropolitan district of Nantes, the Loire-Atlantique department, and the regions of Alsace and Pays de la Loire. Financial support has also been provided by the US Department of Energy through the STAR collaboration.

In need of the human touch

I have led software projects since 1987 and have never known one, including my own, that was not in a crisis. After thinking and reading about it and after much discussion I have become convinced that most of us write software each day for a number of reasons but without ever penetrating its innermost nature.


A software project is primarily a programming effort, and this is done with a programming language. Now this is already an oxymoron. Programming is “writing before”: it entails predicting or dictating the behaviour of something or someone. A language, on the other hand, is a vehicle of communication that in some ways carries its own negation, because it is a way of expressing concepts that are inevitably reinterpreted at the receiver’s end. How many times have you raged “Why does this stupid computer do what I tell it [or him or her, according to your momentary mood toward one of the genders], and not what I want!?” A language is in fact a set of tools that have been developed through evolution not to “program” but to “interact”.

Moreover every programmer has his own “language” beyond the “programming language”. Many times on opening a program file and looking at the code, I have been able to recognize the author at once and feel sympathy (“Oh, this is my old pal…”) or its opposite (“Here he goes again with his distorted mind…”), as if opening a letter.

Now if only it were that simple. If several people are working on a project, you not only have to develop the program but you also have to manage communication among the team members and with the customers, via both human and programming language.

This is where our friends the engineers say to us “Why don’t you build it like a bridge?” However, software engineering is one more oxymoron cast upon us. We could never build software like a bridge, any more than engineers could ever remove an obsolete bridge with a stroke of a key without leaving tons of scrap metal behind. Software engineering’s dream of “employing solid engineering processes on software development” is more a definition than a real target. We all know exactly why it has little chance of working in this way, but we cannot put it into words when we have coffee with our engineer friends. Again, language leaves us wanting.

Attempts to apply engineering to software have filled books with explanations of why it did not work and of how to do it right, which means that a solution is not at hand. The elements for success are known: planning, user-developer interaction, communication, and communication again. The problem is how to combine them into a winning strategy.

Then along came Linux and the open source community. Can an operating system be built without buying the land, building the offices, hiring hundreds of programmers and making a master plan for which there is no printer large enough? Can a few people in a garage outwit, outperform and eventually out-market the big ones? Obviously the answer is yes, and this is why Linux, “the glorified video game” to quote a colleague of mine, has carried a subversive message. I think we have not yet drawn all the lessons. I still hear survivors from recent software wrecks say: “If only we had been more disciplined in following The Plan…”

Is software engineering catching up? Agile technologies put the planning activity at the core of the process while minimizing the importance of “The Plan”, and emphasize the communication between developers and customers.

Have the “rules of the garage” finally been written? Not quite. Open source goes far beyond agile technologies by successfully bonding people who are collaborating on a single large project into a distributed community that communicates essentially by e-mail. Is constraining the communication to one single channel part of the secret? Maybe. What is certain is that in open source the market forces are left to act, and new features emerge and evolve in a Darwinian environment where the fittest survives. But this alone would not be enough for a successful software project.

A good idea that has not matured enough can be burned forever if it is exposed too early to the customers. Here judicious planning is necessary, and the determination and vision of the developer are still factors in deciding when and how to inject his “creature” into the game. I am afraid (or rather I should say delighted) that we are not close to seeing the human factor disappear from software development.

CMS cavern ready for its detector

On 1 February 2005, the cavern for the CMS detector at CERN was inaugurated in a ceremony attended by many guests, including the Spanish and Italian ambassadors to the United Nations and representatives of the construction companies.


The hand-over of this cavern, a gigantic structure 100 m underground near the village of Cessy in France, marks the end of the large-scale civil-engineering work for the Large Hadron Collider (LHC) at CERN.
The second of the new caverns for the LHC experiments, the CMS cavern is the result of several years of work by a consortium of Italian, Spanish, British, Austrian and Swiss civil-engineering companies. Problems arising from the local geology made it a spectacular feat of engineering.

The new structures built for CMS in fact comprise two caverns, together with two access shafts, for which 250,000 m3 of spoil had to be removed. The cavern for the detector is 53 m long, 27 m wide and 24 m high. The second cavern, housing the technical services, is adjacent. Unlike the strategy for the ATLAS detector, the various components for the CMS detector are being assembled and tested in a surface building before being lowered into the cavern, starting next January.

Work began six-and-a-half years ago with the excavation of the two access shafts. This was not an easy task given the 50 m deep stratum of extremely unstable moraine that also contains two water tables. To excavate this loose, wet earth, a ground-freezing technique was used, which involved circulating a brine solution at a temperature of -23 °C, followed by liquid nitrogen.

The molasse between the two caverns, which was too weak to withstand the high levels of stress exerted on it, presented a further difficulty and had to be replaced by a huge pillar of reinforced concrete.

To control the environmental impact of the project, special attention was paid to water treatment and to minimizing dust and noise levels. Moreover, the tonnes of spoil were deposited in the immediate vicinity, avoiding noise and disruption on the roads of the nearby villages. These storage areas are being landscaped and will be planted with vegetation between now and June 2005.

RHIC starts colliding copper with copper

The Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory in the US has started colliding beams of copper ions. RHIC, which is actually two concentric rings 4 km in circumference, was built to create collisions between heavy ions, in particular gold. The use of intermediate-size copper nuclei – resulting in energy densities that are not as high as in earlier gold-ion runs, but more than was produced by colliding gold ions with much lighter deuterons – is important to understanding the new phenomena that have been observed in the heavy-ion collisions.

CCEnew6_03-05

The energy of the gold-gold collisions was predicted to be sufficient to “melt” protons and neutrons to produce a hot “soup” of free quarks and gluons – the quark-gluon plasma. To date, the gold-gold collisions at RHIC have produced some very intriguing data that indicate the presence of a new form of matter – hotter and denser than anything ever produced in a laboratory. However, while some observations fit with what was expected of quark-gluon plasma, others do not.

So there has been considerable debate over whether the hot, dense matter being created at RHIC is indeed the postulated quark-gluon plasma, or perhaps something even more interesting. Data already in hand show that the quarks in the new form of matter appear to interact quite strongly with one another and with the surrounding gluons, rather than floating freely in the “soup” as the theory of quark-gluon plasma had predicted. Many physicists are beginning to use the term “strongly interacting quark-gluon plasma” to express this understanding.

The deuteron-gold collisions do not exhibit the same behaviour, leading to the suggestion that what is seen in the gold-gold collisions is not an intrinsic property of the gold ions themselves, but is indeed created in the collisions. The copper experiments will provide another control that will help in understanding how the new phenomena observed are turned on and off, and when.

The copper-copper run is expected to last for about 10 weeks, but depends on funding for fiscal year 2005.

U70 tests stochastic extraction scheme

Tests for an advanced slow stochastic extraction (SSE) scheme have been performed successfully at the U70, the 70 GeV proton synchrotron at the Institute for High Energy Physics (IHEP), Protvino, in the Moscow region. A holder of the record for highest-energy accelerator during the late 1960s, the U70 is still in operation today, and the feasibility tests in November and December 2004 may offer an interesting option for future beams.

CCEnew7_03-05

The SSE concept was pioneered in 1978 by Simon van der Meer at CERN as a spin-off from his work on stochastic cooling, which led to the conversion of CERN’s Super Proton Synchrotron to a proton-antiproton collider, and a share of the 1984 Nobel Prize in physics for van der Meer. This technique, yielding long and uniform spills, was later successfully used at CERN in the Low Energy Antiproton Ring (LEAR), achieving extraction times of several hours.

Stochastic extraction is a modification of resonant extraction in which particles are moved to the extraction resonance by random kicks from a noisy radiofrequency system. It has the advantage of being immune to unavoidable ripples in the magnetic optics that deteriorate the spill under resonant extraction. This might prove especially useful to a venerable machine like the U70.
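The diffusion mechanism behind stochastic extraction can be illustrated with a toy random-walk model. The following sketch is purely illustrative (it is not the U70 implementation, and all parameter values are arbitrary assumptions): each particle's distance from the extraction resonance performs a random walk driven by noise kicks, and a particle counts as extracted once it diffuses past the resonance boundary. Because extraction is governed by slow diffusion rather than by a deterministic sweep, the spill emerges gradually and is insensitive to ripple in the machine optics.

```python
import random

def simulate_spill(n_particles=2000, n_turns=2000,
                   kick_rms=0.03, boundary=1.0, seed=1):
    """Toy stochastic-extraction model: random kicks diffuse particles
    towards a resonance boundary; returns particles extracted per turn.
    All values are arbitrary illustration parameters."""
    random.seed(seed)
    positions = [0.0] * n_particles   # distance from resonance (arb. units)
    alive = [True] * n_particles
    spill = []                        # particles extracted on each turn
    for _ in range(n_turns):
        extracted = 0
        for i in range(n_particles):
            if not alive[i]:
                continue
            positions[i] += random.gauss(0.0, kick_rms)  # noisy RF kick
            if abs(positions[i]) >= boundary:            # crossed resonance
                alive[i] = False
                extracted += 1
        spill.append(extracted)
    return spill

spill = simulate_spill()
total = sum(spill)
# The spill builds up and decays smoothly over many turns, rather than
# coming out in a single burst as a ripple-sensitive resonant sweep can.
print(f"extracted {total} of 2000 particles")
```

Plotting `spill` against turn number shows the characteristic slow rise and long tail of a diffusion-driven spill.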

The SSE tests were performed on an ejection plateau at 60 GeV in the U70, with recorded beam and extraction currents as shown in the figure in blue together with the fitted curves in red.

About 90% of the spill is extracted in 0.8 s. The extracted current is not free from AC ripple, but the IHEP engineers are hopeful that they can suppress this in future. The design goal is to obtain ripple-free flat-topped spills lasting 2-3 s or longer.

The tests have been deemed a success and the feasibility of SSE at the U70 has been confirmed by the beam measurements. The scheme promises smoother and longer spills, which will improve the machine’s functionality.

HERA hits new heights

In August 2000 the first phase of operation (Run I) of HERA, DESY’s electron/positron-proton collider, came to a successful conclusion after the machine reached a luminosity of 2 × 10³¹ cm⁻² s⁻¹, surpassing its original design luminosity by 30%. The total luminosity delivered between 1992 and 2000 to the colliding-beam experiments H1 and ZEUS amounted to about 190 pb⁻¹, and electron and positron beams with a longitudinal spin polarization of up to 60% were routinely delivered to the HERMES experiment, which uses a gas target. In addition, proton-nucleus interaction rates as high as 5-20 MHz were provided for the HERA-B experiment. The run enabled the four experiments to publish a large number of results on the strong and electroweak interactions as well as physics beyond the Standard Model.

CCEher1_03-05

The objective of the second phase of the HERA programme, Run II, was to operate with a greater luminosity, about four times higher than the design luminosity of Run I. The upgrade began in 2001 and proved challenging in several respects. By October 2004 the collider had completed a year of successful running with positrons and could be switched to electron-proton operation, for the first time since 1999.

The upgrade challenges

In order to provide the greater luminosity required for Run II, the interaction regions of the colliding-beam experiments had to be rebuilt to reduce the beam cross-section at the collision points by a factor of three, to values of 112 μm × 30 μm. In addition, the interaction regions of H1 and ZEUS were to be equipped with pairs of spin rotators to allow for longitudinally spin-polarized lepton beams in collisions with protons.
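The connection between beam cross-section and luminosity follows from the standard head-on Gaussian-beam formula, L = f_rev N₁N₂ / (n_b · 4π σx σy): at fixed beam currents, shrinking the transverse beam area by a factor of three raises the luminosity by the same factor. The sketch below is a rough cross-check, not the official HERA machine model; the beam sizes and the 100 mA/48 mA currents come from the text, while the circumference (~6336 m) and the number of colliding bunches (~180) are assumptions, and effects such as the hourglass reduction are ignored.

```python
import math

E_CHARGE = 1.602e-19            # elementary charge, C
C_LIGHT = 2.998e8               # speed of light, m/s
CIRCUMFERENCE = 6336.0          # HERA circumference in m (assumed)
F_REV = C_LIGHT / CIRCUMFERENCE # revolution frequency, ~47 kHz
N_BUNCHES = 180                 # colliding bunches (assumed)

def particles_from_current(current_amp):
    """Total circulating particles for a given DC beam current."""
    return current_amp / (E_CHARGE * F_REV)

N_p = particles_from_current(0.100)   # 100 mA protons (from the text)
N_e = particles_from_current(0.048)   # 48 mA positrons (from the text)

sigma_x = 112e-4                # 112 um in cm (from the text)
sigma_y = 30e-4                 # 30 um in cm (from the text)

# Head-on Gaussian-beam luminosity, summed over all bunch crossings.
L = F_REV * N_p * N_e / (N_BUNCHES * 4 * math.pi * sigma_x * sigma_y)
print(f"L ~ {L:.1e} cm^-2 s^-1")
```

The result comes out at a few times 10³¹ cm⁻² s⁻¹, the same order as the luminosities quoted later in the article.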

The requirements of the upgrade represented a challenging engineering project. Strong focusing magnets had to be fitted inside the existing detectors – a task made very difficult by the small apertures involved, the limited available space and insufficient access to support points. In addition, new technologies had to be developed to improve still further the interaction regions, which had already been optimized during HERA Run I. The technical novelties developed for the upgrade include large-aperture superconducting combined-function magnets with small outer dimensions, which are supported inside the narrow apertures of the colliding-beam detectors. These allow the beams to be separated closer to the experiments, which is necessary to achieve the stronger focusing.

CCEher2_03-05

Moreover, inside the strong magnetic fields of the superconducting magnets, the lepton beam emits high-power synchrotron radiation. This requires a sophisticated vacuum system to handle the large power loads and to provide at the same time the excellent vacuum pressure of 0.1 nanotorr needed around the detectors for tolerable background conditions.

The upgraded components were installed during a shutdown from September 2000 to July 2001, which was followed by a period of technical commissioning and commissioning with beam in the autumn of 2001. The high luminosity that can be achieved in the upgraded configuration was demonstrated soon after the accelerator was restarted. In October 2001, a specific luminosity of L_spec = 1.8 × 10³⁰ mA⁻² cm⁻² s⁻¹ was reached with a small number of bunches, a value about two-and-a-half times greater than those achieved before the upgrade.

Fighting background problems

For the collider experiments, H1 and ZEUS, however, it was a different story. It turned out that the backgrounds they saw were larger than expected and this prevented turning on the tracking detectors in H1 and ZEUS. There was even a risk of damaging some detector components close to the beam. This led to considerable joint efforts between the accelerator and experimental groups to explore and to understand the reasons for the high background and to develop appropriate countermeasures. These efforts included detailed Monte Carlo simulations of the background conditions, which were benchmarked with accelerator experiments. This process required a considerable amount of accelerator study time. The results and conclusions were discussed during an international workshop in July 2002 and the improvement programme was presented to an international review committee in January 2003.

CCEher3_03-05

This thorough analysis led to the conclusion that the backgrounds generated by protons lost in the interaction region were correlated with the poor initial vacuum conditions in the new system in the presence of the positron beam. The vacuum recovery was also slowed down by considerable thermal desorption from the synchrotron-radiation masks inside the beam pipe close to the experiments. This was due to higher-order mode heating at injection energy, when the bunches are short. In addition, in the spring of 2002 it became apparent that the ZEUS detector was also being hit by scattered synchrotron radiation. This was caused by a problem with a mask that had actually been designed to shield the detector against it.

These problems limited the intensity during running in 2002, and this in turn allowed only slow recovery and conditioning of the vacuum. However, by the end of 2002 a significant improvement in the vacuum at the interaction region and a corresponding reduction of the proton-induced background had been achieved. This indicated that tolerable background conditions with full beam currents would be possible after further conditioning.

CCEher4_03-05

At the same time, the more intricate operational procedures of the upgraded accelerators were consolidated. These include global and local orbit stabilization systems with active feedback, which control the beam orbit to 0.1 mm during injection, acceleration, low-beta squeezing, tuning and luminosity running.
High longitudinal spin polarization of the positron beam was tuned up and measured for the first time simultaneously at all three interaction points during a test run in February 2003. HERA was then able to report the achievement of a world first: the collision of a longitudinally spin-polarized positron beam with high-energy protons.

Before this achievement could be exploited in the physics programme, however, the shutdown period from March to July 2003, which was needed to complete the experimental detector upgrades, was used to improve the synchrotron-radiation masks. The shape of the masks was changed to reduce higher-order mode losses of the beam, the cooling of the masks was improved, and the problem with the mask inside the ZEUS detector was resolved. Furthermore the pumping of the beam pipe inside the H1 detector and in a long beam-pipe section inside one of the magnets was improved to speed up the vacuum conditioning. These measures all achieved the desired effects: the vacuum system recovered quite quickly after the shutdown, the higher-order mode heating was reduced considerably and – most importantly – the problem with scattered synchrotron radiation in the ZEUS detector was completely solved.

Back to high luminosity

CCEher5_03-05

High-luminosity operation with protons and positrons started after vacuum conditioning with beam in October 2003. However, beam intensities in November and December were limited by new rules on radiation safety, which required an upgrade of the active machine-protection system. This was accomplished by the end of December 2003. Then, from January 2004, the HERA beam currents were increased steadily and the operating currents previously achieved in 2000, of around 100 mA protons together with 48 mA positrons, were reached.
From January to June 2004, the HERA luminosity was increased from 1.6 × 10³¹ cm⁻² s⁻¹ to 3.8 × 10³¹ cm⁻² s⁻¹, which is twice the value achieved in 2000. At the same time, the longitudinal positron spin polarization was tuned to values up to 50%. By August 2004, a total integrated polarized positron-proton luminosity of 92 pb⁻¹ had been delivered to the collider experiments. As a result, all three HERA experiments – H1, ZEUS and HERMES – have successfully taken data in 2004, with interesting first results presented in August 2004 at the International Conference on High Energy Physics in Beijing.
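The integrated-luminosity figure can be checked with back-of-the-envelope arithmetic: 1 pb⁻¹ = 10³⁶ cm⁻², so an instantaneous luminosity of a few times 10³¹ cm⁻² s⁻¹ delivers a few pb⁻¹ per day of running. In the sketch below the average luminosity (2.5 × 10³¹, midway through the January-June ramp-up), the ~210-day period and the 20% duty factor are all illustrative guesses, not reported machine statistics.

```python
PB_INV_IN_CM2 = 1e36            # 1 inverse picobarn expressed in cm^-2
SECONDS_PER_DAY = 86400

def integrated_pb(lumi_cm2_s, days, duty_factor):
    """Integrated luminosity in pb^-1; duty_factor is the assumed
    fraction of calendar time actually spent in luminosity running."""
    return lumi_cm2_s * days * SECONDS_PER_DAY * duty_factor / PB_INV_IN_CM2

# Average luminosity, run length and duty factor are all assumptions.
total = integrated_pb(2.5e31, 210, 0.2)
print(f"~{total:.0f} pb^-1")    # → ~91 pb^-1, consistent with the 92 pb^-1 quoted
```

The agreement is of course partly tuned through the guessed duty factor, but it shows the quoted 92 pb⁻¹ is the right order of magnitude for eight months at these luminosities.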

HERA’s luminosity upgrade is nearly complete, and we are now looking at increasing the luminosity again by another 50%. This requires further increasing the beam intensities, and better control of the beam parameters and the specific luminosity. An improvement programme to achieve this goal during 2005 is under way.

The proton background conditions for the experiments steadily improved during the 2004 run. In February 2004, the ZEUS experiment reported excellent background conditions together with large luminosity, and the proton-induced backgrounds in H1 have been demonstrated to be tolerable up to the highest beam intensities. Unfortunately, a number of vacuum leaks in the interaction regions, due to a weakness in the design of a flange connection, temporarily led to higher vacuum pressure there, resulting in poor background conditions. During a shutdown in August and September 2004, which was required to perform the annual safety tests and some detector repair work, the interaction-region vacuum system was improved further.

After this shutdown, HERA resumed operation with protons and electrons, rather than positrons, for the first time since 1999. To maximize the integrated luminosity of HERA over the coming years, a programme is under way to improve the availability of the components and the overall operational reliability. In addition, a longitudinal broadband damper system is being developed to control coupled bunch instabilities. This will help to control the proton bunch length and will provide a minimum effective transverse beam size so as to maximize luminosity.

The present plan is to continue the electron run until mid-2006, then switch back to positrons and complete the HERA data-taking by mid-2007. The three experiments are ready and eagerly awaiting a large harvest of HERA II data.

Protons on the doorstep of the LHC

When the Large Hadron Collider (LHC) begins operation, two new beam transfer lines, with a combined length of 5.6 km, will bring 450 GeV proton beams or ions from the Super Proton Synchrotron (SPS) to the new machine. Line TI 2 leads from the extraction in long straight section LSS6 in the SPS to the injection point into the clockwise ring of the LHC near interaction point 2. The other line, TI 8, leads from the extraction in LSS4 to the injection point of the anti-clockwise ring near interaction point 8. The first 100 m of this transfer line, called TT40, are shared with the primary proton line to the CNGS facility and were commissioned together with the new extraction system in LSS4 in 2003. In October 2004 the complete TI 8 line became operational, with protons travelling the 2.5 km to the LHC tunnel.

CCEpro1_03-05

Studies on how to transport beam from the SPS to the LHC began in the early 1990s. Various configurations were investigated, one of them even implying a polarity reversal of the SPS. The use of cryogenic magnets was also considered. Eventually a system using room-temperature magnets was chosen because it was more economical overall, since the transfer lines will operate only during the short periods of LHC filling.

Between them the two transfer lines required the excavation of more than 5 km of new tunnels and enlargements. Excavation for TI 8 began in autumn 1998 with a civil-engineering shaft near the SPS, some 50 m deep and 8 m in diameter. The first enlarged part of the tunnel, TT40, and some adjacent underground works were excavated using machines known as “road headers”. However, for drilling the 2.3 km towards the LHC a tunnel-boring machine was used. This had to work its way down, through a height difference of some 70 m, to the tunnel that housed the still operational Large Electron Positron (LEP) collider, although boring downhill is not usually the preferred way of working. Excavation finished in June 2000 and was followed by lining with concrete, leaving a finished tunnel 3 m in diameter.

By contrast, TI 2 was entirely excavated by road headers. Although the inclination of the LHC tunnel means that the SPS extraction and LHC injection sections are nearly at the same height above sea-level, this tunnel needed a Z-shape vertical profile because of geological constraints (an underground river bed!). Additional magnet groups were required for the vertical bending. The construction of TI 2 and TI 8 involved the excavation of 60,000 m³ of material and the use of 21,000 m³ of concrete.

CCEpro2_03-05

Geodetic referencing work on TI 8 started in autumn 2002, followed by the installation of general services, such as electricity and water cooling, and pulling the power and controls cables. Installation of the magnet system began in December 2003 and finished in May 2004. The relatively restricted space of the transfer tunnels required the development of a new system to transport and install the magnets. This is based on a modular system of “buggies” in the form of very compact tractors with a payload of 9 t each, which are fitted with air cushions and in-wheel motors. The wheels can turn on the spot under the load and allow the magnets to be displaced laterally towards their installation position. An automatic guidance system enables the travelling convoy to reach a typical driving speed of around 3.5 km/h. Using this system together with various girders and adapters, more than 400 magnets have been placed in TT40/TI 8, from 300 kg correctors to 13.5 t bending magnets recovered from earlier installations, as well as the 22 t beam dumps. In addition to work on TI 8 and TI 2, the system will be used to install magnets in the main LHC tunnel as well as for the CNGS project, thanks to its versatility.

All 348 main dipole magnets, 179 main quadrupoles and 93 corrector magnets for TI 2 and TI 8, as well as the bulk of the vacuum system, have been built by the Budker Institute for Nuclear Physics (BINP) in Novosibirsk, as part of the contribution of the Russian Federation to the LHC project. These have been transported to CERN by lorry over the 6000 km between the two laboratories. In addition, 73 dipoles and quadrupoles have been reused from the decommissioned PS-to-SPS electron transfer line and the SPS-to-LEP transfer lines. Because of the small emittance of the beam, the apertures of the lines could be relatively small – sometimes no bigger than a postage stamp.

CCEpro3_03-05

The next stage was to install the beam instrumentation devices, set up the vacuum system and make the necessary electrical and water connections. TI 8 then entered 11 weeks of hardware commissioning to check all the systems individually, such as the magnet powering and polarities, the magnet temperature interlock system, and the read-out of the beam instrumentation devices. Special measures were taken to ensure a sufficient air flow from the ventilation system, and a final verification of the alignment of the beam-line elements took place. The last two weeks before the first beam test in October were used to operate all the systems together from the control room, and a series of “dry runs” allowed the many new components of the control system to be deployed and tested in advance.

For the actual beam tests, the beam dump at the end of the line was supplemented temporarily by additional iron and concrete shielding blocks. This was to minimize the radiological impact on the LHC tunnel and the cavern for the LHCb experiment, where installation is still in full swing. The entire LHC point 8 and several hundred metres in the adjacent LHC arcs were closed. Also the beam tests, spread over two weekends, were scheduled to minimize the impact on the ongoing installation work.

CCEpro4_03-05

A single-bunch beam with 5 × 10⁹ protons was prepared for the first beam tests. The line was set to 449.2 GeV, the SPS energy measured during the 2003 lead-ion run, and the LSS4 extraction system was set up and re-steered. As soon as the beam dumps at the beginning of the line were retracted, the first bunch of particles travelled through to the end of the installed part of the line, without the need for any “threading” (all corrector elements were set to zero current). In the following hours the necessary calibrations of the beam instrumentation were made and many measurements were carried out, such as energy acceptance, aperture scans, dispersion and optical matching, in part also using higher single-bunch intensities of 3-4 × 10¹⁰ protons. On the second test weekend, at the beginning of November, some commissioning was also done with multiple bunches per extraction, accumulating a total intensity at the end of the line of 8.6 × 10¹³ protons over the two weekends.

Although the data are still being analysed, the basic theoretical model of the lines seems to be well confirmed. The trajectory stability was found to be very good and the layout of the beam diagnostics, which performed well, was shown to be appropriate. The new control system, with its extensive array of applications, performed excellently, greatly facilitating the smooth progress of the tests.

CCEpro5_03-05

The last part of the TI 8 line, in the LHC tunnel itself, and the injection system will be set up soon. The upstream part of the other transfer line, TI 2, is being installed. Since the main LHC magnets will be brought down through a shaft in TI 2 nearly halfway to the LHC, the downstream part of this transfer tunnel must remain empty of line elements to facilitate the transport of the LHC elements into the ring. It will be completed and commissioned once the installation of the main LHC magnets is over.

The commissioning of TI 8 was quickly and successfully achieved thanks to the dedication of the many people who have worked over the years on the two transfer lines. Following on from the commissioning of the LSS4 extraction and TT40 a year ago, this has served as a large-scale test-bed for components and concepts that will be used in the LHC. It also provided an early understanding of the behaviour of the transfer line, which should help to focus attention during the LHC sector test, planned for 2006, on the injection system and the main ring.

LHC must keep to 2007 start-up

At the 131st session of Council on 17 December, CERN’s director-general Robert Aymar confirmed that the organization’s top priority is to maintain the goal of starting up the Large Hadron Collider (LHC) in 2007.
Preparations for the LHC project are advancing well, with half of the most technologically challenging components – the cold masses for the dipole magnets that will steer high-energy protons around the LHC’s 27 km ring – having arrived at CERN. In October the new transfer line that delivers protons from the Super Proton Synchrotron (SPS) to the LHC tunnel worked on the first attempt. The line is based on 540 magnets supplied by the Budker Institute for Nuclear Physics in Novosibirsk, and has been set up with the help of a team from the institute.

CCEnew1_01-05

The discovery in 2004 of defects in newly installed components in the system that will distribute cryogenic cooling fluids around the ring meant that installation, which began in 2003, had to be put on hold. However, technical corrections have since been made, and in October the manufacture of new unflawed components began (see box). Repair of the faulty components started in November at CERN and the first modified items have been successfully installed in the LHC tunnel.

CCEnew4_01-05

Various options to make up the delay have been discussed, and a strategy has been established to limit the impact on the overall schedule for the LHC. One option considered was to shut down the SPS in 2006 in order to divert human resources to LHC installation. However, this will not be necessary as long as technicians can be seconded for a few months from other accelerator laboratories.

A status report presented to Council on the four large experiments for the LHC – ATLAS, CMS, LHCb and ALICE – recognized the great progress that is being made. The schedule to get ready for collisions in the LHC in 2007 will be tight, but there is confidence that with some effort the experiments will start on time.

The SPS programme reached a natural pause at the end of the 2004 run, with most of its approved experiments reaching their conclusion; the SPS will not run in 2005. “This allows the community to take stock of where they are,” said Aymar, “and to plan for an exciting and well-focused programme for future fixed-target physics at CERN.” This procedure began in September in the Swiss village of Villars, where the SPS Committee met to set priorities for 2006 and beyond. As a result, Council will be examining proposals for new experiments during the course of 2005.
