LHC consolidation work proceeds apace

The consolidation campaign for the LHC, which aims to ensure safe final commissioning and reliable running of the collider, is now well under way. On 9 February CERN’s management confirmed the restart schedule for the LHC, based on the recommendations of the previous week’s Chamonix workshop.

After the incident in September 2008, magnets were immediately prepared to replace those damaged. Consolidation work is now under way to ensure a safe and reliable restart of the LHC later this year, on a schedule that gives the experiments adequate data to carry out their first new-physics analyses and to announce results in 2010. The new schedule also permits the possibility of lead–ion collisions in 2010.

In Chamonix there was consensus among all the technical specialists that the new schedule is tight but realistic. According to CERN’s director-general, Rolf Heuer, “The schedule we have now is without a doubt the best for the LHC and for the physicists waiting for data. It is cautious, ensuring that all the necessary work is done on the LHC before we start up, yet it allows physics research to begin this year.” This new schedule represents a delay of six weeks with respect to the previous schedule, which foresaw the LHC “cold at the beginning of July”. This delay arises from several factors such as the implementation of a new enhanced protection system for the busbar and magnet splices; installation of new pressure-relief valves to reduce the collateral damage in case of a repeat incident; application of more stringent safety constraints; and scheduling constraints associated with helium transfer and storage. The new pressure-relief system has been designed in two phases. The first phase involves the installation of relief valves on existing vacuum ports in the whole ring.

Calculations have shown that in an incident similar to that of 19 September 2008 – which damaged magnets in sector 3-4 – the collateral damage would be minor with this first phase. The second phase involves adding relief valves to all of the dipole magnets, which would guarantee minor collateral damage (to the interconnects and super-insulation) in all worst cases over the life of the LHC. For 2009, the management has decided to install the additional relief valves on four of the LHC’s eight sectors, concurrent with repairs in sector 3-4 and other consolidation work already foreseen. The dipoles in the remaining four sectors will be equipped in 2010.

On 18 February, Steve Myers, Director for Accelerators and Technology, reviewed the discussions on the LHC that took place at Chamonix at the public session of the LHC experiments committee. In particular, he described the scenarios that were studied to implement the consolidation measures and resume operation. He also explained that the schedule ultimately adopted will make it possible to obtain more physics data sooner, even though the energy will be limited during this first period to 5 TeV per beam to ensure completely safe operation. During the last week of February the enhanced quench protection system had a full review from a panel made up of experts from other high-energy physics laboratories from around the world, including the Brookhaven National Laboratory, DESY, Fermilab and the international fusion project, ITER. The enhanced protection system measures the electrical resistance in the cable joints (splices) and is much more sensitive than the system existing on 19 September.

The system has two separate parts: one to detect and protect against splices with abnormally high resistance; the second to detect a symmetric quench. The planning schedule was reviewed to define priorities between these two parts, both of which need to be complete before the restart at the end of September. The review also covered areas such as the technical details of the implementation of the new system, how well it will perform during operation and how “robust” it will be after years of service. In the preliminary report the panel found that: “The machine-protection staff have demonstrated a deep understanding of the issues involved in the design of the high-resistance-splice detection system.” It has “full confidence that the new system will have the ability to give early warnings for suspicious splices measured at the level of 1 nΩ” and that “the symmetric quench protection system, once its design is complete, will be able to detect quenches at twice the normal detection level”.

Jefferson Lab starts its 12 GeV physics upgrade

The US Department of Energy’s (DOE) Thomas Jefferson National Accelerator Facility in Newport News, Virginia, has awarded four contracts as it begins a six-year construction project to upgrade the research capabilities of its 6 GeV, superconducting radio-frequency (SRF) Continuous Electron Beam Accelerator Facility (CEBAF).

The resulting 12 GeV facility – with upgraded experimental halls A, B and C and a new Hall D – will provide new experimental opportunities for Jefferson Lab’s 1200-member international nuclear-physics user community.

The contracts are the first to be awarded following DOE’s recent approval of the start of construction. The DOE Office of Nuclear Physics within the Office of Science is the principal sponsor and funding source for the $310 million upgrade, with support from the user community and the Commonwealth of Virginia.

Under a $14.1 million contract, S B Ballard Construction, of nearby Virginia Beach, will build Hall D and the accelerator tunnel extension used to reach it as well as new roads and utilities to support it. Hall D civil construction is expected to last from spring 2009 until late summer 2011.

Two further contracts are for materials for Hall D’s particle detectors and related electronics. Under a $3.3 million contract, Kuraray of Japan will produce nearly 3200 km of plastic scintillation fibres for the new hall’s largest detector – a barrel calorimeter approximately 4 m long and nearly 2 m in outer diameter.

This calorimeter will detect and measure the positions and energies of photons produced in experiments. Its precision will allow the reconstruction of the details of particle properties, motion and decay. Under a $200,000 contract, Acam-Messelectronic GmbH of Germany will provide some 1440 ultraprecise, integrated time-to-digital converter chips for reading out signals from particles in experiments.

Lastly, a $1.5 million contract has gone to Ritchie-Curbow Construction of Newport News for a building addition needed to double the refrigeration capacity of CEBAF’s central helium liquefier, which enables superconducting accelerator operation at 2 K.

CEBAF already offers unique capabilities for investigating the quark–gluon structure of hadrons. Since operations began in the mid-1990s, more than 140 experiments have been completed.

The experiments have already led to a better understanding of a variety of aspects of the structure of nucleons and nuclei, as well as the nature of the strong force. These include the distributions of charge and magnetization in the proton and neutron; the distance scale where the underlying quark and gluon structure of strongly interacting matter emerges; the evolution of the spin-structure of the nucleon with distance; the transition between strong and perturbative QCD; and the size of the constituent quarks.

The beautiful programme of parity violation in electron scattering has permitted the precise determination of the strange quark’s contribution to the proton’s electric and magnetic form factors, with results that are in excellent agreement with the latest results from lattice QCD. This programme has placed new constraints on possible physics beyond the Standard Model.

New opportunities

Careful study in recent years by users and by the US Nuclear Science Advisory Committee has shown that a straightforward and comparatively inexpensive upgrade that builds on CEBAF’s existing assets would yield tantalizing new scientific opportunities.

The DOE study Facilities for the Future of Science: A Twenty-Year Outlook recommended the 12 GeV upgrade as a near-term priority. This 20-year plan used plain language to explain why. Speaking of quarks, it read: “As yet, scientists are unable to explain the properties of these entities – why, for example, we do not seem to be able to see individual quarks in isolation (they change their natures when separated from each other) or understand the full range of possibilities of how quarks can combine together to make up matter.”

The 12 GeV upgrade will enable important new thrusts in Jefferson Lab’s research programme, which generally involve the extension of measurements to higher values of momentum-transfer, probing correspondingly smaller distance scales. Moreover, many experiments that can run at a currently accessible momentum-transfer will run more efficiently at higher energy, consuming less beam time.

The generalized parton distributions will allow researchers to engage in nuclear tomography for the first time

For the first time nuclear physicists will probe the quark and gluon structure of strongly interacting systems to determine whether QCD gives a full and complete description of hadronic systems. The 12 GeV research programme will offer new scientific opportunities in five main areas. First, in searching for exotic mesons, in which gluons are an unavoidable part of the structure, researchers will explore the complex vacuum structure of QCD and the nature of confinement. Second, extremely high-precision studies of parity violation, developed to study the role of hidden flavours in the nucleon, will enable exploration of particular kinds of physics beyond the Standard Model on an energy scale that cannot be explored even with the proposed International Linear Collider.

The combination of luminosity, duty factor and kinematic reach of the upgraded CEBAF will surpass by far anything previously available for this kind of research. This opens up a third opportunity: a previously unattainable view of the spin and flavour dependence of the distributions of valence partons – the heart of the proton, where its quantum numbers are determined. The upgrade will also allow a similarly unprecedented look into the structure of nuclei, exploring how the valence-quark structure is modified in a dense nuclear medium. These studies will yield a far deeper understanding, with far-reaching implications for all of nuclear physics and nuclear astrophysics.

Lastly, the generalized parton distributions (GPDs) will allow researchers to engage in nuclear tomography for the first time – discovering the true 3D structure of the nucleon. The GPDs also offer a way to map the orbital angular momentum carried by the various flavours of quark in the proton.

New equipment

The CEBAF accelerator consists of two antiparallel 0.6 GV SRF linacs linked by recirculation arcs. With up to five acceleration passes, it serves three experimental halls with simultaneous, continuous-wave beams – originally with a final energy of up to 4 GeV, but now up to 6 GeV, thanks to incremental technology improvements. Independent beams are directed to the three existing experimental halls, each beam with fully independent current, a dynamic range of 10⁵, high polarization and “parity quality” constraints on energy and position.

The maximum energy for five passes will rise to 11 GeV for the three original halls

The new Hall D will be built at the end of the accelerator, opposite the present halls. Experimenters in Hall D will use collimated beams of linearly polarized photons at 9 GeV produced by coherent bremsstrahlung from 12 GeV electrons passed through a crystal radiator. To send a beam of that energy to that location requires a sixth acceleration pass through one of the two linacs. This means adding a recirculation beamline to one of the arcs. It also requires augmenting the accelerator’s present 20 cryomodules per linac with five higher-performing ones per linac. Each 25-cryomodule linac will then represent 1.1 GV of accelerating capacity. The maximum energy for five passes will rise to 11 GeV for the three original halls, with experimental equipment upgraded in each.
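The energy arithmetic above can be checked with a short sketch; the injector's small contribution to the final energy is neglected here, which is a simplifying assumption:

```python
# Sketch of the CEBAF 12 GeV upgrade energy bookkeeping described above.
# Assumption: the injector's small contribution to the final energy is neglected.
GV_PER_LINAC = 1.1  # 25 cryomodules per linac after the upgrade

def beam_energy_gev(full_passes, extra_linac_pass=False):
    """Energy after `full_passes` passes through both antiparallel linacs,
    plus optionally one extra pass through a single linac (for Hall D)."""
    energy = full_passes * 2 * GV_PER_LINAC
    if extra_linac_pass:
        energy += GV_PER_LINAC
    return energy

print(beam_energy_gev(5))                         # ~11 GeV for Halls A, B and C
print(beam_energy_gev(5, extra_linac_pass=True))  # ~12 GeV for the new Hall D
```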

As of early 2009, not only have the first contracts been awarded, but solicitations have been issued for about 40% of the total construction cost. CEBAF’s upgrade is the highest-priority recommendation of the Nuclear Science Advisory Committee’s December 2007 report The Frontiers of Nuclear Science: A Long Range Plan. “Doubling the energy of the JLab accelerator,” the report states, “will enable three-dimensional imaging of the nucleon, revealing hidden aspects of its internal dynamics. It will test definitively the existence of exotic hadrons, long-predicted by QCD as arising from quark confinement.” Efforts to realize this new scientific capability are now well underway.

LHeC: novel designs for electron–quark scattering

Abdus Salam, at the Rochester Conference in Tbilisi in 1976, considered the idea of “unconfined quarks” and leptons as a single form of matter, in contrast to the distinctions between them in the Standard Model. Some 30 years later, it is appropriate to ask if a high-performance electron–proton collider could be built to investigate such ideas, complementing the LHC and a pure lepton collider at the tera-electron-volt energy scale. Entering this unexplored territory for electron–quark scattering is a challenging prospect, but one that could yield vast rewards.

On 1–3 September 2008, some 90 physicists met at Divonne, near CERN, for the inaugural meeting of the ECFA–CERN Large Hadron Electron Collider (LHeC) workshop on electron–proton (ep) and electron–ion (eA) collisions at the LHC. The workshop will initially run for two years, and a diverse mixture of accelerator scientists, experimentalists and theorists will produce a conceptual-design report. This will assess the physics potential of an electron beam interacting with LHC protons and ions, as well as details of the electron-beam accelerator, the interaction region, detector requirements and the impact on the existing LHC programme. HERA, at DESY, was the previous ep machine at the energy frontier. By the time it ceased operation in 2007, its 15-year programme had led to many new insights into strong and electroweak interactions, provided much of the current knowledge of the parton densities of the proton and placed important constraints on physics beyond the Standard Model.

Physics potential

A new era of high energy and intensity for proton beams is now beginning with the switch-on of the LHC. Preliminary estimates suggest that the addition of an electron beam could yield ep collisions at a luminosity of the order of 10³³ cm⁻² s⁻¹ and a centre-of-mass energy of 1.4 TeV (J Dainton et al. 2006). This would probe distance scales below 10⁻¹⁹ m (figure 1). In comparison, the best performance ever achieved at HERA was a luminosity of 5 × 10³¹ cm⁻² s⁻¹ at an ep centre-of-mass energy of 318 GeV (figure 2).
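These headline figures follow from the standard relation for head-on colliding beams, √s ≈ 2√(E₁E₂), valid when particle masses are negligible. A minimal sketch (the HERA beam energies of 27.5 GeV and 920 GeV, and a 70 GeV LHeC electron energy, are taken as illustrative inputs):

```python
from math import sqrt

HBAR_C_GEV_M = 1.973e-16  # hbar*c in GeV*metres

def cm_energy_gev(e1_gev, e2_gev):
    """Centre-of-mass energy for head-on beams, neglecting particle masses."""
    return 2 * sqrt(e1_gev * e2_gev)

# HERA: 27.5 GeV electrons on 920 GeV protons
print(cm_energy_gev(27.5, 920))   # ~318 GeV, as quoted above

# LHeC sketch: 70 GeV electrons on 7 TeV LHC protons
s = cm_energy_gev(70, 7000)       # 1400 GeV = 1.4 TeV
print(s)

# Smallest distance scale probed, lambda ~ hbar*c / sqrt(s)
print(HBAR_C_GEV_M / s)           # of order 1e-19 m
```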

The large luminosity and energy increases set the LHeC apart from other future ep colliders previously considered. If realized, it would lead to the first precise study of lepton–quark interactions at the tera-electron-volt scale and would have considerable discovery potential. The workshop in Divonne began with remarks from CERN’s chief scientific officer, Jos Engelen, who expressed CERN’s interest and support for the study. Encouragement from ECFA was sent via its chair, Karlheinz Meier of Heidelberg, and the involvement of the nuclear-physics community was highlighted by Guenther Rosner from Glasgow, now chair of the Nuclear Physics European Collaboration Committee (NuPECC). In his opening lecture, Guido Altarelli of Rome introduced the wide-ranging possibilities of ep physics at the “terascale” and urged that ep/eA collisions must happen at some point during the LHC’s lifetime. The chair of the LHeC steering group, Max Klein of Liverpool, then summarized previous promising work on the topic and the aims of the new workshop.

Following the opening session, the meeting split into smaller groups to discuss specialized issues in more detail, with each group reporting its findings at the conclusion of the meeting.

An LHeC would be uniquely sensitive to the physics of massive electron–quark bound states and to other exotic processes involving excited or supersymmetric fermions. Beyond the search for new particles such as these, the LHeC would complement the LHC in the investigation of the Standard Model and in understanding new physics. Light Higgs bosons would be produced dominantly through WW fusion and could be precisely studied in decay modes such as bb, which is expected to be problematic at the LHC. At the LHeC, top quarks would be produced copiously, both singly and in pairs, in the relatively clean environment offered by ep scattering.

With LHeC data the parton densities of the proton could be measured at momentum-transfer-squared beyond 10⁶ GeV² and at small fractions of the proton momentum (with Bjorken-x below 10⁻⁶), which are previously unexplored regions (figure 3). The kinematic range covered would match that required for a full understanding of parton–parton scattering in LHC proton–proton (pp) collisions. LHeC data would constrain each of the quark flavours separately for the first time, giving unrivalled sensitivity to the heavy quarks and to the gluon density over several orders of magnitude in x. In the process the strong coupling-constant could be measured to unprecedented precision.
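These kinematic limits can be sketched from the deep-inelastic-scattering relation Q² = sxy, where the inelasticity y lies between 0 and 1; the 1.4 TeV centre-of-mass energy quoted earlier is the input assumption here:

```python
# DIS kinematics sketch: Q^2 = s * x * y, with inelasticity 0 < y < 1.
# Assumption: sqrt(s) = 1.4 TeV, as quoted for the LHeC above.
SQRT_S_GEV = 1400.0
S = SQRT_S_GEV**2  # ~2e6 GeV^2

# The largest momentum-transfer-squared (x -> 1, y -> 1) is s itself,
# consistent with "beyond 10^6 GeV^2":
q2_max = S

# Smallest accessible Bjorken-x for a given minimum Q^2, at y close to 1:
def x_min(q2_min_gev2, y_max=0.95):
    return q2_min_gev2 / (S * y_max)

print(f"{q2_max:.1e} GeV^2")  # ~2.0e+06 GeV^2
print(f"{x_min(1.0):.0e}")    # ~5e-07 at Q^2 ~ 1 GeV^2, i.e. below 1e-6
```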

Such an ep collider would provide an unrivalled laboratory for the study of strong-interaction dynamics. It would access a low-x region where quarks that are usually “asymptotically free” meet an extremely high background-density of partons. Various novel effects are predicted, including a well-supported conjecture that, in protons at LHC energies, pairs of the densely packed partons begin to recombine into single quarks or gluons.

An LHeC would also allow the scattering of leptons off heavy ions, which has outstanding potential because all current knowledge of nuclear-parton distributions has been obtained in fixed-target experiments. The LHeC would extend the x and Q² ranges explored by up to four orders of magnitude, offering an understanding of the initial partonic states in LHC heavy-ion collisions and amplifying the sensitivity to the new physics of ultradense partonic systems. Electron–deuteron scattering would allow the first exploration of neutron structure at collider energies, leading to further unique studies of parton densities and to tests of long-proposed relationships between diffraction and nuclear shadowing.

Accelerator challenges

To realize these wide-ranging physics possibilities the main challenge lies in bringing the LHC’s protons or heavy ions into collision at high luminosity with a new electron beam, without inhibiting the ongoing hadron–hadron collision experiments. Working groups are pursuing two basic layouts of the electron accelerator for the conceptual design report, in order to understand fully the advantages and consequences of each.

An electron beampipe in the same tunnel as the LHC has the advantage of high luminosity, beyond 10³³ cm⁻² s⁻¹, at energies of 50–70 GeV for reasonable power consumption (figure 4). According to a preliminary study, synchronous ep and pp LHC operation appears to be possible. This set-up would require by-pass tunnels of several hundred metres around existing experiments. These ducts could be used to host the RF infrastructure and could be excavated in parallel with normal LHC operations. Injection to an electron ring could be provided by the Superconducting Proton Linac (SPL), which is under consideration as part of the LHC injection upgrade. A further option for an initial phase of the LHeC is to use multiple passes in the SPL for the full electron acceleration, which could produce energies of around 20 GeV.

An alternative solution for the electron beam is a linear accelerator (linac) with somewhat reduced luminosity but with an installation that is decoupled from the existing LHC ring (figure 5). The linac could use RF cavity technology under development for the proposed International Linear Collider, in either pulsed or continuous-wave mode. Power and cost permitting, it could produce energies of 100 GeV or more and provide electron–quark collisions at a centre-of-mass energy approaching 2 TeV.

Detailed calculations of the LHeC electron-beam optics have led to proposals for the layout of the interaction region, which is also a major consideration for the detector design. The highest projected luminosities, which are required to probe the hardest of ep collisions, may be achieved by placing beam-focusing magnets close to the interaction point. However, measurements at small angles to the beampipe are also important for the study of the densest partonic systems at low x and the hadronic final state at high x. Among the many interesting ideas, one proposed design involves instrumenting the focusing magnets for energy measurements. A first detector study for ep and eA physics at the LHeC includes high-precision tracking and high-resolution calorimetry, which would lead to a new level of precision in ep collider experiments.

Following an interim report presented to ECFA at the end of November 2008, the conceptual design work on an ep/eA collider at the LHC continues, with a second major workshop meeting scheduled for 7–8 September 2009. If realized, this facility would become an integral part of the quest to understand fully the new terascale physics that will emerge as the LHC era unfolds.

Planck satellite takes off to chart the universe

ESA’s Planck spacecraft is the first European satellite dedicated to the study of the cosmic microwave background (CMB) radiation. It is due to be launched on 29 April aboard an Ariane 5 rocket from ESA’s launch site in Kourou, French Guiana. Planck’s primary goal is to determine the cosmological parameters of the universe and to survey astronomical sources. Scientists are hopeful that it will also answer many other questions in fundamental physics and astrophysics.

The satellite will orbit at the second Lagrangian point (L2) of the Earth–Sun system, 1.5 million km from the Earth (figure 1). From this position, Planck will explore the unknowns of the cosmic background radiation – the relic radiation that carries many secrets of the history and evolution of the universe. During the first 380,000 years following the Big Bang, all of the dramatic events that steered the evolution of the universe, its geometry and its properties were imprinted on the CMB.

The CMB today permeates the universe and has an average temperature of 2.725 K, though observations have revealed slightly colder and hotter spots known as anisotropies. Highly accurate studies of where these anisotropies are and what produced them may allow researchers to decode a wealth of information about the properties of the universe. Planck’s task – 13.7 billion years after the Big Bang – is not an easy one, however, because the radiation signal is feeble and is embedded in all of the other galactic and extragalactic signals, each emitting at different frequencies.

Following in WMAP’s footsteps

The first two scientific missions to map the CMB and its anisotropies were NASA’s Cosmic Background Explorer (launched in 1989) and the Wilkinson Microwave Anisotropy Probe (WMAP, launched in 2001). The data from these two satellites confirmed that the universe is flat, that its expansion is accelerating and that only 4% consists of known forms of matter. Nevertheless, given the lower accuracy of previous experiments, many questions remain concerning the nature of dark energy (73% of the universe) and dark matter (23%), as well as the processes that marked the infancy of the universe.

Planck comes eight years after WMAP and is designed to improve significantly on those results. The satellite is equipped with both the Low Frequency Instrument (LFI) and the High Frequency Instrument (HFI). “Together, the two instruments will scan the universe in nine frequency channels, with a sensitivity that is 10 times better than that of WMAP,” says Reno Mandolesi of the Italian Institute of Space Astrophysics and Cosmic Physics in Bologna (IASF-BO/INAF). He is also the principal investigator of the consortium that built the LFI. “However, the main improvement of Planck, with respect to previous missions, is in the suppression and control of systematic effects. The HFI and LFI employ two different detection techniques and this drastically reduces the systematic effects. Both instruments operate at cryogenic temperatures, at which the intrinsic noise coming from the devices is reduced to a minimum,” he adds.

The systematic effects can also be controlled by an appropriate choice of orbit- and sky-scanning strategy. “WMAP was the first satellite to orbit round L2 and Planck will fly in a similar orbit. From L2 the noise from the Earth is drastically reduced,” confirms Mandolesi. Also, from this position the satellite’s telescope can always be protected from illumination from the Earth, the Sun and the Moon, thanks to the optimal design and observational strategy.

The LFI is an array of 22 radiometers, each one made of an antenna to capture the signal and cryogenically cooled (20 K) electronics – a combination of ultralow-noise amplifiers and high-electron-mobility transistors – for read-out. “Low-noise temperature fluctuations in the amplifiers are a crucial factor in the measurement,” says Mandolesi. “The LFI radiometers meet the requirements for both noise and bandwidth, with low power consumption at all frequencies – and they establish world-record low-noise performances in the 30–70 GHz range. This is particularly important considering that the main noise sources come from our own galaxy and have their minimum around the 70 GHz frequency,” he explains.

The HFI is an array of 48 bolometric detectors placed at the focal plane of the Planck telescope. These will measure the energy of the incident CMB radiation in six frequency channels between 100 and 857 GHz, with sensitivity in the lower frequencies close to the fundamental limit set by the photon statistics of the background. The HFI was designed and built by a consortium of scientists led by Jean-Loup Puget of the Institut d’Astrophysique Spatiale in Orsay. The detectors operate at the cryogenic temperature of 0.1 K, obtained using a cryochain of sorption, mechanical and dilution coolers.
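The choice of bands makes sense against the CMB spectrum itself: a 2.725 K blackbody peaks near 160 GHz in frequency units, so the LFI channels (30–70 GHz) and the lower HFI channels bracket the peak. A back-of-envelope check, using Wien's displacement law in its frequency form (this framing is an illustration, not taken from the Planck teams):

```python
# Peak frequency of a blackbody at temperature T, from Wien's displacement
# law in frequency form: nu_peak = a * k_B * T / h, with a ~ 2.8214.
K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
WIEN_A = 2.821439    # root of the frequency-form Wien equation

def blackbody_peak_ghz(t_kelvin):
    return WIEN_A * K_B * t_kelvin / H / 1e9

# The 2.725 K CMB peaks near 160 GHz, within the span of Planck's channels:
print(round(blackbody_peak_ghz(2.725)))   # 160
```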

Signals from the CMB are polarized in two types of mode, known as E-modes and B-modes. The E-modes have already been measured (Kovac et al. 2002; Page et al. 2007). All of the LFI channels and four of the HFI channels can measure the intensity of the CMB radiation as well as its linear polarization. “By combining the signals measured by the LFI and HFI, Planck might be able to discover the B polarization mode, which is linked to the existence of primordial gravitational waves,” says Mandolesi.

“In some cosmological models it could even be possible to find signatures that might correspond to scenarios with extra dimensions of the universe. Also, the mass and quantum fluctuations that occurred at 10⁻³⁵ s after the Big Bang, and that might have affected cosmic inflation, can be explored by studying the polarization modes of the CMB with high accuracy. Furthermore, Planck’s excellent sensitivity might allow the discovery of interesting physics hidden behind the non-Gaussian distribution of the temperature anisotropies predicted by many cosmological models,” he explains.

Planck will start collecting physics data after a three-month period of commissioning in orbit. Six months later the scientific teams will start the analysis, aimed at the early release of a catalogue of compact sources, the first to be made at so many frequencies. It is expected to become public about 15 months after the launch. A core team of about 100 scientists supporting the Data Processing Centre in Trieste will carry out the processing and analysis of LFI data. The HFI data will be processed by a distributed system involving several institutes in France and the UK. The satellite will accomplish two complete surveys of the sky over 14 months and the hope is that this will be extended to four surveys.

• For more about Planck, see www.esa.int/SPECIALS/Planck/.

CERN firms up the LHC schedule

In a workshop in Chamonix on 2–6 February, members of the LHC accelerator and experimental teams, as well as CERN’s management, met to formulate a realistic timetable to have the LHC running safely and delivering collisions. The main outcome is that there will be physics data from the LHC in 2009 and there is a strong recommendation to run the machine through the winter until the experiments have produced substantial quantities of data. Such extended running could achieve an integrated luminosity of more than 200 pb⁻¹ at 5 TeV per beam.

Meetings in Chamonix were a feature of the annual winter shutdown at CERN during the LEP era, providing a forum where intense discussions led to a clear consensus on objectives for the following year. CERN’s director-general, Rolf Heuer, intends for similar meetings to guide operations during the LHC era. The first occasion provided a tough start, as the participants had to agree on the best way to proceed following the incident in sector 3-4 that brought LHC commissioning to a halt last September.

The crucial improvement since the incident in sector 3-4 is a new resistance-measurement system which can detect nano-ohm resistances in the joints. This new system would have prevented September’s incident and will prevent all imaginable failures of a superconducting joint in the future. The work on this new detection and protection system was reviewed at the workshop and is already making good progress. Following completion of the design of the two principal electronics boards, the first orders were placed in early February. At the same time, manufacture of the cable segments had begun and installation started in sector 4-5.

For any “unimaginable” failure of a joint, the installation of new pressure-relief valves will reduce the amount of damage that occurs, compared with last year. The new valves will prevent pressure build-up and collateral damage by allowing a greater rate of helium release in the event of a sudden increase in temperature. Discussions in Chamonix centred on whether to install these pressure-relief valves in one go or to stage their installation over the next two shutdowns. There were many interesting exchanges on this topic and opinions were divided. The CERN management is to make the final decision on this in the week beginning 9 February.

Meanwhile, work continues apace on the repairs at the LHC. At the end of January, a dipole from sector 1-2, which had been identified as having an internal splice resistance of 100 nΩ, was opened up after removal from the tunnel and was found to have little solder on the splice joint. It is likely that a similar small resistance was at the root of the incident in sector 3-4. The LHC teams can now detect a single defective splice in situ when a sector is cold and they have identified another dipole showing a similar defect in sector 6-7. This sector will be warmed up and the magnet removed. Each sector has more than 2500 splices, but the resistance tests can only be conducted on cold magnets. Three sectors remain to be tested: sector 3-4, where the incident occurred, and the adjoining sectors, 2-3 and 4-5.
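The significance of such tiny resistances follows from the ohmic power P = I²R that a splice dissipates inside the cold mass; the 12 kA used below is a round figure for the dipole-circuit current near full energy, an assumption for illustration only:

```python
# Ohmic power dissipated in a resistive splice, P = I^2 * R.
# Assumption: ~12 kA as a round figure for the dipole-bus current near
# full energy; the actual current scales with beam energy.
def splice_power_watts(current_amps, resistance_ohms):
    return current_amps**2 * resistance_ohms

# The defective 100 nOhm splice described above:
print(splice_power_watts(12e3, 100e-9))   # ~14 W deposited in the cold mass

# A splice at the nano-ohm detection level of the new system:
print(splice_power_watts(12e3, 1e-9))     # ~0.14 W
```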

Tests on the magnets were among the important topics under discussion at Chamonix. The participants agreed on teams to work on the detailed analysis of the measurements made during the cold tests of magnets in building SM18 before their installation in the tunnel. New analysis techniques will be devised to provide a complete picture of the resistance in the joints of all magnets installed in the LHC. The aim is to allow an early warning and early correction of any further suspicious splices.

• For up-to-date news, see The Bulletin at http://cdsweb.cern.ch/journal/.

Protons reach the J-PARC Hadron Experimental Hall

The Main Ring at the Japan Proton Accelerator Research Complex (J-PARC) has reached a new milestone with the successful extraction of a proton beam to the Hadron Experimental Hall and then to the beam dump.

J-PARC, a joint project of the Japan Atomic Energy Agency and the KEK laboratory, has been under construction at Tokai since 2001. With a 1600 m circumference and a 500 m diameter, the 50 GeV synchrotron of the Main Ring is the third and final stage in the accelerator complex. The first stage is the linac, followed by the 3 GeV synchrotron. The Main Ring will operate at 30 GeV in the first phase of the project.

The proton-beam tests at J-PARC started in November 2006 and reached the initial goal of 30 GeV protons in the Main Ring by December 2008. Then on 27 January, 30 GeV protons were extracted from the Main Ring to the secondary particle-production target, T1, located 250 m downstream in the Hadron Experimental Hall, and were transported onwards to the beam dump.

The Hadron Experimental Hall, which is one of two facilities at the Main Ring, will provide beams of secondary particles produced by the protons. These beams will be the most intense secondary-particle beams at this energy and should facilitate several different experiments, including precise measurements of CP violation in K mesons and studies of the collective motions of strange quarks in hypernuclei. Making an abundance of secondary particles available from the primary proton beam has required the development of various methods for handling the high-intensity beam, including the construction of a dense radiation shield and of magnets for the high-radiation area that are rugged and easily replaceable if problems arise.

Neutron and muon beams are already available in the Material and Life Science Facility at J-PARC. With the success of the Hadron Experimental Hall, an important goal is to send high-power neutrino beams to the Super-Kamiokande neutrino detector, 295 km away.

…and projects to upgrade the NSCL make excellent progress

Despite the winter weather, including more than 50 cm of snow in December, construction continued on the new office wing and the new experimental area for research with stopped and reaccelerated rare isotope beams at the NSCL at MSU. Construction milestones achieved by the end of 2008 included: completing the steelwork for the new experimental area; tearing down one of the original wings of the NSCL building to make space for the new offices; completing the office foundations and underground utilities; and drilling the well for the lift shaft. Work continues on the steel superstructure for the new office block and on masonry for the new experimental area, which is scheduled to be enclosed in February.

Indoors, meanwhile, faculty and staff at the NSCL are making strides towards implementing new research capabilities related to a new accelerated beam facility, ReA3. This upgrade, which includes a new linear accelerator and a new experimental area, is funded by MSU and should begin commissioning in early 2010.

ReA3 will provide unique low-energy, rare-isotope beams, which will be produced by stopping fast, separated rare isotopes in a gas-stopper and then reaccelerating them in a linear accelerator. It will make available reaccelerated beams of elements that are typically difficult to produce at facilities based on isotope separation on-line. Among the science opportunities that ReA3 will open is the possibility of measuring a remaining set of nuclear-reaction rates that are necessary for accurate models of nova nucleosynthesis and studying how exotic nuclei with large neutron halos interact at large intranuclear distances.

The balcony that will hold ReA3 and the electron-beam ion trap (EBIT) for charge breeding is complete and ready for devices to be mounted. Development continues on the various components to stop, transport, charge-breed and reaccelerate rare isotopes. These components include the linear gas stopper; the low-energy beamline system from the gas stopper to the EBIT and to the new stopped-beam area; the EBIT charge breeder and mass separator; and the linac.

The light-pulse horizon

“According to the general theory of relativity, space without aether is unthinkable; for in such space there not only would be no propagation of light … But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time.” Albert Einstein, 1920.

Aether, the pure air breathed by gods, is not much in fashion in laboratories today. Physicists speak instead of the vacuum, in the context of quantum physics, and quantum vacuum fluctuations that fill space that is free from real matter. How light slips through these fluctuations was first studied in the 1930s by Werner Heisenberg, Hans Euler, Bernhard Kockel and Victor Weisskopf, and later by Julian Schwinger. Their work revealed the first “effective” interaction – the new and unexpected scattering of light on light, and of light on the background electromagnetic field. This interaction originates in quantum vacuum fluctuations into electron–positron pairs and makes the electric field unstable to pair production. Thus any macroscopic electric field is metastable because, in principle, it can decay into particles.

The critical field strength for this instability, E0, arises when a potential of V0 = 2mc²/e = 1 MV (where m is the electron’s mass and e its charge) occurs over the electron’s Compton wavelength – that is, when E0 = 1.3 × 10¹⁸ V/m. This leads to vacuum decay into pairs at timescales of less than attoseconds (10⁻¹⁸ s). The back-reaction of the particles that are produced screens the field source, giving an effective upper limit to the strength of the electric field. However, as the applied external field decreases in strength, its lifespan increases rapidly: for a field strength of E = 5 × 10¹⁶ V/m, the lifespan is similar to the age of the universe so that, for all practical purposes, present-day field configurations are stable.
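The numbers quoted above can be verified with a short back-of-envelope calculation. The following sketch (not part of the original article) uses standard CODATA constants:

```python
# Check of the critical (Schwinger) field E0 = m^2 c^3 / (e * hbar) and the
# pair-creation potential V0 = 2 m c^2 / e quoted in the text.
m = 9.109e-31      # electron mass, kg
c = 2.998e8        # speed of light, m/s
e = 1.602e-19      # elementary charge, C
hbar = 1.055e-34   # reduced Planck constant, J s

V0 = 2 * m * c**2 / e           # pair-creation potential, V
E0 = m**2 * c**3 / (e * hbar)   # critical field strength, V/m

print(f"V0 = {V0:.2e} V")       # about 1.0e6 V, i.e. 1 MV
print(f"E0 = {E0:.2e} V/m")     # about 1.3e18 V/m
```

Both values agree with the figures in the text to the precision quoted.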

The Compton wavelength of an electron is one three-millionth of a typical optical wavelength, so vacuum fluctuations do not greatly obstruct the propagation of light. Moreover, as Schwinger showed, a coherent ideal plane light-wave cannot scatter from itself (or be influenced by itself) no matter what the field intensity is. This is the only known form of light to which the vacuum is exactly transparent within the realm of quantum electrodynamics. For non-ideal plane waves, space–time translation-invariance symmetry and quantum coherence only partially protect the propagation of light pulses.

A laser pulse of several kilojoules and just a few wavelengths long is anything but a plane wave. Such a pulse pushes apart virtual electrons and positrons – in the near future, up to energies of many giga-electron-volts. If the virtual vacuum waves were to decohere, the light pulse would materialize into pairs. However, by quantum “magic” the deeply perturbed vacuum is restored after the pulse has passed. Thus a single pulse, even though it is not a plane wave, will at present-day intensities slip through the vacuum. Colliding light pulses provide a greater opportunity to interact with the vacuum structure because the magnetic field can be compensated and/or the electric wave-number doubled, thereby enhancing the light–vacuum interaction. Two superposed pulses do not so much interact with each other as interact together with the fluctuations in the vacuum.

High-intensity pulsed lasers also offer a radical approach to accelerating real particles to high energies. The electromagnetic fields of the laser pulses can be huge: current off-the-shelf, high-power lasers can deliver electric fields as great as 10 GeV/μm (10⁴ TeV/m). Metal will typically break down at fields of less than 100 MeV/m – a natural limit and the current standard for accelerator designs based on RF technology. The much higher fields available using lasers promise ultracompact accelerator technology, although the difficulties should not be underestimated. The shorter wavelengths involved imply far better control and precision than with RF acceleration. What helps to push laser technology ahead is the greater intensity of light that is available in comparison with RF. For this reason, laser-pulse technology is the most significant ingredient of laser acceleration, and great progress can now be achieved on timescales of a year.
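The unit conversion above, and the gap between laser and RF gradients, can be sanity-checked in a couple of lines (a sketch, not from the article):

```python
# Quick unit check: 10 GeV per micron expressed in TeV/m, and its ratio to
# the ~100 MeV/m breakdown limit of metallic RF structures.
laser_gradient = 10e9 * 1e6       # 10 GeV per 1e-6 m, in eV per metre
rf_limit = 100e6                  # ~100 MeV/m limit, in eV per metre

print(laser_gradient / 1e12)      # 10000.0 -> 10^4 TeV/m, as in the text
print(laser_gradient / rf_limit)  # 100000000.0 -> a factor of 10^8 over RF
```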

This was not always so. Until the mid-1980s, efficient ultrashort pulse-amplification that would preserve the beam quality seemed to be unattainable, considering the damage caused to optical devices. A solution emerged in 1985 with the concept of chirped pulse amplification (CPA), in which a short pulse at an energy level as low as nanojoules is stretched by a large factor in time using dispersive elements, such as a pair of diffraction gratings (figure 1). This is possible because of the large number of Fourier frequencies that form the ultrashort pulse. Each frequency takes a different route and hence a different time to traverse the dispersive element.

Once the pulse has been stretched, the red part of the spectrum is ahead, followed by the blue. The stretching factor can be as large as 10⁶, yet the operation does not significantly change the total pulse energy. Consequently the pulse intensity drops by the same ratio, i.e. 10⁶, implying that the long pulse can be amplified safely, preserving the beam quality and laser components. This concept works so well that in modern CPA systems the pulse is stretched by a factor of 10⁶, amplified by 10¹², then compressed by a factor of 10⁶ back to its initial time structure. A nano- to microjoule primary pulse turns into a pulse of up to kilojoules comprising nearly 10²² photons of (sub)micron wavelength. In a nutshell, this pulse is a table-top particle accelerator. The interaction with matter of light pulses containing joules or even kilojoules of energy (compared with the less than microjoules of the most powerful particle accelerators) generates intense bursts of radiation (figure 2).
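The photon count for a kilojoule pulse follows directly from the photon energy at micron wavelength; a quick check (a sketch, not part of the article):

```python
# How many photons of ~1 um wavelength make up a 1 kJ pulse?
h = 6.626e-34          # Planck constant, J s
c = 2.998e8            # speed of light, m/s
wavelength = 1e-6      # 1 micron, m
pulse_energy = 1e3     # 1 kJ, J

photon_energy = h * c / wavelength        # ~2e-19 J per photon
n_photons = pulse_energy / photon_energy
print(f"{n_photons:.1e}")                 # ~5e21, i.e. nearly 10^22 photons
```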

Accelerating gradients

Nevertheless, laser particle acceleration has had its ups and downs. As the Lawson–Woodward theorem states, direct plane-wave laser acceleration of particles is not possible – you lose what you gain in a perfect wave. However, if the intense pulse is so short that it “resonates” with the innate plasma oscillations of matter, a huge accelerating gradient is possible. The energy imparted to the particle in each acceleration step can be directly derived from the wave amplitude of the pulse. The short-pulsed nature of the laser is also of great interest in this acceleration method, as is the possibility of using circularly polarized light.

Particle-acceleration schemes use lasers to generate wake waves in plasma in the relativistic regime for electrons in the optical field. Enrico Fermi once contemplated a 1 PeV (10⁹ MeV) accelerator girdling the Earth; laser acceleration may allow us to reach this energy on the scale of 1 km by employing a subpicosecond 15 MJ laser. The route to this goal would test ultrahigh-gradient acceleration theory at 10 TeV, which could be achievable with a laser of 15 kJ and a 50 fs pulse. Such an intense laser pulse is not yet available, but the proposed Extreme Light Infrastructure (ELI) should offer an opportunity to explore this domain. The peak power of ELI will be in the exawatt (10¹⁸ W) region – that is, 100,000 times the power of the global electricity grid – albeit only over several femtoseconds.

Are there other ways to go from the laser pulse to an intense particle beam? If beam quality is not of great concern, it is possible to exploit the action of the pulse on a foil that is only a fraction of the wavelength thick. At the Trident Laser Facility at Los Alamos National Laboratory, Manuel Hegelich and his team shoot a high-contrast (no preceding light) pulse onto a thin, carbon-diamond nanofoil. Such a pulse is not reflected by the “pre-plasma” formed on the foil but propagates through the foil, where it picks up electrons. The cloud of relativistic, wave-riding electrons generates longitudinal electrical fields, which cause carbon ions to follow electrons, creating two “beams”. At ELI such a pulse–foil interaction could provide a source of high-energy relativistic heavy ions, because the pulse intensity that could be achieved would permit direct acceleration of ions in a relativistic regime (figure 3).

The plasma cloud emerging from the foil could form gamma-ray beams suitable for photonuclear physics. Einstein observed that a relativistic “flying mirror” (in this case the plasma) would “square” the relativistic Doppler effect, leading to a boost of photon energy, ω = 4γ²ω₀, where ω₀ is the original energy and γ the Lorentz factor (figure 4). This effect has been demonstrated, both by using the laser wakefield created on the surface of a solid and by using a relativistically moving plasma of thin foil propelled by the laser beam, from which another laser beam is reflected. It should soon lead to compact coherent X-ray and even gamma-ray light sources. Dietrich Habs and colleagues at the Munich-Centre for Advanced Photonics are pursuing an initial design effort. The gamma rays produced in this way are not only of high energy but also compressed by a factor 1/γ² into an ultrashort pulse. The coherent pulse contains increased electromagnetic fields, so the technology leads to ultrahigh electrical-field strengths where the decay of the vacuum becomes observable.

It appears that coherent reflection of a femtosecond pulse is possible from a flying mirror of dense plasma with γ = 10,000 – that is, from an electron cloud moving with an energy of 5 GeV. The resulting 400 MeV photon pulse would also be compressed from femtoseconds to 10⁻²³ s. Such a pulse could, in principle, be focused into a femtometre-scale volume, the size of a nucleon. On such a small distance scale, 10 kJ would be enough to reach temperatures in the 150 GeV range, which should allow the study of the melting of the vacuum structure of the Higgs field and the electroweak phase transition. Clearly this is on the far horizon, but there are other distance/temperature scales of interest on the journey there. Such a system would allow studies of electromagnetic plasma at megaelectron-volt temperatures and exploration of the quark–gluon plasma on a space–time scale at least 1000 times as great as can currently be achieved. This would be truly recreating a macroscopic domain of the early universe in the laboratory.
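The quoted numbers hang together: plugging γ = 10,000 into the flying-mirror relation ω = 4γ²ω₀ reproduces them, assuming an ordinary ~1 eV optical photon as the seed (a sketch, not from the article):

```python
# Flying-mirror Doppler boost: omega = 4 * gamma^2 * omega0.
gamma = 1e4                 # mirror Lorentz factor quoted in the text
omega0_eV = 1.0             # ~1 eV optical seed photon (assumed)
mc2_MeV = 0.511             # electron rest energy, MeV

boosted_MeV = 4 * gamma**2 * omega0_eV / 1e6   # boosted photon energy, MeV
electron_GeV = gamma * mc2_MeV / 1e3           # energy of a gamma = 1e4 electron
compressed_s = 1e-15 / (4 * gamma**2)          # femtosecond pulse after compression

print(boosted_MeV)     # 400.0 MeV, as in the text
print(electron_GeV)    # ~5 GeV electron cloud
print(compressed_s)    # ~2.5e-24 s, of order 1e-23 s
```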

Another fundamentally important aspect of the science possible with the extremely high fields in lasers concerns the immense acceleration, a, that electrons experience in the electromagnetic field of the pulses (e.g. up to a = 10³⁰ cm/s² for an electron in ELI). According to the equivalence principle, this corresponds to an equivalent external gravitational field. The effect for the accelerated electron is that the distance, d = c²/a, to this event horizon becomes as short as the electron’s Compton wavelength, in which limit experiments can probe the behaviour of quantum particles in the realm of strong gravity. Work is under way to demonstrate Unruh radiation, a cousin of Hawking radiation. (Hawking radiation is thermal radiation in strong gravity, while Unruh radiation arises in the presence of strong acceleration.) Such experiments would allow the study of the extent and validity of the special and general theories of relativity, as well as test the equivalence principle in the quantum regime.
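The claim that d = c²/a approaches the Compton wavelength can be checked numerically for the quoted acceleration (a sketch using CODATA constants, not from the article):

```python
# Horizon distance d = c^2 / a for the acceleration quoted for ELI,
# compared with the electron Compton wavelength h / (m c).
c = 2.998e8            # speed of light, m/s
a = 1e30 * 1e-2        # 1e30 cm/s^2 converted to m/s^2
h = 6.626e-34          # Planck constant, J s
m = 9.109e-31          # electron mass, kg

d = c**2 / a                   # ~9e-12 m
lambda_C = h / (m * c)         # ~2.4e-12 m
print(f"d        = {d:.1e} m")
print(f"lambda_C = {lambda_C:.1e} m")   # same order of magnitude as d
```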

To conclude, high-intensity pulsed lasers, and in particular the proposed ELI facility, offer a novel approach to particle acceleration and widen the range of fundamental physics questions that can be studied (Mourou et al. 2006). Light pulses will be able to produce synchronized high-energy radiation and pulses of elementary particles with extremely short time structures – below the level of attoseconds. These unique characteristics, which are unattainable by any other means, could be combined to offer a new paradigm for the exploration of the structure of the vacuum and fundamental interactions. Ultra-intense light pulses will also address original fundamental questions, such as how light can propagate in a vacuum and how the vacuum can define the speed of light. By extension it will also touch on the question of how the vacuum can define the mass of all elementary particles. The unique features of ELI – its high field-strength, high energy, ultrashort time structure and impeccable synchronization – herald the entry of pulsed high-intensity lasers into high-energy physics. This is a new scientific tool with a discovery potential akin to what lay on the horizon of conventional accelerator technology in the mid-20th century.

CERN leads the way with novel beam extraction

In 2001 a team at CERN proposed a new scheme for ejecting beam from a circular particle accelerator over a few turns using magnets that generate nonlinear fields. The aim was to replace the so-called continuous transfer (CT) technique, which is used to transfer protons between the PS and the SPS for fixed-target physics and the CERN Neutrinos to Gran Sasso (CNGS) project.

CT dates from the 1970s and is based on slicing beam onto an electrostatic septum that is used to split off some of the orbiting beam. In the scheme, the horizontal tune (the number of betatron oscillations per turn, QH) is set to 6.25 so that the beam rotates by 90° in phase space every turn. A system of slow- and fast-pulsing dipoles (acting on a few milliseconds and microseconds, respectively) is used to displace the proton beam horizontally across the septum so that at each turn approximately one-fifth of the beam is sliced off by the septum blade (figure 1). This slice is then deflected by the field of the septum so that it enters into a second septum downstream – the magnetic extraction septum. The whole beam is extracted in five turns.
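The 90° rotation per turn follows from the fractional part of the tune. A minimal sketch (hypothetical toy model, not CERN optics code) of the one-turn map in normalized phase space:

```python
import math

# One-turn map in normalized phase space for horizontal tune QH = 6.25:
# a rotation by 2*pi*QH, i.e. an effective 90 degrees per turn.
def one_turn(x, xp, q=6.25):
    phi = 2 * math.pi * q
    return (x * math.cos(phi) + xp * math.sin(phi),
            -x * math.sin(phi) + xp * math.cos(phi))

x, xp = 1.0, 0.0              # a test particle displaced toward the septum
for turn in range(1, 5):
    x, xp = one_turn(x, xp)
    print(turn, round(x, 6), round(xp, 6))
# After 4 turns the phase-space point returns to (1, 0); the slow and fast
# bumps displace the beam so that roughly one-fifth crosses the septum blade
# on each of the five extraction turns.
```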

The choice of a five-turn extraction is dictated by the use of two PS cycles to fill the SPS ring, which has a circumference that is 11 times as large as that of the PS. By ejecting the beam over five turns at the end of two consecutive PS cycles, ten-elevenths of the SPS circumference is filled. One-eleventh of the circumference remains empty to avoid interference between the circulating beam and the transient times of the SPS injection kickers.
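The filling arithmetic above reduces to a one-line fraction (a sketch, not from the article):

```python
# SPS filling arithmetic: the SPS circumference is 11 times that of the PS,
# so each extracted PS turn fills 1/11 of the SPS. Two PS cycles of five
# turns each fill 10/11, leaving one eleventh empty for the kicker rise time.
from fractions import Fraction

turns_per_cycle = 5
cycles = 2
filled = Fraction(turns_per_cycle * cycles, 11)

print(filled)          # 10/11 of the SPS circumference filled
print(1 - filled)      # 1/11 left empty
```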

Making beamlets

In the new scheme, which has been named multiturn extraction (MTE), the beam is split horizontally into five beamlets – one in the centre and four in stable islands of the horizontal phase space. These islands are generated by nonlinear fields of sextupole and octupole magnets and are separated by sweeping the horizontal tune through the stable one-fourth resonance, QH = 6.25 (figure 2). The beamlets circulate in the PS until they are moved, turn by turn, beyond the magnetic extraction septum by dedicated slow and fast closed bumps. The separation of the beamlets that is necessary to avoid intercepting the extraction septum is controlled by the value of the horizontal tune at the end of the resonance crossing, as well as by the strength of the nonlinear magnets. This method no longer requires the electrostatic septum.

This approach has several advantages compared with the original CT extraction. First, there is no interaction between the beam and the septum blade – an interaction whose losses limit high-intensity operations. Second, the beamlets trapped in the islands can have the same intensity, emittance and optical parameters at extraction. This eases the matching with the receiving accelerator, which would not be possible with CT. Third, several parameters, such as the strengths of the nonlinear magnets, the speed at which the resonance is crossed and the final horizontal tune, are available to adjust and optimize the parameters and separation of the beamlets simultaneously. In CT, only the fast-bump amplitude can be used to equalize the intensities or emittances of the beamlets. Moreover, the MTE scheme can be time-reversed, which could allow a multiturn injection (MTI) based on stable islands.

MTE in practice

To complement the theoretical analysis, extensive measurement campaigns began at the PS in 2002 to assess the feasibility of loss-free beam-splitting by crossing the stable one-fourth resonance. This was essential before undertaking any hardware upgrade of the PS machine, such as new octupole magnets and fast dipoles for a dedicated orbit bump. In 2004 the tests achieved the necessary loss-free beam-splitting even with a high-intensity, single-bunch beam of about 6 × 10¹² protons. The next step was to ensure that an equal intensity was shared between the four beamlets trapped in the islands and the beam core. To avoid unwanted transient effects in the SPS, the scheme has to give a maximum difference of about 5% in the relative intensities of the islands and the core. In tests, the best beam-sharing that we achieved was about 18% in each of the four islands and about 28% in the central core. This intensity ratio is slightly out of specification but should be compared with the accuracy of the determination of the beamlets’ profiles and hence their intensity, which is a few per cent.

The positive outcome of the experimental tests led to the approval of the PS MTE project. This should provide a considerable reduction of beam losses in the PS, which is particularly crucial for the production of high-intensity proton beams for CNGS. The project, started in 2006, should last until 2010 with the peak effort for hardware production and installation occurring during the winter shutdown of 2007/2008 and first beam commissioning by mid-2008.

Implementing MTE has involved a considerable number of hardware modifications in the PS ring. The slow bump that is used to displace the split beam towards the magnetic extraction septum is generated by six magnets, each with independent power converters. This is necessary to shape the bump to optimize the available mechanical aperture. (Originally the extraction bump was generated by only four magnets that shared a common power supply.) The fast bump that is used to move the split beam across the extraction septum is generated by three fast dipoles (kickers) with three new pulse-forming networks (PFNs). In addition, new octupole magnets have been designed and built – two straight sections have each been equipped with two sextupoles and one octupole to generate and manipulate the stable islands used for beam trapping. Globally, a review of the mechanical aperture, in the light of higher requirements imposed by the split beam, implied the need for a larger vacuum chamber in the extraction region, including the complex y-shaped chamber at the extraction point. A second phase will be implemented during the winter shutdown of 2009/2010. This will aim to improve the performance of the kickers in the ring and in the transfer line, the latter being used to correct the trajectory variations among the extracted turns.

Extraction testing

In May and June 2008 beam splitting was resumed using the newly installed sextupoles and octupoles, again achieving a loss-free process with a single bunch of about 3 × 10¹² protons. At the same time the new slow bump was commissioned so that it was ready for the extraction tests in July 2008 when the PFNs, completed and hardware-commissioned, became available for the beam tests. Then, on 1 August, five beamlets with almost the same intensity were successfully created from a single bunch of 3 × 10¹² protons and extracted in the first part of the transfer line, TT2, to the SPS. Figure 3a shows the measured horizontal beam profile in the PS at the end of the splitting process, with a fit of the five beamlets superimposed. Figure 3b shows the intensity signal of a pick-up in the TT2 transfer line. Each of the five peaks corresponds to a beamlet extracted over a single turn, whereas the distance between them corresponds to the PS revolution time of 2.1 μs.
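The 2.1 μs peak spacing is just the PS revolution time. A quick check, assuming the PS circumference of 628 m and protons travelling at essentially the speed of light (a sketch, not from the article):

```python
# PS revolution time = circumference / particle speed. At the PS extraction
# momentum the protons are ultra-relativistic, so v ~ c is a good approximation.
circumference = 628.0   # PS circumference, m
c = 2.998e8             # speed of light, m/s

t_rev = circumference / c
print(f"{t_rev * 1e6:.2f} us")   # ~2.09 us, matching the 2.1 us spacing
```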

The rest of the commissioning period until the end of the SPS-physics run on 3 November 2008 was dedicated to studying the best longitudinal structure for beam delivery and injection into the SPS, and it included a campaign of measurements of the optical parameters in the transfer line between the PS and the SPS machines. In the end it was possible to extract from the PS a beam bunched on harmonic number h = 16, corresponding to 16 × 5 beamlets, for a total intensity of about 0.7 × 10¹³ protons. This beam was injected into the SPS, accelerated and then extracted onto the CNGS target to produce the first neutrinos from an MTE beam.

Figure 4 shows the signal from the fast beam-current transformer in the SPS after the injection of the second PS cycle. The two batches separated by the gaps required by the finite rise time of the SPS injection kickers are clearly visible, as well as the bunched structure. During the last part of the PS run, when the SPS was already shut down, it was possible to set up a new completely debunched MTE beam – that is, with the longitudinal structure that provides the best SPS performance – with a total intensity of about 1.3 × 10¹³ protons. This yielded typical extraction efficiencies of 97–98%, with peaks of 99%. This is the maximum theoretical efficiency, given the unavoidable losses owing to the finite rise time of the PS extraction kickers. The corresponding extraction efficiency for a CT beam with the same intensity is 95%. In addition, the losses for MTE are localized around the extraction magnetic septum, while in the case of the CT, the losses are distributed through a wider part of the machine circumference, affecting a larger number of active elements.

For the 2009 start-up the plan is to begin by delivering CT beams to the SPS, but to resume MTE operation with a view to replacing the low-intensity CT extraction for fixed-target physics by mid-2009. Soon after, the beam for CNGS will also be generated with MTE, with the intensity gradually increased towards the nominal value.

Mobilizing for the LHC

Investigations following the incident in Sector 3-4 of the LHC on 19 September have confirmed that the cause was a faulty electrical connection between two magnets. This resulted in mechanical damage and the release of helium from the magnet cold masses. CERN has published two reports on the incident and confirmed that the accelerator will be restarted in summer this year.

An interim report issued on 15 October gave the results of the preliminary investigations. A more detailed report followed on 5 December, confirming that a small resistive zone developing in a bus connection in the circuit that conducts current between magnets probably caused the incident. The zone arose during the ramping-up of current in the main dipole circuit at the nominal rate of 10 A/s and, in less than a second, led to a resistive voltage of 1 V at 9 kA. The resistance was small – 200 nΩ – dissipating of the order of 10 W at high current. The power supply, unable to maintain the current ramp, tripped off and the energy-discharge switch opened, inserting dump resistors into the circuit to produce a fast decrease in current. In this sequence of events, the quench-detection, power-converter and energy-discharge systems behaved as expected. Within a second, an electrical arc developed, puncturing the helium enclosure and leading to a release of helium into the insulation vacuum of the cryostat. After three and four seconds, the beam vacuum also degraded in beam pipes 2 and 1, respectively.
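The dissipation figures follow from Ohm's law; a quick check of both regimes (a sketch, not from the reports):

```python
# Ohmic dissipation in the faulty splice: P = I^2 * R for the measured
# 200 nOhm at 9 kA, and P = V * I once the growing resistive zone had
# developed a voltage of 1 V.
I = 9e3          # current, A
R = 200e-9       # splice resistance, ohm
V = 1.0          # resistive voltage once the zone had grown, V

P_splice = I**2 * R    # ~16 W: "of the order of 10 W" in the report
P_zone = V * I         # 9 kW dissipated once the voltage reached 1 V
print(P_splice, P_zone)
```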

The insulation vacuum then started to degrade in the two neighbouring subsectors. (A vacuum subsector consists of two lattice cells, each with six dipoles and two quadrupoles, with “vacuum barriers” at both ends.) The spring-loaded relief discs on the vacuum enclosure opened when the pressure exceeded atmospheric, thus releasing helium into the tunnel, but they were unable to contain the pressure rise below the nominal 0.15 MPa in the vacuum enclosure of the central subsector. This resulted in large pressure forces acting on the vacuum barriers separating the damaged subsector from its neighbours.

Investigation teams confirmed the location of the electrical arc and, while they found no electrical or mechanical damage in neighbouring interconnections, they discovered contamination by soot-like dust, which propagated over some distance in the beam pipes. They also found damage to the multilayer insulation blankets of the cryostats. In addition, the forces on the vacuum barriers attached to the quadrupoles at the subsector ends were such that the cryostats housing these quadrupoles broke their anchors in the concrete floor of the tunnel and moved, with the electrical and fluid connections pulling the dipole cold-masses in the subsector from cold supports inside their undisplaced cryostats. The displacement of the quadrupole cryostats – short straight sections (SSS) – also damaged jumper connections to the cryogenic-distribution line.

As soon as the gravity of the incident was clear, a campaign for cryostating and testing of spare cold masses – both dipoles and quadrupoles – was immediately launched. After the sector had been warmed up, by the end of October, the repair programme began in earnest with the inspection of all the affected magnets – first underground and then at the surface. The programme also includes the inspection of the beam pipes and screens for contamination by soot-like metal dust and debris from the damaged insulation blankets. All the affected sections will be cleaned.

Virtually all the cold masses of magnets in the affected zone seem to be intact, with the possible exception of the busbars in the end zone. The damage has been mainly to components located between the cryostat and the cold masses, as a result of the displacement that occurred. In all, from a total of 57 magnets (42 dipoles and 15 SSS) in the affected zone, 53 magnets (39 dipoles and 14 SSS) have been removed from the tunnel for inspection and/or cleaning or repair. Of the magnets to be re-installed, 39 (30 dipoles and 9 SSS) will have new cold masses, almost depleting CERN’s stock of spares. The decision was taken to reuse spare cold masses as much as possible to enhance operational safety. Nine of the dipoles removed are believed to be undamaged and will simply be inspected and retested. Five SSS will be reused after reconditioning of the cryostat (i.e. change of multilayer insulation blankets and the cold supports).

This work is being carried out in building SMI2, where the cryostat facility is based, and also in B904 at the Prévessin site. Meanwhile a temporary line for decryostating dipoles has been installed in B180 (West Hall) to recover quickly cryostat components that will be used for new cryodipoles based on new cold masses.

All magnets will undergo complete warm and cold testing in building SM18, where they were tested before original installation. They are being tested up to 12 850 A, which corresponds to a field of 9 T, compared with the 8.3 T for nominal LHC operation at 7 TeV. It will be possible to test up to five magnets a week, once more cryogenic capacity has been brought on line in February. In addition, the circuits of the main magnets are undergoing power tests to detect any abnormal resistances. As a result, a magnet in Sector 1-2 will also be replaced.

As of mid-January, all 53 magnets had been brought to the surface and the first eight replacement units had been installed in the tunnel. The goal is to have all the magnets in place in the tunnel by the beginning of April. Making the interconnections will start at the beginning of February, with enhanced quality control.

New electronic boards will protect the magnets by constantly measuring the resistance of the busbars and the interconnections. These additional electronics will also measure other parameters. Installation will start at the beginning of April. In addition, a better use of the present quench-protection scheme will help to single out possible bad connections inside cold masses already installed in the tunnel. The final stage will be the testing of the entire sector in June and July.

• For the two reports, see http://press.web.cern.ch/press/PressReleases/Releases2008/PR17.08E.html.
