ICE-cool beams just keep on going

Initial Cooling Experiment

The quality of a charged particle beam is characterized by the product of its radius and divergence – the emittance – and by the momentum spread. Together they define the part of the phase space that is occupied by the particles in an accelerator. In 1966 Gersh Budker proposed a method that would allow the compression, or “cooling”, of the occupied phase space in stored proton beams. His idea of electron cooling was based on the interaction of a monochromatic and well directed electron beam with the heavier protons circulating over a certain distance in a section of a storage ring. The electrons are produced continuously in an electron gun, accelerated electrostatically to a velocity equal to the average velocity of the circulating beam and then inflected into the beam. Both beams overlap for a distance, over which the cooling takes place, and then the electrons are separated from the ion beam and directed onto a collector.
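For readers who want the standard definition behind this phase-space picture (a conventional formula, not spelled out in the article): for transverse position x and divergence x′ = dx/ds, the rms emittance is

```latex
\varepsilon_{\mathrm{rms}}
\;=\;
\sqrt{\langle x^{2}\rangle\,\langle x'^{2}\rangle - \langle x\,x'\rangle^{2}}
\;\;\approx\;\; r\,\theta
\quad\text{(upright phase-space ellipse of radius } r \text{ and divergence } \theta\text{)}.
```

Cooling reduces these second moments, shrinking the occupied phase-space area, while the momentum spread Δp/p characterizes the longitudinal plane.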

The first successful demonstration of electron cooling took place in 1974 at the proton storage ring NAP-M at what is now the Budker Institute of Nuclear Physics (BINP) in Novosibirsk. A few years later CERN and Fermilab built dedicated facilities to study the cooling process, which was a prerequisite for the accumulation of antiprotons for the proposed conversion of proton accelerators to proton–antiproton colliders. The Initial Cooling Experiment (ICE) at CERN became operational in 1977 with the goal of determining which of two cooling methods would be more appropriate for high-energy antiprotons: electron cooling or the technique proposed by Simon van der Meer at CERN, namely, stochastic cooling. The tests on electron cooling took place in 1979 (see box).

From ICE to LEAR

As is well known, CERN chose stochastic cooling for the Antiproton Accumulator that was used to feed the SPS when operating as a proton–antiproton collider. However, the request by physicists for a programme with low-energy antiprotons allowed the ICE electron cooler to continue, with a new lease of life. Thirty years and two reincarnations later, essentially the same device is now used routinely to cool and deliver low-energy antiprotons to experiments on CERN’s Antiproton Decelerator (AD).

The first electron cooling

In its first reincarnation, the ICE cooler was used on the Low Energy Antiproton Ring (LEAR). This decelerator ring was built to deliver intensities of a few thousand million (10⁹) antiprotons in an ultra-slow extraction mode to up to three experiments simultaneously over many hours. Operation in LEAR required a static vacuum level below 10⁻¹¹ torr, which meant that the cooler needed a major upgrade of its vacuum system. The high gas load coming from the cathode and collector regions of the cooler had made its operation on ICE very problematic and the best obtainable vacuum was of the order of 10⁻¹⁰ torr. Higher pumping speeds and a careful choice of materials were therefore needed if there was to be any significant improvement in the vacuum.

A team from CERN and Karlsruhe carried out an extensive study of various vacuum techniques between 1981 and 1984, resulting in a new design for the complete vacuum envelope, which was built using high-quality AISI 316LN stainless steel. In addition, the whole system was designed to be bakeable at 300 °C in situ for 24 hours, requiring permanent jackets to provide the necessary thermal insulation. The use of non-evaporable getter (NEG) strips developed for the Large Electron–Positron Collider provided an increase in pumping speed and three such modules were initially installed on the cooler. The choice of NEGs was evident as space limitations excluded any other type of pumping system, such as cryopumps or sputter ion pumps.

With this hurdle overcome, preparations started for the integration of the cooling device with LEAR. To fit into one of the 8 m long straight sections of the machine, the interaction length of the cooler had to be halved. Luckily the drift solenoid had been designed in two equal parts, so removing one half was not a problem. The high-voltage and control systems of the device were also completely refurbished and a dedicated equipment building was erected close to the LEAR ring. The installation of the cooler took place during the summer of 1987, followed by the conditioning of the cathode and further tests to monitor the evolution of the LEAR vacuum in the presence of the electron beam. By the autumn of 1987 the cooler was ready to cool its first beam. The first cooling tests took place on a 50 MeV proton beam injected directly from Linac 1 and the initial results confirmed all expectations.

After protons, attention turned to antiprotons and to the use of electron instead of stochastic cooling to improve the duty cycle of the deceleration in LEAR. To deliver high-quality antiproton beams to the different experiments in the South Hall, the operators applied stochastic cooling after injection at 609 MeV/c and then at various plateaus during the deceleration process. It would normally take around 20 minutes to obtain a “cold” beam at 100 MeV/c, the lowest momentum in LEAR. The use of electron cooling reduced this time to 5 minutes, as cooling was needed for only 10 seconds on each of the intermediate plateaus, compared with 5 minutes per plateau with stochastic cooling. Hardware modifications required to make the operation of the cooler as reliable and effective as possible included the replacement of the collector with one that had a better collection efficiency (>99.99%), a new control system to synchronize the power supplies for the cooler with the LEAR magnetic cycle, and the implementation of a transverse feedback system (or “damper”) to counteract the coherent instabilities observed with such dense particle beams.
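The time savings quoted above can be cross-checked with a short back-of-envelope script. The number of intermediate plateaus and the magnet-ramp time per step below are illustrative assumptions, not figures from the article:

```python
# Rough model of a LEAR deceleration cycle: ramp down, cool on each
# intermediate plateau, repeat. Only the per-plateau cooling times
# (~5 min stochastic vs ~10 s electron) come from the article; the
# plateau count and ramp time are assumed for illustration.

PLATEAUS = 3            # assumed number of intermediate plateaus
RAMP_PER_STEP_S = 60.0  # assumed magnet-ramp time per step, in seconds

def cycle_time(cooling_per_plateau_s: float) -> float:
    """Total deceleration time: all ramps plus cooling on each plateau."""
    ramps = (PLATEAUS + 1) * RAMP_PER_STEP_S
    cooling = PLATEAUS * cooling_per_plateau_s
    return ramps + cooling

stochastic = cycle_time(5 * 60)  # ~5 min of stochastic cooling per plateau
electron = cycle_time(10)        # ~10 s of electron cooling per plateau

print(f"stochastic: {stochastic / 60:.0f} min, electron: {electron / 60:.1f} min")
# -> stochastic: 19 min, electron: 4.5 min
```

With these assumptions the totals land close to the ~20 and ~5 minutes quoted above, showing that almost all of the saving comes from the much shorter cooling time on each plateau.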

Apart from being the first cooler to be used routinely for accelerator operations, this apparatus was also the first to demonstrate the cooling and stacking of ions. In 1989 a machine experiment was devoted to studies on O⁶⁺ and O⁸⁺ ions coming from Linac 1. By applying electron cooling during the longitudinal stacking process this succeeded in increasing the intensity by a factor of 20. Later these ions were accelerated to an energy of 408 MeV/u and extracted to an experiment measuring the distribution of dose with depth in types of plastic equivalent to human tissue.

The years of operation on LEAR also allowed detailed studies of the cooling process. A full investigation into the influence of the machine’s optical parameters demonstrated that cooling was not effective over the whole radius of the electron beam and that having a finite value of the dispersion function in the cooling section could enhance the process significantly. Before these studies it was believed that a circulating ion beam with transverse dimensions comparable to the electron beam size would produce stronger cooling.

LEIR ring

In a separate study the electron beam was neutralized by accumulating positively charged ions using electrostatic traps placed at either end of the cooling section. By neutralizing the space charge of the electron beam, the induced drift velocity of the electrons would become negligible and hence the equilibrium emittances of the ion beam would be reduced further. Even though a neutralization factor of more than 90% could readily be obtained, it proved to be very difficult to stabilize this very high level of neutralization. Secondary electrons produced in the collector would be accelerated out of the collector region and oscillate back and forth between the collector and the gun. At each passage through the cooling section they would excite the trapped ions causing an abrupt deneutralization.

Another important modification to the cooler was the development of a variable-current electron gun. The gun inherited from ICE was of the resonant type and offered little operational flexibility. The new gun was of the adiabatic type with the peculiarity that it had been designed to operate in a relatively low magnetic field – a prerequisite for its integration in LEAR. Online control of the electron beam intensity was possible by simply varying the voltage difference between the cathode and the “grid” electrode.

Towards the end of the antiproton programme on LEAR, the cooler was paving the way for the conversion of this ring to the Low Energy Ion Ring (LEIR), which would cool and accumulate lead ions for CERN’s new big accelerator, the LHC. A series of machine experiments using lead ions with various charge states (52+ to 55+) not only demonstrated the feasibility of the proposed scheme, but also brought to light an anomalously high recombination rate between the cooling electrons and the Pb⁵³⁺ ions (which had initially been the proposed charge state), leading to lifetimes that were too short for cooling and stacking in LEAR. It was decided to use Pb⁵⁴⁺ ions instead, as they are produced in equal quantities to the 53+ charge state.

On to the AD

After 10 years on LEAR, the cooler was moved to the AD in 1998, where it continues to provide cold antiprotons for the “trap” experiments in their quest to produce large quantities of antihydrogen. Recently the AD team attempted a novel deceleration technique using electron cooling. The idea is to ramp the cooler and the main magnetic field of the AD simultaneously to a lower-energy plateau. This allows the antiproton beam to be kept cold throughout the deceleration process, avoiding the adiabatic blow-up that all beams experience when their energy is reduced. The first tests were very modest, decelerating 3.5 × 10⁷ antiprotons from 46.5 to 43.4 MeV, but future experiments will concentrate on decelerating the beam below 5.3 MeV.

The experience gained with the upgraded ICE cooler on LEAR provided the stepping stones for the design of a new state-of-the-art cooler for the I-LHC project to provide ions for the LHC. This is the first of a new generation of coolers incorporating all of the recent developments in electron cooling technology (adiabatic expansion, electrostatic bend, variable density electron beam, high perveance and “pancake” solenoid structure) for the cooling and accumulation of heavy ion beams. High perveance, or intensity, is necessary to rapidly reduce the phase-space dimensions of a newly injected “hot” beam, while variable density helps to efficiently cool particles with large betatron oscillations and at the same time improve the lifetime of the cooled stack. Adiabatic expansion also enhances the cooling rate because it reduces the transverse temperature of the electron beam by a factor proportional to the ratio of the longitudinal magnetic field between the gun and the cooling section.
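The adiabatic-expansion scaling just described can be written down directly: the transverse electron temperature in the cooling section drops by the ratio of the longitudinal fields. The function and numbers below are a sketch of that relation, not LEIR design values:

```python
# Adiabatic expansion of a magnetized electron beam: kT_perp scales with
# the longitudinal magnetic field, so going from B_gun down to B_cool
# reduces the transverse temperature by B_cool / B_gun (while the beam
# radius grows by the square root of the expansion factor B_gun / B_cool).
# Illustrative numbers only, not design parameters.

def expanded_temperature(kT_cathode_eV: float, B_gun: float, B_cool: float) -> float:
    """Transverse electron temperature (eV) after adiabatic expansion."""
    return kT_cathode_eV * (B_cool / B_gun)

# e.g. a ~0.1 eV cathode temperature with an assumed 10:1 field ratio:
kT = expanded_temperature(0.1, B_gun=2.0, B_cool=0.2)
print(f"{kT:.3f} eV")  # -> 0.010 eV
```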

From the archives

The new cooler, built in collaboration with BINP, was commissioned at the end of 2005 and has since been routinely used to provide high-brightness lead-ion beams required for the LHC. In parallel there have been studies to determine the influence of the cooler parameters (electron beam intensity, density distribution, size) on the lifetime and maximum accumulated current of the ions.

Electron cooling will certainly be around at CERN for quite a few more years. With the AD antiproton physics programme extended until 2016, the original ICE cooler will be nearly 40 years old when it finally retires. If the Extra Low ENergy Antiproton (ELENA) ring comes to life, it will require the design of a new cooler with an energy range of 50 to 300 eV to cool and ultimately decelerate antiprotons to only 100 keV. The possibility of polarized antiprotons at high energy in the AD will also require either an upgrade of the present cooler or the construction of a new one capable of generating a high-current electron beam at 300 keV. Of course the LEIR electron cooler will continue to deliver lead ions for the LHC and, with a renewed interest for a fixed-target ion programme, other ion species could also find themselves being cooled and stacked in LEIR.

A small experiment with a vast amount of potential

While most of the LHC experiments are on a grand scale, the subdetectors for TOTEM, which stands for TOTal cross-section, Elastic scattering and diffraction dissociation Measurement at the LHC, are no longer than 3 m, although they extend over more than 440 m. Despite these modest dimensions, TOTEM’s potential resides in making some unique observations. In addition to the precise measurement of the proton–proton interaction cross-section, TOTEM’s physics programme will focus on the in-depth study of the proton’s structure by looking at elastic scattering over a large range of momentum transfer. Many details of the processes that are closely linked to proton structure and low-energy QCD remain poorly understood, so TOTEM will investigate a comprehensive menu of diffractive processes – the latter partly in co-operation with the CMS experiment, which is located at the same interaction point on the LHC.

The measurement of the total proton–proton interaction cross-section with a luminosity-independent method requires a detailed study of elastic scattering down to small values of the squared four-momentum transfer, together with the measurement of the total inelastic rate. Early measurements at CERN’s Intersecting Storage Rings (ISR), which were confirmed at CERN’s SppS collider and at the Tevatron at Fermilab, revealed that the proton–proton interaction probability increases with collider energy. However, the nature of the correct growth with energy remains a delicate and unresolved issue. A precise measurement of the total cross-section at the world’s highest-energy collider should discriminate between the different theoretical models that describe the energy dependence. The value of the total cross-section at LHC energies is also important for the interpretation of cosmic-ray air showers. All of the LHC experiments will use TOTEM’s measurement to calibrate their luminosity monitors, in order to calculate the probability of measuring rare events.
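The luminosity-independent method mentioned here combines the optical theorem with the measured elastic and inelastic rates; in schematic form (the standard expressions, which the article does not quote explicitly):

```latex
\sigma_{\mathrm{tot}} \;=\; \frac{16\pi}{1+\rho^{2}}\,
\frac{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}{N_{\mathrm{el}}+N_{\mathrm{inel}}},
\qquad
\mathcal{L} \;=\; \frac{1+\rho^{2}}{16\pi}\,
\frac{\left(N_{\mathrm{el}}+N_{\mathrm{inel}}\right)^{2}}{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}},
```

where t is the squared four-momentum transfer and ρ is the ratio of the real to the imaginary part of the forward elastic amplitude. Extrapolating the elastic rate down to t = 0 and adding the total inelastic rate thus yields both the total cross-section and the luminosity without an independent luminosity monitor.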

Sophisticated detectors

The study of physics processes in the region close to the particle beam, which is complementary to the programmes of the LHC general-purpose experiments, requires appropriate detectors. In the case of elastic and (most) diffractive events, intact protons in the final state need to be detected at a small angle relative to the beam line, so special proton detectors must be inserted into the vacuum beam pipe of the LHC. The TOTEM Collaboration had to invest heavily in the design of sophisticated detectors characterised by a high acceptance for particles produced in the busy region close to the beam pipes. All three subdetectors – the Roman Pots and two particle telescopes, T1 and T2 – will detect charged particles emitted in the proton–proton collisions at interaction point 5 (IP5) on the LHC and will have trigger capabilities to allow an online selection of specific events.

The Roman Pots are special movable devices that are inserted directly into the beam pipe by bellows, which are compressed as the pots are pushed towards the beams circulating inside the vacuum pipe. They are called “Roman” because they were first used by a group of Italian physicists from Rome, in the early 1970s, to study similar physics at the Intersecting Storage Rings, the world’s first high-energy proton–proton collider. They are known as “Pots” because the vessels that house the delicate detectors, which can localize the trajectory of protons passing within 1 mm of the beam (with a precision of around 20 μm), are shaped like a vase.

In the TOTEM experiment, there are four Roman Pot stations, each composed of two units, separated by a distance of a few metres. Each unit consists of two pots in the vertical plane, which approach the beam from above and below, and one pot that moves horizontally. They are placed on both sides of the interaction point, at distances of 147 m and 220 m.

The proton detectors in the Roman Pots are silicon devices designed by Vladimir Eremin, Nikolai Egorov and Gennaro Ruggiero with the specific objective of reducing the insensitive area at the edge facing the beam to only 50 μm. This can be compared with a dead area typically more than 10 times larger for silicon detectors currently used elsewhere. High efficiency up to the physical border of the detector is an essential feature to maximize the experiment’s acceptance for protons scattered elastically or diffractively at polar angles down to a few microradians at the interaction point. Radiation-hardness studies indicate that this edgeless detector remains fully efficient up to a fluence of about 1.5 × 10¹⁴ protons/cm².

The inelastic rate is measured by the telescopes T1 and T2. These are two charged-particle trackers situated close to the beam pipes in the CMS cavern at distances of about 10.5 m and 13.5 m on either side of the interaction point; indeed, T1 is within the CMS end-cap. By providing a full azimuthal coverage around the beam line, these telescopes will be able to reconstruct the tracks of charged particles coming from the proton–proton collisions and so allow the determination of the primary interaction vertex.

Each T1 tracker is made up of five subdetector planes perpendicular to the beam line. Each plane consists of six cathode-strip chambers (CSCs) – multiwire proportional chambers filled with a gas mixture, with cathode layers segmented into parallel strips. The advantages of this kind of detector are that it uses a well-proven technology, provides a simultaneous measurement of three spatial co-ordinates (one from the anode-wire plane and two from the cathode-strip planes) and runs on a safe gas mixture (Ar/CO2/CF4). As T1 is installed in a high-radiation environment, the chambers have been tested in the gamma-irradiation facility at CERN, where they showed stable performance at doses several times higher than those expected for the design running conditions and exposure time. Tests with cosmic rays and muon beams have also shown the expected performance.

The T2 tracking chambers are based on the gas electron multiplier (GEM) technology, invented by Fabio Sauli and Leszek Ropelewski at CERN, which combines good spatial resolution with a high rate capability and good resistance to radiation. In each T2 arm, 20 semi-circular GEM planes with overlapping regions are interleaved on both sides of the beam vacuum chamber to form 10 detector planes with full azimuthal coverage. In GEM detectors, in contrast to CSCs, the signal is collected on thin polyimide foils covered on both sides by a thin layer of copper. These foils, densely pierced with holes and sandwiched between two electrodes, provide high gas amplification. GEM technology was chosen for T2 for the radiation hardness of the chambers and the flexibility of the read-out geometry. The read-out plane in the T2 chambers has been designed with strips that give a good resolution on the pseudo-rapidity co-ordinate, while pads give the azimuthal (φ) co-ordinate for tracking and trigger purposes. Assembled “quarters” were tested with cosmic rays before installation at IP5 and pre-commissioning tests have shown good efficiency and resolution, matching the expected values.

The read-out of all TOTEM sub-systems is based on the custom-developed digital Very Forward ATLAS–TOTEM (VFAT) chip, which also contains trigger capability. The data acquisition (DAQ) system is designed to be compatible with the CMS DAQ to make common data-taking possible at a later stage.

The collaboration has recently completed the installation of the Roman Pot stations at 220 m and the subdetector T2. T1 is going to be installed in autumn. In the future two more Roman Pot stations will be put in place at 147 m. The first measurements of the LHC luminosity and individual cross-sections will be performed by TOTEM as soon as the LHC collider becomes operational. The collaboration is looking forward to having adequate data to carry out their first new physics analyses and to having results to announce in 2010.

• The TOTEM Collaboration has about 100 members from 10 institutions in seven countries. Karsten Eggert from CERN is the spokesperson; Angelo Scribano, from the University of Siena and INFN Pisa, is the chair of the Collaboration Board; and Ernst Rademacher from CERN is the technical co-ordinator.

Superconducting RF separator emerges from its long sleep

Cryostat for deflector RF1

The first superconducting high-frequency device made for particle physics has begun a new life at the U-70 proton synchrotron at the Institute for High Energy Physics (IHEP) at Protvino. It forms a key part of the new 200 m long high-intensity beam line for 12.5 GeV/c positively charged secondary particles, which was commissioned last December. With a kaon content of 25%, the beam will be used in the OKA project, which will search for new physics in rare kaon decays. The high fraction of kaons in the beam is provided by the superconducting RF separator, the two niobium deflectors of which are cooled by liquid helium at a temperature of 1.8 K.

The separator was originally designed and constructed in 1970–1977 at the Institut für Kernphysik of the Kernforschungszentrum Karlsruhe, under the leadership of Herbert Lengeler of CERN. Until 1981 it was used successfully to provide a beam enriched in kaons and antiprotons for the Omega spectrometer at CERN. On completion of the experimental programme with the separated beam at Omega, the deflectors were stored at CERN under high vacuum for 17 years. Then, after vacuum-leak tests and other necessary preparations, the deflectors were transported from CERN to IHEP in 1998 (CERN Courier April 1998 p12).

Preparations for a new life for the deflectors began in 2006. First, comprehensive tests took place, together with the restoration of the nominal working parameters of deflector RF2, which had been damaged at CERN during preparation for the shipment to Russia. Then the two deflectors were placed 76.3 m apart on the beam line and connected to the cryogenic system, which has a refrigerating power of 250 W for superfluid helium at a temperature of 1.8 K. In parallel with the tests, the group led by Boris Prosin designed and implemented an RF-feed and phasing system for the deflectors, based on modern microwave elements and a rubidium frequency standard as the signal source. A set of power amplifiers and a common DC power supply, previously used in the antiproton cooling system at CERN, were connected to the output of the microwave system, and a completely oil-free high-vacuum system was designed and installed for each deflector.

Friedhelm Caspers from CERN and Axel Matheisen from DESY provided important advice and practical help with some equipment for the preparation of the separator. Successful commissioning of the superconducting device was also substantially aided by the use of equipment remaining from a “warm” RF separator, which was developed at CERN 40 years ago for joint IHEP–CERN experiments at U-70 with the French bubble chamber “Mirabelle”. This separator had been preserved in very good condition at IHEP.

Stable operation of the cryogenic system and the RF separator during the April run at U-70 marked the start of data-taking at the OKA experimental facility for the study of rare kaon decays. Future efforts will aim to increase the intensity and quality of the separated beam and thereby ensure the success of the OKA experiment.

The ESA Planck spacecraft heads off to its final destination after successful launch

Planck, ESA’s new spacecraft to map the cosmic microwave background, successfully took its first steps into space on 14 May when it was launched together with the far-infrared space telescope Herschel from Europe’s Spaceport in French Guiana. The two spacecraft were on board an Ariane 5 launcher that took off from Kourou at 13:12 UTC.

Planck is designed to map tiny irregularities in the fossil radiation left over from the very first light in the Universe, emitted shortly after the Big Bang. Herschel, equipped with the largest mirror ever launched into space, will observe a mostly uncharted part of the electromagnetic spectrum to study the birth of stars and galaxies, as well as dust clouds and planet-forming discs around stars.

Almost 26 minutes after launch, Herschel and then Planck were released separately on an escape trajectory towards the second Lagrangian point (L2) of the Sun–Earth system, some 1.5 million km from Earth in the opposite direction to the Sun. This triggered the execution of automatic sequences on board, including switch-on of the high-frequency radio transmitters. Nine minutes later, the first signals from both spacecraft were acquired by ESA’s New Norcia and Perth stations. Shortly afterwards, telemetry confirmed that both spacecraft were in good health.

On 5 June, Planck carried out the critical mid-course manoeuvre to place the spacecraft on its final trajectory for arrival at L2 early in July. The manoeuvre, in which Planck’s main thrusters made repeated “pulse burns”, lasted about 46 hours. This pulse-burn technique is necessary because Planck is slowly spinning as it travels through space, rotating at 1 rpm. The thrusters, which are fixed to the spacecraft and are not steerable, can burn only when they are oriented in the correct direction, which occurs for 6 seconds during each 60 second rotation. The successful manoeuvre provided an overall change in speed of 155 m/s on top of an initial speed of 105,840 km/h with respect to the Sun. A “touch-up” manoeuvre was scheduled for 17 June to provide a final 5–10 m/s correction.
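The figures in this paragraph allow a quick consistency check: with a 6 s burn in every 60 s rotation, a 46-hour manoeuvre contains only a few hours of actual thrust. The mean acceleration derived below is inferred from these numbers, not a published thruster specification:

```python
# Pulse-burn bookkeeping for Planck's mid-course manoeuvre, using the
# figures quoted in the article. The derived burn time and acceleration
# are inferences from those figures, not spacecraft specs.

BURN_S = 6.0        # usable burn window per rotation (s)
ROTATION_S = 60.0   # spin period at 1 rpm (s)
DURATION_H = 46.0   # total manoeuvre duration (h)
DELTA_V = 155.0     # total change in speed (m/s)

total_burn = DURATION_H * 3600 * BURN_S / ROTATION_S  # seconds of actual thrust
accel = DELTA_V / total_burn                          # mean acceleration while burning

print(f"total burn: {total_burn / 3600:.1f} h, mean accel: {accel * 1000:.1f} mm/s^2")
# -> total burn: 4.6 h, mean accel: 9.4 mm/s^2
```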

LHC restart remains on schedule

At the 151st session of the CERN Council on 19 June, director-general Rolf Heuer confirmed that the LHC remains on schedule for a restart this autumn, albeit about 2–3 weeks later than originally foreseen. Following the incident of 19 September 2008 that brought the LHC to a standstill, a great deal of work has been done to understand the causes of the incident and to ensure that a similar incident cannot happen again.

The root cause of the September incident was a faulty splice in the high-current superconducting cable between two magnets in LHC sector 3-4. New non-invasive techniques have been developed to investigate the splices, of which there are some 10,000 around the LHC ring, and to determine whether they are safe for running or whether they need to be repaired. As part of this process one more sector of the LHC, sector 4-5, is currently being warmed up. This will bring increased confidence that the splices are fully understood.

Sector 4-5 has been measured at a temperature of 80 K, indicating at least one suspect splice. By warming the sector, the results of this measurement can be checked at room temperature, thereby confirming the reliability of testing at 80 K. If the 80 K measurements are confirmed then any suspect splices in this sector will be repaired. More importantly, validation of the 80 K measurements will allow the splice resistance in the last three sectors to be measured at this temperature – thereby avoiding the time needed for re-warming. The measurements in these sectors will provide the information needed to determine the start-up date and initial operating energy of the LHC in the range 4–5 TeV. Running at 4 TeV should be possible without further repairs, whereas 5 TeV could require extra work to be done.

A key part of the modifications being made to the LHC is the quench-protection system (QPS), which triggers evacuation of the stored magnetic energy quickly and safely should a part of the LHC’s superconducting system warm up slightly and cease to be superconducting. Following the September incident, a new enhanced QPS system was designed and is now under construction. The new system will be fully tested and operational in late summer 2009 and will protect the LHC from incidents similar to that of last September.

Work on the new QPS is just one aspect of the work in the LHC tunnel being carried out by teams from CERN, who are supported by scientists from other particle-physics laboratories around the world. New pressure-relief valves are being installed, the ultrahigh-vacuum system is being improved and the systems anchoring the LHC magnets to the floor are being strengthened. All of this contributes to preparing the machine for a long and safe operational lifetime.

“We’ve received an unprecedented level of support from physics labs and institutes around the world through the manpower that they have provided to help us through the repairs and consolidation, as well as the invaluable advice we’ve received from the external committees that have studied the measures we’re taking,” Heuer explained. “It’s a sign of the increasingly global nature of particle physics and we’re extremely grateful for the solidarity we’re seeing.”

• CERN publishes regular updates on the LHC in its internal Bulletin, available at www.cern.ch/bulletin, as well as via Twitter (www.twitter.com/cern) and YouTube (www.youtube.com/cern). Further details of the 151st session of the CERN Council are available at http://council.web.cern.ch/council/en/Governance/NewsGovJune09.html.

Knocking on the door again

The LHC’s anti-clockwise beam transfer system was tested on 6 and 7 June. Particle bunches were sent from the SPS down the 2.8 km transfer line, which joins the LHC just before the LHCb cavern. The beam travelled down the transfer line and stopped just short of the LHC tunnel, where a beam stopper – 4 m of graphite – is placed in the beam line to prevent the beam from taking the last step into the LHC. Part of the LHCb detector was turned on during the beam test, allowing the reconstruction of tracks through the Vertex Locator.

CERN openlab enters phase three

On 2–3 April CERN’s director-general, Rolf Heuer, officially launched the third phase of the CERN openlab at the 2009 annual meeting of the CERN openlab Board of Sponsors. During his introductory speech Heuer stressed the importance of collaborating with industry and building closer relationships with other key institutes, as well as the European Commission. The board meeting provided an opportunity for partner companies (HP, Intel, Oracle and Siemens), a contributor (EDS, an HP company) and CERN to present the key achievements obtained during openlab-II and the expectations for openlab-III.

Each phase of CERN openlab corresponds to a three-year period. In openlab-I (2003–2005) the focus was on the development of an advanced prototype called opencluster. CERN openlab-II (2006–2008) addressed a range of domains from platforms, databases and the Grid to security and networking, with HP, Intel and Oracle as partners and EDS, an HP company, as a contributor. Disseminating the expertise and knowledge has also been a key focus of openlab. Regular training sessions have taken place and activities include openlab contributions to the CERN School of Computing and the CERN openlab Summer Student Programme, with its specialized lectures.

With the start of the third phase of CERN openlab, new projects have already been initiated with the partners. These are structured into four Competence Centres (CC): Automation and Controls CC; Database CC; Networking CC; and Platform CC. Through the Automation and Controls CC, CERN, Siemens and ETM Professional Control (a subsidiary of Siemens) are collaborating on security, as well as the move of automation tools towards software engineering and handling of large environments. In partnership with Oracle, the Database CC focuses on items such as data distribution and replication, monitoring and infrastructure management, highly available database services and application design, as well as automatic failover and standby databases.

One focus of the Networking CC is a research project launched by CERN and HP ProCurve to understand the behaviour of large computer networks (with 10,000 nodes or more) in high-performance computing or large campus installations. Another activity involves the grid-monitoring and messaging projects carried out in collaboration with EDS, an HP company. The Platform CC project focuses on PC-based computing hardware and the related software. In collaboration with Intel it addresses important fields such as thermal optimization, application tuning and benchmarking. It also has a strong emphasis on teaching. During the third phase, the team will not only capitalize on and extend the successful work carried out in openlab-II, but it will also tackle crucial new areas. Additional team members have recently joined and the structure is now in place to collaborate and work on bringing these projects to fruition.

The openlab team consists of three complementary groups of people: the young engineers hired by CERN and funded by the partners (21 people over the past eight years); technical experts from partner companies involved in the openlab projects; and CERN management and technical experts working partly or fully on the joint activities. The people involved are not concentrated in a single group at CERN. They span many different units in the IT department, as well as the Industrial Controls and Electronics Group in the engineering department, since the arrival of Siemens as an openlab partner. The distributed team structure permits close collaboration with computing experts in the LHC experiments, as well as with engineers and scientists from the various openlab partners who contribute greatly to these activities. In addition, significant contributions are made by the students participating in the CERN openlab Summer Student Programme, both directly to the openlab activities and more widely to the Worldwide LHC Computing Grid, the Enabling Grids for E-sciencE project and other Grid- and CERN-related activities in the IT Department. Since the inception of openlab, more than 100 young computer scientists have participated in the programme, where they spend two months at CERN. This summer the programme will be welcoming 14 students of 11 different nationalities.

• The activities carried out from May 2008 to May 2009 are presented in the eighth CERN openlab annual report available from the CERN openlab web site at www.cern.ch/openlab.

CHEP ’09: clouds, data, Grids and the LHC


The CHEP series of conferences is held every 18 months and covers the wide field of computing in high-energy and nuclear physics. CHEP ’09, the 17th in the series, was held in Prague on 21–27 March and attracted 615 attendees from 41 countries. It was co-organized by the Czech academic-network operator CESNET, Charles University in Prague (Faculty of Mathematics and Physics), the Czech Technical University, and the Institute of Physics and the Nuclear Physics Institute of the Czech Academy of Sciences. Throughout the week some 500 papers and posters were presented. As usual, given the CHEP tradition of devoting the morning sessions to plenary talks and limiting the number of afternoon parallel sessions to six or seven, the organizers found themselves short of capacity for oral presentations. They received 500 offers for the 200 programme slots, so the remainder were presented as posters, split into three full-day sessions of around 100 posters each. The morning coffee break was extended specifically to allow time to browse the posters and discuss with the poster authors.

A large number of the presentations related to some aspect of computing for the up-coming LHC experiments but there was also a healthy number of contributions from experiments elsewhere in the world, including Brookhaven National Laboratory, Fermilab and SLAC (where BaBar is still analysing its data although the experiment has stopped data-taking) in the US, KEK in Japan and DESY in Germany.

Data and performance


The conference was preceded by a Worldwide LHC Computing Grid (WLCG) Workshop, summarized at CHEP ’09 by Harry Renshall from CERN. There was a good mixture of Tier-0, Tier-1 and Tier-2 representatives among the 228 people present at the workshop, which began with a review of each LHC experiment’s plans. All of these include more stress-testing in some form or other before the restart of the LHC. The transition to the European Grid Initiative from the Enabling Grids for E-sciencE project is clearly an issue, as is the lack of a winter shutdown in the LHC plans. There was discussion on whether or not there should be a new “Computing Challenge”, to test the readiness of the WLCG. The eventual decision was “yes”, but to rename it STEP ’09 (Scale Testing for the Experimental Programme), schedule it for May or June 2009 and concentrate on tape recall and event processing. The workshop concluded that ongoing emphasis should be put on stability, preparing for a 44-week run and continuing the good work that has now started on data analysis.

Sergio Bertolucci, CERN’s director for research and scientific computing, gave the opening talk of the conference. He reviewed the LHC start-up and initial running, the steps being taken for the repairs following the incident of 19 September 2008 as well as to avoid any repetition, and the plans for the restart. He also discussed the work currently being done at Fermilab and how CERN will learn from it in the search for the Higgs boson. Les Robertson of CERN, who led the WLCG project through its first six years, discussed how we got here and what will come next. A very simple Grid was first presented at CHEP in Padova in 2000, leading Robertson to label the 2000s as the decade of the Grid. Thanks to the development and adoption of standards, Grids have now developed and matured, with an increasing number of sciences and industrial applications making use of them. However, Robertson thinks that we should be looking at locating Grid centres where energy is cheap, using virtualization to share processing power better, and starting to look at “clouds”: what are they in comparison to Grids?

The theme of using clouds, which enable access to leased computing power and storage capacity, came up several times in the meeting. For example, the Belle experiment at KEK is experimenting with the use of clouds for Monte Carlo simulations in its planning for SuperBelle; and the STAR experiment at Brookhaven is also considering clouds for Monte Carlo production. Another of Robertson’s suggestions for future work, “virtualization”, was also one of the most common topics in terms of contributions throughout the week, with different uses cropping up time and again in the conference’s various streams.

Other notable plenary talks included those by Neil Geddes, Kors Bos and Ruth Pordes. Geddes, of the UK Science and Technology Facilities Council Rutherford Appleton Laboratory, asked “can WLCG deliver?” He deduced that it can, and in fact does, but that there are many challenges still to face. Bos, of Nikhef and the ATLAS collaboration, compared the different computing approaches across the LHC experiments, pointing out similarities and contrasts. Fermilab’s Pordes, who is executive director of the Open Science Grid, described work in the US on evolving Grids to make them easier to use and more accessible to a wider audience of researchers and scientists.

The conference had a number of commercial sponsors, in particular IBM, Intel and Sun Microsystems, and part of Wednesday morning was devoted to speakers from these corporations. IBM used its slot to describe a machine that aims to offer cooler, denser and more efficient computing power. Intel focused on its effort to get more computing for less energy, noting work done under the openlab partnership with CERN (CERN openlab enters phase three). The company hopes to address this partly by increasing computing-energy efficiency (denser packaging, more cores, more parallelism etc) because it recognizes that power is constraining growth in every part of computing. The speaker from Sun presented ideas on building state-of-the-art data centres. He claimed that raised floors are dead and instead proposed “containers” or a similar “pod architecture” with built-in cooling and a modular structure connected to overhead, hot-pluggable busways. Another aim is to build “green” centres; he cited solar farms in Abu Dhabi as well as a scheme to use free ocean-cooling for floating ship-based computing centres.

It is impossible to summarize in a short report the seven streams of material presented in the afternoon sessions, but some highlights deserve to be mentioned. The CERN-developed Indico conference tool was presented with statistics showing that it has been adopted by more than 40 institutes and manages material for an impressive 60,000 workshops, conferences and meetings. The 44 Grid middleware talks and 76 poster presentations can be summarized as follows: production Grids are here; Grid middleware is usable and is being used; standards are evolving but have a long way to go; and the use of network bandwidth is keeping pace with technology. From the stream of talks on distributed processing and analysis, the clear message is that much work has been done on user-analysis tools since the last CHEP, with some commonalities between the LHC experiments. Data-management and access protocols for analysis are a major concern and the storage fabric is expected to be stressed when the LHC starts running.

Dario Barberis of Genova/INFN and ATLAS presented the conference summary. He had searched for the most common words in the 500 submitted abstracts and the winner was “data”, sometimes linked with “access”, “management” or “analysis”. He noted that users want simple access to data, so the computing community needs to provide easy-to-use tools that hide the complexity of the Grid. Of course “Grid” was another of the most common words, but the word “cloud” did not appear in the top 100 although clouds were much discussed in plenary and parallel talks. For Barberis, a major theme was “performance” – at all levels, from individual software codes to global Grid performance. He felt that networking is a neglected but important topic (for example the famous digital divide and end-to-end access times). His conclusion was that performance will be a major area of work in the future as well as the major topic at the next CHEP in Taipei, on 17–22 October 2010.

Final magnet for sector 3-4 goes underground


The final magnet – a quadrupole short straight section – to refit sector 3-4 of the LHC was lowered into the tunnel and transported to its location on 30 April, two weeks after the 39th and final repaired dipole magnet was lowered and installed. This magnet system was the last of the spares to be prepared for use in the refurbished sector.

With all of the necessary magnets now underground, work in the tunnel will continue to connect them together. In total 53 magnets were removed from sector 3-4 following the incident on 19 September 2008. Of these, 16 magnets had sustained minimal damage and so were refurbished and put back into the tunnel; the remaining 37 were replaced by spares, depleting the number of reserve magnets to nearly zero. Work will continue on the surface to repair the remaining damaged magnets to replenish the pool of spares.

Since the start of the repair work in sector 3-4, the Vacuum Group has been cleaning the beam pipes to remove metallic debris and soot created by the electrical arc at the root of the incident. All 4800 m of the beam pipes in sector 3-4 were first surveyed centimetre by centimetre to document the damage before the cleaning work began. The cleaning process itself involves passing a brush through the pipe to clean the surface mechanically, followed by vacuuming to remove any debris both inside and outside the beam pipe. This procedure is repeated ten times, followed by a final check with an endoscopic camera. By the end of April some 70% of the affected zone had been cleaned.


Work meanwhile continues on the installation of new pressure release ports to allow a greater rate of helium escape in the event of an incident similar to that of 19 September. This is now proceeding in the areas outside the arc sections – in particular on the inner triplets (the focusing magnets either side of the collision point). The ports have been slightly modified to fit the geometry of these magnets.

The root of the incident on 19 September was a faulty splice in the interconnection between two magnets, and since then CERN has developed highly sensitive methods to detect splice resistances at the nano-ohm level. These have revealed a small number of splices with abnormally high resistance, which are being investigated, understood and dealt with. Now a new test has been developed to measure the electrical resistance of the connection joining the busbars of the superconducting magnets together. Each busbar consists of a superconducting cable surrounded by a larger copper block. Although the copper cannot carry the same level of current as the superconducting cable for sustained periods, it plays the essential role of providing a low-resistance path for the current when a magnet or a busbar quenches: the copper gives the protection system time to discharge the stored energy. The new test allows the electrical continuity of the copper part to be checked and so provides another important quality-control safety check for the electrical connections.
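As a rough illustration of why nano-ohm sensitivity matters, the steady-state ohmic power dissipated in a resistive splice carrying current I is P = I²R. The short sketch below uses approximate, publicly quoted orders of magnitude (a nominal dipole current of roughly 12 kA and splice resistances at the nano-ohm level); it is a back-of-envelope illustration, not an official CERN calculation.

```python
# Back-of-envelope estimate of ohmic heating in a busbar splice.
# Values are illustrative orders of magnitude, not official figures.

def splice_power(current_amps: float, resistance_ohms: float) -> float:
    """Steady-state ohmic dissipation P = I^2 * R, in watts."""
    return current_amps ** 2 * resistance_ohms

I_NOMINAL = 11_850.0  # approximate nominal LHC dipole current, in amperes

# Compare a nominal splice with abnormally high resistances (in nano-ohms)
for r_nano_ohm in (0.35, 1.0, 50.0):
    p_watts = splice_power(I_NOMINAL, r_nano_ohm * 1e-9)
    print(f"R = {r_nano_ohm:6.2f} nOhm  ->  P = {p_watts:7.3f} W")
```

Even tens of nano-ohms dissipate only a few watts at full current, which is why resolving individual splices requires measurement methods sensitive at the nano-ohm level.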

Careful tests have revealed that in some cases the process of soldering the superconductor in the interconnecting high-current splice can melt the solder joining the superconducting cable to the copper of the busbar, thereby impeding the copper’s ability to do its job if a quench occurs. As a result, the teams at work on the consolidation process are improving the soldering procedure and checking the whole of the LHC for similar faults. A test has been devised for sectors at room temperature, and studies are now under way to allow the same procedure at cryogenic, but non-superconducting, temperatures. By mid-May, three sectors had been tested at room temperature and five potentially faulty interconnections found. These are being repaired accordingly.

• For up-to-date news, see The Bulletin.

BEPCII/BESIII accumulates 100 million ψ(2S) in Beijing


After five years of construction, the upgraded Beijing Electron–Positron Collider (BEPCII) and the new Beijing Spectrometer (BESIII) finished accumulating their first large data set of more than 100 million ψ(2S) events on 14 April. This is the world’s largest ψ(2S) data set. Data taking started on 6 March, following a scan of the ψ(2S) peak. During the following month, as machine commissioning continued, the peak luminosity of BEPCII increased steadily from 1.4 to 2.3×1032 cm–2s–1, with beam currents of 550 mA for both electrons and positrons.
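For a rough sense of scale, the expected event count follows from N = σ · ∫L dt. The sketch below combines the peak luminosity quoted above with an assumed effective ψ(2S) production cross-section of about 640 nb; that cross-section value is an illustrative stand-in, not an official BESIII number.

```python
# Rough sanity check: event count N = luminosity * cross-section * time,
# assuming constant (peak) luminosity. The cross-section is illustrative.

def expected_events(lumi_cm2_s: float, sigma_cm2: float, seconds: float) -> float:
    """Expected event count for a constant instantaneous luminosity."""
    return lumi_cm2_s * sigma_cm2 * seconds

L_PEAK = 2.3e32        # cm^-2 s^-1, peak luminosity quoted in the text
SIGMA = 640e-33        # assumed effective psi(2S) cross-section: 640 nb = 640e-33 cm^2
SECONDS_PER_DAY = 86_400

per_day = expected_events(L_PEAK, SIGMA, SECONDS_PER_DAY)
print(f"~{per_day:.2e} psi(2S) events per day at peak luminosity")
```

Under these assumptions the yield is of order 10⁷ events per day at peak, so a sample of 100 million events over roughly a month of commissioning, with luminosity ramping up, is plausible.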

The commissioning of the upgraded accelerator and the new detector began in summer 2008, with the first event observed on 18 July. Approximately 13 million ψ(2S) events were obtained last autumn, providing data for studies of the new detector and for calibration. The results show that the detector performance is as expected: efficiency, resolution and stability all meet specifications. The new data sample of 100 million ψ(2S) events will allow more-detailed studies of detector performance, as well as many physics analyses, for example of the hc, χc and ηc charmonium states. After some accelerator studies, BEPCII and BESIII will now turn to running at the J/ψ peak, with the goal of collecting a high-statistics sample of J/ψ events.

BEPCII, the upgrade of BEPC at the Institute of High Energy Physics (IHEP) in Beijing, is a two-ring collider operating between 1 and 2.2 GeV beam energy in the charm region. It has a design luminosity of 1×1033 cm–2s–1 at 1.89 GeV, an improvement of two orders of magnitude on its predecessor. The BESIII detector features a beryllium beam pipe; a small-cell, helium-based drift chamber; a time-of-flight system; a CsI(Tl) electromagnetic calorimeter; a 1 T superconducting solenoid magnet; and a muon identifier using the magnet yoke interleaved with resistive plate chambers. The BESIII collaboration consists of groups from Germany, Italy, Japan, Russia and the US, as well as many Chinese universities and IHEP.
