On 16 March, operation of the K Long Experiment (KLOE) detector ended after 23 months of continuous running at the DAFNE collider at Frascati. During this time the detector collected an integrated luminosity of 2.3 fb⁻¹, corresponding to the observation of some 6.2 billion Φ decays. These data are in addition to the 450 pb⁻¹ sample collected in shorter runs in 2000, 2001 and 2002.
DAFNE, the Frascati Φ-factory, has been performing increasingly well, delivering 200 pb⁻¹ a month by the end of 2005. The efforts by the DAFNE and KLOE teams to ensure good data-taking conditions have resulted in their collecting a large homogeneous data sample in terms of machine background, beam energy and detector performance. Smooth trigger and data-acquisition operations, and continuous running of detector calibration ensured high-quality data.
KLOE has many unique aspects, in particular detector performance, the special environment at the Φ factory, the unique possibility of kaon species tagging, an open trigger and complete recording of all data. These allow the physics investigated to include such varied topics as precision measurements of kaon properties, the study of scalar mesons and the measurement of the hadronic cross-section at less than 1 GeV, which is necessary for calculating the muon anomaly. The Φ-meson decays are also a copious source of η and η’ mesons.
With the analysis of 450 pb⁻¹ of data, KLOE has reached accuracies of a fraction of 1% in the measurements of the kaon absolute branching ratios and lifetimes. The results have already removed a problem with the unitarity of the quark-mixing matrix that dates back more than 30 years. The new data set will lead to improvements of all published results, especially in the Ks sector, and to new measurements of the poorly known hadronic cross-section near threshold.
In a few months DAFNE will resume operation for the FINUDA collaboration, which will investigate hypernuclei. Plans to upgrade the collider to DAFNE2 and the detector to KLOE2 are being studied.
The tracker for the CMS experiment at CERN passed an important milestone in March when the first cosmic-muon tracks were observed in one of the end caps. CMS is one of the two large multi-purpose detectors being constructed at the Large Hadron Collider. Its tracker system, comprising a barrel detector and two end caps, contains 25,000 silicon-microstrip sensors covering 210 m², with 9.6 million electronic readout channels. Its construction involves teams from the whole of Europe and the US, with the final assembly at CERN.
The two tracker end caps (TECs) feature silicon-strip modules mounted on wedge-shaped carbon-fibre support plates, or “petals”. Up to 28 modules are arranged in radial rings on both sides of these plates; one eighth of an end cap is populated with 18 petals and is called a “sector”.
One of the TECs, TEC+, is being constructed at the RWTH (Rheinisch-Westfälische Technische Hochschule) Aachen and testing began earlier this year. A total of 400 silicon-strip modules are read out simultaneously, using close-to-final readout and power-supply components and data-acquisition software. The first sector has already been thoroughly tested, demonstrating a channel inefficiency of less than 1% and common-mode noise of only 25% of the intrinsic noise.
To understand the behaviour of the TEC sector better, including the response to real particles, basic functionality testing was followed by a run with cosmic muons. Thousands of tracks have been recorded and will be used to study tracking performance and to exercise various track-alignment algorithms.
The next important step will be to test the first sector under CMS operating conditions, with the silicon modules working at a temperature of less than -10 °C. The remaining seven sectors will then be assembled and in autumn the TEC+ will be delivered to CERN.
As the many pieces of the Large Hadron Collider (LHC) and its experiments come together at CERN, Canada’s contributions to the project are moving into their final positions. One of the hadronic end-cap calorimeters built at the Tri-University Meson Facility (TRIUMF) was recently installed in the ATLAS detector, and the first of the resistive twin-aperture quadrupoles for the “beam cleaning” regions in the LHC, designed at TRIUMF and built by Alstom Canada Inc, should be installed in the tunnel in June. However, the pulse-forming networks (PFNs) for the LHC injection kickers will soon become the first components from Canada to be completely installed.
The LHC will have fast-pulsed magnet systems – the kickers – to inject the two proton (or heavy-ion) beams into the main ring. Two pulsed systems are required, each comprising four magnets, four PFNs and four high-voltage thyratron-based switches. Each PFN consists of two 28-cell, 10 Ω lines connected in parallel at their ends. To kick the beam buckets from the Super Proton Synchrotron into the LHC ring, each system must produce a magnetic field pulse of 1.3 T·m strength, with a rise time of not more than 900 ns, an adjustable flattop duration of up to 7.86 μs, and a fall time of not more than 3 μs. The total ripple in the field must be less than ±0.5%.
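The figures above can be cross-checked with standard transmission-line arithmetic. The sketch below is illustrative only, not the actual LHC design: it assumes the textbook lumped-element relations Z = √(L/C) and pulse length τ = 2n√(LC), and takes the maximum flattop as the electrical length of the line. It confirms that two 10 Ω lines in parallel give the 5 Ω PFN impedance quoted in the next paragraph.

```python
def pfn_cell_values(z_line_ohm, n_cells, pulse_len_s):
    """Estimate per-cell inductance and capacitance of a lumped-element
    pulse-forming line from its impedance and electrical length, using
    Z = sqrt(L/C) and pulse length tau = 2 * n * sqrt(L * C)."""
    sqrt_lc = pulse_len_s / (2 * n_cells)
    L = z_line_ohm * sqrt_lc      # henries per cell
    C = sqrt_lc / z_line_ohm      # farads per cell
    return L, C

# Two 10-ohm lines connected in parallel at their ends.
z_parallel = (10.0 * 10.0) / (10.0 + 10.0)

# Illustrative assumption: 7.86 us maximum flattop as the line length.
L_cell, C_cell = pfn_cell_values(10.0, 28, 7.86e-6)
print(f"parallel impedance: {z_parallel:.1f} ohm")
print(f"per-cell L ~ {L_cell * 1e6:.2f} uH, C ~ {C_cell * 1e9:.1f} nF")
```

The per-cell values are hypothetical; the point is that the quoted line impedance, cell count and flattop are mutually consistent at the order-of-magnitude level.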
The energy in a PFN is provided by a resonant-charging power supply (RCPS), which is used to reduce as much as possible the number of untriggered discharges of the thyratrons. The performance of the electrical circuit of the complete system, including a 66 kV RCPS and a 5 Ω PFN, was carefully simulated, and components were selected for the PFN on the basis of theoretical models in which a ripple of less than ±0.1% was attained.
As part of the Canadian contribution to the LHC, TRIUMF has built and tested in-house five RCPSs and nine PFNs. After shipment to CERN, the RCPSs and PFNs are thoroughly tested before insertion into the tunnel sections where injection into the LHC will occur. Installation began in May 2005, and the final systems should be installed this spring.
The first measuring period for external users at the vacuum ultraviolet free-electron laser (VUV-FEL), the new ultraviolet and X-ray radiation source at DESY, ended successfully on 27 February. Now the facility is gearing up for its second run in May.
The facility’s centre-piece is the 300 m long FEL, which is the world’s first – and until 2009, only – source of intense laser radiation at VUV and soft X-ray wavelengths. In January 2005, it generated its first laser pulses with a 32 nm wavelength, the shortest wavelength ever achieved with a FEL, and then started up for users in August. It is available for research groups from all over the world for experiments in areas such as cluster physics, solid-state physics, plasma research and biology. Four experimental stations are currently available, at which different instruments can be operated alternately.
Since the official start-up in August, a total of 14 research teams from 10 countries have carried out experiments ranging from generating and measuring plasmas to the first investigations of experimental methods for studying complex biomolecules, which will later be used at the European X-ray FEL (XFEL). As expected, the laser pulses of the VUV-FEL are shorter than 50 fs. This allows researchers to trace various processes on extremely short time scales by taking time-resolved “snapshots” of the reaction process. Investigating such time-resolved processes with radiation of short wavelengths is one of the most important new applications of this kind of X-ray laser.
Before user experiments resume in May, the DESY team is carrying out machine studies to improve the stability of the facility, increase the energy of the laser pulses, and shorten the wavelength of the radiation to around 15 nm. At the same time, various studies are being done to prepare for the planned XFEL, which will be 3.4 km long and generate even shorter wavelengths, down to 0.085 nm, when it comes into operation in 2013. The VUV-FEL should produce its shortest wavelength of 6 nm in 2007, after an additional accelerator module is installed.
Last year was a time for rejuvenation and building at CERN as a major part of the accelerator complex was shut down while preparations for the Large Hadron Collider (LHC) took place. During the shutdown, which started in November 2004 and continued throughout 2005, the Proton Synchrotron (PS) and the Super Proton Synchrotron (SPS) began an extensive renovation programme that will continue into the next decade. The LHC will depend on the injector complex that feeds it to deliver reliable and top-performance beams when it starts up in 2007. This comprises Linac2, Linac3, the PS Booster, the Low Energy Ion Ring (LEIR) and the PS and SPS.
The programme to renovate the main magnets of the PS, which has been operating since 1959, benefited from the long shutdown. The oldest accelerator of the injector complex had shown signs of its age, going offline for two weeks in 2003 when two magnets failed. The magnets were replaced, but to ensure that the injector chain is in good condition when the LHC is turned on, the PS and CERN's other accelerators in the chain embarked on a consolidation programme. By renovating parts that are at the end of their useful life and updating obsolete components and systems, the programme aims to identify and resolve potential problems before operations are affected.
Wear and tear in the PS, which was still equipped with many of its original components, resulted from radiation degrading the materials and mechanical fatigue from pulsed magnetic forces. During 2005, 25 of the 100 main magnets were removed, renovated and re-installed in the PS tunnel. To move the 35-tonne magnets from the tunnel to the workshop in nearby building 180, the 45-year-old PS locomotive was restored. In the workshop, teams from the Budker Institute of Nuclear Physics (BINP) in Novosibirsk, supervised by specialists from CERN, replaced the coils and pole-face windings and re-glued loose laminations. After testing, the renovated magnets were re-installed in the tunnel and re-aligned, ready for start-up in April 2006.
The SPS has also shown signs of age. In 2005, leaks appeared in the hydraulic circuits of some of the accelerator’s dipoles, but after a thorough investigation, a way was found to make repairs. Those repairs and other upgrades will be completed during the 2006-07 shutdown.
New construction
The SPS is almost the last link in the chain that will supply beams to the LHC. The final connection will be made by two transfer lines, TI 2 and TI 8, that will take beams from the SPS. TI 8 was commissioned in 2004, and progress continued on TI 2 during 2005, with components installed and tested up to some 250 m before the shaft where the LHC magnets start the underground journey to their final locations. Upstream of TI 2, the beam extraction in the long straight section of the SPS has been converted into a fast extraction. Four upgraded kicker magnets have been installed to deflect the beam into the gap of existing septum magnets, which bend the beam horizontally out of the SPS ring. New extraction protection devices have also been installed to cope with the high-intensity beam for the LHC.
The recent shutdown also allowed time to work on Linac3 and LEIR. Together, they will provide heavy-ion beams to the LHC experiments in 2008. LEIR is the successor to the Low Energy Antiproton Ring and reuses much of the former machine’s equipment. At the beginning of 2005, Linac3 was equipped with a new 14.5 GHz electron cyclotron resonance ion source (ECRIS) to increase the beam intensity. The configuration of the source was based on R&D done under a European Framework 5 project and the source itself was supplied by the Commissariat à l’Energie Atomique, Grenoble. In spring 2005 a beam was transported successfully from Linac3 to LEIR through the transfer line, which had been almost completely rebuilt.
LEIR itself was installed last summer and commissioning began when the first beam (of O⁴⁺ ions) was run in October. Preparation then began for the first studies of electron cooling, using collisions with an electron beam in a section of LEIR to reduce the dimensions of the ion beam. This focuses the beam and frees space to accumulate several pulses from Linac3 in LEIR. The cooling system, built by BINP, has been commissioned with electrons, and the strong perturbations that its magnetic system induces on the ion beam have been corrected. The first cooling measurements took place at the end of the 2005 run, and the goal is to complete commissioning in 2006.
The new control centre
While various teams worked on improvements needed for different aspects of the LHC’s operation, others were working to bring control of the future accelerator complex together in one room. The new CERN Control Centre (CCC) began operating on 1 February 2006 and was officially inaugurated on 16 March in a ceremony with members of the CERN Council.
The CCC, a sleek, futuristic room filled with a multitude of monitoring screens, combines the control rooms of all the laboratory’s accelerators, as well as piloting cryogenics and technical infrastructures. The new centre has 39 control consoles laid out in four zones, one dedicated to each of the technical infrastructure, the PS complex, the SPS and the LHC. The cryogenics consoles are positioned between the LHC zone and the technical infrastructure zone. During peak operation periods there could be up to 13 operators working on any one shift, not counting the many experts responsible for assisting them. Built and installed in just 15 months, the centre is the first part of the LHC project to start up. The operators for accelerator testing are already on site, as the machines spring back into life.
By bringing together all of the operators and facets of the LHC injector chain, the CCC will guarantee a high-quality beam. It will also manage the beams to other experimental facilities at CERN. Similar to a rail network that uses the same infrastructure to send passengers towards various destinations, the accelerators of CERN can transport several beams simultaneously and adapt each one to a given facility. The PS, for example, can prepare beams for the LHC while also feeding the Antiproton Decelerator (AD) and fixed-target experiments at the SPS. This multitasking is an important feature of accelerator and beam operations at CERN.
Now the machines are all coming back to life. The Isotope Separator On Line facility (ISOLDE) already started operation in April. Serviced by the PS Booster, ISOLDE had run during 2005, when it received record numbers of protons from the booster while the PS and SPS were not operational. The PS service to the East Hall is scheduled to recommence on 22 May, and the AD should start up on 6 June. As of 15 June the SPS will provide the beam for the North Area, where several fixed-target experiments will be ready and waiting. On 29 May, however, a major new project will come to life as commissioning begins for the CERN Neutrinos to Gran Sasso project. This facility will mark a new phase in the 30-year history of the SPS, as it delivers protons to generate a beam of neutrinos that will travel 730 km underground to the Gran Sasso Laboratory in Italy. It will continue the tradition of neutrino beams at CERN, which began with the PS and then moved to the SPS, and will test the recent improvements to the accelerator complex as the countdown continues towards the LHC start-up.
Successful beam acceleration in the Large Hadron Collider (LHC) at CERN will require accurate and robust control of a variety of machine parameters. With a sufficiently accurate model, it might be possible to control these parameters by the “set it and forget it” method, more often referred to by control specialists as open-loop control. However, in complex systems such as the LHC it becomes advantageous to measure continuously the value of the parameters to be controlled and to adjust the strength of correction elements to maintain the desired values. This method is called closed-loop, or feedback, control.
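The distinction can be illustrated with a toy numerical example. The sketch below is not related to any real accelerator control software; it simply subjects a tune-like parameter to a slow systematic drift and compares leaving it alone (open loop) with continuously measuring it and trimming a corrector (closed loop). All numbers are invented for illustration.

```python
def run(closed_loop, n_steps=200, setpoint=0.31, gain=0.5):
    """Evolve a drifting parameter; optionally apply feedback correction."""
    value, correction = 0.28, 0.0
    for _ in range(n_steps):
        value += 1e-3                                # slow systematic drift
        if closed_loop:
            error = setpoint - (value + correction)  # continuous measurement
            correction += gain * error               # adjust corrector strength
    return value + correction

open_err = abs(run(False) - 0.31)
closed_err = abs(run(True) - 0.31)
print(f"open-loop final error:   {open_err:.3f}")
print(f"closed-loop final error: {closed_err:.4f}")
```

With the loop closed, the residual error is set by the drift rate and the loop gain rather than by the accumulated drift itself, which is the essential argument for feedback in a machine as complex as the LHC.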
In addition to correction of absolute position, beam control in the transverse (horizontal and vertical) directions in a synchrotron must regulate two parameters in each plane: betatron tune and chromaticity. The beam in a synchrotron is focused by quadrupole magnets, the equivalent of focusing lenses in optics. The beam particles oscillate transversely in these confining fields, similar to a mass on a spring. This is known as betatron motion and the frequency of oscillation is the betatron tune. In addition, the momentum spread of the beam causes particles with different momenta to experience different focusing, a property of the accelerator known as chromaticity, which is corrected with sextupole magnets.
Equally important is that inevitable magnetic-field errors cause the betatron motions in horizontal and vertical planes to become coupled to each other, and this coupling must be carefully controlled. In the “mass on a spring” model, the horizontal and vertical motions are equivalent to two independent masses vibrating on separate springs, and coupling is a third spring that joins the two masses. This coupling may be corrected with skew quadrupole magnets. Coupling control is often one of the more difficult problems in accelerator control. Inadequate coupling control makes it impossible to control betatron tune properly and also reduces the area of the stable transverse space available to the beam.
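The "mass on a spring" picture of coupling can be made quantitative with a few lines of code. The sketch below uses invented spring constants purely for illustration: it computes the normal-mode frequencies of two unit masses on springs joined by a coupling spring, showing that coupling splits the otherwise degenerate modes and imposes a minimum separation between the two frequencies.

```python
import math

def normal_mode_freqs(k1, k2, kc, m=1.0):
    """Normal-mode angular frequencies of two unit-mass oscillators on
    springs k1, k2, joined by a coupling spring kc (the 'third spring').
    Diagonalizes the 2x2 symmetric stiffness matrix analytically."""
    a, c, b = (k1 + kc) / m, (k2 + kc) / m, -kc / m
    mean, split = (a + c) / 2, math.hypot((a - c) / 2, b)
    return math.sqrt(mean - split), math.sqrt(mean + split)

# Without the coupling spring, equal springs give two degenerate modes.
lo0, hi0 = normal_mode_freqs(1.0, 1.0, 0.0)

# With coupling, the modes split; the two frequencies can never approach
# each other more closely than this coupling-driven separation.
lo, hi = normal_mode_freqs(1.0, 1.0, 0.05)
print(f"uncoupled: {lo0:.4f}, {hi0:.4f}   coupled split: {hi - lo:.4f}")
```

The analogous effect in a synchrotron is that the two betatron tunes cannot be brought closer together than the residual coupling allows, which is why tune control fails if coupling is not corrected first.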
Historically, control of tune, chromaticity and coupling has been open loop. However, the LHC pushes design frontiers to the limit, and successful beam acceleration will require closed-loop feedback control of these transverse parameters. In 2002 a collaboration was established between CERN and the Collider-Accelerator Department at the Brookhaven National Laboratory. The purpose was for the LHC to benefit from the tune-feedback programme at Brookhaven, and for Brookhaven to benefit from CERN expertise. This collaboration is now sponsored by the US LHC Accelerator Research Program (LARP), funded by the US Department of Energy, and has been expanded to include Fermilab. The collaborative effort paid off spectacularly at the beginning of the 2006 run of the Relativistic Heavy Ion Collider (RHIC), with robust control of tune and coupling up the acceleration ramps.
Figure 1 shows data on betatron tunes from a typical development ramp early in RHIC Run 6, with tune and coupling feedback enabled. The drop in tune near the end of the acceleration ramp follows from the fact that RHIC is currently running with polarized protons. The working point used during the acceleration ramp is chosen to minimize growth in the emittance of the beam; once the machine is at full energy the working point is shifted to minimize the effect on the protons of depolarizing resonances. The feedbacks were turned off at the end of the beta squeeze. With the feedbacks on, the largest departures from the desired tunes were around 10⁻³, while the rms variation of tune was a few times 10⁻⁴.
The accomplishment of successful ramps with feedback control of tune and coupling was the result of an effort that evolved over several years. Early efforts at RHIC were persistently confounded by two obstacles. The first was a problem of dynamic range. To avoid blowing up the transverse size of the beam, and thereby reducing the beam brightness available to the physics experiments, the beam excitations needed to measure and control tune must be very small. The power in the resulting betatron signal is of the order of femtowatts (10⁻¹⁵ W), while the power delivered to the pickup by the beam unrelated to the betatron tune is in the range of tens of watts. We therefore devoted our attention to this dynamic-range problem, attempting many solutions, all with only partial success. Ultimately, CERN provided the solution by way of an analogue front-end using direct diode detection, or "3D" (Gasior and Jones 2005).
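To put the dynamic-range problem in more familiar units, a quick back-of-envelope calculation (using the two power figures quoted above; the decibel conversion is standard, nothing else is assumed):

```python
import math

# Femtowatts of betatron signal against ~10 W of tune-unrelated beam
# power at the pickup, expressed as a power ratio in decibels.
p_signal = 1e-15   # W, betatron signal
p_beam = 10.0      # W, beam power unrelated to the tune

dynamic_range_db = 10 * math.log10(p_beam / p_signal)
print(f"required dynamic range: {dynamic_range_db:.0f} dB")
```

A 160 dB span is far beyond what a conventional front-end can digest, which is why the direct-diode-detection scheme proved decisive.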
The second obstacle to tune feedback at RHIC was linear coupling, which rotates the planes of the betatron oscillations away from the horizontal and vertical in which the magnet portion of a tune-feedback loop applies corrections. When this rotation approaches 45° the magnet loop then applies tune corrections in the wrong plane relative to the tune measurement, and the tune-feedback loop is driven unstable. RHIC (like the LHC) requires strong sextupoles to compensate for natural chromaticity; unfortunately, vertical offset in the sextupoles introduces coupling, and vertical-orbit fluctuations from ramp to ramp in RHIC were often sufficient to cause the tunes to become fully coupled.
In 2004 we fully understood the coupling problem, so efforts to implement tune feedback ceased, and we began to implement coupling feedback. We reconfigured the tune-measurement system to measure both projections of the tunes in both planes during tune tracking. Due to hardware limitations, this could be done in only one ring at a time. However, the excellent quality of the resulting data made it clear that we could implement coupling feedback. Over the course of the next two years this was studied in some detail and a decoupling algorithm was formulated (Jones et al. 2005 and Luo et al. 2005).
For the 2006 run at RHIC a new system for measuring baseband tune – or baseband Q (BBQ) – was developed. This incorporates measurement of both tunes in both planes in both rings, as well as the 3D analogue front ends. The system was extensively commissioned on analogue test resonators before working with a real beam, both for tune and coupling measurement. Within minutes of the first circulating bunched beam in RHIC, the BBQ was measuring tune and coupling “out of the box”. During the period of machine set-up and tuning in preparation for developing acceleration ramps, the control-system interface to the magnets was completed, together with measurements of overall system loop gains and the design of the loop filters.
Ramping began on the evening of 15 February. The beam was lost early in the first ramp, which was done without tune and coupling feedback to establish a baseline. For the second ramp the feedback loops were closed and the beam was delivered to full energy, with tune control of around 0.001 or better, with the machine well decoupled throughout the ramp. This was the world's first successful implementation of simultaneous tune and coupling feedback during beam acceleration – good news for the LHC. There is now a reasonable expectation, given sufficient attention to integration with the controls and magnet systems, that an operational tune- and coupling-feedback system will be available early in the LHC commissioning.
As the tune- and coupling-feedback system for RHIC moves towards full operational integration as a “non-expert” system, the focus for instrumentation has shifted to chromaticity control and feedback. As valuable as robust tune and coupling feedback will be for LHC commissioning, the most urgent need will be for chromaticity measurement and control, to combat the chromatic effect of “snapback” transients at the beginning of the acceleration ramp.
Many approaches to the problem of fast and accurate chromaticity measurement during ramping are being investigated. The most promising approach implemented so far tracks tune while simultaneously modulating the beam momentum very slightly. Measurement of the resulting tune modulations has permitted determination of chromaticity during ramping with an accuracy of around a unit, and a bandwidth of about 1 Hz. This method has been operational in RHIC for the past two years as a non-expert measurement under sequencer control (Cameron et al. 2005). During the coming weeks and months both this and other methods will be further evaluated at RHIC, in close collaboration with Fermilab and CERN, and we look forward to reporting here on successful results from these efforts.
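The measurement principle described above can be sketched numerically. The example below is hypothetical (all numbers invented, and the real systems work on live tune-tracker data): it synthesizes tune samples for a machine with a known chromaticity Q', applies a small sinusoidal momentum modulation, and recovers Q' from the amplitude of the induced tune modulation via ΔQ = Q' · Δp/p, using a simple lock-in demodulation at the known modulation frequency.

```python
import math

Q_PRIME_TRUE = 2.0     # chromaticity we pretend the machine has
DPP_AMPLITUDE = 1e-4   # relative momentum modulation amplitude
F_MOD = 1.0            # Hz, modulation frequency
F_SAMPLE = 100.0       # Hz, tune-measurement rate
N = 400                # samples (an integer number of modulation periods)

t = [i / F_SAMPLE for i in range(N)]
dpp = [DPP_AMPLITUDE * math.sin(2 * math.pi * F_MOD * ti) for ti in t]

# Synthetic tune samples: base tune plus the chromatic response Q' * dp/p.
tune = [0.31 + Q_PRIME_TRUE * d for d in dpp]

# Lock-in style demodulation at the modulation frequency recovers the
# amplitude of the tune modulation, and hence the chromaticity.
ref = [math.sin(2 * math.pi * F_MOD * ti) for ti in t]
amp = 2 * sum(q * r for q, r in zip(tune, ref)) / N
q_prime = amp / DPP_AMPLITUDE
print(f"recovered chromaticity: {q_prime:.3f}")
```

In practice the tune samples carry noise and the underlying tune itself moves during the ramp, which is what limits the quoted accuracy of around a unit at a bandwidth of about 1 Hz.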
At the heart of the ATLAS experiment at the Large Hadron Collider (LHC), silicon sensors will provide accurate detection of charged particles produced in the collisions. The Semiconductor Tracker (SCT) consists of silicon microstrip sensors located 25-55 cm from the LHC beams, subdivided into a central part of four concentric barrels around the beam pipe and endcaps of nine discs on either side. February saw two major milestones for the ATLAS tracker within a week – the first stage of the integration of the barrels with other parts of the tracking system, and the arrival of the endcap silicon tracker that had been assembled in Liverpool.
The ATLAS tracker project was conceived in 1993 at a meeting in the UK where a small international group of physicists and engineers sketched out plans for a tracking system for the LHC. After four years of development, 40 institutes around the world agreed to start the construction of the SCT. Eight years later the tracker is now a reality at CERN and is being integrated into ATLAS ready for physics.
The central barrels and two endcaps of the SCT together hold 4088 silicon modules (60 m² of silicon), which can record the trajectory of charged particles with 20 μm precision (less than the diameter of a human hair). The complete system comprises 6 million detector elements, each with its own amplifier and memory. It is larger than any existing silicon tracking system. Careful work at each stage of the project has ensured that more than 99.5% of the channels are working.
The modules for the SCT barrels were produced by four collaborations centred in Japan, Scandinavia, the UK and the US, and sent to the UK for precision assembly on cylindrical structures at Oxford University. The fourth and final barrel arrived at CERN in September 2005 and was integrated into the full barrel assembly shortly afterwards. In the latest integration stage, on 17 February, dozens of physicists and engineers from the collaboration gathered to witness the insertion of the barrel SCT into the Transition Radiation Tracker (TRT). The SCT and the TRT are two of the three major parts of the ATLAS inner detector – the third and final part is the pixel detector, which will be added in the very centre of the tracker.
While the SCT central detector is already complete at CERN, the two endcaps are making good progress as well. More than 2000 modules with sensors and readout electronics have been produced in laboratories in the UK, Spain, Germany, the Czech Republic, the Netherlands, Switzerland and Australia, and were then sent to the two endcap assembly sites at the University of Liverpool, and NIKHEF in Amsterdam.
Each endcap is a 2 m long, light and strong carbon-fibre cylinder containing a series of nine discs on which the modules are mounted in rings, so as to surround the LHC beams. Each disc contains cooling circuits to take away the excess heat produced by the electronics, to maintain an operating temperature of -7 °C, which is chosen to minimize radiation damage in the harsh LHC environment. Control signals and data are sent through optical fibres to and from each sensor, minimizing noise and heavy cabling. On 23 February the first endcap arrived safely at CERN from Liverpool, and the second is nearing completion at NIKHEF.
The Australian Synchrotron under construction in Melbourne is due to begin operation in April 2007. This third-generation light source is an electron-accelerator laboratory comprising a full-energy injection system (linac plus booster synchrotron) and a 3 GeV storage ring. It has the capacity for more than 30 beamlines, with nine to be built in the first phase of facility development.
Although Australia has a long and distinguished history in nuclear and particle physics, the Australian Synchrotron is the largest accelerator in the country and the only one of its type in the Antipodes. The storage ring has a circumference of 216 m and is housed in a building with office and laboratory space for more than 100 staff and beamline users.
Commissioning of the injection system is well under way, with the 100 MeV linac now in routine operation. The first turn in the booster was achieved in February, rapidly followed by hundreds of thousands of turns. The beam has been stored at 100 MeV for 1 s from one injection to the next. The injection system ramps at a rate of 1 Hz to accelerate the beam from 100 MeV to 3 GeV in a few hundred milliseconds. Conditioning of the booster RF system is under way and the electron beam will soon be accelerated to full energy.
Installation of the storage ring is almost complete, with only a few of the magnets and vacuum chambers left to assemble. The klystrons that will provide the RF power to the storage-ring accelerating cavities are being commissioned on site during March and will be ready for the first injected beam, which is scheduled for June. The front ends that interface the beamlines to the storage ring are being installed, while beamline installation is due to start in December. Beamline commissioning with photons on sample is expected to be well under way by March 2007.
The Australian science community recommended consideration of an initial suite of 13 beamlines to cover almost the whole range of research being done in Australia, aiming to meet 95% of the anticipated needs of the Australian Synchrotron research community. Nine of these are being developed now, and others will be developed as funding allows. Contracts have been awarded for powder diffraction, protein crystallography, X-ray absorption spectroscopy and soft X-ray spectroscopy beamlines, and the infrared spectroscopy beamline contract is imminent. Designs are well advanced for small- and wide-angle scattering, microspectroscopy and imaging, and medical-therapy beamlines, as is the design of a second protein-crystallography beamline that will also cater for small-molecule research.
The accelerator systems and building were funded entirely by the Victorian State government at a cost of AU$157 m. The beamlines are being funded through a partnership to which state governments, leading universities, research institutions and the New Zealand government have already committed AU$40 m.
There would be great benefits if clinicians around the world could gain access to a common support resource in diagnosing breast cancer. MammoGrid, a three-year project under the Fifth Framework Programme (FP5) of the European Community, was completed in 2005, and its partners are now exploring the possibilities for developing a commercial product based on the project's results.
Led by CERN, MammoGrid involves the universities of Oxford, Cambridge and the West of England in the UK, together with Mirada Solutions of Oxford, and the universities of Pisa and Sassari and hospitals in Udine and Torino in Italy. The project was conceived within the Technology Transfer Group and the Physics Department at CERN, and an FP5 project was established with total resources of €1.9 m.
Breast cancer is the most common cancer in women, and mammograms are extremely complex images with many degrees of variability across the population. Breast-cancer screening procedures therefore suffer from several complications and a relatively high error rate: it is estimated that around 30% of mammograms give false results. Early and unequivocal diagnosis is a fundamental requirement for effective treatment and reduced cancer mortality.
One effective way to manage disparate sources of mammogram data is through a federation of autonomous multi-centre sites spanning national boundaries. Such collaboration is now being facilitated by Grid-based technologies, which are emerging as open-source standards-based solutions for managing distributed resources. In the light of these new computing solutions, the goal of the MammoGrid project was to develop a Grid-aware medical application to manage a Europe-wide database of mammograms.
The MammoGrid solution utilizes Grid technologies in seamlessly linking distributed data sets and allowing effective co-working among mammogram analysts throughout Europe. Thanks to the Grid infrastructure it is possible to exchange data and images, and carry out remote and more accurate radiological diagnosis. This in turn should lead to fewer biopsies, standardization of quality-control procedures, improvements in the training of radiologists and provision of sufficient statistics for complex epidemiological studies.
One of the aims of the project was to build a demonstrator for testing in hospitals in Cambridge and Udine. Since the project reached its completion in 2005, the MammoGrid partners have been negotiating a licence and a partnership agreement with an industrial company. Commercialization is still at an early stage, however, and CERN’s Technology Transfer Group is exploring opportunities to disseminate the project results further, both to hospitals and industry. A non-exclusive licence based on the results of the MammoGrid project has been made available and a few companies are interested in using the demonstrator to build a fully functioning operational tool for oncological studies and cancer screening.
Over the past decade, the HERMES experiment at HERA, DESY, has successfully explored the spin structure of the nucleon. Unlike the H1 and ZEUS experiments, which detect collisions between electrons and protons travelling in opposite directions in beams stored in HERA, HERMES has scattered HERA’s 27.5 GeV polarized electron beam off polarized nucleons at rest in a sophisticated target cell filled with polarized hydrogen or deuterium gas. This target, which has run throughout the decade, has been a key to the experiment’s success.
To achieve its goals, the design of the target had to overcome three major challenges. These were to develop a gas target of high polarization with unequalled areal density; to measure its electron and nuclear polarization online to a precision of 3%; and to operate a target over long periods in the environment of a high-energy storage ring, without affecting the operation of the collider experiments too much.
Meeting the challenges
The first challenge dates back to the design developed while preparing a proposal for the FILTEX experiment, which was submitted to CERN in 1985. The idea was that antiprotons circulating in the Low Energy Antiproton Ring at CERN would be polarized by spin-dependent attenuation of the beam, a process known as spin filtering. To achieve a reasonable build-up time of around 10 hours, this required a hydrogen filter target with high polarization, P ∼ 1, and an areal density of t = 10¹⁴ atoms/cm². These figures represent a benchmark that still holds today. For a deep-inelastic-scattering (DIS) experiment in a high-energy electron ring, a luminosity of the order of 10³¹ cm⁻² s⁻¹ is needed; for a 30 mA electron current, this requires target figures comparable to those in the FILTEX proposal. However, the densities of gas-jet targets available in the 1980s were a few 10¹¹ atoms/cm², and the densest thermal atomic-beam target developed recently, for the Relativistic Heavy Ion Collider at Brookhaven, has a density of (1.3±0.2) × 10¹² atoms/cm².
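The figures quoted above can be cross-checked with a back-of-envelope calculation (a sketch, not taken from the article): the luminosity of a stored beam on an internal target is the particle flux through the target times its areal density.

```python
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def luminosity(beam_current_A, areal_density_cm2):
    """Luminosity in cm^-2 s^-1 for a stored beam on an internal gas target:
    L = (I / e) * t, i.e. electrons per second times atoms per cm^2."""
    electrons_per_second = beam_current_A / E_CHARGE
    return electrons_per_second * areal_density_cm2

# A 30 mA electron beam on a t = 1e14 atoms/cm^2 target gives a
# luminosity of order 1e31 cm^-2 s^-1, consistent with the benchmark.
print(f"L = {luminosity(30e-3, 1e14):.1e} cm^-2 s^-1")
```

The same arithmetic shows why a bare gas jet at a few 10¹¹ atoms/cm² falls roughly two orders of magnitude short of the required luminosity.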
The areal density of a polarized jet can be boosted by a factor of around 100 by using a storage cell, as Willy Haeberli of the University of Wisconsin proposed in 1965. Figure 1 illustrates this principle. Polarized atoms enter the T-shaped storage cell ballistically, without hitting the walls, via a narrow feed tube. On their way out they undergo many collisions with the walls (∼300), which increases the gas density along the beam path.
In the 1980s and early 1990s, high-intensity atomic-beam sources and radiation-resistant coatings for the cell walls were developed, and the first test of a high-density storage-cell target in a storage ring was performed in 1992 in the Heidelberg Heavy-Ion Test Storage Ring. This had a target density t = (0.96±0.04) × 10¹⁴ atoms/cm² and a measured polarization P = 0.46±0.01 in a low magnetic field, which was expected to double in a strong guide field (Zapfe et al. 1996).
During the same period, the use of a polarized storage cell target for DIS experiments was being discussed. A first letter of intent to DESY dates back to 1988 and in 1990 the HERMES collaboration submitted a proposal for the study of the nucleon’s spin structure. After the successful target tests and encouraging results on the electron polarization, HERMES was approved in 1992 and constructed during the following two years. For the commissioning run in 1995, an optically pumped 3He target was operated to study the neutron-spin structure (de Schepper et al. 1998). Then in 1996, the hydrogen target set-up was installed. The elliptical, 40 cm long storage cell operating at 100 K was protected by a narrow tungsten collimator – the “bottleneck” of the HERA electron ring.
The challenge of a precise polarization measurement independent of the stored beam was met by using a polarimeter that measured the complete substate population of a sample beam extracted from the centre of the cell. This made possible the precise online determination of the target parameters, i.e. the polarization of protons and electrons, Pz and Pe, respectively, and the fraction of molecules, which for a high-quality surface was at most a few percent. One of the potentially harmful effects is RF depolarization caused by harmonics of the HERA bunch frequency of 10.4 MHz, so a strong guide field was carefully chosen to avoid resonances. With all these precautions, stable operation with longitudinally polarized hydrogen was obtained, leading to high-quality data on the proton-spin structure.
In 1998, the target was converted to one of longitudinally polarized deuterium with nuclear spin one. This allowed not only vector polarization Pz but also second-rank tensor polarization Pzz to be produced. The latter is related to the structure function b1 of the deuteron, which HERMES measured for the first time. Owing to the low magnetic moment of the deuteron, decoupling of nuclear and electron spin at the guide field of 0.33 T was nearly complete, resulting in close to ideal performance, i.e. no detectable depolarization by the cell walls. In addition, recombination to molecules was also negligible. The experiment collected a large sample of high-quality deuterium data, in particular during the successful run in 2000. The extremely stable target performance during this run is shown in figure 2.
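For a spin-1 nucleus such as the deuteron, the vector and tensor polarizations mentioned above have standard definitions in terms of the relative populations of the three magnetic substates. The short sketch below uses those textbook definitions; it is an illustration, not code from the experiment.

```python
# Vector and tensor polarization of a spin-1 ensemble (e.g. deuterium),
# given the relative populations n_plus, n_zero, n_minus of the
# m = +1, 0, -1 substates (they must sum to 1).

def vector_polarization(n_plus, n_zero, n_minus):
    """P_z = n(+1) - n(-1), ranging from -1 to +1."""
    return n_plus - n_minus

def tensor_polarization(n_plus, n_zero, n_minus):
    """P_zz = n(+1) + n(-1) - 2*n(0), ranging from -2 to +1."""
    return n_plus + n_minus - 2 * n_zero

# Equal populations describe an unpolarized target: both vanish.
print(vector_polarization(1/3, 1/3, 1/3))  # 0.0
print(tensor_polarization(1/3, 1/3, 1/3))  # 0.0
```

The point of the 1998 conversion is visible in these formulas: pumping population between the m = 0 and m = ±1 states changes Pzz even when Pz is held fixed, which is what gives access to the structure function b1.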
For the next phase, from 2001 to 2005, a transversely polarized hydrogen target was required to study transversity, the last missing leading-twist structure function of the nucleon. For this purpose, a dipole magnet with a large gap and high uniformity was developed with a field limited to 300 mT. This resulted in acceptable synchrotron radiation power levels and high target polarization.
All the running with the polarized target at HERMES was performed in parallel with the operation of the collider detectors H1 and ZEUS. The areal density achieved with the storage-cell target was only about an eighth of the density allowed before it would adversely affect the stored electron beam. There were also special studies using unpolarized gas such as H2, D2, He, N2, Ne, Ar, Kr and Xe in the target cell. In this case the density was chosen according to the maximum allowed reduction of the electron-beam lifetime, yielding higher statistics relative to running with the polarized target. In additional special end-of-fill runs, when the collider experiments were switched off, the remaining beam of 12-15 mA was consumed within about an hour at extra-high densities, yielding high statistics data samples with little extra time.
End of an era
In the course of running the HERMES experiment, physics interest moved from semi-inclusive to exclusive measurements, such as deeply virtual Compton scattering. Clean exclusive measurements require the detection of recoil particles, e.g. the proton. However, a recoil detector turned out to be incompatible with the polarized storage cell. The HERMES collaboration therefore decided to run during the final phase, from 2006 to 2007, with a recoil detector in addition to the standard forward spectrometer and an unpolarized, high-density, very compact storage-cell target. On 13 November 2005 polarized running ended and the target was removed. Commissioning of the recoil detector to replace the target set-up began in February 2006.
The removal of the target marks the end of a very fruitful era extending over 20 years. Groups from Beijing, Erlangen, Ferrara, Heidelberg, Liverpool, Marburg, Munich, Wisconsin, Yerevan and elsewhere have contributed to the target’s outstanding performance and stability. In this way, the original idea for FILTEX, which led to the HERMES target, has enabled 20 years later a wealth of new results on nucleon-spin structure. Fortunately, after ten years of operation in HERA, there is a good chance that the present target may serve future experiments. The project for the Facility for Antiproton and Ion Research (FAIR) at GSI, Darmstadt, with its planned antiproton source, has again stimulated interest in using spin filtering to produce polarized stored beams of antiprotons, this time for measurements by the Polarized Antiproton Experiments (PAX) at the facility’s High Energy Storage Ring. Tests with protons and antiprotons in preparation for PAX are foreseen at Forschungszentrum Jülich and CERN. The HERMES target may thus play a key role in paving the way for a new experiment at FAIR, aimed at studying hadron structure in the interaction of polarized protons with polarized antiprotons.
• The contribution of numerous students, postdocs, senior scientists and technicians to the unprecedented performance of the HERMES target is gratefully acknowledged. Special thanks are owed to my colleagues G Court, P Dalpiaz-Ferretti, D Fick, G Graw, W Haeberli, B Povh, and K Rith; to P Lenisa, the target coordinator from 2000 to 2005; to the funding agencies, in particular the Bundesministerium für Bildung und Forschung in Germany and INFN in Italy; and to the HERMES and DESY management.
The operation and performance of the hydrogen and deuterium target over the full running period are summarized in a paper (Airapetian et al. 2005) in which many additional references can be found.