Hurricane Isabel gives accelerators a severe test

Hurricane Isabel was at category five – the most violent on the Saffir-Simpson scale of hurricane strength – when it began threatening the central Atlantic seaboard of the US. Over the course of several days, precautions against the extreme weather conditions were taken across the Jefferson Lab site in south-east Virginia. On 18 September 2003, when Isabel struck North Carolina’s Outer Banks and moved northward, directly across the region around the laboratory, the storm was still quite destructive, albeit considerably reduced in strength. The storm surge and trees felled by wind substantially damaged or even devastated buildings and homes, including many belonging to Jefferson Lab staff members.

For the laboratory itself, Isabel delivered an unplanned and severe challenge in another form: a power outage that lasted nearly three-and-a-half days, and which severely tested the robustness of Jefferson Lab’s two superconducting machines, the Continuous Electron Beam Accelerator Facility (CEBAF) and the superconducting radiofrequency “driver” accelerator of the laboratory’s free-electron laser. Robustness matters greatly for science at a time when microwave superconducting linear accelerators (linacs) are not only being considered, but in some cases already being built for projects such as neutron sources, rare-isotope accelerators, innovative light sources and TeV-scale electron-positron linear colliders.

Hurricane Isabel interrupted a several-week-long maintenance shutdown of CEBAF, which serves nuclear and particle physics and represents the world’s pioneering large-scale implementation of superconducting radiofrequency (SRF) technology. The racetrack-shaped machine is actually a pair of 500-600 MeV SRF linacs interconnected by recirculation arc beamlines. CEBAF delivers simultaneous beams at up to 6 GeV to three experimental halls. An imminent upgrade will double the energy to 12 GeV and add an extra hall for “quark confinement” studies.

On a smaller scale, Jefferson Lab’s original kilowatt-scale infrared free-electron laser (FEL) is “driven” by a high-current cousin of CEBAF, a 70 MeV SRF linac with a high-current injector. The FEL serves multidisciplinary science and technology as the world’s highest-average-power source of tunable coherent infrared light. An upgrade to 10 kW is in commissioning – as it was when Isabel began threatening.

The power outage

The accelerator site lost electrical power at approximately 11 a.m. on 18 September during the hurricane, and power was not restored until about 8:30 p.m. on 21 September. About an hour later, after the Central Helium Liquefier (CHL) control system was restored, it was found that all cryomodule temperatures were approaching ambient values. Without power, the CHL could not keep the CEBAF linacs or the FEL’s driver linac cooled to the 2 K temperature required for superconducting operation. The liquid helium in both systems warmed up, boiled off, and was vented harmlessly but expensively to the atmosphere.

Some $200,000 (~€160,000) in helium was lost – two-thirds of the overall inventory and an amount equivalent to a year’s worth of losses from routine operation (comparable in mass to more than 70,000 litres of 4.22 K liquid helium). The episode was also costly to users’ experiments and experimental schedules, to other aspects of CEBAF’s status of operational readiness, and to the FEL upgrade commissioning. One consequence, in parallel with recovery activities, has been to revisit the construction-era determination that installing CHL backup power would not be cost-effective; now various options for reducing vulnerability to power outages are being considered.

For the six weeks following the incident, key activities included the procurement of replacement helium, pumping out of all the linac vacuum elements, cooling down the entire CEBAF accelerator complex, RF conditioning of the linacs and the commissioning of CEBAF’s electron beam. The experimental nuclear/particle-physics programme was resumed on 2 December, a delay of six weeks, and commissioning the upgrade of the FEL was restarted.

The recovery

In anticipation of the approaching storm, the laboratory had stored one-third of its liquid-helium inventory, thus enabling the “cooldown” of the CEBAF linacs to begin on 8 October. The plan was to cool eight or nine cryomodules per day, starting with the first three cryomodules. All cooldown rates were within the guidelines (150 K per hour) to avoid excessive thermal stresses and degradation of the cavity quality factor (“Q disease”). While no major mechanical-vacuum-related issues caused concern down to 4 K, cooldown to 2 K (the temperature at which liquid helium is superfluid) was approached gingerly indeed, being a further test of vacuum integrity.

Nearly 38,000 litres of 4 K liquid helium arrived at the laboratory early on 16 October, and the cryomodules were topped off at a rate of about 2400 litres per hour. The tasks to follow were cooldown to 2 K, further filling and stabilization, re-establishing a very low-gradient tune of all the cavities, operation of the cavities to determine their waveguide vacuum response, and RF conditioning of the cavities. Waveguide vacuum had been a challenge during CEBAF’s early operational years in the 1990s, but it had cleared up with use.

On 20 October the cryogenic group, using manual control, pulled CEBAF’s liquid-helium temperature down to below the lambda point, the transition to the superfluid state. Some 17 hours later, on 21 October, more than 99% of the CEBAF and FEL accelerator modules had been cooled to 2.09 K, the normal operating temperature. The End Station Refrigerator was then powered up to start delivering liquid helium for experiment preparation – the cooling of magnets and cryogenic target work.

As the liquid helium crossed the lambda point, one ailing cryomodule in north linac zone 5 was monitored carefully. The pressure in the waveguide feeds to the first and second of its four cavity pairs went up at a rapid rate; within hours the ion pump had saturated in the millitorr range and it tripped soon after. The four cavities in question were assessed as inoperable due to vacuum considerations; they were the only casualties of the storm. Out of CEBAF’s total of 300 cavities, they represent at most 1% of the machine’s overall energy reach. RF conditioning has since revealed no other low-level leaks. After 36 hours of thermal stabilization at 2 K, the tasks at hand consisted of attaining low-gradient (2 MV/m) RF operation, followed by the assessment of vacuum stability with high-power RF operation.

The RF recovery of CEBAF and the FEL, following the complete cryogenic recovery, eliminated two major uncertainties: resonant frequencies were close enough to 1497 MHz that the cavities could be brought up without difficulty, and gradients in excess of 5 MV/m were sustained without extensive vacuum faults. By 7 November the CEBAF linacs had run in a stable fashion with RF but without electron beam for more than a week – an opportunity unlike any since CEBAF’s commissioning a decade ago. The newly installed high-performance cryomodule in south linac zone 21 had operated in a quiet mode at an equivalent RF energy gain of 60 MeV, considerably higher than its operating voltage (approximately 40 MeV) before the Isabel shutdown.

The coming months

The observations and data processing on CEBAF’s cavities to date give better than 95% confidence in the energy estimate for the linacs, promising a 5 GeV physics run for the next six months, followed by possible runs at 5.5 GeV and above later in 2004. The linac RF appears at least comparable to its pre-Isabel capability, notwithstanding the four nonfunctional cavities in north linac zone 5.

A single global phase adjustment allowed the electron beam to go through the entire linac complex, indicating that the RF phases were reproducible after the temperature cycle. A detailed analysis of cavity and waveguide vacuum response during cooldown is being performed to gain an understanding of the actual vacuum conditions and the molecular species contributing to them. This will give a better understanding of the vacuum trips and allow the trip thresholds to be set properly, eliminating unnecessary trips caused by poorly informed software settings. The warm window temperatures of the cavities are being monitored carefully and simultaneously with residual gas analyses to provide input for a knowledge-based control system for the RF vacuum trips, which should lead to improved linac performance.

Thanks to the dedicated work of the physicists, engineers and operating crew (including throughout the traditional four-day Thanksgiving weekend), the implementation of all the above has brought CEBAF back to operational readiness. The machine is now rising to the significant challenge, never attempted before, of developing electron beams with very stringent characteristics for simultaneous use in all three halls.

A continuous-wave parity-quality beam of 40 microamperes is required in Hall C for the G0 experiment, with very small relative “helicity correlations” in beam properties (e.g. less than 20 nanometres of movement in the beam spot when the beam helicity is flipped, averaged over the entire experimental run time). In Hall A a 100 microampere beam is needed for the hypernuclear experiment, with the stringent requirement of a relative energy spread of less than 25 in one million. Finally, a low-current but high-quality beam is destined for Hall B. The first beam for the physics experiment in Hall C, the G0 engineering run, was delivered on schedule on 2 December. The experimental programme in Hall B has also begun, while Hall A is being prepared for the start of the hypernuclear experiment in January.

The recovery of the FEL followed on the heels of that of CEBAF, and as I write today the high-power commissioning has fully resumed, with lasing achieved at nearly the kilowatt level and the laser power level continuing to rise steadily.

Acknowledgments

The achievements reported here were made possible by the dedication and hard work of the Jefferson Lab staff.

The time projection chamber turns 25

A time projection chamber (TPC) provides a complete, 3D picture of the ionization deposited in a gas (or liquid) volume. It acts somewhat like a bubble chamber, albeit with a fast, all-electronic read-out. The TPC’s 3D localization makes it extremely useful in tracking charged particles in a high-track-density environment, and for identifying particles through their ionization energy loss (dE/dx). To honour the 25th anniversary of the TPC, a symposium was organized at the Lawrence Berkeley National Laboratory on 17 October 2003, with workshops that included presentations on the past, present and future of the TPC.
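The 3D picture comes from combining two measurements: the pad plane (or wire plane) gives the transverse position of each ionization cluster, while the measured drift time gives the longitudinal coordinate, since electrons drift at a known, constant velocity in the uniform field. A minimal sketch of this reconstruction, with a hypothetical geometry and drift velocity (not the parameters of any particular TPC), might look like this:

```python
# Illustrative only: drift velocity and geometry below are assumptions,
# not values from any actual TPC.
DRIFT_VELOCITY_CM_PER_US = 5.0  # assumed electron drift velocity

def tpc_point(pad_x_cm, pad_y_cm, drift_time_us, endcap_z_cm):
    """The pad plane gives the transverse (x, y) position of the avalanche;
    the drift time gives the longitudinal coordinate z, because the cluster
    drifted at constant velocity from its origin to the endcap."""
    z = endcap_z_cm - DRIFT_VELOCITY_CM_PER_US * drift_time_us
    return (pad_x_cm, pad_y_cm, z)

# A cluster arriving 10 us after the collision, on a pad at (12.0, 3.5) cm,
# at an endcap located at z = 100 cm:
print(tpc_point(12.0, 3.5, 10.0, 100.0))  # → (12.0, 3.5, 50.0)
```

Repeating this for every sampled cluster along a track yields the full 3D trajectory.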

The TPC was invented by Dave Nygren at the Lawrence Berkeley Laboratory (LBL) in the late 1970s. Its first major application was in the PEP-4 detector, which studied 29 GeV e+e− collisions at the PEP storage ring at SLAC. Since then TPCs have been used to study e+e− collisions at PEP, at the TRISTAN collider at the KEK laboratory, and at the Large Electron Positron (LEP) collider at CERN. A TPC could also be the central detector at future e+e− linear colliders.

The device has also figured in a number of experiments involving heavy-ion collisions at machines such as LBL’s Bevalac and the Relativistic Heavy Ion Collider (RHIC) at Brookhaven; and now the ALICE collaboration is building a large TPC to study heavy-ion collisions at the Large Hadron Collider (LHC). TPCs have also been used in a whole host of non-accelerator experiments.

TPCs in particle physics

The PEP-4 TPC (figure 1) was built to combine charged-particle tracking with good particle identification by measuring the specific energy loss (dE/dx) of charged particles. This 2 m long cylindrical TPC had an inner diameter of 40 cm and an outer diameter of 2 m, and had most of the features of newer TPCs.

Charged particles from e+e− collisions in the centre of the TPC ionized molecules in a mixture of 80% argon and 20% methane gas at 8.5 atmospheres. A central membrane (the cathode), charged to −75 kV, produced a strong electric field (figure 2). Under the influence of this field, ionization electrons drifted to one of the two end caps. A solenoidal magnetic field minimized the transverse diffusion and bent the charged particles to allow momentum measurement.

The end caps were divided into six sectors, each one containing a 183-anode multiwire proportional chamber (MWPC). Drifting electrons were accelerated in the strong electric fields around the wires and acquired enough kinetic energy to ionize the gas and produce an avalanche. A single drift electron produced about 1000 electrons at the wire.

The wire signals were sampled 10 million times per second to a 9-bit accuracy by an analogue storage unit based on a charge-coupled device (CCD). The signals were then digitized at a slower rate using inexpensive ADCs. The wire data were used to measure particle energy loss. Because of the high gas pressure, the ionization could be measured accurately and the dE/dx resolution achieved was an unprecedented 3%. This meant that pions, kaons and protons could be identified over most of the kinematic range.

Charged particles were tracked using data from 15 rows of 7 × 7.5 mm² metallic pads located under the wires. When an electron produced an avalanche on an anode wire, a cloud of positively charged ions remained in the gas. The image charge that formed on the metallic pads was then measured using a charge-sensitive preamplifier. By measuring the relative charge on several adjacent pads, the ionization could be localized to approximately 250 µm. These pads were also read out by the CCD system.
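The sub-pad-width localization works because the induced image charge spreads over several adjacent pads, so the avalanche position can be interpolated from the charge sharing. A minimal charge-weighted centroid, with illustrative pad positions and charges, shows the principle:

```python
def pad_centroid(pad_centres_mm, charges):
    """Charge-weighted centroid across adjacent pads. Because the induced
    image charge spreads over several pads, the avalanche position can be
    interpolated to a small fraction of the pad width. All numbers used
    here are illustrative, not PEP-4 data."""
    total = sum(charges)
    return sum(x * q for x, q in zip(pad_centres_mm, charges)) / total

# Three adjacent 7 mm pads; the middle pad sees the most charge, and the
# asymmetry between its neighbours pulls the centroid slightly to the right:
print(pad_centroid([-7.0, 0.0, 7.0], [30.0, 100.0, 40.0]))
```

In practice a Gaussian fit to the pad response is often used instead of a plain centroid, but the interpolation idea is the same.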

Later TPCs used many of the techniques pioneered by PEP-4. Some notable examples were the ALEPH and DELPHI TPCs at LEP, the TOPAZ experiment at TRISTAN and the early vertex chambers for the CDF experiment at Fermilab. The ALEPH TPC at LEP was one of the larger examples, measuring 3.6 m in diameter and 4.4 m in length, with twice as many dE/dx measurements as the PEP-4 TPC. Both of the LEP TPCs used flash ADC systems instead of CCDs.

TPCs have also been used in a number of smaller experiments, such as in studies of muon decay and capture. The MuCap experiment at the Paul Scherrer Institute, for example, is building a 10 atmosphere hydrogen-gas TPC to measure the muon lifetime.

TPCs for heavy ions

With the growth of research on relativistic heavy-ion collisions in the early 1980s, TPCs found another home. The 3D picture of ionization is ideal for tracking particles in high-density environments – hundreds or thousands of particles from a single collision – in which other detectors are overwhelmed by the huge multiplicity. The first large-acceptance TPC was in the Equation-of-State experiment (EOS), which studied heavy-ion collisions at energies of a few GeV per nucleon at the Bevalac. The rectangular EOS TPC measured 150 × 96 × 75 cm. Electrons drifted downwards in a uniform electric field and were amplified 3000-fold in a MWPC. Data were read out from 15,308 pads, which sensed the image charge from positive ions in the same way as in PEP-4.

One key development in using TPCs in heavy-ion collisions concerns the electronics. In the high-track-density environment many pads are required and each pad detects signals from such a large number of tracks that it must be read out by a waveform digitizer. The CCD analogue storage units used with earlier TPCs required considerable power and difficult calibrations. They were also expensive. So EOS used a new technique, the switched capacitor array (SCA), developed by Stuart Kleinfelder.

The EOS SCA consists of an array of 128 capacitors, each connected to an input by a switch. By rapidly opening and closing the switches, the capacitors can be connected to the input one by one, forming an analogue storage unit. The sampling rate is matched to the drift time of electrons across the TPC, and the capacitors are read out by an inexpensive (but slow) analogue-to-digital converter. This scheme reduced the cost and power consumption of waveform digitizers, making TPCs a practical tool for the study of heavy-ion collisions. Packaging was integral to the success of the electronics. The preamplifiers, SCAs, digitizers and multiplexers were mounted directly on the TPC, and a handful of optical fibres replaced 15,000 cables.
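The SCA is conceptually just a ring of analogue memory cells filled at the drift-sampling rate and read out slowly afterwards. The toy model below captures that behaviour in software; the 128-cell size matches the EOS SCA described above, but the waveform and timing are invented for illustration:

```python
# Toy model of a switched-capacitor-array (SCA) analogue store: capacitors
# are connected to the input one after another, each holding one voltage
# sample, and the whole array is later read out slowly by a cheap ADC.
class SwitchedCapacitorArray:
    def __init__(self, n_cells=128):
        self.cells = [0.0] * n_cells  # one stored voltage per capacitor
        self.index = 0

    def sample(self, voltage):
        """Close the switch on the next capacitor, storing the input voltage."""
        self.cells[self.index % len(self.cells)] = voltage
        self.index += 1

    def read_out(self):
        """Slow sequential read-out of every cell (the digitization step)."""
        return list(self.cells)

sca = SwitchedCapacitorArray()
for t in range(128):                                # one sample per drift-time bin
    sca.sample(1.0 if 60 <= t < 64 else 0.0)        # a short pulse mid-drift
waveform = sca.read_out()
print(sum(waveform))  # → 4.0 (the four samples that caught the pulse)
```

The key economy is visible in the structure: the fast part (sampling) needs no ADC at all, so one slow, inexpensive converter can serve the whole array.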

After completing its service at the Bevalac, the EOS TPC was moved to the Brookhaven Alternating Gradient Synchrotron (AGS), where it was used to study heavy-ion collisions in experiment E895 and proton-ion collisions in experiment E910, and then to Fermilab for experiment E907 probing higher energy proton-ion collisions. By the time E907 ends, EOS will have seen more than 15 years of service at three different laboratories.

EOS pioneered techniques that were used in many other experiments. At CERN the NA35, NA36 and NA49 experiments all used TPCs to study heavy-ion collisions. NA49 took the construction of the devices to new volumes with four huge TPCs – the largest pair measuring 3.8 × 3.8 × 1.3 m. The complex was read out by 182,000 SCAs; these SCA chips had integral ADCs.

The next step up in heavy-ion collision energy was the RHIC at Brookhaven, and it was natural that TPCs would play a key role. Two original TPC-based proposals merged into the STAR detector, which has a 4 m diameter, 4 m long TPC at its centre. The TPC for STAR follows the geometry of the PEP-4 and ALEPH TPCs, but relies on 138,000 pads that are read out by SCA digitizers for both dE/dx and tracking information. The system is much faster than those of previous experiments: it can digitize an event containing 70 million volume elements to 10 bit precision and transmit it to the data-acquisition system in 10 microseconds. Figure 3 shows an event in the STAR TPC.

The ALICE heavy-ion experiment at the LHC is built around a mammoth TPC, 2.5 × 5.5 m with 750,000 pads. Analogue-to-digital-converter technology has matured, and ALICE has replaced SCAs with custom integrated circuits, each containing 16 × 10-bit ADCs with digital filters for tail suppression and zero-suppression circuitry. Data can be read out 1000 times per second.

A longitudinal electron drift is not always optimal. The collaborations for STAR and CERES (NA45 at CERN) have built cylindrical TPCs in which the electrons drift outwards radially from a cylindrical central cathode towards anodes on a concentric outer cylinder. This geometry is advantageous when tracks are parallel to the cylinder’s axis, but it introduces many complications. The electric and magnetic fields are no longer parallel, which leads to complex electron-drift trajectories. Either curved pad planes are required, or the idealized cylindrical geometry must be compromised. For these reasons, radial-drift TPCs have a more complex structure and poorer resolution than linear-drift devices. In some cases, however, the trade-off is worthwhile.

Non-accelerator applications

TPCs are also used in many non-accelerator experiments such as double-beta decay and dark-matter searches. Often these experiments use dense media, such as liquids, where the active detector volume also serves as the experimental target (for neutrinos or dark matter) or a radioactive source (for double-beta decay, proton decay, etc.).

The first laboratory observation of double-beta decay, in 1987, by Steve Elliott, Alan Hahn and Michael Moe, used a thin layer of 82Se deposited on the central cathode of a TPC. Though very successful, this technique was limited to relatively small sample volumes. Most current efforts use a single material, such as liquid 136Xe, as both a source and drift medium. One particularly ambitious group, the Enriched Xenon Observatory collaboration, plans to use a liquid-xenon TPC to localize double-beta decay events and then insert a probe into the xenon to extract the 136Ba daughter product for detailed study.

Liquid-xenon TPCs are also being used as imaging detectors to track photons with energies of a few MeV. The photon directions are determined by reconstructing double-Compton interactions. A liquid-xenon imager has already been used to study the galactic centre. The technology might also be used to search for photons from smuggled nuclear material.

Liquid-argon TPCs have been studied for many years under the aegis of the ICARUS (Imaging Cosmic and Rare Underground Signals) project. The current T600 prototype, based on 476 tonnes of liquid argon in a volume of 275 m³, recently completed a 68-day engineering run. The collaboration’s goal is a 3000 tonne detector in the Gran Sasso Laboratory in Italy, which will study solar and atmospheric neutrinos, a neutrino beam sent through the Earth from CERN’s Super Proton Synchrotron (SPS), and also proton decay. The solar-neutrino study may be quite challenging in terms of backgrounds.

One interesting idea being pursued by several groups is to use drifting ions rather than drifting electrons in a gas or liquid. Both positively and negatively charged ions have been considered; the latter can be formed when an ionization electron attaches itself to a previously uncharged molecule. The advantage of ion drift is that the diffusion can be much smaller. One big drawback is that positively charged ions cannot induce avalanches, greatly complicating the detection of the signal. The much slower drift velocity seems to offer both advantages and disadvantages. Ion-drift TPCs have been considered for a variety of applications, including double-beta decay and dark-matter and axion searches.

Future directions

The most exciting technological developments in gaseous TPCs concern electron amplification, where two new technologies are replacing wire chambers. Gas Electron Multipliers (GEMs) are plastic foils that are metal coated on both sides, with 50-100 µm diameter holes punched in them.

The metal coatings are charged to a potential difference of a few hundred volts, creating strong electric fields in the holes. Electrons drifting into the holes ionize the gas, creating an avalanche much like that formed around anode wires in conventional chambers. GEMs have several advantages over wire chambers. They are easily supported, eliminating wire sag and instability, and can be placed very near to read-out pads, reducing diffusion after amplification. The high hole density provides an even amplification over a large area. Positive ions generated in the avalanche drift naturally away from the amplification region, preventing the build-up of space charge there.

Micromesh gaseous structure chambers (Micromegas) use a thin metal mesh instead of anode wires. The mesh can be supported a small distance above the pads. A simple wire grid above the Micromegas produces a potential difference with the mesh, so electron avalanches form in the strong electric fields around the mesh elements. Like GEMs, Micromegas can be placed very close to read-out pads, greatly reducing diffusion. They also have the same advantage as GEMs for positive-ion elimination.

Both GEMs and Micromegas have a somewhat lower gain than wire chambers. However, two or three layers of GEMs or Micromegas can be cascaded by placing the foils or meshes on top of each other, thereby multiplying the gains. GEMs and Micromegas are beginning to replace wire chambers in some experiments, most notably in the COMPASS experiment at the SPS at CERN. They are also prominent in R&D for future linear colliders and for upgrades of the detectors at RHIC.

Over the past 25 years, TPCs have grown into a proven, mature and flexible technology. With these new developments the next quarter century looks equally bright.

Cool times ahead for muons at Fermilab

The Neutrino Factory and Muon Collider Collaboration – or Muon Collaboration for short – has finished constructing the MuCool Test Area at Fermilab. Researchers will use the US$1.5 million complex to test the RF cavities and liquid-hydrogen absorbers needed for a muon cooling channel. Spare power units located in Fermilab’s 400 MeV linear accelerator will provide RF power of 201 and 805 MHz, and cryogenic equipment will cool the absorbers to 20 K.

The 130 members of the Muon Collaboration have worked for some six years on the design of very intense beams of muons. To cool muon beams in the transverse direction, the collaboration has designed an alternating series of liquid-hydrogen absorbers and RF cavities. The absorbers reduce the momenta of beam particles in both transverse and longitudinal directions, while the cavities reaccelerate the muons in the longitudinal direction.

By the end of December the group will fill and begin testing the first absorber, which was built by KEK in Japan, and around one year from now the group will receive and test a 201 MHz cavity built by Lawrence Berkeley National Laboratory. In the longer term, the international MICE Collaboration plans to use a muon beam line at the Rutherford Appleton Laboratory in the UK to test the system with 200 MeV/c muons. Success will enable – in about 10 years – a new generation of muon sources with a 1000-fold higher muon rate and a 100-fold reduction in beam phase space, making neutrino factories possible.

The Muon Collaboration consists primarily of particle and accelerator physicists from laboratories and universities in the US, with additional participation from institutions in Europe and Japan. To secure funding the collaboration is breaking new ground. Traditionally, research and development for accelerators has taken place at national laboratories, with limited participation from university scientists. But tight budgets and ambitious goals have led project leaders in this case to look for more participation and funding from the universities. The collaboration receives money from the US Department of Energy (DOE), the National Science Foundation, the State of Illinois, and some modest funds from Japan. The MuCool buildings at Fermilab are entirely paid for by DOE funds directly provided to the collaboration.

MAGIC opens up the gamma-ray sky

The MAGIC telescope, a new-generation instrument for ground-based gamma-ray astronomy, was inaugurated on 10 October at the Roque de los Muchachos astronomical site on the Canary Island of La Palma. The MAGIC (Major Atmospheric Gamma Imaging Cherenkov) telescope’s aim is to observe high-energy gamma rays of galactic or extragalactic origin. However, since the Earth’s atmosphere is opaque to gamma rays in the multi-GeV energy range, detection must be performed indirectly. When they are absorbed in the atmosphere, high-energy gammas lead to the creation of a shower of secondary particles. It is the flashes of Cherenkov radiation emitted by the charged particles of these air showers that the telescope measures.

MAGIC is the largest and most sensitive air Cherenkov telescope ever built, with a tessellated mirror of 17 m diameter and an energy threshold as low as about 30 GeV (in phase 2 this will be lowered to around 15 GeV). This makes it possible to investigate the previously unexplored gap between the sensitivity regions of satellite-borne detectors and earlier ground-based experiments. The observational programme for the new instrument will open up an entirely new window, not only in gamma-ray astronomy – comprising studies of quasars, active galactic nuclei, black holes and supernova remnants – but also in the search for dark matter and the effects of quantum gravity.

The high performance of MAGIC relies on many technological innovations. Its extremely fast re-positioning time of less than 20 s, for example, results from the strategy of making all the moving parts of the telescope as lightweight as possible – an issue that is particularly important for the observation of transient phenomena such as gamma-ray bursts. The telescope’s more challenging design issues included a mirror support consisting of a carbon-fibre reinforced structure and a low-mass 577-pixel photomultiplier camera with transmission of the analogue signals by optical fibres.

MAGIC is the first instrument of the European Cherenkov Observatory that is planned for the La Palma site. Within the next two years MAGIC will be accompanied by a second telescope of equal size, which will then enable the stereoscopic observation of air showers. In the long term, a third telescope with a mirror diameter of 34 m is planned.

The study of high-energy gamma rays is a time-consuming task and requires the observation of many sources. The MAGIC telescope therefore forms one part of a worldwide network of ground-based and satellite-borne detectors.

Further reading

http://magic.mppmu.mpg.de

Site announced for new X-ray free-electron laser

At a press conference on 29 October, DESY announced that the new X-ray Free-Electron Laser (XFEL) is to be built in the German federal states of Hamburg and Schleswig-Holstein. Construction of the European X-ray laser project, which was approved by the German Federal Ministry of Education and Research on 5 February 2003, is to start in 2006.

The 3.3 km long facility will begin on the DESY site in Hamburg-Bahrenfeld and run in a north-western direction to the town of Schenefeld (in the district of Pinneberg), which borders on Hamburg. Here, the experimental hall with its 10 measuring stations is to be erected. “We are pleased that we have found an ideally suited location for the new XFEL in the vicinity of DESY,” said Albrecht Wagner, chairman of the DESY Directorate. “Looking far into the future, one could think of connecting the linear accelerator of the X-ray laser with particle accelerators already existing on the DESY site in order to open up new opportunities for science.”

Heavy feet arrive for the LHC…

The first 800 jacks for one sector of the Large Hadron Collider (LHC) have arrived at CERN from India in recent weeks. After the final acceptance of the pre-series jacks at the end of October, they can now be used to support the LHC superconducting magnets. The jacks are designed to adjust the positions of the magnets, which weigh more than 32 tonnes, with an accuracy of one-twentieth of a millimetre. This is equivalent to moving the weight of eight Indian elephants by the breadth of a human hair.

The 80 kg jacks were designed by the Centre for Advanced Technology (CAT) in India and are being built by Avasarala Automation in Bangalore and the Indo-German Tool Room in Indore. The close collaboration between CAT, the Department of Atomic Energy in India and CERN began in 1996, and the first two batches of jack prototypes were delivered to CERN in November 2000. These prototypes, which are still supporting the test strings in hall SM18, were used to test the manufacturing principles and design of the jacks. Supporting all the superconducting magnets (dipoles and quadrupoles) of the LHC will require 7000 jacks, which will continue to be delivered to CERN until mid-2005.

The jacks consist mainly of a central column of variable height supported on spherical bearings at each end, so the column can be tilted. Each dipole rests on three jacks (plus a special fourth type for fine corrections), and can be positioned and adjusted with very high precision in all three dimensions.

…while correction coils proceed at full speed

The 1000th correction coil made in India for the LHC was handed over at a ceremony at the Department of Atomic Energy (DAE) in Mumbai at the end of September. The small celebration took place during the 9th meeting of the DAE-CERN Joint Committee that follows the progress of the Indian “in-kind” contributions to the LHC. The many items India is providing for the LHC construction include the very valuable and important contribution of superconducting correction windings – 1232 sextupole windings and 616 octupole/decapole windings. Production in India is now running at full speed and in fact exceeds the planned rates.

Grid technology goes to Pakistan

cernnews9_12-03

As a natural extension of its participation in the Large Hadron Collider (LHC) project, Pakistan has begun a deeper involvement in the LHC Computing Grid (LCG). A first step towards this was the Grid Technology Workshop held in Islamabad on 20-22 October, which was organized by Pakistan’s National Centre for Physics (NCP) in collaboration with CERN. The primary goal of the workshop was to provide hands-on experience in Grid technology to Pakistani scientists, engineers and professionals, enhancing their skills in Grid-related topics such as Grid architecture, Grid standards and the Globus toolkit.

The workshop was inaugurated by CERN’s director-general, Luciano Maiani, who explained that Grid technology will be crucial in exploiting the physics potential of the LHC, which is currently being constructed at CERN by a broad international collaboration that includes Pakistan, as well as China, India, Japan and other eastern and far eastern countries. The inauguration was attended by a number of dignitaries, including Parvez Butt, chairman of the Pakistan Atomic Energy Commission (PAEC), and ambassadors from countries such as Iran, Bangladesh, South Korea and Myanmar. In addition to the participants from Pakistan, several people from CERN attended the workshop.

A variety of talks on the first day was followed by a two-day tutorial session for the 45 participants, who came from 14 different scientific, research and development organizations and universities across Pakistan, including the NCP, the PAEC, the Commission on Science and Technology for Sustainable Development in the South (COMSATS), and the National University of Science and Technology. The Grid tutorials were based on a testbed consisting of nine servers: computing element servers, storage element servers, a resource broker server, a server for the Berkeley Database Information Index and TopGIIS, a Replica Location Service (RLS) server, and worker node and user interface servers. The host and user certificates for the Grid testbed machines were issued by the French CNRS certificate authority.

The workshop was very well publicized in the local newspapers and on television, and the participants found it both interesting and useful. The next step is to launch an LCG testbed partner site node on the same resources, and the whole exercise will lead to participation in the Data Challenge 2004 (DC04) for LHC computing.

Auger ready for ultra-high-energy cosmic rays

cernnews11_12-03

With the completion of its 100th surface detector at the end of October, the Pierre Auger Observatory became the largest cosmic-ray air-shower array in the world. The observatory, which aims to detect ultra-high-energy cosmic rays, so far encompasses a 175 square kilometre array of detectors, and will ultimately comprise 1600 surface detectors on 3000 square kilometres of the Argentine Pampa.

Each Auger surface unit consists of a cylindrical tank filled with 10,000 litres of pure water, a solar panel, and an antenna for wireless transmission of data. Phototubes register Cherenkov light produced in the water by charged particles in cosmic-ray showers, which are initiated at altitudes of 10 to 20 km. The particle showers strike several tanks almost simultaneously, and the slight differences in the detection times at the various tank positions allow the arrival direction of the cosmic ray to be determined. The Auger particle detectors are spaced 1.5 km apart in order to sample each air shower’s particle density at numerous locations on the ground.
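The timing method can be illustrated with a plane-wave fit: if the shower front is approximated as a flat plane moving at the speed of light, the hit times across the tanks determine its direction by least squares. This is only a minimal sketch of the idea under that flat-front assumption, with invented coordinates and function names, not the observatory's actual reconstruction code:

```python
import numpy as np

C = 0.299792458  # speed of light in km per microsecond

def reconstruct_direction(positions, times):
    """Fit a plane shower front to tank hit times (plane-wave approximation).

    positions: (N, 2) array of tank (x, y) ground coordinates in km
    times:     (N,) array of trigger times in microseconds
    Returns (zenith, azimuth) of the shower axis in degrees.
    """
    x, y = positions[:, 0], positions[:, 1]
    # Model: t_i = t0 + (u*x_i + v*y_i)/C, where (u, v) are the horizontal
    # components of the unit vector along the shower's propagation direction.
    A = np.column_stack([np.ones_like(x), x, y])
    coeff, *_ = np.linalg.lstsq(A, times, rcond=None)
    u, v = coeff[1] * C, coeff[2] * C
    zenith = np.degrees(np.arcsin(np.hypot(u, v)))
    azimuth = np.degrees(np.arctan2(v, u)) % 360.0
    return zenith, azimuth

# Toy example: three tanks 1.5 km apart, shower inclined 30 degrees
# from the vertical and travelling along +x.
pos = np.array([[0.0, 0.0], [1.5, 0.0], [0.75, 1.3]])
t = np.sin(np.radians(30.0)) * pos[:, 0] / C  # plane-front arrival times
print(reconstruct_direction(pos, t))  # zenith ~30 deg, azimuth ~0 deg
```

With three hit tanks the linear system is exactly determined; with more tanks the least-squares fit averages out timing jitter, which is why denser sampling sharpens the direction estimate.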

In addition to the tanks, the observatory will also feature 24 fluorescence telescopes that can pick up the faint ultraviolet glow emitted by air showers. The fluorescence telescopes can be operated only during dark, moonless nights, but they provide an independent means of measuring the energy in the showers, and hence that of the primary cosmic ray.

The Pierre Auger collaboration is in the process of preparing a proposal for a second site for its observatory, to be located in the US. Featuring the same design as the Argentinean site, the second detector array would scan the northern sky for the sources of the most powerful cosmic rays.

Superconductivity links physics and medicine

cernnews1_11-03

The 2003 Nobel prizes in physics and physiology or medicine both have connections with the field of particle physics. Alexei Abrikosov, Vitaly Ginzburg and Anthony Leggett have received the physics prize for “pioneering contributions to the theory of superconductors and superfluids”, while Paul Lauterbur and Peter Mansfield were rewarded for discoveries in magnetic resonance imaging (MRI), which is today a major application of superconducting magnets.

The technologically most important superconducting materials have proved to be type-II superconductors, which allow superconductivity and magnetism to coexist. Type-I superconductors expel a magnetic field in what is known as the Meissner effect, and lose their superconductivity in high magnetic fields. Type-II superconductors, however – generally alloys of various metals – exhibit only a weak Meissner effect, or none at all, and retain their superconductivity in high magnetic fields. Superconducting magnets are now routinely used in particle accelerators, and the magnets for CERN’s Large Hadron Collider (LHC) are based on coils of niobium-titanium alloy, a type-II superconductor.

Abrikosov, who is now at Argonne National Laboratory, was working at the Kapitsa Institute for Physical Problems in his native Moscow when he succeeded in formulating a new theory to explain the behaviour of type-II superconductors, which cannot be explained by the BCS theory (Nobel prize 1972). In Abrikosov’s theory, the external magnetic field penetrates the type-II material through channels within vortices in the “electron fluid” in the material. This theory was based on work in the 1950s by Ginzburg at the P N Lebedev Physical Institute in Moscow.

Superfluidity is another low-temperature phenomenon that will become large scale with the LHC. Liquid helium, which is used to cool the superconducting magnets, becomes superfluid when cooled well below its boiling point, and takes on heat transfer properties that allow efficient heat removal over the large distances involved in the LHC. The helium used to cool the LHC magnets is the common isotope, helium-4, which becomes superfluid at around 2 K. Helium also exists as a rarer isotope, helium-3, which is superfluid only at much lower temperatures, in the millikelvin range. While helium-4 is a boson, helium-3 is a fermion, so the two isotopes have quite different quantum properties. The contribution of Leggett, now at the University of Illinois, Urbana, was to develop the decisive theory, while at Sussex in the UK in the 1970s, explaining how helium-3 atoms interact and become ordered in the superfluid state.

There is a link between this year’s physics prize and the prize in medicine, as one of the major uses for superconducting magnets is in MRI. The nuclear resonance phenomenon used in MRI was first demonstrated, for protons, in 1946 by Felix Bloch and Edward Mills Purcell, who received the Nobel prize in 1952. In a further connection with particle physics, Bloch went on to become the first director-general of CERN, serving until 1955. Not until 1973, however, did Lauterbur, from Urbana, Illinois, discover how to create 2D pictures by introducing gradients in the magnetic field. Peter Mansfield of Nottingham in the UK developed this idea further by showing how the resonance signals could be mathematically analysed to make a useful imaging technique, and also demonstrated how extremely fast imaging could be achieved. Today, MRI is used to examine almost all organs of the body, and is especially valuable in imaging the brain and spinal cord.
