

LBNL delivers front end of SNS


The US Spallation Neutron Source (SNS) project involves no fewer than six US national laboratories. Its accelerator systems consist of the front end built at Lawrence Berkeley National Laboratory (LBNL), a linear accelerator (linac) being built by Los Alamos National Laboratory (LANL) with superconducting radiofrequency (RF) cavities supplied by Jefferson Laboratory, and an accumulator ring and associated transfer lines being built by Brookhaven National Laboratory (BNL). The target system and conventional facilities are the responsibility of Oak Ridge National Laboratory (ORNL) in Tennessee, and the initial complement of experimental stations is to be supplied by Argonne National Laboratory (ANL) and ORNL. The front end creates an intense negative hydrogen-ion beam, chops it into “minipulses”, and accelerates it to 2.5 MeV. The linac then brings the beam to its full energy of 1 GeV, and the accumulator ring compresses the macropulses into sub-microsecond packets to be delivered to the spallation target.

The SNS front end represents a prototypical injector for the kind of so-called proton driver accelerators that are under construction or being planned worldwide. Accelerators that include an accumulator ring, such as the SNS, typically use negative hydrogen-ion beams, but the design approach lends itself to genuine proton beams as well.

Beamline elements

The two-chamber ion source was developed from an earlier model built for the Superconducting Super Collider. A magnetic dipole filter reflects energetic electrons from the main plasma and allows only low-energy electrons to pass into the second chamber, thus favouring the creation of negative hydrogen ions. The discharge is sustained by a 2 MHz RF system and requires up to 45 kW pulsed power at 6% duty factor (1 ms, 60 Hz). The main RF power, as well as low-amplitude 13.56 MHz power used to facilitate ignition at the beginning of every discharge pulse, is delivered through a porcelain-coated antenna immersed in the plasma. A newly developed coating technology brings the uninterrupted running time between services within reach of the desired value of 3 weeks; in fact, a single antenna was used over a period of 2 months during the final commissioning phase. The creation of the negative hydrogen ions is enhanced by a minute amount of caesium dispensed on the inside of the secondary discharge chamber surrounding the outlet aperture. When negative ions are extracted from a plasma, copious electrons are extracted as well, and a second dipole magnet configuration deflects most of them to a dumping electrode inserted in the main extraction gap, thus keeping the power of the removed electrons at manageable levels.
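The quoted duty factor and pulse structure are self-consistent, as a quick back-of-envelope check shows; the short sketch below uses only the figures given in the text (the function names are ours):

```python
# Sanity-check the ion-source RF duty factor and time-averaged power
# from the figures quoted in the text: 1 ms pulses at 60 Hz, 45 kW peak.

PULSE_LENGTH_S = 1e-3    # 1 ms discharge pulse
REP_RATE_HZ = 60         # 60 Hz repetition rate
PEAK_RF_POWER_W = 45e3   # up to 45 kW pulsed RF power

def duty_factor(pulse_length_s: float, rep_rate_hz: float) -> float:
    """Fraction of time the discharge is on."""
    return pulse_length_s * rep_rate_hz

def average_power(peak_power_w: float, duty: float) -> float:
    """Time-averaged power delivered to the plasma."""
    return peak_power_w * duty

duty = duty_factor(PULSE_LENGTH_S, REP_RATE_HZ)
print(f"duty factor: {duty:.0%}")         # 6%, as quoted
print(f"average RF power: {average_power(PEAK_RF_POWER_W, duty) / 1e3:.1f} kW")
```

So the 45 kW pulsed system delivers only about 2.7 kW on average, which is what makes the coated-antenna lifetime a question of erosion rather than steady-state heating.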

The low-energy beam-transport (LEBT) system makes use of purely electrostatic focusing by two einzel lenses (ring-shaped electrodes that at first slow the beam down, make it expand, and then, upon reaccelerating, squeeze it into a converging envelope). The second of these lenses is split into four quadrants to provide DC beam-steering as well as pre-chopping capabilities, dividing the 1 ms macropulses into 645 ns packets separated by 300 ns gaps. The rise and fall times of these minipulses were measured to be less than 25 ns, and the beam-in-gap current is reduced to less than 0.1-1% of the pulse amplitude. The electrostatic focusing principle allows for a very short LEBT length of 120 mm, and avoids time variations in the degree of space-charge compensation generally encountered with pulsed beams in magnetic focusing structures. It also provides for a short transition length between the LEBT and the subsequent RF quadrupole (RFQ) accelerator.
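The pre-chopper figures imply a simple minipulse bookkeeping, which the following arithmetic sketch (our calculation, using only the quoted numbers) makes explicit:

```python
# Minipulse bookkeeping from the quoted pre-chopper numbers:
# 645 ns beam-on packets separated by 300 ns gaps, within a 1 ms macropulse.

BEAM_ON_NS = 645            # minipulse length
GAP_NS = 300                # gap between minipulses
MACROPULSE_NS = 1_000_000   # 1 ms macropulse

period_ns = BEAM_ON_NS + GAP_NS            # 945 ns per minipulse cycle
n_minipulses = MACROPULSE_NS // period_ns  # roughly 1060 minipulses per macropulse
beam_on_fraction = BEAM_ON_NS / period_ns  # about 68% of the macropulse carries beam

print(period_ns, n_minipulses, round(beam_on_fraction, 3))
```

Each 1 ms macropulse therefore contains on the order of a thousand minipulses, with about two-thirds of the macropulse actually carrying beam.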


The RFQ is the main accelerator of the front end, and boosts the beam energy from 65 keV to 2.5 MeV. The RFQ fields are applied to four modulated vanes, and parasitic dipole modes are eliminated by π-mode stabilizers (straight bars running across the RFQ that shift the resonant frequency of the dipole modes and eliminate steering forces on the beam). The four RFQ cavities are built as hybrid structures, with high-conductivity copper on the inside brazed to a stiff outer shell. Dynamic tuning is achieved by regulating the temperature difference between the cavity walls and the vane tips, and the RFQ can be operated at full power (about 750 kW pulsed) within 2 minutes of a cold start. The 402.5 MHz klystron system used for front-end commissioning at LBNL was provided by LANL, and will be replaced by a more modern system that is part of the series procured by LANL for the first part of the linac.

After the RFQ, the medium-energy beam-transport (MEBT) system receives the beam and hands it over to the subsequent drift-tube linac (DTL). The MEBT includes the main travelling-wave chopper system designed to give the minipulses sharp flanks of 10 ns rise and fall times, and to attenuate the chopped/unchopped beam-current ratio to the nominal value of 0.01%. The active deflector plates and power switches of the MEBT chopping system were supplied by LANL, and have not yet been commissioned. A water-cooled molybdenum target is installed at the centre of the MEBT to absorb the chopped beam fraction, and a so-called “anti-chopper” guides those particles back to the beam axis that missed the chopper target during the pulse ramping. Fourteen quadrupole magnets provide transverse matching, and four rebuncher cavities control the bunch length.

Beam diagnostics were built and commissioned by LBNL, ORNL, LANL, and BNL members of the SNS Diagnostics Collaboration; they include two current monitors, six beam-position monitors that provide input for six steerer pairs, and five wire scanners to measure horizontal and vertical beam profiles. For the commissioning activities at LBNL, an external slit/harp emittance device was added at the end of the MEBT to assess the transverse beam quality. This type of emittance scanner uses a movable entrance slit to select various locations across the beam and a 32-wire detector system to measure the local beam divergence at each of these positions. The external scanner will be used during the recommissioning period at ORNL, and might later be replaced by an in-line device. The front-end beam was also used to test a laser-based profile-monitor prototype, and the results are promising for the possible use of this type of monitor in the superconducting linac sections. It is planned to eventually install beam-scrapers in the MEBT that can be used to clip beam halo and reduce beam spill in the high-energy part of the linac. Five units of a newly designed low-level RF system were built by LBNL, and supported the phase-synchronized operation of the RFQ klystron and the MEBT rebuncher cavities. The EPICS control system was supplied by the LBNL members of the SNS Global Controls group, and even allowed remote read-out of operational parameters from ORNL; remote operation would have been possible, but was not exercised in this period.

As a result of the front-end commissioning effort, several facts were established. The ion source reliably produces beams at the nominal duty factor of 6% with intensities exceeding 50 mA and uninterrupted periods of operation expected to reach 2 weeks or more. Electrostatic focusing works well with high-intensity beams in the LEBT. The RFQ transmission exceeds 90%, and the RFQ clips most of the low-intensity emittance wings generated by LEBT aberrations. All MEBT subsystems function as designed (only the main chopper system was not tested): about 99% transmission was achieved without using any steerer, and the sensitivities to quadrupole and rebuncher tuning closely mirror simulation results. The transverse MEBT output emittances are just slightly above the nominal value, and can be reduced further by halo-scrapers. The most striking result is the 50 mA pulsed beam current measured at the end of the MEBT, more than 30% above the design goal of 38 mA.

Starting on 31 May, the SNS front end was partially disassembled and shipped to ORNL by 15 July. It is now fully installed at the SNS site, and recommissioning is planned to begin later this year.

LHC and the Grid: the great challenge


“May you live in interesting times,” says the old Chinese proverb, and we surely do. We are at a time in history when many fundamental notions about science are changing rapidly and profoundly. Natural curiosity is blurring the old boundaries between fields: astronomy and physics are now one indivisible whole; the biochemical roots of biology drive the entire field; and for all sciences the computational aspects, for both data collection and simulation, are now indispensable.

Cheap, readily available, powerful computational capacity and other new technologies allow us to make incredibly fine-grained measurements, revealing details never observable before. We can simulate our detectors and basic physical processes at a level of precision that was unimaginable just a few years ago. This has led to an enormous increase in the demand for processor speed, data storage and fast networks, and it is now impossible to find at one location all the computational resources necessary to keep up with the data output and processing demands of a major experiment. With LEP, or at Fermilab, each experiment could still take care of its own computing needs, but that modality is not viable at full LHC design luminosities. This is true not only for high-energy physics, but for many other branches of experimental and theoretical science.

Thus the idea of distributed computing was born. It is not a new concept, and there are quite a few examples already in existence. However, applied to the LHC, it means that the success of any single large experiment now depends on the implementation of a highly sophisticated international computational “Grid”, capable of assembling and utilizing the necessary processing tools in a way that is intended to be transparent to the user.

Many issues then naturally arise. How will these various “Grids” share the hardware fabric that they necessarily cohabit? How can efficiencies be achieved that optimize its use? How can we avoid needless recreations of software? How will the Grid provide security from wilful or accidental harm? How much will it cost to implement an initial Grid? What is a realistic timescale? How will all this be managed, and who is in charge?

It is clear that we have before us a task that requires significant advances in computer science, as well as a level of international co-operation that may be unprecedented in science. Substantial progress is needed over the next 5-7 years, or else there is a strong possibility that the use of full LHC luminosity will not be realized on the timescale foreseen. The event rates would simply be too high to be processed computationally.

Most of these things are known, at least in principle. In fact, there are national Grid efforts throughout Europe, North America and Asia, and there are small but significant “test grids” in high-energy physics already operating. The Global Grid Forum is an important medium for sharing what is known about this new computing modality. At CERN, the LHC Computing Grid Project working groups are hard at work with colleagues throughout the high-energy physics community, a principal task being to facilitate close collaboration between the LHC experiments to define common goals and solutions. The importance of doing this cannot be overstated.

As is often the case with high technology, it is hard to plan in detail because progress is so rapid. And creativity – long both a necessity and a source of pride in high-energy physics – must be preserved. Budgetary aspects and international complexities are also not simple. But these software systems must soon be operational at a level consistent with what the detectors will provide, in exactly the same way as for other detector components. I believe it is time to depart from past practice and to begin treating software as a “deliverable” in the same way we do those other components. That means bringing to bear the concepts of modern project management: clear project definition and assignments; clear lines of responsibility; careful evaluations of resources needed; resource-loaded schedules with milestones; regular assessment and review; and detailed memoranda to establish who is doing what. Will things change en route? Absolutely. But as Eisenhower once put it: “Plans are useless, but planning is essential.”

Several people in the software community are concerned that such efforts might be counter-productive. But good project management incorporates all of the essential intangible factors that make for successful outcomes: respect for the individuals and groups involved; proper sharing of both the resources available and the credit due; a degree of flexibility and tolerance for change; and encouragement of creative solutions.

As has happened often before, high-energy physics is at the “bleeding edge” of an important technological advance – indeed, software is but one among many. One crucial difference today is the high public visibility of the LHC project and the worldwide attention being paid to Grid developments. There may well be no other scientific community capable of pulling this off, but in fact we have no choice. It is a difficult challenge, but also a golden opportunity. We must make the most of it!

Gamma-ray facility inaugurated in Namibia


The first telescope of the high-energy stereoscopic system (HESS), named in honour of Victor Hess, the discoverer of cosmic radiation, was officially inaugurated in Namibia in September. HESS is a system of large Cerenkov telescopes intended for high-energy gamma-ray astrophysics.

The HESS collaboration identified the Gamsberg area of Namibia as an ideal location for a high-energy gamma-ray observatory in January 1998. With the support of the Namibian government and local landowners, construction began in 1999. HESS was originally conceived as a two-phase project, with four telescopes being installed in the initial phase and a further 12 identical telescopes being added later. The first telescope began operation this summer, with phase one scheduled for completion by 2004. Options for phase two are currently being studied.

The physics motivation behind HESS is to pinpoint the origins of high-energy cosmic rays through the study of cosmic gamma rays from around 100 GeV to several TeV. Although Hess began the work that led to the discovery of cosmic rays in 1911, there is still very little known about their origins. The majority of primary cosmic rays are atomic nuclei, whose trajectories through space are influenced by interstellar and intergalactic magnetic fields. Such fields, however, do not affect gamma rays, and so their detection will point right back to the source.

LHCb receives delivery from Russia


The LHCb collaboration, dedicated to studying CP violation in B-meson decays at CERN’s Large Hadron Collider (LHC), has received the first components of its calorimeter system from Russia. The first 1200 of 3300 electromagnetic calorimeter (ECAL) modules and the first two of 52 hadron calorimeter (HCAL) modules were delivered to CERN in September. The so-called shashlik-type (lead-scintillator sandwich) ECAL modules are being produced by Russia’s Institute for Theoretical and Experimental Physics in collaboration with CERN. The HCAL tile calorimeter is the responsibility of the Institute of High Energy Physics in Protvino, with contributions from the Horia Hulubei National Institute for Physics and Nuclear Engineering in Bucharest, Romania; the Institute of Physics and Technologies in Kharkiv, Ukraine; and CERN. Series production of a Preshower detector is under preparation at the Institute for Nuclear Research in Moscow. The fast 40 MHz calorimeter detector readout electronics are the responsibility of French (Annecy, LAL-Orsay and Clermont-Ferrand) and Spanish (Barcelona) LHCb groups.

LHCb’s calorimeter has been designed for speed, since it will be used for triggering on collisions arising from the LHC’s 40 MHz bunch crossing rate. All three sub-detectors (Preshower, ECAL and HCAL) are based on fast scintillators with wavelength-shifting fibre readout. The HCAL (which will be used exclusively for triggering) uses iron as its passive medium, while the ECAL uses lead. While also participating in the trigger, another important function of the ECAL will be to reconstruct neutral pions and photons from B-meson decays. Production is set to continue at a rate of 10 ECAL modules per day and one HCAL module every two weeks.
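The quoted production rates fix the remaining construction timescale; the sketch below is a straight division of the figures in the text (it ignores holidays, ramp-up and rejected modules):

```python
# Remaining production time for the LHCb calorimeter modules,
# from the quoted delivery counts and production rates.

ECAL_TOTAL, ECAL_DELIVERED = 3300, 1200
ECAL_MODULES_PER_DAY = 10
HCAL_TOTAL, HCAL_DELIVERED = 52, 2
HCAL_WEEKS_PER_MODULE = 2

ecal_days_left = (ECAL_TOTAL - ECAL_DELIVERED) / ECAL_MODULES_PER_DAY
hcal_weeks_left = (HCAL_TOTAL - HCAL_DELIVERED) * HCAL_WEEKS_PER_MODULE

print(f"ECAL: {ecal_days_left:.0f} production days remaining")
print(f"HCAL: {hcal_weeks_left} weeks (about {hcal_weeks_left / 52:.1f} years) remaining")
```

At those rates the ECAL needs roughly 210 production days, while the slower HCAL line, at one module per fortnight, stretches over about two years.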

Grid technology developed by ALICE

The ALICE experiment, which is being prepared for CERN’s Large Hadron Collider, has developed the ALICE production environment (AliEn), which implements many components of the Grid computing technologies that will be needed to analyse ALICE data. Through AliEn, the computer centres that participate in ALICE can be seen and used as a single entity – any available node executes jobs and file access is transparent to the user, wherever in the world a file might be.

For AliEn, the ALICE collaboration has adopted the latest Internet standards for information exchange (known as Web Services), along with strong certificate-based security and authentication protocols. The system is built around open-source components and provides an implementation of a Grid system applicable to cases where handling many distributed read-only files is required.

AliEn aims to offer a stable interface for ALICE researchers over the lifetime of the experiment (more than 20 years). As progress is made in the definition of Grid standards and interoperability, AliEn will be progressively interfaced to emerging products from both Europe and the US. Moreover, it is not specific to ALICE, and has already been adopted by the MammoGrid project (supported by the European Union), which aims to create a pan-European database of mammograms.

ALICE is currently using the system for distributed production of Monte Carlo data at more than 30 sites on four continents. During the last year more than 15,000 jobs have been run under AliEn control worldwide, totalling 25 CPU years and producing 20 Tbyte of data. Information about AliEn is available at http://alien.cern.ch.

MiniBooNE goes live at Fermilab


The MiniBooNE experiment at Fermilab in the US saw its first neutrinos in September. Designed to test the controversial neutrino oscillation result from the Los Alamos LSND experiment, which is so far the only accelerator-based signal for oscillation, the experiment will take data for two years. That will allow the MiniBooNE collaboration to study the entire LSND allowed region with high sensitivity.

The LSND result remains controversial, since it is difficult to reconcile with oscillation results from Super-Kamiokande in Japan and the Sudbury Neutrino Observatory in Canada without invoking an extra type of neutrino. Confirmation would therefore require a major rethink of current particle theory. If the LSND result is correct, MiniBooNE expects to see around 1000 electron neutrinos in the pure muon-neutrino beam over the next two years.

More to physics than meets the eye


Physics is the study of natural phenomena by humans equipped with brains. Without humans there would still be nature, but without brains there would be no understanding. The mechanisms of the brain therefore have a direct bearing on the way we see physics.

The brain is equipped with memory and various levels of processing for the data from readout systems linked to delicate sensory organs such as the eyes and ears. Through these sensors, the brain is subjected to a stream of confusing input. Consciousness is the process of interpreting and making sense of all this data.

The brain becomes conditioned to recognize certain signals as being important, and rejects the rest. A baby soon learns to differentiate the image of its mother’s face from the surrounding visual clutter. Later, it learns how to filter language from unresolved noise, and later still, to recognize the systematic shapes of written words. On encountering a word for the first time, the brain must absorb it letter by letter, and then work out what the new word means. Once learned, words are no longer read letter by letter. Instead, the brain directly perceives the pattern of the whole word. Such pattern recognition is a much faster process, but can be error-prone, as anyone who has proofread a document will have discovered.

Modern computers can process basic information much faster than any human brain. However, computers have yet to match the brain’s remarkable ability to perceive and recognize patterns and make judgements. (An example of this ability is given by caricatures, in which a well known face is immediately recognizable from a rudimentary sketch that exaggerates key features.)

Making the invisible visible


Quantum physics underlies the rest of physics, but it is very different to everyday experience. One way of sidestepping this difficulty is the technique of particle tracking. The elementary particles taking part in a collision are themselves invisible, but if the collision is suitably choreographed, its results can be reconstructed.

One of the first tracking detectors was the cloud chamber, which was developed by C T R Wilson, and with which Patrick Blackett dramatically revealed the splendour of a subatomic collision process at Cambridge in the 1920s. A cloud chamber is filled with gas or vapour made metastable by a sudden expansion. A charged particle passing through the chamber rips electrons from the gas atoms, and the resulting ionization causes local condensation, leaving a visible trail behind the particle, just as a high-flying aeroplane leaves a white vapour trail in its wake.

From bubbles to electronics


Cloud chambers worked very well when physicists studied cosmic rays or used low-energy, low-intensity laboratory sources of particles, such as radioactive nuclei. However, cloud chambers were ill-suited to recording nuclear collisions using beams supplied by the post-Second World War high-energy accelerators.

The bubble chamber, invented by Donald Glaser in 1952, rose to the new challenge. This chamber used a liquid target, rather than gas, offering a proportionally denser obstacle to particle beams. It could also be made larger. Many generations of research physicists explored elementary particles via this route, and glorious photographs of particle interactions gave a fresh view of an otherwise invisible world. The bubble chamber did for the physics of the microworld what the Hubble Space Telescope is doing for astronomy.

However, the graphic images provided by bubble chambers brought new problems. The main one was that there were far too many pictures. Researchers had to carefully sort through millions of photographs from bubble chamber exposures to find what was new.

Experience taught researchers how to project the photographs onto a horizontal table where they could be viewed from different angles (particularly at grazing incidence) to reveal tracks and details that did not show up initially. Electronic instruments were developed to digitize and analyse bubble chamber information, but the pattern recognition abilities of the human eye remained unsurpassed.

About 20 years ago, the advent of high-intensity beams and fast electronics eventually led to new detector techniques and to the demise of the bubble chamber. Information on particle trajectories from today’s colliding beam machines is now recorded electronically. This has the immediate advantage of being partly 3D, and also enables the detector to be “triggered” for special physics situations. A bubble chamber could be compared to a TV camera monitoring traffic at a busy junction. This is useful for general information, but not for spotting speeding cars. For this, a radar gun monitors all passing cars, but only triggers the camera if the car is speeding.

With digital information, the door was also open to automatic pattern recognition, in which a computer could be “taught” to recognize a particular kind of behaviour. Without fast-tracking the analysis in this way, discoveries would become unacceptably rare.


In a big particle detector like ALEPH at CERN’s LEP electron-positron collider, the tracking is carried out in a central cylinder surrounding the point where the particles collide. The remainder of the detector picks up supplementary information (notably energy), differentiates between different kinds of particles, and monitors penetrating muons as they exit the apparatus. Such an electronic detector records discrete hits in successive planes of sensors, and these hits have to be analysed to build up a complete picture of the collision event. (In the same way, a TV image is built up of separate pixels, but from a distance the image looks continuous.)

A track reconstruction procedure helps to reveal trajectories. Because of the number and complexity of the collisions, this track reconstruction must be done by computer. It must also be done accurately and reliably. If only a few collisions are being selected for the fast-tracked “discovery lane”, each one must be exactly right.

An approach pioneered by Hans Drevermann at the ALEPH experiment uses special techniques to check the findings of the track reconstruction procedures. The “Dali” system provides a new generation of classic pictures of electronically recorded particle tracks that rival those of the bubble chamber era. To achieve this, colour and projection, as well as sheer ingenuity, play important roles. The result is an intuitively appealing way of presenting the collisions, which is invaluable for illustrating talks and physics publications.


Dali uses a number of visual tricks to enhance track visibility. One is a “fish-eye” view which artificially “inflates” the central tracking region compared with the rest of the detector. Other Dali techniques involve unconventional transformations of radial co-ordinates that help reveal momentum and direction of track curvature (thereby giving a handle on what kind of particle is involved). In this work, the use of colour has developed into an art form.
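The fish-eye idea can be illustrated with a minimal radial transform. The specific function below, r' = r / (1 + r/r0), and the parameter r0 are our illustrative choices, not Dali's actual mapping; it merely shows the effect of keeping small radii nearly unchanged while compressing large ones:

```python
import math

def fisheye(x: float, y: float, r0: float = 50.0) -> tuple[float, float]:
    """Radially compress a point: radii much smaller than r0 are left
    almost unchanged, while large radii saturate towards r0, so the
    central tracking region is "inflated" relative to the outer detector.
    The function r' = r / (1 + r / r0) is illustrative, not Dali's own."""
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    scale = 1.0 / (1.0 + r / r0)
    return (x * scale, y * scale)
```

Applied to every hit before plotting, such a transform gives the inner tracker a far larger share of the picture than its true geometric size, which is exactly what makes the crowded central region legible.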

Tracking is not the only aspect of recording and interpreting what happens in a collision process. After passing through the central tracker, the emerging particles deposit energy in calorimeters. These are cellular, and the deposited energy is usually spread over a number of adjacent cells. This too has to be displayed.

Together, these imaging ideas have provided a new standard for displaying particle collisions. As well as aiding actual physics analysis, these displays provide a “trademark” presentation of results at meetings and conferences, where images that are immediately recognizable and intuitively understandable are at a great advantage. They also help the physicist to perceive and understand what he is studying. The experience gained over more than 10 years with the ALEPH event display system will now be harnessed for the next generation of CERN experiments at the Large Hadron Collider.

LHC test-bed progresses to second phase


A complete cell of CERN’s Large Hadron Collider (LHC) began tests at the laboratory in June. String 2, as the cell is known, has been built to validate LHC systems and operating procedures. It succeeds an original string made up of early prototypes, which was dismantled in December 1998. The present facility was first operated last year, although without its full complement of magnets. The full cell now consists of six dipole magnets, two straight sections (each comprising a quadrupole and corrector magnets), a prototype cryogenic distribution line and an electrical feedbox.

In its original configuration, String 2 had only three dipoles, all of which were prototypes. The three dipoles that have been added to make up the full cell are pre-production magnets that will form part of the future accelerator. The full cell is almost 120 m long and it is curved like the future accelerator to mimic the LHC as closely as possible. The amount of instrumentation and the complexity of the String 2 processes are also close to those of an LHC sector.

First tests went according to plan. Following mechanical checks to ensure there were no leaks and that the string could withstand the pressures that occur during a transition from the superconducting to the normal state (a quench), the assembly was cooled down to its nominal temperature of 1.9 K in just under 10 days. Powering up the circuits then followed without incident, with the dipole circuit reaching its nominal current of 11,860 A, corresponding to a magnetic field of 8.335 T, on 17 June. An experimental programme that will run until the end of the year is now under way.
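The quoted nominal current and field give the dipole "transfer function" directly; the linear scaling in this sketch is our simplification (a real superconducting magnet departs from linearity through iron saturation near the top of the ramp):

```python
# Dipole transfer function from the quoted String 2 nominal values.
NOMINAL_CURRENT_A = 11_860   # nominal dipole current reached on 17 June
NOMINAL_FIELD_T = 8.335      # corresponding magnetic field

# Field per unit current (approximately 0.70 T/kA)
transfer_T_per_kA = NOMINAL_FIELD_T / (NOMINAL_CURRENT_A / 1000)

def field_at(current_a: float) -> float:
    """Linear estimate of the dipole field at a given current;
    ignores saturation, so it is only indicative below nominal."""
    return NOMINAL_FIELD_T * current_a / NOMINAL_CURRENT_A

print(f"transfer function: {transfer_T_per_kA:.3f} T/kA")
```

The ratio, about 0.7 T per kiloampere, is why reaching fields above 8 T demands currents in the 12 kA range and hence superconducting cable.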

ALICE crystals arrive at CERN


The first 500 crystals for the ALICE experiment’s photon spectrometer (PHOS) arrived at CERN in May after a journey via Moscow from the town of Apatity in the Russian arctic region. The experiment is optimized to study heavy-ion collisions and is scheduled to start data-taking in 2007. These crystals are the first of 17,000 that will make up the experiment’s PHOS – a sort of thermometer for the deconfined plasma of quarks and gluons that ALICE physicists hope to study. Made of lead tungstate, which is denser than iron, the crystals scintillate when struck by photons, allowing the photon spectrum to be measured.

ALICE is not the only Large Hadron Collider experiment that will employ lead tungstate crystals; CMS will use some 80,000 of them in its electromagnetic calorimetry. Such a large order tied up the existing production capacity for such crystals and meant that ALICE had to look elsewhere. The solution was to recommission facilities at a former military factory in Apatity in the Murmansk region of northern Russia. Crystals are grown in ovens at more than 1000°C in a process that takes 60-70 h, during which the temperature must be constant and all vibration avoided. With 25 furnaces in operation at Apatity, around 100 people are employed in producing the ALICE crystals. Each furnace can produce 100 crystals per year. Meanwhile, a mechanical and optical testing device has been set up at the Kurchatov Institute in Moscow, where every crystal undergoes certification before being sent on to CERN.
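The quoted furnace capacity sets the production timescale for the full complement of crystals; the division below uses only the figures in the text and ignores ramp-up time and crystals rejected at certification:

```python
# How long 17,000 PHOS crystals take at the quoted Apatity capacity.
TOTAL_CRYSTALS = 17_000
FURNACES = 25
CRYSTALS_PER_FURNACE_PER_YEAR = 100

yearly_output = FURNACES * CRYSTALS_PER_FURNACE_PER_YEAR  # 2500 crystals per year
years_needed = TOTAL_CRYSTALS / yearly_output             # roughly 7 years

print(yearly_output, round(years_needed, 1))
```

At 2500 crystals per year, the full order takes nearly seven years, which is consistent with production needing to run more or less continuously up to the planned 2007 start of data-taking.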

Physicists create font for antimatter

Have you ever been frustrated by the difficulty of representing antiparticles in a Microsoft Word document, where you have to resort to writing “-bar” after the letter denoting the particle – for example as in K-bar? Now help is at hand, at least for Apple Macintosh users, in the form of a font that allows bars, or “overlines”, to be added to English characters and the most commonly used Greek characters. Physicists from the University of Mississippi in the US have developed the font, LinguistA, which allows you to make a K-bar, for example, by simply typing shift-5 followed by K.

More information is available at http://www.arxiv.org/abs/hep-ex/0208028.
