

Raising the dead detectors


Silicon detectors placed as close as possible to particle beams measure the trajectories of particles as they emerge from collisions. At CERN’s flagship accelerator, the Large Electron-Positron collider (LEP), silicon detectors can expect to be traversed by around ten thousand million particles per square centimetre over their lifetime. However, at CERN’s next big accelerator, the Large Hadron Collider (LHC), this number will rise to a mammoth thousand million million passing particles per square centimetre.
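
In powers of ten, the quoted lifetime fluences compare as follows (a back-of-the-envelope sketch using only the round numbers above):

    lep_fluence = 1e10   # "ten thousand million" particles per cm^2 over a LEP detector lifetime
    lhc_fluence = 1e15   # "thousand million million" particles per cm^2 expected at the LHC

    print(f"LHC/LEP fluence ratio: {lhc_fluence / lep_fluence:.0e}")   # ~1e5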

Silicon detectors used as they have been in the past would not be able to cope with this enormous integrated particle flux. After prolonged exposure to passing particles, defects begin to appear in the silicon lattice as atoms become displaced, leaving lattice vacancies and atoms at interstitial sites. These have the effect of temporarily trapping the electrons and holes (holes are missing electron states that behave like positively charged particles), which are created when particles pass through the detector. Since it is these electrons and holes that announce the passage of a particle, lattice defects destroy the signal.

Substantial progress has been made towards radiation-hard detectors by paying special attention to detector design and by improving on silicon’s properties. Now, serendipity seems to have brought a further solution. Using experience gained in searches for cold dark matter particles, where small signals demand the sensitivity of cryogenic detectors, a group of physicists at Bern decided to see what would happen to radiation-damaged silicon detectors when they were cooled to cryogenic temperatures. The researchers found that, below 100 K, dead detectors come back to life.

The explanation for this phenomenon appears to be that, at such low temperatures, the electrons and holes normally present in silicon detectors, which form a constant so-called “leakage” current, are themselves trapped by the radiation-induced defects. Moreover, the rate at which these electrons and holes become untrapped is greatly reduced because of the reduced thermal energy. Consequently, a large and stable fraction of the radiation-induced defects remains filled. This means that electrons and holes released by passing particles cannot be trapped and the signal is not lost.
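
The temperature dependence of the de-trapping rate can be illustrated with a simple Arrhenius estimate. The sketch below is only schematic: the 0.4 eV activation energy and the attempt frequency are assumed values chosen for illustration, not measured properties of the defects discussed here.

    import math

    K_B = 8.617e-5   # Boltzmann constant in eV/K
    E_ACT = 0.4      # assumed trap activation energy in eV (illustrative only)
    NU = 1e12        # assumed attempt frequency in Hz (illustrative only)

    def detrap_rate(temp_k):
        """Schematic Arrhenius estimate of how fast carriers escape a trap."""
        return NU * math.exp(-E_ACT / (K_B * temp_k))

    for temp in (293, 100, 77):
        print(f"{temp:3d} K: de-trapping rate ~ {detrap_rate(temp):.1e} Hz")
    # The rate collapses by many orders of magnitude below 100 K, so radiation-induced
    # traps stay filled and no longer capture the carriers released by passing particles.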

However, the story is not as simple as this. Closer investigation reveals that, for extreme radiation doses, exceeding those expected after 10 years of LHC operation, the signal is only partially recovered. Understanding this behaviour requires further study.

Another advantage of low-temperature operation is the possibility of using simplified detector designs and low purity material, thus opening the way to a substantial reduction in cost.

The Lazarus effect


This is not the first time that low temperatures have been used to improve the performance of particle detectors. In fact, the Lazarus effect is very similar to a technique that has been known since 1981 to physicists using charge-coupled device (CCD) detectors.

CCDs were first tried in a test beam at CERN by the NA32 collaboration, which went on to use them successfully in its experiment. Similar detectors have been used for Stanford’s SLD vertex detector and its upgrade. These detectors were run at a relatively tropical 220 K to reduce the leakage current and to freeze out radiation-induced damage in the detector.

CERN’s RD39 collaboration plans to study the Lazarus effect in depth. The first step came in August 1998 when two silicon detectors with full read-out from the Delphi experiment at LEP were put into a test beam along with prototype detectors for the forthcoming COMPASS experiment. Members of the LHCb collaboration were also involved, because close-to-the-beam tracking is a vital feature of the group’s planned experiment.

One of the Delphi detectors had previously been irradiated with a comparable particle dose to that expected after a few years of LHC operation; the other was undamaged. The test beam demonstrated not only that the signal recovers at low temperatures, but also that the positional accuracy of silicon was not impaired, the two silicon detectors producing compatible results. Delphi’s standard read-out electronics also worked well at low temperatures.

A second test in November placed a healthy 3 x 3 pad array of silicon detectors in the beam line of the NA50 experiment, fully intercepting the highest intensity lead-ion beam at CERN. Permanently operated at 77 K, the 1.5 square millimetre pad centred on the beam performed well, even after being traversed by some ten thousand million lead ions. The pulses from each incident ion and the total ionization current from the detector were continuously monitored during and between beam pulses, with the results clearly showing that the detector survived this extreme environment without any noticeable change in performance.

These test beam results suggest that conventional silicon detectors operated at liquid nitrogen temperature could still remain the detectors of choice for the next generation of particle physics experiments. RD39 plans a series of proof-of-principle experiments to confirm its early findings and to demonstrate the feasibility of a low-cost cryogenic silicon tracker. The collaboration will also optimize a new high-intensity radiation-hard beam monitor based on the Lazarus effect. The results will be closely followed by the collaborations preparing for physics at the LHC.

Perfect rose


Just as steel embedded in concrete has a dramatic effect on the properties of the finished product, so impurities in silicon can change the behaviour of silicon detectors. In 1996 the Rose collaboration set out to test the hypothesis that carbon and oxygen impurities, in particular, would improve the radiation tolerance of silicon, allowing it to be used for detectors in the LHC.

Oxygen and carbon were chosen because, according to models of radiation damage, oxygen should “capture” vacancies in the silicon lattice and carbon should capture silicon interstitials. Also, oxygen and carbon are always present to some extent in detector-grade silicon, making them prime candidates for further studies. This capturing effect would then render the lattice defects inactive, just as extreme cold does in the Lazarus effect. Unlike the Lazarus effect, however, silicon detectors made radiation-hard through defect engineering could be operated with only moderate cooling.

More than a dozen samples of silicon doped to various degrees with carbon and oxygen were studied while undergoing irradiation corresponding to the full lifetime of a detector in the LHC. While not entirely agreeing with model predictions, the results were encouraging. Carbon in the silicon lattice diminishes performance, while oxygen improves radiation hardness to a greater degree than foreseen.


The strength of the effect with oxygen was not the only unexpected outcome. As part of the Rose programme, samples were irradiated with different kinds of particle: neutrons, protons and pions. At the LHC, the radiation that traverses the detectors closest to the beam pipe is expected to consist mainly of pions, but, further away from the initial collision, neutrons will become more important.

The results show a performance improvement of a factor of three for strongly interacting charged particles in oxygenated detectors, compared with those of standard silicon detectors. Curiously, no improvement is seen for detectors irradiated by neutrons. However, because in most situations charged particles make up a substantial fraction of the radiation environment, the improvement in performance for charged particles is welcome. Moreover, a simple method has been found to diffuse oxygen into any silicon wafer prior to, or during, processing, and this is being transferred to detector manufacturers. Experiments such as DESY’s HERA-B, which replaces its silicon detectors each year, should be among the first to benefit.

Detectors are not the only place where solutions to such problems are needed. In the electronics industry, limitations with ion-implantation techniques are holding up progress in transistor miniaturization. In response to this problem, the EU supports the European Network in Defect Engineering of Advanced Semiconductor Devices (ENDEASD), which is linked to the Rose collaboration. ENDEASD combines a range of academic institutions and semiconductor manufacturers, bringing much extra expertise and knowledge to the complicated subject of radiation effects.

Given the success of both the Lazarus and Rose collaborations, the obvious question to ask is whether an even higher performance could be achieved by combining the two techniques. This is high on the agenda for both collaborations and will be investigated this year. LHC experiments are already preparing to put out invitations to tender for their tracking detectors, so for Lazarus and Rose the timescale to produce working detectors is tight. However, with the progress made so far, even at temperatures below 100 K, both collaborations are confident that the future for silicon looks rosy.

ISAC produces first ion beam at TRIUMF

On 30 November 1998 the new ISAC (Isotope Separation and Acceleration) facility at the Canadian TRIUMF laboratory in Vancouver produced its first ion beam with short-lived exotic isotopes.

Low-energy beams of potassium (atomic mass numbers 37 and 38) were transported from a proton-bombarded target through a high-resolution mass-separator system to the first experimental station. This milestone was achieved more than a month ahead of initial estimates. The first experiment will investigate weak interaction symmetries in the decay of optically trapped potassium isotopes.

ISAC uses ISOL (on-line isotope separation) to produce short-lived exotic nuclei through a reaction between the primary proton beam and a thick target. Additional experiments measure precise lifetimes of exotic nuclei.

While this first beam was created with only 0.5 µA of proton current on the production target, over the next few years the current on target will gradually be increased to 100 µA.

A nuclear magnetic resonance station using polarized lithium 8 for condensed matter studies and a low temperature nuclear-orientation station are scheduled to begin operation this autumn. Completion of this first phase of the facility (ISAC-I) is expected in late 2000 when accelerated beams up to 1.5 MeV/nucleon become available for nuclear astrophysics measurements.

Making tracks for the LHC

Whatever means are used to record the result of high-energy collisions, it is still important for physicists to have “pictures” of the tracks left by the emerging particles. In the middle of the century, these tracks were captured photographically in mechanically operated cloud and bubble chambers, whose annals provided a dramatic photo album of physics progress. These chambers have been superseded by sophisticated electronic detectors in which the particles produce signals in successive layers of sensors surrounding the particle collision site. A computer-driven pattern recognition system disentangles a cloud of discrete points, joining together related signals to reveal the tracks and provide an electronic snapshot of the collision. For CERN’s LHC collider, it is the inner tracking systems of the big ATLAS and CMS detectors that will illustrate the research albums for the beginning of the 21st century.

The Technical Design Reports for the ATLAS pixel detector and CMS Tracker have now been formally approved and become subject to stringent scheduling and monitoring to ensure commissioning in the year 2005.

ATLAS pixels


The heart of the mighty ATLAS detector being constructed for CERN’s LHC proton collider will record the sprays of particles emerging directly from particle collisions at energies never before explored. This core “vertex detector”, a direct descendant of the old bubble chamber, will track the fine central lacework of these complicated processes. These innermost signals will be vital for the subsequent layers of detector to follow the particles emerging from each LHC interaction.

The technique being used is semiconductor pixels, a technology pioneered at fixed-target experiments and previously used in vertex detectors at the Delphi and SLD experiments respectively at CERN’s LEP and Stanford’s SLC electron-positron colliders. Pixels - tiny diodes implanted on semiconductor wafers - pick up the ionization produced by charged particles and offer high spatial resolution in two directions, fixing where particles have passed to within ten microns.
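
As a rough cross-check of the ten-micron figure, the intrinsic resolution of a binary pixel of pitch p is p/√12; the 50 micron pitch used below is an assumed, illustrative value rather than the ATLAS design number.

    import math

    pitch_um = 50.0   # assumed pixel pitch in microns (illustrative, not the ATLAS design value)
    binary_resolution_um = pitch_um / math.sqrt(12)
    print(f"Binary resolution for a {pitch_um:.0f} micron pitch: {binary_resolution_um:.1f} microns")
    # ~14 microns; charge sharing between neighbouring pixels can improve on this,
    # consistent with the roughly ten-micron precision quoted in the text.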

ATLAS pixels will be arranged in three layers in a central barrel of 80 cm length immediately around the collision point, supplemented by five discs either side in the beam direction. The barrel sensors will overlap to ensure that there are no cracks through which particles can escape undetected. The key innermost barrel layer (radius 4 cm) will use sensors 200 microns thick, while the remainder will be 250 microns thick. Together with their readout electronics, these will form 1508 barrel modules and 720 disk modules. Although mounted on different structures, the barrel and disk sensors are the same.


Particle bombardment

Using pixels for ATLAS brings new challenges. Firstly, the semiconductors will have to be able to withstand the intense particle bombardment close to the interaction point, with several collisions every 25 nanoseconds each producing hundreds of secondary particles. This could easily spoil the semiconductor properties of the detectors.

Secondly, pixel readout has always needed sophisticated electronics, but in this case the problem is exacerbated by the torrent of data emerging, equivalent to about 100 Megabytes per second.

Thirdly, the substrate of each semiconductor element has to be reliably fixed (bump-bonded) to its support and connections, and finally the array of pixels has to be mechanically supported without interfering too much with the emerging particles, and with adequate cooling. All this for about a hundred million pixels.
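
Putting together the module counts, the pixel total and the readout rate quoted above gives a rough sense of scale; the pixels-per-module number is simply the quoted totals divided through, not a design specification.

    barrel_modules = 1508
    disk_modules = 720
    total_pixels = 100e6          # "about a hundred million pixels"
    data_rate_mb_s = 100.0        # readout torrent quoted above, in Mbyte per second

    total_modules = barrel_modules + disk_modules
    print(f"Total modules: {total_modules}")
    print(f"Implied pixels per module: {total_pixels / total_modules:.0f}")   # about 45 000
    print(f"Data volume per hour of running: {data_rate_mb_s * 3600 / 1024:.0f} Gbyte")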

The baseline sensor choice is n+ implants on n-type substrate, with individual sensors isolated either via high-dose p-implant surrounds, or by spraying the whole n-surface with medium dose p-implant which is overcompensated by the high dose pixel implants of the sensors themselves. Both techniques show encouraging results and both are being pursued prior to making a final design decision. 4 inch wafers containing two sensor tiles have been fabricated by industry in Germany and in Japan, using both types of isolation in each wafer.

Readout electronics

Readout electronics is a real challenge. With highly irradiated sensors and thin layers, pixel front-end electronics will have to cope with very small signals. In addition, readout of the few thousand pixels hit during a bunch crossing requires a sophisticated architecture with a great deal of task parallelism and data compression. All the electronics has to be built using radiation-hard components; however, initial design and prototyping will be accelerated by using rad-soft electronics. Data are transmitted via optical fibres.

For the integration of sensors and electronics, bump bonding using indium and solder are both being evaluated using test beams at CERN, with a microstrip telescope fixing tracks to 5 microns.

For the mechanical support, a key element is the thermal management tile (TMT), each one of which has to support 13 modules and drain off about 15 kW of heat produced by 2 m² of integrated electronics. This is done via a cooling tube, supported by an omega-shaped backbone framework. Sensors have to be maintained at -6 °C. Several TMT solutions are being investigated.

ATLAS has taken the plunge to go for semiconductor pixels, and initial tests are progressing well. Several design decisions still remain, but the heart of the ATLAS detector is on schedule to begin pumping for the first LHC collisions in 2005.

Information from Leonardo Rossi, Genoa.

ABSolutely fabulous!


The aim of the ABS (Automated Beam Steering) project is to give accelerator operators a set of software tools to make their lives easier, while improving the quality of the beams that they deliver. The scheme was initiated by Bruno Autin in CERN’s Proton Synchrotron (PS) Division. Similar projects are in place in accelerator labs around the world, and on 14-16 December CERN hosted a workshop for 70 accelerator physicists. On the agenda were software sharing to make best use of limited resources, the adoption of a common vocabulary and a look ahead to control systems for future accelerators.

When accelerator physics was young, particle accelerators were set up by hand. Operators would go to the magnets with a voltmeter and a screwdriver and adjust each one. Now, computers display information and allow the magnet-tweaking to be done from a central control room, but the procedure is essentially still manual.

With modern computer technology it was only a matter of time before someone asked if operators are necessary. Could accelerators be controlled by computer? The answer, at least for now, is no. Skilled operators are still needed to ensure the smooth running of these sensitive machines, and so the ABS project was born. After two years of preparation, prototypes written by the PS controls group were tested last year and the new tools will be available in 1999.

ABS is not the first attempt at the partial automation of accelerator controls. The project’s origin can be traced to the antiproton source control software designed by CERN’s 1984 Nobel laureate Simon van der Meer. Later, at CERN’s LEP electron-positron collider, an ABS-type program was written to perform closed-orbit corrections. ABS is, however, the first standardized set of software tools for the purpose. Previous attempts in the PS have not been successful because, without a standard interface to the software, operators simply found it easier to continue as they were, rather than learn a new system for each kind of accelerator adjustment. The main feature of the ABS tools is a generic interface, so whatever adjustment is needed, the program to do it looks the same to the operator.

The ABS backbone is an Oracle database in which every detail of the accelerator can be described. One problem faced by the venerable PS when the project began is that, without a database, vital information was being lost as people retired. Oracle expert Josi Schinzel joined PS in early 1997 to address this problem and has designed a flexible database package that can be adapted easily to new kinds of data as required. Currently it includes all of the information needed to control the accelerator in day-to-day running. In the future it will be extended to include more general documentation of the type frequently carried around only inside experts’ heads.

The ABS procedure starts by setting up the accelerator to nominal settings. But, at PS, those settings were defined in the 1950s and were badly dated. So, when the ABS project began, even this step had to be redefined. Over the years, parts had moved and new ones had been added, so the first step was to dust off the old nominal model and update it, realigning components that had not been touched since they were installed in the 1950s. However, even the best model is never exact. There are always uncertainties in magnet currents, ground movements are not accounted for, and there are inevitably magnetic inhomogeneities. To optimize, the machine parameters must be tweaked until the beam is as required.

ABS uses the Mathematica package along with minimization software developed at CERN. Dedicated corrector magnets - dipoles or quadrupoles - are then tweaked by the program to produce an optimal beam. In the PS, 40 corrector magnets are under ABS control for closed-orbit corrections alone. Not all are needed to make a correction. For an operator to go through all of the permutations to find the optimal solution would take weeks. ABS can do the job in minutes, thus reducing the arbitrariness of the process.
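
The kind of minimization ABS performs for closed-orbit correction can be sketched in a few lines: a response matrix relates corrector kicks to orbit displacements at the beam-position monitors, and a least-squares fit then picks corrector settings that cancel the measured distortion. This is a generic illustration of the method, with an invented response matrix, not the actual ABS or Mathematica implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    n_monitors, n_correctors = 20, 8   # illustrative machine, not the PS layout
    response = rng.normal(size=(n_monitors, n_correctors))   # orbit shift per unit corrector kick

    # Pretend the distortion is caused by unknown corrector-like errors plus measurement noise.
    true_errors = rng.normal(scale=0.5, size=n_correctors)
    measured_orbit = response @ true_errors + rng.normal(scale=0.1, size=n_monitors)

    # Least-squares corrector kicks that best cancel the measured distortion.
    kicks, *_ = np.linalg.lstsq(response, -measured_orbit, rcond=None)
    residual = measured_orbit + response @ kicks

    print(f"rms orbit before correction: {measured_orbit.std():.2f} (arbitrary units)")
    print(f"rms orbit after correction:  {residual.std():.2f} (arbitrary units)")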

December’s workshop was the first of its kind but is unlikely to be the last. Already, accelerator physicists from SLAC are planning a follow-up in two years’ time. About half of the workshop’s participants were from CERN; the rest came from accelerator labs around the world, with all of the main particle physics sites being represented.

Summing up the meeting, CERN’s Phil Bryant stressed the chance for industry to play a role, citing the ABS system in use at Italy’s ELETTRA synchrotron light source as an ideal candidate for a commercial package. This would be a godsend for small labs where resources to develop the software themselves are not available.

ABS is currently applied to the PS, Booster and several associated linacs and transfer lines. At CERN’s December workshop, accelerator physicists began sharing their expertise in a bid to extend ABS and similar systems to other accelerator complexes. The net result will be better beams for researchers and a quieter life for operators.

The sun sets on SATURNE


The French SATURNE National Laboratory formally ceased to exist on 31 December 1997. The authorities had actually taken the decision to close it down a few weeks earlier when a large area of the roof above the experimental areas collapsed under a heavy fall of snow. This was the sad end of a forty-year-old laboratory which had lived through two clearly distinguishable eras.

The first SATURNE synchrotron built by the CEA (French Atomic Energy Authority) in 1956-58 was a weak focusing machine, mainly supplying 3 GeV protons. Particle physicists used bubble chambers (hydrogen or propane) among other detectors. From the mid-60s higher-energy beams elsewhere attracted away an increasing number of SATURNE users.

At the same time, a core community was becoming more aware of the importance of probes close to the GeV energy range, highly suitable for nucleons. The construction of a large magnetic energy-loss spectrometer, SPES1, showed that while nuclear levels could be measured at 1 GeV, the obsolescent synchrotron had to be replaced.

The outcome was the construction in 1978 of a new strong-focusing, separated-function (focusing and bending) synchrotron for nuclear physics, SATURNE-2. Its maximum energy, limited by the size of its buildings, remained the same (2.95 GeV for protons). This machine was the result of careful consideration followed by a project undertaken jointly by CEA-IRF and CNRS-IN2P3, establishing the new SATURNE National Laboratory.

Over the years, the Laboratory acquired several particle sources making it possible to supply all light-nucleus beams (up to helium-4) at high intensities (up to 10¹¹ per second extracted slowly and without RF). Heavy-ion beams accelerated up to 1.15 GeV per nucleon also became available with the DIONE source and the MIMAS preinjector. Finally, following the solution of the depolarization problem by slow or fast negotiation of depolarizing resonances, SATURNE became capable of supplying the world’s most intense GeV-range proton and deuteron beams. The gradual improvement in the intensities and emittances of the HYPERION polarized particle source and the MIMAS preinjector was essential for this.

A national and then an international community built a large array of detectors to exploit these beams. The first were magnetic spectrometers (SPES 1, SPES 2, SPES 3 and SPES 4) with complementary properties (high resolution to separate nuclear levels, large momentum acceptance for the study of large objects and wide excitation energy ranges, operation close to the beam direction etc). An unusual Time Projection Chamber (DIOGENE) was built to study central collisions between heavy ions, resulting in high multiplicities. A station devoted to nucleon-nucleon interactions (elastic scattering) made it possible to polarize the incident beam and the proton target along different axes independently.

More specific devices were also installed: a full solid angle detector (ISIS) for multi-fragmentation studies, cylindrical wire chambers (ARCOLE) for elastic n-p examination, high-acceptance magnetic detectors (DISTO, SPES 4π), recoil polarimeters, photon detection for eta physics (PINOT), proton radiography etc.

Hadron studies

Most of the experiments at SATURNE were devoted to hadrons - their interactions and their behaviour in nuclei. The beams available also made possible a large number of nuclear structure studies, the simulation of the effects of cosmic radiation in the laboratory and a series of spallation neutron measurements with an eye to transmuting nuclear waste, to mention only a few.

The Laboratory made a particularly important contribution to the understanding of the nuclear force by polarization measurements in proton-proton and neutron-proton collisions in an energy range hitherto largely unexplored. It also contributed to research into narrow dibaryonic states consisting of six quarks.

As the wavelength of the proton in the GeV range is nucleon-sized, it is possible to calculate directly the proton-nucleus interaction from the free nucleon-nucleon interaction without going through the intermediary of an effective force. This provides direct access to the properties of the target nucleus. It thus became possible to determine matter radii (in addition to the charge radii studied by electron scattering) and transition densities for many nuclei.

Moreover, the polarization of the deuteron beams made it possible to isolate specific (spin isoscalar) response functions of the nuclei, which it had never been possible to measure before.

The study of systems with a small number of nucleons provided a better knowledge of the reaction mechanisms for a wide variety of processes and energy and momentum transfers. It particularly concerned the role of certain baryonic resonances in meson production and kinematic limits to meson exchange models. A set of mainly pseudoscalar meson production experiments in nucleon-nucleon collisions provided the basic information needed to interpret these processes in proton-nucleus and nucleus-nucleus collisions. The contribution of the polarization was found to be highly important in determining the dominant mechanisms. This applies especially to exclusive hyperon production, one of the laboratory’s final major programmes.

A remarkable effect, the highly intense production of eta mesons (over 10⁸ a day) at threshold in proton-deuteron collisions provided a tagged eta source for precise measurements of the eta mass and its decay into muon and photon pairs. This high production can also be seen in deuteron-deuteron reactions at threshold and may be interpreted as the first evidence for eta-mesic nuclei.

A particularly important collective phenomenon, the pion mode which characterizes propagation in nuclear media, was demonstrated by charge exchange reactions using light and heavy ions.

The heavy ion beams also resulted in several programmes for the study of macroscopic properties of nuclear systems (multi-fragmentation, stopping power, compressibility).

Two applications-oriented programmes produced important results. Proton and deuteron beams made it possible to examine neutron production by spallation. These data are essential for the validation of the computation programs used in the design of new methods of transmuting radioactive waste. SATURNE’s energy range and beams also made it ideal for simulating cosmic rays and the damage they cause to instruments exposed to them. With a week of irradiation simulating a million years of exposure, a series of experiments made it possible to trace stellar history.

SATURNE’s assets were: the quality of its beams; easy and fast energy changes (half an hour or less); its variety of ions and intensity; stable polarization; and multiple ejection (two experiments supplied with different energies and intensities). The complementary detectors were another trump card. Last, but by no means least, was the competence and willingness of laboratory staff in finding the best solutions for physics.

A book describing many of these research programmes is being published by World Scientific.

SLAC B-Factory comes up to speed

Shortly after the dedication of the PEP-II B-Factory at the Stanford Linear Accelerator Centre (SLAC) on 26 October, a team led by John Seeman resumed the task of commissioning the new electron-positron collider.

The 9.0 GeV electron ring and the 3.1 GeV positron ring both turned on quickly, successfully storing beams before the end of the month. Collisions between the two beams, first achieved on 23 July, occurred again on 10 November with 11 bunches in each ring. This time significant luminosity was observed, at about 3 × 10³⁰ per cm² per second, but still a factor of 1000 below the design goal.

Further commissioning included attempts to store much higher currents in each beam and focus them better at the interaction point, while improving their lifetimes. With the extensive progress already achieved on the electron ring, attention shifted to the positron beam, which had a lifetime of about 30 minutes in early November and slowly improved during the run. Radiation scrubbing of the vacuum in this ring permitted the stored current to reach 415 milliamps in a total of 291 bunches by run’s end. The current per bunch now exceeds that implied by the design goal of 2.1 amperes in 1658 bunches.

By run’s end a luminosity of 3 × 10³¹ per cm² per second was measured with 261 bunches circulating in each beam. But calculations based on the beam sizes and currents indicate that the true value could be 2 to 4 times higher. Interaction region group leaders Stan Ecklund and Michael Sullivan are studying the reasons for the apparent discrepancy. Several ring parameters (e.g. tunes and emittances) were varied to maximize the luminosity, giving beam-beam tune-shift limits of about 0.01 to 0.02.
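
The cross-check mentioned here rests on the standard collider relation L = N₁N₂n_b f / (4π σ_x σ_y). The bunch populations, beam sizes and revolution frequency in the sketch below are rough illustrative values, not the measured PEP-II parameters.

    import math

    f_rev = 136e3        # assumed revolution frequency in Hz (illustrative)
    n_bunches = 261      # bunches per beam, as quoted above
    n1 = n2 = 3e10       # assumed particles per bunch in each beam (illustrative)
    sigma_x = 150e-4     # assumed horizontal beam size at the collision point, in cm
    sigma_y = 6e-4       # assumed vertical beam size at the collision point, in cm

    lum = n_bunches * f_rev * n1 * n2 / (4 * math.pi * sigma_x * sigma_y)
    print(f"Estimated luminosity: {lum:.1e} per cm^2 per second")   # a few times 1e31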

Physicists led by Tom Mattison of SLAC and Witold Kozanecki of Saclay monitored the backgrounds in both rings during this run. These backgrounds - believed to be largely due to beam-gas interactions - are 5 to 10 times higher than anticipated in the PEP-II conceptual design report. Although the BaBar detector will be able to handle such backgrounds, they are still a cause for concern. Work continues to understand these backgrounds and reduce them. Additional beam collimators are being installed before the next commissioning run.

The current schedule calls for a final commissioning run from mid-January to mid-February followed by installation of the BaBar detector at the interaction point. If all goes well, physicists in this collaboration can expect to begin taking data on 1 May. “We are very pleased with this progress,” said Seeman, “but we must keep a firm eye on our goals.”

RHIC Mock Data Challenge successfully completed at Brookhaven

With Brookhaven’s RHIC relativistic heavy-ion collider scheduled to be commissioned this year, preparations for its experimental programme gather momentum. The RHIC Mock Data Challenge 1 (MDC-1) began on 8 September and finished successfully on 19 October.

With installed capacity amounting to approximately 25% of what will be available at the start of the first RHIC physics run, this six-week exercise involved the RHIC Computing Facility, the US Department of Energy’s (DOE’s) High Energy and Nuclear Physics Computational Grand Challenge Initiative, and the four RHIC experimental collaborations: BRAHMS, PHENIX, PHOBOS and STAR.

The main goals of the exercise were to demonstrate the performance of event data recording, event reconstruction and data mining (selecting rich subsets from large volumes of data), each for multiple experiments simultaneously.

During the exercise, aggregate event data recording rates into the High Performance Storage System (HPSS) as high as 18 Mbyte/sec, sustained over an 8-hour period, were measured for the four experiments. (HPSS is hierarchical storage management software developed under a Cooperative Research And Development Agreement involving several DOE labs, and is now commercialized by IBM.)

Event reconstruction by the four experiments was carried out on a computing farm of up to 104 Pentium II processors, representing some 1400 SPECint95 of CPU capacity, with CPU utilization efficiencies averaging 80% across the experiments over a 16-hour period.
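
Two lines of arithmetic, using only the figures quoted above, put these rates in context:

    recording_rate_mb_s = 18.0     # aggregate HPSS recording rate quoted above
    hours = 8
    data_recorded_gb = recording_rate_mb_s * 3600 * hours / 1024

    cpu_capacity_specint95 = 1400  # capacity of the 104-processor farm
    utilisation = 0.80             # average CPU utilization efficiency

    print(f"Data written over {hours} h: ~{data_recorded_gb:.0f} GB")   # roughly half a terabyte
    print(f"CPU capacity effectively used: {cpu_capacity_specint95 * utilisation:.0f} SPECint95")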

During simultaneous event data mining by the four experiments, a variety of data access measurements were made. These included evaluation of the performance of a Sun server compared with network-connected Pentium farm machines, the use of Grand Challenge Project software to coordinate queries, and the use of an Oak Ridge-developed system to batch files for access from HPSS tapes.

The Grand Challenge Project and STAR were also able to build and exercise a data-summary-tape-level Objectivity event data store. Secondary objectives, including running multiple simultaneous functions for a subset of the experiments and running for extended periods for individual experiments (seven days for PHENIX and STAR), were also achieved.

From the perspective of the RHIC Computing Facility, the exercise was valuable both in verifying and detailing the expected behaviour and limitations of the current facility and in revealing some unexpected problems.

As anticipated, the Managed Data Server (MDS), and in particular the HPSS, was found to be the most complex and critical component. The HPSS showed itself capable of high performance and adequate to the goals of the exercise. However, it was also clear that the time between its initial installation at the RHIC Computing Facility and its large-scale use in the exercise was not adequate to achieve the desired levels of reliability. The limited storage resources, in particular tape drives, available for this exercise also contributed to the stress on HPSS. Except for an initial delivery delay, the performance of the Intel-based Linux processor farms during the exercise was gratifyingly close to what was anticipated. An unexpected issue was the performance of the RHIC Wide Area Network. While the need to tune RHIC Computing Facility network parameters and collaborating remote machines was anticipated, end-to-end problems involving the national ESnet and/or commercial links were more serious and less tractable than anticipated.

From the perspective of the RHIC Computing Facility, the ability of all six parties to participate effectively in a unified exercise was the most important outcome. If this synergy continues and convergent iteration can be achieved, effective computing for the RHIC experimental programme will be assured.

A cold start for LHC


One of the first major LHC procurement contracts was for the superconducting materials - niobium-titanium bars and niobium sheet worth $45 million - placed with Wah-Chang in the US and using US money. This was followed by contracts for the actual cable manufacture. A total of 13 680 kilometres of cable will be needed, longer than the diameter of the Earth. As well as being for the dipoles, this material will also be used for the 520 focusing quadrupoles and several other magnets.

Involved in this manufacture are Alsthom (France), Vacuumschmelze (Germany), Europa Metalli (Italy), Furukawa (Japan) and IGC (USA), reflecting the worldwide involvement in the LHC.

Testing and quality assurance of the superconducting strands, and subsequently the full cable, is absolutely vital. A new test facility at CERN, equipped with its own helium refrigerator, will come into operation soon. This facility will be used for quality control of the strands during production, whereas most of the final cable will be tested in a similar facility at Brookhaven as part of the CERN­US collaboration.

For the magnets themselves, tests on 1 metre dipole models underlined the wisdom of reverting to the original 6-block coil design after a flirtation with a 5-block configuration.


During 1998 two more 10 metre collared coils were delivered by industry and assembled at CERN into complete magnets, and the first 15 metre prototype dipole, built by industry under an agreement between CERN and the Italian INFN, was put through its paces. While the nominal field for LHC magnets to handle 7 TeV beams is 8.34 T, 9 T is seen as the ultimate goal. Whereas the former figure is usually a cinch, the latter is not, underlining the need for careful quality control at all stages in the manufacture and assembly, and the importance of the collaring procedure to anchor all components securely under immense magnetic forces and when cooled to 2 K.
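
The 8.34 T figure can be related to the beam energy through the usual magnetic-rigidity rule of thumb p[GeV/c] ≈ 0.3 B[T] ρ[m]; the bending radius below is simply inferred from that relation as an illustration, not quoted from the machine design.

    momentum_gev = 7000.0        # nominal LHC beam momentum in GeV/c
    nominal_field_t = 8.34       # nominal dipole field quoted above, in tesla

    # p [GeV/c] = 0.2998 * B [T] * rho [m]  =>  rho = p / (0.2998 * B)
    bending_radius_m = momentum_gev / (0.2998 * nominal_field_t)
    print(f"Implied bending radius: {bending_radius_m:.0f} m")                         # about 2800 m
    print(f"Momentum reachable at 9 T: {0.2998 * 9.0 * bending_radius_m / 1000:.2f} TeV/c")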

Orders for six prototype collared coils with the series manufacture design have been issued, and the first has even arrived. These will be assembled into cryomagnets at CERN, as industry is not yet equipped with the necessary 15 metre hydraulic presses for the welding of the cold mass.

Prototype high-temperature superconducting current leads have been ordered and tested. Other cryogenic-related equipment reflects further the world involvement in the LHC. Power supplies for quench heaters are being developed by a collaboration working through the Indian Centre of Advanced Technology, Indore, while equipment to handle the extraction of the stored superconducting energy in the event of a quench is being designed and constructed by Russian industry and by IHEP, Serpukhov.

Also being supplied from Russia, in this case the Budker Institute, Novosibirsk, are 360 warm dipoles and 180 quadrupoles for the two 2.5 km transfer lines feeding protons from the SPS to the LHC. The first magnets have arrived.

With the technical specification of the dipole cold masses complete, almost the last act of 1998 at CERN was to issue a call for tenders for the first phase of LHC dipole procurement.

EPIC developments in processing power


In the long run, technology always benefits from new scientific insights. Modern semiconductor technology, for example, is one of the ultimate applications of quantum mechanics.

But there are other ways in which science can drive technology. For science to advance, researchers look at the world around us in new ways and under special conditions. The further science advances, the more unusual these conditions become. Irrespective of the ultimate scientific breakthrough, these increasingly stringent demands frequently catalyse new technological developments. Cryogenics, high vacuum and data acquisition and processing are examples of areas where the special requirements of particle physics have fostered industrial progress.

At CERN’s LHC collider now under construction, the very high event-rates call for a major push in data acquisition and processing power for the experiments. Each LHC experiment will generate a torrent of information, about 100 Mbyte per second, equivalent to a small library. With raw events happening every 25 nanoseconds, the data volumes will grow at an alarming rate, and a major upgrade in data handling capability is called for in order that precious physics information is not to be lost.
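
Spelled out numerically (the effective running time per year is an assumed round number for illustration, not an LHC figure):

    bunch_spacing_s = 25e-9
    crossing_rate_hz = 1 / bunch_spacing_s          # 4e7, i.e. 40 million crossings per second

    data_rate_mb_s = 100.0                          # per experiment, as quoted above
    seconds_per_year = 1e7                          # assumed effective running time per year
    yearly_volume_tb = data_rate_mb_s * seconds_per_year / 1e6

    print(f"Bunch-crossing rate: {crossing_rate_hz:.0e} per second")
    print(f"Recorded volume per experiment per year: ~{yearly_volume_tb:.0f} TB")   # ~1 petabyte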

In addition, to minimize the requirement for expensive computer power downstream, as much processing as possible has to be done by hardware strategically distributed upstream.

Moore’s Law

At the same time, the highly competitive computer industry is continually at work developing its next generation of products. The industry accepts a law first put forward by Intel founder Gordon Moore, which states that the number of transistors that can be harnessed for a particular task doubles every 18 months. With processing speed thus also increasing at an annual rate of 60%, new approaches are continually sought to take maximum advantage of this incredible rate of advance.
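
The 60% annual figure follows directly from the 18-month doubling time, as this one-line check shows:

    doubling_time_months = 18
    annual_growth = 2 ** (12 / doubling_time_months) - 1
    print(f"Annual growth for an {doubling_time_months}-month doubling time: {annual_growth:.0%}")
    # ~59%, i.e. roughly 60% per year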

It is extremely expensive for a computer manufacturer to embark on major development alone. Computer supplier Hewlett Packard and semiconductor giant Intel announced a joint research and development project in 1994 with the objective of catering for systems to appear on the market at the end of the 90s. At the time, it was already clear that 32-bit technology would soon have yielded all it could, and future developments would have to turn to a more flexible approach.

The outcome - the new Explicitly Parallel Instruction Computing (EPIC) technology - is a milestone in processor development. EPIC is the foundation for a new generation of 64-bit instruction set architecture driving the flow of operations through the microprocessor.

The main advance is the chip’s capacity for parallel processing, handling different operations at the same time rather than the traditional sequential approach. A good example of sequential processing is a traditional airline check-in, where although there are normally many parallel counters, each customer can only use one. At each counter a single clerk handles a long sequence of operations - ticket, seat allocation, baggage, boarding pass etc.

Throughput could be increased with more clerks behind each counter, each clerk being responsible for a specific operation, but this is not true parallelism. Even in the traditional check-in approach, sequential operations eventually become parallel - baggage is accepted item by item before being assembled into parallel loads for different aircraft.

However, in a fully parallel processing environment, all check-in tasks would be handled at separate counters coordinated by a central processor. Customers would be tagged as they entered the airport building and the whole check-in operation would become scheduled to occur in parallel.

The new EPIC design advances on current X86 Intel architecture by allowing the software to tell the processor when parallel operations are needed. This reduces the number of branches and optimizes the links between processing and memory. Under the codename Merced, the first 64-bit processor is scheduled for production next year.
