
LHC computing: Milestones (archive)

The Grid gets EU funds

Plans for the next generation of network-based information-handling systems took a major step forward when the European Union’s Fifth Framework Information Society Technologies programme concluded negotiations to fund the Data Grid research and development project. The project was submitted to the EU by a consortium of 21 bodies involved in a variety of sciences, from high-energy physics to Earth observation and biology, as well as computer sciences and industry. CERN is the leading and coordinating partner in the project.

Starting from this year, the Data Grid project will receive in excess of €9.8 million for three years to develop middleware (software) to deploy applications on widely distributed computing systems. In addition to receiving EU support, the enterprise is being substantially underwritten by funding agencies from a number of CERN’s member states. Due to the large volume of data that it will produce, CERN’s LHC collider will be an important component of the Data Grid.

As far as CERN is concerned, this programme of work will integrate well into the computing testbed activity that is already planned for the LHC. Indeed, the model for the distributed computing architecture that Data Grid will implement is largely based on the results of the MONARC (Models of Networked Analysis at Regional Centres for LHC experiments) project.

The work that the project will involve has been divided into numbered subsections, or “work packages” (WP). CERN’s main contribution will be to three of these work packages: WP 2, dedicated to data management and data replication; WP 4, which will look at computing-fabric management; and WP 8, which will deal with high-energy physics applications. Most of the resources for WP 8 will come from the four major LHC experimental collaborations: ATLAS, CMS, ALICE and LHCb.

Other work will cover areas such as workload management (coordinated by the INFN in Italy), monitoring and mass storage (coordinated in the UK by the PPARC funding authority and the UK Rutherford Appleton Laboratory) and testbed and networking (coordinated in France by IN2P3 and the CNRS).

March 2001 p5 (abridged).

 

The Gigabyte System Network

To mark the major international Telecom ’99 exhibition in Geneva, CERN staged a demonstration of the world’s fastest computer-networking standard, the Gigabyte System Network. This is a new networking standard developed by the High-Performance Networking Forum, which is a worldwide collaboration between industry and academia. Telecom ’99 delegates came to CERN to see the new standard in action.

GSN is the first networking standard capable of handling the enormous data rates expected from the LHC experiments. It has a capacity of 800 Mbyte/s (getting on for a full-length feature film every second), making it attractive beyond the realms of scientific research. Internet service providers, for example, expect to require these data rates to supply high-quality multimedia across the Internet within a few years. Today, however, most home network users have to be content with 5 kbyte/s, or about a single frame per second. Even CERN, one of Europe’s largest networking centres, currently has a total external capacity of only 22 Mbyte/s.
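The gulf between these rates can be put in perspective with a little arithmetic. The rates are those quoted above; the 700 Mbyte film size is an illustrative assumption, not a figure from the text:

```python
# Transfer time for one film at each of the data rates quoted in the text.
# The 700 Mbyte film size is an assumption for illustration only.
FILM_MBYTE = 700

rates_mbyte_per_s = {
    "GSN (800 Mbyte/s)": 800.0,
    "CERN external link (22 Mbyte/s)": 22.0,
    "home modem (5 kbyte/s)": 0.005,
}

for name, rate in rates_mbyte_per_s.items():
    seconds = FILM_MBYTE / rate
    print(f"{name}: {seconds:,.1f} s")
# GSN moves the film in under a second; the modem would take about 39 hours.
```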

November 1999 p10 (abridged).

 

Approval for Grid project for LHC computing

The first phase of the impressive Computing Grid project for CERN’s LHC was approved at a special meeting of CERN’s Council, its governing body, on 20 September.


October 2001 p32 (extract).

After LHC commissioning, the collider’s four giant detectors will be accumulating more than 10 million Gbytes of particle-collision data each year (equivalent to the contents of about 20 million CD-ROMs). To handle this will require a thousand times the computing power available at CERN today.
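The CD-ROM comparison above is easy to check, assuming a standard 650 Mbyte disc (the capacity is an assumption; the text's "about 20 million" rounds generously from the "more than 10 million Gbytes" figure):

```python
# Express the annual LHC data volume quoted above in CD-ROMs.
annual_gbyte = 10_000_000          # more than 10 million Gbyte per year
cd_mbyte = 650                     # standard CD-ROM capacity (assumption)

cds = annual_gbyte * 1000 / cd_mbyte
print(f"about {cds / 1e6:.0f} million CD-ROMs per year")
# -> about 15 million; with the "more than" in the text, of the same order
#    as the ~20 million quoted.
```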

Nearly 10 000 scientists, at hundreds of universities round the world, will group in virtual communities to analyse this LHC data. The strategy relies on the coordinated deployment of communications technologies at hundreds of institutes via an intricately interconnected worldwide grid of tens of thousands of computers and storage devices.

The LHC Computing Grid project will proceed in two phases. Phase 1, to be activated in 2002 and continuing in 2003 and 2004, will develop the prototype equipment and techniques necessary for the data-intensive scientific computing of the LHC era. In 2005, 2006 and 2007, Phase 2 of the project, which will build on the experience gained in the first phase, will construct the production version of the LHC Computing Grid.

Phase 1 will require an investment at CERN of SFr 30 million (some €20 million), which will come from contributions from CERN’s member states and major involvement of industrial sponsors. More than 50 positions for young professionals will be created. Significant investments are also being made by participants in the LHC programme, particularly in the US and Japan, as well as in Europe.

November 2001 p5 (abridged).

LHC computing: Switching on to the Grid (archive)


When CERN’s LHC collider begins operation, it will be the most powerful machine of its type in the world, providing research facilities for thousands of researchers from all over the globe.

The computing capacity required for analysing the data generated by these big LHC experiments will be several orders of magnitude greater than that used by current experiments at CERN, itself already substantial. Satisfying this vast data-processing appetite will require the integrated use of computing facilities installed at several research centres across Europe, the US and Asia.

During the last two years the Models of Networked Analysis at Regional Centres for LHC Experiments (MONARC) project, supported by a number of institutes participating in the LHC programme, has been developing and evaluating models for LHC computing. MONARC has also developed tools for simulating the behaviour of such models when implemented in a wide-area distributed computing environment. This requirement arrived on the scene at the same time as a growing awareness that major new projects in science and technology need matching computer support and access to resources worldwide.

In the 1970s and 1980s the Internet grew up as a network of computer networks, each established to service specific communities and each with a heavy commitment to data processing.

In the late 1980s the World Wide Web was invented at CERN to enable particle physicists scattered all over the globe to access information and participate actively in their research projects directly from their home institutes. The amazing synergy of the Internet, the boom in personal computing and the growth of the Web grips the whole world in today’s dot.com lifestyle.

Internet, Web, what next?

However, the Web is not the end of the line. New thinking for the millennium, summarized in a milestone book entitled The Grid by Ian Foster of Argonne and Carl Kesselman of the Information Sciences Institute of the University of Southern California, aims to develop new software (“middleware”) to handle computations spanning widely distributed computational and information resources – from supercomputers to individual PCs.

Just as a grid for electric power supply brings watts to the wallplug in a way that is completely transparent to the end user, so the new data Grid will do the same for information.

Each of the major LHC experiments – ATLAS, CMS and ALICE – is estimated to require computer power equivalent to 40,000 of today’s PCs. Adding LHCb to the equation gives a total equivalent of 140,000 PCs, and this is only for day 1 of the LHC.

Within about a year this demand will have grown by 30%. The demand for data storage is equally impressive, calling for several thousand terabytes – more information than is contained in the combined telephone directories for the populations of millions of planets. With users across the globe, this represents a new challenge in distributed computing. For the LHC, each experiment will have its own central computer and data storage facilities at CERN, but these have to be integrated with regional computing centres accessed by the researchers from their home institutes.

CERN serves as Grid testbed

As a milestone en route to this panorama, an interim solution is being developed, with a central facility at CERN complemented by five or six regional centres and several smaller ones, so that computing can ultimately be carried out on a cluster in the user’s research department. To see whether this proposed model is on the right track, a testbed is to be implemented using realistic data.

Several nations have launched new Grid-oriented initiatives – in the US through NASA and the National Science Foundation, while in Europe particle physics provides a natural focus for work in, among others, the UK, France, Italy and Holland. Other areas of science, such as Earth observation and bioinformatics, are also on board. In Europe, European Commission funding is being sought to underwrite this major effort to propel computing into a new orbit.

June 2000 pp17–18.

TOTEM and LHCf: Roman pots for the LHC (archive)

The “Roman pot” technique has become a time-honoured particle-physics approach each time a new energy frontier is opened up, and CERN’s LHC proton collider, which can attain collision energies of 14 TeV, will be no exception. While other detectors look for spectacular head-on collisions, where fragments fly out at wide angles to the direction of the colliding beam, with Roman pots the intention is to get as close as possible to the beams and to intercept particles that have been only slightly deflected.

If two flocks of birds fly into each other, most of the birds usually miss a head-on collision. Likewise, when two counter-rotating beams of particles meet, most of the particles are only slightly deflected, if at all. Paradoxically, most of the particles in a collider do not collide. Of those particles that do, many of them just graze past each other, emerging very close to the particles that are sailing straight through.


These forward particles are also important for measuring the total collision rate (cross-section). In the same way as light diffracting around a small obstacle gives a bright spot in the centre of the geometric shadow, so the wave nature of particles gives a central spot of maximum “brightness”.

To pick up these forward particles means having detectors that venture as near to the path of the colliding beams as possible, like avid spectators at a motor race leaning over the safety barrier. This is where Roman pots come in.

Why Roman? They were first used by a CERN/Rome group in the early 1970s to study the physics at CERN’s Intersecting Storage Rings (ISR), the world’s first high-energy proton–proton collider.

Why pots? The delicate detectors, able to localize the trajectory of subnuclear particles to within 0.1 mm, are housed in a cylindrical vessel. These “pots” are connected to the vacuum chamber of the collider by bellows, which are compressed as the pots are pushed towards the particles circulating inside the vacuum chamber.

The physics debut of these Roman pots was a physics milestone. Experiments at lower energies had found that the proton interaction rate was shrinking, and physicists feared that the proton might shrink out of sight at higher energies. Using the Roman pots, the first experiments at the ISR were able to establish rapidly that the interaction rate of protons (total cross-section) in fact increases at the new energies probed by the ISR.

In their retracted position, the Roman pots do not obstruct the beam, thus leaving the full aperture of the vacuum chamber free for the fat beams encountered during the injection process. Once the collider reaches its coasting energy, the Roman pot is edged inwards until its rim is just 1 mm from the beam, without upsetting the stability of the circulating particles.

Each time a new energy regime is reached in a particle collider, Roman pots are one of the first detectors on the scene, gauging the cross-section at the new energy range. After the ISR, Roman pots have been used at CERN’s proton–antiproton collider, Fermilab’s Tevatron proton–antiproton collider and the HERA electron–proton collider at the DESY laboratory, Hamburg.

In the future, Roman pots will again have their day in the TOTEM experiment at CERN’s LHC proton collider.

April 1999 p8.

LHCf: a tiny new experiment joins the LHC

While most of the LHC experiments are on a grand scale, LHC forward (LHCf) is quite different. Unlike the massive detectors used by ATLAS or CMS, LHCf’s largest detector is a mere 30 cm long. Rather like the TOTEM detector, this experiment focuses on forward physics at the LHC. The aim of LHCf is to compare data from the LHC with various shower models that are widely used to estimate the primary energy of ultra-high-energy cosmic rays.

The LHCf detectors will be placed on either side of the LHC, 140 m from the ATLAS interaction point. This location will allow the observation of particles at nearly zero degrees to the proton beam direction. The detectors comprise two towers of sampling calorimeters designed by Katsuaki Kasahara from the Shibaura Institute of Technology. Each is made of tungsten plates and plastic scintillators 3 mm thick for sampling.

Yasushi Muraki from Nagoya University leads the LHCf collaboration, with 22 members from 10 institutions and four countries. For many of the collaborators this is a reunion, as they had worked on the former Super Proton Synchrotron experiment UA7.

November 2006 p8.

TOTEM goes the distance (archive)


With detectors positioned at distances of 147 and 220 m from the CMS interaction point and others inside CMS, the TOTal Elastic and Diffractive Cross Section Measurement (TOTEM) experiment will measure the total interaction cross-section of protons at the LHC.

The data collected by the experiment will help to improve knowledge of the internal structure of the proton and the principles that determine the shape and form of protons as a function of their energy. Furthermore, TOTEM will allow precise measurements of the LHC luminosity and individual cross-sections used by the other LHC experiments. Specific to the TOTEM experiment are the “Roman pots”. Veritable marvels of technology, these cylindrical vessels can be moved to within 1 mm of the beam centre. They contain detectors that will measure very forward protons, only a few microradians away from the beams, which arise from elastic scattering and diffractive processes.

Inelastic interactions between protons will be studied by gas electron multiplier (GEM) detectors installed in “telescopes”, placed in the forward region of the CMS detector, where the charged-particle densities are estimated to be in the region of 10⁶ cm⁻²s⁻¹. Each of the telescopes contains 20 half-moon detectors arranged in 10 planes, with an inner radius matching the beam pipe. TOTEM will exploit the full decoupling of the charge-amplification and charge-collection regions, which allows freedom in the optimization of the readout structure, a unique property of GEM detectors.

The closer that the Roman pot detectors can get to the path of the beam, the more precise the results. For the LHC, the Roman pots will collect data from a distance of 800 μm from the beam. Several improvements in TOTEM’s detectors will provide an unprecedented level of precision: thin stainless-steel windows less than 150 μm thick; the flatness of the windows (less than 30 μm); and the precision of the motor mechanism that moves the pots towards the beam. The pots used in the TOTEM experiment are manufactured by VakuumPraha in Prague, according to specification drawings produced at CERN.

In the final configuration, eight Roman pots will be placed in pairs at four locations at Point 5 on the LHC. There are two stations at each end of the CMS detector, positioned at distances of 147 m and 220 m from the collision point (interaction point 5). Although TOTEM and CMS are scientifically independent experiments, the Roman-pot technique will complement the results obtained by the CMS detector and by the other LHC experiments overall. The ATLAS experiment will also be using a pair of Roman pots based on the design developed by TOTEM, with slight adaptations to suit its own specific needs.

TOTEM has now installed all the Roman pots and has equipped a few of them with detectors. This will allow the collaboration to test the movement of the Roman pots with respect to the beams at the LHC start-up and to take some first data. Some detectors were also installed within CMS. After gaining experience this year, the collaboration will install the remaining detectors during the winter shutdown to make the experiment fully operational for next year’s runs.

• Based on an article in CERN Bulletin 2008 issue 37–38.

LHCf looks forward to high energies


Positioned 140 m from the ATLAS interaction point, the LHCf experiment will attempt to improve the models that describe the disintegration of ultra-high-energy cosmic rays as they enter the atmosphere. This will allow their energies to be determined more accurately and their composition to be analysed with greater precision. This information will help support the hypotheses on the mysterious origins of cosmic rays.

The LHCf detectors are placed along the beam pipe just beyond the experiment cavern, at the point where the pipe splits into two. This location allows them to detect the neutral particles (or their decay products) that are emitted in the forward region and are not bent off course by the magnetic fields of ATLAS and the LHC magnets.

While the old generation of accelerators allowed researchers to verify the cosmic-ray disintegration models up to energies in the region of 10¹⁵ eV, LHCf will test them at energies of up to 10¹⁹ eV. Even if this year’s data are generated by lower-energy collisions, they will still be important, as they will lie in the top-most region of data collected from previous experiments.

• Based on an article in CERN Bulletin 2008 issue 37–38.

LHCb: A beauty of an experiment (archive)


With preparations for the ATLAS and CMS large general-purpose detectors for CERN’s LHC collider now advancing, the initial cast for the LHC experimental programme is extended with the publication of a full technical proposal for the LHCb experiment. The aim of this experiment is to study in detail the physics of the Standard Model’s third (and final) generation of particles, particularly the beauty, or “b” quark contained in B mesons. This third generation of quarks makes possible the mysterious mechanism of CP violation.

When component quarks mutate under the action of the weak force, subtle effects come into play. The first to be discovered was the violation of parity (left–right mirror symmetry) in standard nuclear beta decay. This parity violation is seen even with the up–down quark doublet that makes up protons and neutrons.

Searching for a more reliable mirror to reflect particle interactions, physicists proposed CP symmetry. As well as switching left and right, such a mirror also switches particles and antiparticles – the CP mirror image of a right-handed particle is a left-handed antiparticle. However, having six quarks (arranged pair-wise in three generations) opens up the possibility of violating CP symmetry as well. Such effects had been seen in 1964 with neutral kaons. But these kaon phenomena are only a tiny corner of the Standard Model’s CP violation potential. Much larger effects should happen in the B sector. The race is now on to collect enough B particles to become the first to glimpse this additional CP violation.

While experiments elsewhere will surely reveal more CP-violation effects, the full picture will probably only emerge with the interaction rates and energy conditions of the LHC, which will considerably extend the B physics reach. As well as investigating all aspects of CP violation, LHCb would also consolidate our knowledge of particle reactions and explore fully all quark and lepton sectors of the Standard Model.

The LHCb experiment, which so far has attracted some 340 physicists from 40 research centres in 13 countries, aims to exploit the luminosity of 2×10³² cm⁻²s⁻¹ which should be available from the LHC from day 1. For the other experiments, the LHC’s collision luminosity will be cranked up to 10³⁴ cm⁻²s⁻¹. LHCb expects to harvest about 10¹² b quark–antiquark pairs each year. LHCb is a large single-arm spectrometer covering an angular range from 10 mrad out to 300 mrad and will be housed in the 27 km LHC/LEP tunnel in the Intersection 8 cavern nearest Geneva airport, currently the site of the Delphi experiment at the LEP electron–positron collider.
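The yield and luminosity figures above are mutually consistent, as a quick calculation shows. The canonical physics year of 10⁷ s of effective beam time is an assumption, not stated in the text:

```python
# b-bbar production cross-section implied by the LHCb figures above.
lumi = 2e32                 # luminosity, cm^-2 s^-1 (from the text)
seconds_per_year = 1e7      # assumed effective running time per year
pairs_per_year = 1e12       # expected b quark-antiquark pairs (from the text)

sigma_cm2 = pairs_per_year / (lumi * seconds_per_year)
sigma_microbarn = sigma_cm2 / 1e-30      # 1 microbarn = 1e-30 cm^2
print(f"implied sigma(bb) ~ {sigma_microbarn:.0f} microbarn")
# -> 500 microbarn, a plausible b-bbar cross-section for 14 TeV collisions.
```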

At the heart of the detector is the vertex detector, studied by a CERN/Amsterdam/Glasgow/Heidelberg/Imperial College London/Kiev/Lausanne/Liverpool/MPI Heidelberg/NIKHEF Amsterdam/Rome 1 team. The vertex detector will record the decays of the B particles, which travel only about 10 mm before decaying. Each of the 17 planes of silicon (radius 6 cm) spaced over a metre consists of two discs to measure radial and polar coordinates. The arrangement should provide a hit resolution of between 6 and 18 microns, and an impact-parameter resolution of 40 microns for high-momentum tracks.

Downstream of the vertex detector, the tracking system reconstructs the trajectories of emerging particles. Using 11 stations spaced over about as many metres, this tracking uses a honeycomb of drift chambers on the outside (where the particle fluxes are lower), enclosing a finer granularity arrangement on the inside. Microstrip gas chambers with gaseous electron multiplication are the prime contender for this part of the detector, but silicon strips and micro-cathode strips are also being investigated. The inner tracker is being investigated by Heidelberg (University and MPI), PNPI St Petersburg and Santiago (Spain), and the outer by Dresden, Free University of Amsterdam, Freiburg, Humboldt Berlin, IHPE Beijing, NIKHEF Amsterdam and Utrecht.

LHCb’s 1.1 tesla superconducting dipole spectrometer magnet (studied by CERN and PSI Villigen) would benefit from the infrastructure developed for the Delphi magnet at LEP. The magnet polarity is reversible to help the systematic study of CP violation effects.

Particle identification is carried out using the ring-imaging Cerenkov (RICH) technique, with the first RICH equipped with a 5 cm silica aerogel and 1 m C4F10 gas radiators behind the vertex detector and the second station with 2 m of CF4 gas radiator behind the tracker. Cerenkov photons would be picked up by a hybrid photodiode array, the subject of a vigorous ongoing R&D programme. The RICH study group consists of Cambridge, CERN, Genoa, Glasgow, Imperial College London, Milan and Oxford.

Following the second RICH is the electromagnetic calorimeter for identifying and measuring electrons using a ‘shashlik’ structure of scintillator and lead read out by wavelength-shifting fibres. It has three annular regions with different granularities to optimize readout. Identification of these electromagnetic particles is facilitated by a lead-scintillator preshower detector. Electromagnetic calorimetry is studied by a Bologna/Clermont Ferrand/INR Moscow/ITEP Moscow/Lebedev Moscow/Milan/Orsay/Rome 1/Rome 2 team.

The hadron calorimeter (Bucharest/IHEP Moscow/Kharkov/Rome 1) is of scintillator tiles embedded in iron. Like the electromagnetic calorimeter upstream, it has three zones of granularity. Readout tests with a full-scale module prototype in a beam have already exceeded the expected performance of 50 photoelectrons per GeV. Downstream, shielded by the calorimetry, four layers of muon detector (Beijing/CERN/Hefei/Nanjing/PNPI/Shandong/Rio de Janeiro/Virginia) use multigap resistive-plate chambers and cathode-pad chambers embedded in iron, with an additional plane of cathode-pad chamber muon detectors mounted in front of the calorimeters. As well as muon identification, this provides important input for the triggering.

Data handling will use four levels of triggering (event selection), with initial (level 0) decisions based on a high transverse-momentum particle and using the calorimeter and muon systems. This reduces the 40 MHz input rate by a factor of 40. The next level of trigger (level 1) is based on information from the vertex detector (to look for secondary vertices) and from tracking (essentially to confirm high transverse momentum) and reduces the data by a factor of 25 to an output rate of 40 kHz. Level 2, suppressing fake secondary decay vertices, achieves a further 8-fold compression. Level 3 reconstructs B decays to select specific decay channels, achieving another compression factor of 25, and data are written to tape at 200 Hz. Data handling and offline computing are being looked at by Bologna, Cambridge, CERN, Clermont Ferrand, Heidelberg, Lausanne, Lebedev, Marseille, NIKHEF, Orsay, Oxford, Rice and Virginia.
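The rate reductions quoted above chain together as follows (the reduction factors are those given in the text):

```python
# The four trigger levels as a cascade of rate reductions.
rate_hz = 40e6                              # 40 MHz bunch-crossing input
for level, factor in [(0, 40), (1, 25), (2, 8), (3, 25)]:
    rate_hz /= factor
    print(f"after level {level}: {rate_hz:,.0f} Hz")
# -> 1,000,000 Hz, then 40,000 Hz (the 40 kHz quoted), then 5,000 Hz,
#    and finally 200 Hz written to tape.
```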

• May 1998 pp3–5 (abridged).

 

Beauty at the LHC

The Standard Model of physics, with its picture of six quarks and leptons grouped in pairs into three generations, is coming under detailed scrutiny as physicists try to understand what makes it work so well. This demands precision probes of all quark channels, rare as well as familiar.

The LHC will be a prolific source of B particles containing the fifth (beauty, b) quark, either in beam–beam collisions or using one of the high-energy proton beams in a fixed-target set-up. Obvious aims of the B-physics programme at the LHC are to investigate the mixing of neutral B mesons, the particle lifetimes and the spectroscopy of beauty baryons. However, the main goal will be observing CP violation in the neutral B system (neutral mesons containing b with either d or s quarks).

CP violation – the subtle disregard of an otherwise perfect symmetry under a combined particle–antiparticle and left–right switch – has been known for 30 years and has only been seen in the decays of neutral kaons. Its origin is still a mystery, but it is widely believed to be responsible for the universe’s matter–antimatter asymmetry. The Big Bang initially produced equal amounts of matter and antimatter, but the tiny CP-violation mechanism was enough to tilt the balance in favour of matter as we know it.

To complement the B physics capabilities of LHC’s big detectors (ATLAS and CMS), one dedicated B physics experiment is planned for the initial phase of the LHC experimental programme. Three groups submitted Letters of Intent based on different experimental approaches:

• colliding beams at the full LHC 14 TeV collision energy (the COBEX project)

• an internal gas jet target intercepting a circulating beam at the fixed target energy of 114 GeV (the GAJET project)

• a beam extracted from the beam halo by a bent crystal and a septum magnet for a fixed target experiment (the LHB project).

Considering these ideas, the LHC Experiments Committee pointed out that when the LHC comes on line, initial measurements of CP violation in the B meson system will have been made by several ongoing projects. LHCb will therefore be a second-generation study. While identifying attractive features in all three Letters of Intent, the Committee was of the view that an experiment using the collider approach, handling the full production rate, is the most attractive.

The Committee, whose view was subsequently endorsed by the Research Board, encouraged all participants in the three Letters of Intent to join together to submit a fresh design for a collider-mode B experiment.

• September 1994 p10.

 

Birth of a collaboration

The stage being set for CERN’s LHC proton–proton collider includes a place for an experiment – LHC-B – to study the physics of B particles. The Letter of Intent for this experiment has been reviewed by the appropriate committees, who recommend that the collaboration should now proceed to a vigorous research and development programme for the various detector components en route to a full technical proposal.

By the time the LHC is operational, the B meson system will have been extensively studied elsewhere – in the B factories being built at SLAC (Stanford) and at KEK, Japan, at Cornell’s revamped CESR ring, at the HERA-B experiment at DESY, Hamburg, and at Fermilab’s Tevatron. The LHC-B experiment will therefore be a second-generation study. While all three initially submitted approaches had different appealing features, the collider route, exploiting the full B production rate, was thought to be the most attractive for mature physics. CERN therefore encouraged all participants in the initial B-physics ideas to collaborate in a fresh design for a collider-mode experiment. The result is the LHC-B collaboration, which currently groups almost 200 researchers from 40 institutes in 15 countries, and is growing.

• April/May 1996 pp2–4 (extract).

ALICE: The heavy-ion challenge


When the ideas for ALICE were first formed at the end of 1990, the heavy-ion programme was still in its infancy and very little was known about what physics to expect or what kind of detector would be required. Nevertheless, an expression of interest for a dedicated general-purpose heavy-ion detector was presented at Evian in 1992. “That’s the first appearance of ALICE,” recalls Jürgen Schukraft, who has been at the helm of the experiment since its inception in 1991. “We had to do enormous extrapolations because the LHC was a factor of 300 higher in centre-of-mass energy and a factor of 7 in beam mass compared with the light-ion programme, which started in 1986 at both the CERN SPS and the Brookhaven AGS. It was akin to planning for the International Linear Collider with a centre-of-mass energy of 1 TeV based on knowledge from Frascati’s ADONE machine, one of the first electron–positron colliders running at 3 GeV.”

Sixteen years later, the field of heavy ions is in a mature state. The ALICE collaboration has the benefit of results from the heavy-ion programmes at the SPS and at Brookhaven’s RHIC, to use as guidance, allowing an infinitely better idea of what to look for, as well as the kind of detectors and the precision needed. Heavy ions will collide at the LHC with energy levels 28 times higher than at RHIC and 300 times higher than at the SPS, representing a huge jump in energy density. “The field of heavy ions has gone from the periphery into a central activity of contemporary nuclear physics,” explains Schukraft. “The exciting thing about the LHC is that because of the huge jump in energy compared with RHIC, there are many open questions to be answered and lots of surprises to be expected. While we don’t know the answers yet, today at least we know some of the questions.”

ALICE will study the quark–gluon plasma (QGP), the first evidence of which was discovered at RHIC and the SPS, and will continue the investigations by confirming interpretations and testing predictions at the LHC. “Back in 1992, we were imagining what the quark–gluon plasma would look like and we expected it to behave like an ideal gas, but what we found is that it behaves like a perfect fluid, so it is completely different,” says Schukraft. “This was a very big surprise, because instead of being weakly interacting, or gas like, it is strongly interacting. It is the best fluid anyone has ever found in nature, much better than liquid helium, for example.” He adds: “The discovery that QCD matter is more like a fluid was made at RHIC. We now expect to see it flow at about the same strength at the LHC if our understanding is correct – because it can’t get any better than ‘ideal’ – or we will be scratching our heads if it behaves differently.”

Another question on the minds of the ALICE collaboration is whether there is not only QGP, but yet another unusual state of matter called colour glass condensate (CGC), which may form at high gluon densities in heavy nuclei. While QGP is hot and dense, CGC is cold and dense, and would exist in the initial state – before the nuclei collide – and then melt away. “We hope to discover new aspects of QCD in the strongly coupled regime, where the strong interaction is actually strong,” says Schukraft. “One of the central concepts of the Standard Model is phase transition and spontaneous symmetry breaking. The QCD phase transition is the only one accessible to study by experiment and ALICE will measure its properties and parameters.”

As the field of heavy ions has unfolded, the ALICE collaborators have been flexible in changing or adding to their detector. Over the course of time, detector components amounting to some 50% of the original design in the Letter of Intent, submitted in the spring of 1993, have been added as a result of the new data from the SPS and RHIC. These additions include the muon spectrometer, a transition-radiation detector and the electromagnetic-jet calorimeter, scheduled to be completed in 2011. “Now we know better what we need for this new regime,” explains Schukraft. In addition, some detectors had to be invented from scratch – such as the time-of-flight detector, which was impossible to build at the time the original design was made, and silicon pixel detectors, which did not exist then.

ALICE is expecting to receive 1 PB of data during the one month per year of heavy-ion operation, at a rate of more than 1.25 GB/s, which presents a huge challenge. According to Schukraft, state-of-the-art data-collection infrastructure in the 1990s worked at a rate of 10 MB/s. “Most people thought 1 GB/s would be a real challenge to reach and that we would have to find a way to reduce the data volume. There were many discussions on how to handle this huge amount of data, yet today within a factor of 2–3 it is quite common. However, 15 years ago one could not dream of handling such a large amount of data at such a rapid rate,” he says. He expects that heavy-ion data taking will start by the end of 2009 and soon after begin to show the first interesting results.
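The quoted figures can be checked on the back of an envelope. A minimal sketch in Python, using only the numbers in the text (the duty-cycle figure it derives is an illustrative inference, not an official ALICE number):

```python
# Back-of-envelope check of the ALICE heavy-ion data volume: ~1 PB in one
# month at a peak rate of 1.25 GB/s. The implied duty cycle derived below
# is an illustrative inference, not an official ALICE figure.

PEAK_RATE_GB_S = 1.25            # peak data rate to storage, GB/s
RUN_SECONDS = 30 * 24 * 3600     # one month of heavy-ion running

# Volume if the detector wrote at peak rate continuously (1 PB = 1e6 GB).
peak_volume_pb = PEAK_RATE_GB_S * RUN_SECONDS / 1e6

# Effective duty cycle implied by the quoted 1 PB total.
implied_duty_cycle = 1.0 / peak_volume_pb

print(f"continuous-rate volume: {peak_volume_pb:.2f} PB")
print(f"implied duty cycle for 1 PB: {implied_duty_cycle:.0%}")
```

Writing continuously at peak rate would give over 3 PB per month, so the quoted 1 PB corresponds to recording at peak rate for roughly a third of the time.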

Although the collaboration’s main interest is heavy-ion collisions, for most of the year ALICE will be running with proton–proton collisions, which will provide important comparison data for the lead–lead measurements. The detectors are optimized for complete particle identification at angles close to 90°, detecting particles from extremely low to fairly high momentum. During the proton runs, ALICE collaborators will be tuning the Monte Carlo generators and evaluating the background and detector performance for QCD measurements, such as charm and beauty production at low transverse momentum.

“What we are doing at the LHC is very exciting,” says Schukraft. “The LHC is really amazing in its ability to combine three different approaches in one machine: high-energy phenomena, producing new particles to be studied by ATLAS and CMS; indirect effects of virtual high-mass particles, studied in LHCb; and distributed energy that heats and melts matter, to be studied by ALICE. We look forward to studying lead–lead collisions at LHC energy scales.”

ALICE: New kid on the block (archive)


In the children’s story, Alice chased a white rabbit down a hole to find herself transported to a magical world. At the LHC, ALICE (A Large Ion Collider Experiment) will be pursuing new states of matter, and the wonderland to be found could be every bit as new and exciting. The LHC will continue CERN’s tradition of diverse beams, being able to accelerate not only protons, but also high-energy beams of lead ions. It is this capability which ALICE is designed to exploit.

The idea of building a dedicated heavy-ion detector for the LHC was first aired at the historic Evian meeting in March 1992. From the ideas presented there, the ALICE collaboration was formed, and in 1993, a Letter of Intent was submitted. High-energy heavy-ion collisions provide a unique laboratory for the study of strongly interacting particles. Quantum chromodynamics (QCD) predicts that at sufficiently high energy densities there will be a phase transition from conventional hadronic matter, where quarks are locked inside nuclear particles, to a plasma of deconfined quarks and gluons. The reverse of this transition is believed to have taken place when the universe was just 10⁻⁵ s old, and may still play a role today in the hearts of collapsing neutron stars.

The feasibility of this kind of research was clearly demonstrated at CERN and Brookhaven with lighter ions in the 1980s. Today’s programme at these laboratories has moved on to heavy ions, and is just reaching the energy threshold at which the phase transition is expected to occur. This physics reach will be extended by the RHIC heavy-ion collider at Brookhaven, scheduled to come into operation in 1999. The LHC, with a centre-of-mass energy of around 5.5 TeV per nucleon pair, will push the energy reach even further.

ALICE is bringing members of CERN’s existing heavy-ion community together with a number of groups new to the field drawn from both nuclear and high-energy physics. By LHC standards, the detector is of moderate proportions, being based on the current magnet of LEP’s L3 experiment. When LEP switches off, the L3 magnet will be left in place whilst ALICE is installed. LHC beams will pass through the magnet slightly off-centre, 30 cm higher than the current LEP beams.

On the trail of quark-gluon plasma

Because the physics of the quark-gluon plasma could be very different from that of ordinary matter, the ALICE detector has been designed to cover the full range of possible signatures, whilst being flexible enough to allow future upgrades guided by early results. The detector consists of two main parts, a central detector, embedded within the magnet, and a forward muon spectrometer included as an addendum to the Letter of Intent in 1995. The set-up is completed by zero-degree calorimeters located far downstream in the machine tunnel, to intercept particles emerging very close to the colliding beams.

One of the greatest challenges of heavy-ion physics is to pick out individual tracks from the dense forest of emerging particles. ALICE’s tracking system has been designed for safe and robust pattern recognition within a large-volume solenoid producing a weak field. The L3 magnet, with a field of 0.2 tesla, is ideal for the purpose.

The Inner Tracking System, ITS, consists of six cylindrical layers of highly accurate position-sensitive detectors, from radii of 3.9 cm to 45 cm, extending to ±45°. Its functions are secondary-vertex recognition, particle identification, tracking, and improving the overall momentum resolution. The different layers are optimized for efficient pattern recognition. Because of the high particle density in the innermost regions, the first four layers provide position information in two dimensions. The first two layers are silicon pixel detectors, and the second two are silicon drift detectors. The two outermost layers will be composed of double-sided silicon microstrip detectors. The complexity and importance of this device is reflected in the number of institutions responsible for its production: Bari, Catania, CERN, Heidelberg, Kharkov, Kiev, Nantes, NIKHEF, Padua, Rez, Rome, St Petersburg, Salerno, Strasbourg, Turin, Trieste and Utrecht.

Central tracking is completed by a Time Projection Chamber, TPC, being built by Bratislava, CERN, Cracow, Darmstadt, Frankfurt and Lund. Proven technology has been chosen to guarantee reliable performance at extremely high multiplicity. The drawbacks of this technology are high data volumes and relatively low speed. The TPC occupies the radial region from 90 cm to 250 cm, and is designed to give an energy-loss (dE/dx) resolution of better than 7%. It will also serve to identify electrons with momenta up to 2.5 GeV/c.

Identification parade

Two different technologies are under study for the last sub-detector to cover the full azimuthal angle, the particle identification system, PID. Pestov spark counters, single-gap gas-filled parallel-plate devices, are being investigated by Darmstadt, Dubna, Marburg, Moscow-ITEP, Moscow-MePHI and Novosibirsk, whilst parallel-plate chambers, PPCs, are being developed by CERN, Moscow-ITEP, Moscow-MePHI and Novosibirsk. The final design is expected to be complete by the end of 1998. The PPCs are less demanding to construct and operate, but the Pestov counters give a timing resolution of less than 50 ps, some four times better than PPCs.

A second particle-identification device for higher-momentum particles, the HMPID, is included in the design as a single-arm device above the central PID. A ring-imaging Cerenkov (RICH) detector is the preferred option, being developed by Bari, CERN, Zagreb and Moscow-INR. However, an organic-scintillator approach being pursued by Catania and Dubna has not yet been ruled out.

Below the central barrel region of the detector is another single-arm device, the photon spectrometer, PHOS, to measure prompt photons and neutral mesons. It is being prepared by Bergen, Heidelberg, Moscow-Kurchatov, Münster, Protvino and Prague using scintillating lead-tungstate crystals developed in the context of CERN’s generic detector R&D effort.

Zero-degree calorimeters, ZDC, will be positioned 92 m from the interaction point to measure the energy carried away by non-interacting beam nucleons, a quantity directly related to the collision geometry. These are calorimeters of the spaghetti type, with quartz fibres as the active medium. Their construction is the responsibility of Turin. Another forward detector, the forward multiplicity detector, FMD, will be embedded within the solenoid with the purpose of providing fast trigger signals and multiplicity information outside the central acceptance of the detector. Innovative micro-channel plate detectors are under consideration by Moscow-Kurchatov and St Petersburg, with conventional silicon multipad detectors as a back-up.

The forward muon spectrometer, FMS, is a major addition to the original design as specified in the Letter of Intent. It was included to measure the complete spectrum of heavy-quark resonances, which are expected to provide a sensitive signal for the production of a quark-gluon plasma. The first section of the spectrometer is an absorber placed inside the solenoid about 1 m from the interaction point. This is followed by a large 3 tesla dipole magnet outside the solenoid containing 10 planes of tracking stations. A second absorber and two further tracking planes provide muon identification and triggering. Teams from CERN, Clermont-Ferrand, Gatchina, Moscow-Kurchatov, Moscow-INR, Nantes, and Orsay are working on a more detailed design for the FMS, which is expected later this year.

Triggering is the responsibility of Birmingham and Kosice. Proton–proton mode and ion–ion mode have different trigger requirements. In proton–proton mode, a minimum-bias trigger is required, whilst for ion–ion collisions the trigger’s function is to select on collision centrality. A level-zero trigger decision is made at around 1.2 microseconds, based on centrality information from the FMD. At level-one (2 microseconds) this is supplemented by the ZDC. A dimuon trigger from the FMS also contributes to level-one. The final level-two trigger decision is made after further processing at 100 microseconds.

The architecture of the ALICE data acquisition system is determined by the relatively short heavy-ion runs foreseen for the LHC, roughly 10% of each year’s running. The collaboration will have ten times as long to analyse the data as they have to collect them, and so a high bandwidth system is envisaged in order to collect as much data as possible in the time available. CPU-intensive operations such as event filtering and reconstruction will be performed offline. Data acquisition is the responsibility of Budapest, CERN, and Oslo.

• March 1996 pp9–12 (abridged).

 

Green light for ALICE

ALICE has received the green light to proceed towards final design and construction. ALICE is the natural continuation, at CERN, of the SPS heavy-ion programme, initiated in 1986, which has recently provided exciting new results in the quest for the quark-gluon plasma.

Up to 50,000 charged particles are expected to be emitted in a lead–lead collision at the LHC, of which about 10,000 will pass through the ALICE central detector. That is why central tracking in ALICE is based on the Time Projection Chamber (TPC) technique, which has already proven its value in registering tracks in a high-multiplicity environment in the NA49 SPS experiment. The LHC collision rate in heavy-ion mode is compatible with TPC drift times of around 100 microseconds.
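The compatibility claim can be illustrated with a one-line estimate: the mean number of extra collisions piling up within one drift window should stay of order one. A minimal sketch, in which the Pb–Pb minimum-bias interaction rate is an assumed illustrative figure, not a number from the text:

```python
# Why a ~100 microsecond TPC drift time is workable in heavy-ion mode:
# the mean number of extra collisions within one drift window stays
# below one. The 8 kHz Pb-Pb interaction rate is an assumption used
# purely for illustration.

DRIFT_TIME_S = 100e-6        # TPC drift time quoted in the text
PBPB_RATE_HZ = 8_000         # assumed Pb-Pb minimum-bias interaction rate

mean_pileup = PBPB_RATE_HZ * DRIFT_TIME_S
print(f"mean extra collisions per drift window: {mean_pileup:.1f}")
```

In proton–proton mode, by contrast, the much higher collision rate would pile many events into a single drift window, which is one reason the slow-but-robust TPC choice is tied to the heavy-ion programme.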

In the forward direction, within 9° of the beam, ALICE will be equipped with a muon spectrometer, made of a sophisticated hadron absorber, a dipole magnet, five tracking stations (made of cathode pad/strip chambers) and two trigger stations (made of resistive-plate chambers). Measurements of muon pairs are an essential part of the ALICE physics programme, since heavy dileptons probe the early stages of the produced medium.

• April 1997 pp4–5 (extract).

CMS: Building on innovation

From the beginning, the CMS collaboration took a new approach: the plan was to assemble the detector above ground in a spacious surface building while the civil-engineering work on the underground cavern was underway. Alain Hervé, who had been Technical Coordinator for the L3 experiment at LEP before taking up the same position with CMS, strongly recommended constructing the detector in slices that would be lowered down the 100 m shaft into the cavern after extensive commissioning on the surface. This had never been done before for such a large-scale high-energy physics experiment, most experiments being constructed directly in the experimental area. This decision, together with the requirement of ease of maintenance, determined the overall structure of the detector, with slices that could be lowered one by one – 15 heavy pieces in all.

“It is very unusual to do this, but the surface building was made quite large, and we could work on several pieces at the same time because they could easily be moved back and forth. Also the underground civil-engineering work in the caverns would take time, so we started assembling the detector four to five years before the underground cavern was finished. The fully tested elements were lowered underground between November 2006 and January 2008. The experiment is commissioned and now ready for data-taking. The duration of the lowering operation and commissioning was essentially that foreseen 17 years ago,” explains Jim Virdee, who has been with CMS since the very beginning and spokesperson since 2007. “I know a few future experiments are looking at this way of doing things,” he adds, “so I think it might catch on. It gives a lot of flexibility, providing ease of maintenance and installation. Even late on we could work on various elements in parallel in the underground cavern.”

A lot of people thought we had left it too late, and I was being advised that we were taking a risk, but it was a risk we had to take.

Jim Virdee

The long process from the design phase to final construction encompassed some crucial changes in technology, which allowed savings in time, money and effort. Despite the unexpected challenges that arose, the collaboration remained flexible and creative in solving them. “We needed radiation-hard electronics in our tracker, electromagnetic calorimeter and hadron calorimeters, along with radiation-tolerant muon systems. We did a lot of R&D on this with industries that had produced radiation-hard electronics, usually for space or military applications,” recalls Virdee. The collaboration was ready to launch production of the front-end electronics of the inner tracker when the foundry that was going to produce the electronics moved, and somehow lost its ability to produce electronics with good radiation hardness. “So we were thrown back to the drawing board and had to develop a new way of obtaining radiation-hard electronics,” says Virdee. “We essentially changed all of our on-detector electronics for the tracker and the electromagnetic calorimeter. This was a major issue that we were confronted with in the late 1990s and it’s all worked out very well. A lot of people thought we had left it too late, and I was being advised that we were taking a risk, but it was a risk we had to take.”

Another significant challenge concerned the production of 75,000 lead-tungstate crystals in Russia and China. These were chosen for their compactness, owing to their short radiation length, and high radiation hardness, but early tests revealed problems when using silicon photodiodes, with the scintillation light being drowned out by unwanted signals arising from charged particles at the end of the shower passing through the photodiodes. A solution was discovered using silicon-avalanche photodiodes, which could work in a magnetic field. Working with the crystal supplier in Russia also proved interesting. “The economic conditions in Russia have changed a lot since we started producing the crystals,” says Virdee, “so much so that we had to place the last few orders in roubles, not in dollars any longer because the rouble was considered by the manufacturer to be a more stable and stronger currency!”

In 1999 the CMS collaboration made a major decision to change the design of their inner tracker. Originally, they had included both microstrip gas chambers (MSGCs) and silicon sensors, after performing much R&D on various technological options. The cost per square centimetre of silicon detectors in the early 1990s was high, so the plan was to use silicon detectors close to the interaction point and MSGCs further away. “This technology required some development to make it suitable for use in the LHC, and essentially we succeeded in doing that,” says Virdee. However, development of silicon detectors continued during the decade. Larger wafers were becoming available at a competitive cost and with improved performance. Furthermore, automation – employed in the electronics industry – allowed rapid and reliable production of the 17,000 silicon modules needed for the tracker.

The collaboration took the bold decision based on practical aspects to use only silicon.

At the beginning of 1999, when it was clear that silicon had reached a competitive state with the MSGCs, the collaboration took the bold decision based on practical aspects to use only silicon. “We were pressed for time, and having two different technologies required us to have two different systems doing similar work. At the time we had not invested as much effort in the systems issues as we would have wished for,” Virdee explains. “So one of the key issues that arose was: can we come up with a single design to simplify the work and save time? The basic issue was that the silicon detectors were of high quality, and were mass-produced by industry, so we could just buy them while high-rate production lines for MSGCs had still to be commissioned.”

Once the LHC starts, the CMS physicists, some of whom have spent most of their working lives building the large and complex subdetectors, will have the long-awaited chance for discoveries. “However, before we do that we need to verify that the subdetectors perform as designed. Currently, we are doing that by running with cosmic rays. As far as we can tell the detector is working as expected and this is very encouraging. The moment of truth, however, will be when we record collision data,” says Virdee. “This start-up is very exciting because we are making a big leap up in energy and entering a new regime. All indications are that there is something special about this energy range.”

CMS: A study in compactness (archive)


The milestone workshops on LHC experiments in Aachen in 1990 and at Evian in 1992 provided the first sketches of how LHC detectors might look. The concept of a compact general-purpose LHC experiment based on a solenoid to provide the magnetic field was first discussed at Aachen, and the formal expression of interest was aired at Evian. It was here that the Compact Muon Solenoid (CMS) name first became public.

Optimizing the muon-detection system first is a natural starting point for a high-luminosity (i.e. high interaction rate) proton–proton collider experiment. The compact CMS design called for a strong magnetic field, of some 4 T, provided by a superconducting solenoid, originally about 14 m long and with a 6 m bore. (By LHC standards, this warrants the adjective “compact”.)

The main design goals of CMS are: 1) a very good muon system providing many possibilities for momentum measurement; 2) the best possible electromagnetic calorimeter consistent with the above; 3) high-quality central tracking to achieve both the above; and 4) an affordable detector.

Overall, CMS aims to detect cleanly the diverse signatures of new physics by identifying and precisely measuring muons, electrons and photons over a large energy range at very high collision rates, while also exploiting the lower luminosity initial running. As well as proton–proton collisions, CMS will also be able to look at the muons emerging from LHC heavy-ion beam collisions.

The Evian CMS conceptual design foresaw the full calorimetry inside the solenoid, with emphasis on precision electromagnetic calorimetry for picking up photons. (A light Higgs particle will probably be seen via its decay into photon pairs.) The muon system by then foresaw four stations. Inner tracking would use silicon microstrips and microstrip gas chambers, with over 10⁷ channels offering high track-finding efficiency. In the central CMS barrel, the tracking elements are mounted on spirals, providing space for cabling and cooling.

Following Evian, a letter of intent signed by 443 scientists from 62 institutes was presented to the then new LHC Experiments Committee. Two electromagnetic-calorimetry routes were proposed, a preferred one based on homogeneous media, and the other on a less expensive sampling solution using a lead/scintillator sandwich read out by wavelength-shifting fibres, named shashlik.

Due to limited resources in the collaboration at the time, the shashlik solution was adopted as baseline. However, R&D continued on cerium fluoride (CeF3) and two other candidate media, lead-tungstate crystals (PbWO4) and hafnium-fluoride glasses. The collaboration had doubled in size by the summer of 1994 and in September of that year lead tungstate was chosen after extensive beam tests of matrices of shashlik, cerium fluoride and tungstate towers. The radiation length of PbWO4 is only 0.9 cm and the required volume (approximately 12.5 m³) is only half that for CeF3, leading to a substantial reduction in cost. In addition, lead tungstate is a relatively easy crystal to grow from readily available raw materials and significant production capacity already exists.

Following the November 1993 decision to cancel the SSC project, US physicists were looking for new possibilities and many knocked at the CMS door. A letter of intent submitted to the US Department of Energy in September 1994 covered a 270-strong US contingent in CMS, whose main responsibility would be the endcap muon system and the barrel hadronic calorimeter.

Meanwhile, interest continued to grow so that CMS now involves some 1250 scientists from 132 institutions in 28 countries. Some 600 scientists are from CERN member states, the remainder hail from further afield: some 300 from 37 institutes in the US, and 250 from research institutes in Russia and member states of the international Joint Institute for Nuclear Research, Dubna, near Moscow.

The choice of magnet was the starting point for the whole CMS design. Although the solenoid has been cut from 14 m to 13 m in length, its radius (2.95 m) and magnetic field (4 T) remain unaltered. This long, high-field solenoid removes the need for additional forward magnets for muon coverage, while easily accommodating the tracking and calorimetry.

The 12-sided structure, designed at CERN, is subdivided along the beam axis into five rings, each some 2.6 m long, with the central one supporting the inner superconducting coil. Endcaps complete the magnetic volume. The coil itself, designed at Saclay, is split into four sections, each 6.8 m in diameter, the maximum girth compatible with transport by road. The conductor is 40-strand niobium-titanium enclosed in an aluminium stabilizer. With 900 W of cooling power at 4.5 K and 3400 W at 60 K, cooldown will take 32 days.

In order to deal with high track multiplicities in the inner tracking cavity, detectors with small cell sizes are needed. Solid-state and gas-microstrip detectors provide the required granularity and precision. Two layers of pixel detectors have been added to improve the measurement of the track-impact parameter and secondary vertices. The silicon-pixel and microstrip detectors will be kept at 0°C to slow down radiation damage. High track-finding efficiency is achieved for isolated high transverse-momentum tracks, and it remains fairly high for such tracks in jets. All high transverse-momentum tracks produced in the central region are reconstructed with high momentum precision (5 per mil), a direct consequence of the high magnetic field. The responsibility for the inner tracker extends to institutes in Belgium, Finland, France, Germany, Greece, India, Italy, Switzerland, the UK, the US and CERN.
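The link between field strength and momentum precision comes from the sagitta of the curved track, s = 0.3 B L²/(8 pT) (B in tesla, L in metres, pT in GeV/c): the stronger the field, the larger the sagitta relative to the measurement error. A rough single-sagitta sketch, in which the track length and single-point resolution are illustrative assumptions, not CMS design figures:

```python
# Illustrative sagitta estimate showing why a strong solenoid field gives
# good momentum resolution: s = 0.3 * B * L**2 / (8 * pT), with B in
# tesla, L in metres, pT in GeV/c and s in metres. The track length and
# point resolution below are assumptions for illustration only.

B_TESLA = 4.0            # CMS solenoid field
L_M = 1.1                # assumed measured track length in the tracker
PT_GEV = 100.0           # transverse momentum of a test track
SIGMA_POINT_M = 20e-6    # assumed single-point resolution (20 micrometres)

sagitta = 0.3 * B_TESLA * L_M**2 / (8 * PT_GEV)

# Crude relative momentum error: point error over sagitta. A real fit
# with many measurement points does considerably better than this.
rel_dp_over_p = SIGMA_POINT_M / sagitta

print(f"sagitta at {PT_GEV:.0f} GeV/c: {sagitta * 1e3:.2f} mm")
print(f"rough dp/p from a single sagitta: {rel_dp_over_p:.1%}")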

Centrally produced muons are identified and measured in four muon stations inserted in the magnet-return yoke. The chambers are judiciously arranged to maximize the geometric acceptance. Each muon station consists of 12 planes of aluminium drift tubes designed to give a muon vector in space, with 100 μm precision in position and better than 1 mrad in direction.

The four muon stations also include resistive-plate chamber-triggering planes that identify the bunch crossing and enable a cut on the muon transverse momentum at the first trigger level. The endcap muon system also consists of four muon stations. Each station consists of six planes of Cathode Strip Chambers. The final muon stations come after a substantial amount of absorber so that only muons can reach them. The large bending power is the key to very good momentum resolution even in the so-called “stand alone” mode, especially at high transverse momenta. The muon-system team includes scientists from Austria, China, Germany, Hungary, Italy, Poland and Spain with large contingents from the US and Dubna member states.

As the coil radius is large enough to install essentially all the calorimetry inside, a high-precision electromagnetic calorimeter can be envisaged. The lead-tungstate (PbWO4) crystal calorimeter leads to a di-photon mass resolution twice as good as that anticipated from the shashlik. The electromagnetic calorimeter groups scientists with large experience of total absorption calorimeters from China, Dubna member states, France, Italy, Germany, Switzerland, the UK, the US and CERN.

The hadron calorimeter, benefiting from US involvement, will use interleaved copper plates and plastic scintillator tiles read out by wavelength-shifting fibres. As well as the US, the CMS hadron calorimetry squad includes institutes from China, Hungary, India, Spain and Dubna member states.

For LHC’s design luminosity of 1034 cm–2 s–1, CMS will have to digest 20 highly complex collisions every 25 ns. This input rate of 109 interactions per second has to be reduced to just 100 for off-line analysis. This will be accomplished by a two-level trigger. The first-level trigger uses pipelined information from the muon detectors and the calorimeters to reach a decision after a fixed time period of 3 μs. The data from a maximum of 10s interactions per second, from the muon detectors and the calorimeters only, is forwarded to an online processor farm. This “virtual” Level 2 uses the full granularity to reject almost 90% of the events. The entire data from the remaining events is then passed to the farm for further processing. The trigger- and data-acquisition systems are the responsibility of a team from Austria, Finland, France, Germany, Hungary, Italy, Portugal, Poland, Dubna Member States, Spain, Switzerland, the UK, the US and CERN. Software and computing, for monitoring and control as well as data handling and analysis, will take on a new dimension at the LHC.

• June 1995 pp5–8 (abridged).

 

CMS changes to silicon track

CCCMS2_10_08

The collaboration for the CMS experiment will base its tracker entirely on silicon sensor technology using fine-feature-size electronics. The decision to go all-silicon follows unexpectedly rapid recent advances in read-out for microstrip detectors, in the fabrication of sensors on 6 inch diameter silicon wafers, and automated assembly techniques for an all-silicon detector. It is a significant departure from the CMS baseline-tracker proposal, which foresaw a central region of silicon devices surrounded by microstrip gas chambers (MSGCs).

In the mid-1990s, MSGCs seemed to offer an economical alternative to silicon. In early implementations, however, their performance was found to deteriorate significantly with increased exposure to ionizing particles. Nevertheless, solutions to these teething problems seemed to be available and CMS chose MSGCs as their baseline proposal – on the condition that certain milestones were reached. These were successfully achieved, but silicon-related technology was advancing in parallel, reducing the cost advantage that MSGCs offered.

A decisive factor in reducing the tracker’s price tag, by almost SFr6.5 million, was the development by CMS of a CMOS read-out chip using low-cost technology, originally aimed at increasing the compactness of computer chips. With a feature size of 0.25 μm compared with the 1 μm of conventional CMOS chips, the new APV25 chip is certainly compact. It is also extremely radiation-hard, with lower noise and power consumption than a conventional CMOS chip. The other decisive factor is that silicon detectors are already widely available from industry in large quantities and their price has been falling.

May 2000 p5 (abridged).

ATLAS: The making of a giant

CCAtl3_10_08

ATLAS is the well deserved name for the largest-volume detector ever constructed at a particle collider. It sits about 100 m underground in a cavern that could accommodate the Arc de Triomphe in Paris. A multipurpose detector, its physics goals range from the search for the Higgs boson and supersymmetric particles to the exploration of extra dimensions and other alternative scenarios.

The ATLAS collaboration was born in the autumn of 1992 from the merging of two existing groups, ASCOT and EAGLE, that had presented different expressions of interest at the meeting in Evian the previous March. By the end of 1994, the ATLAS collaboration had taken shape and submitted the technical proposal. “In summer 1995 the detector was pretty much the same as it is today with the exception of the inner detector, whose technical design report was presented later, in 1997,” says Peter Jenni, (co-)spokesperson of the ATLAS collaboration since the beginning. “When, we submitted the technical proposal in December 1994, all the big decisions, such as which type of calorimeter or magnetic field to use, had already been taken.”

So, after about 15 years in the making, not much has changed from the original design for ATLAS. There were only ever two main turning points. “Until 1997, the design of the precision chambers in the inner detector was not established,” explains Jenni. “The collaboration was hesitating between using microstrip gas chambers and silicon strips in the outer layer. It finally decided to adopt the silicon solution. In 2002, the ATLAS detector underwent an internal financial audit and the resources review board accepted a completion plan with a reduced budget. As a result, the development of some parts of the detector had to be postponed. The impact of such financial cuts was particularly significant on the high-level trigger and data acquisition, but some features of the inner detector, the muon system, the electronics of the calorimeter and the shielding system had to be reviewed as well.” Since then, not all these projects have been completed, and some of them never will be. “However,” says Jenni, “this does not affect the main design or performance of the detector.”

The detector was designed from the beginning to study a range of phenomena. “The initial design requirements of ATLAS were optimized for the search for the Higgs boson and supersymmetric particles,” confirms Jenni. “The Higgs boson always featured strongly because, depending on the mass, the decays to deal with experimentally are very different. Therefore it is an excellent benchmark for making sure you have built a detector with many capabilities.”

If the ATLAS detector has not changed much since 1995, the physics panorama has. New particles have come onto the scene, as well as new scenarios that attempt to describe the first moments of the universe. “ATLAS will be able to study the signature of still-to-be-discovered heavy objects decaying into electron pairs or muon pairs, such as the Z’,” explains Jenni. “The superconducting toroid system allows us to measure muons with great precision, even at the highest luminosity, independently of the inner detector.” Jenni also expects excellent performance in studying signatures from particles arising from possible supersymmetry (SUSY). “Our detector has a particularly good hadronic calorimeter, which will allow us to measure accurately the missing energy associated with the possible existence of SUSY or extra dimensions. Moreover, if there is a graviton-like resonance from extra-dimension scenarios we will have to measure the angular distribution. In this case, toroids have the advantage that the field is optimal also in the forward direction.” In Jenni’s opinion: “The performance of detectors at high luminosity will make the difference in the race for discovery in the long run.”

However, according to the most recent schedules, such high luminosity will not be available at the LHC until 2011 or 2012. In particular, the first protons will collide in the LHC at 5 TeV per beam, rather than at 7 TeV. Instead of being disappointed, Jenni is pragmatic. “We will use the first two-month run to get to know and test the detector with known signatures, such as the W boson and the top quark – 10 TeV at low luminosity will already give us a lot of data for calibration, as well as for understanding all the subdetectors and the chain of data preparation and analysis. Before any discovery can be claimed we first have to show that the known physics is reproduced and that the detector performs well.”

After this first learning phase, the collaboration will be ready for 2009, when the accelerator will run at full energy and increasing luminosity. If the expected Higgs boson really exists, ATLAS will start to record its signatures. “An estimate for finding the Higgs is not before 2010, and even this seems rather optimistic,” says Jenni. “For SUSY or extra dimensions, the time needed to study the signatures depends on the different theoretical models. We could cast light on some of them before the Higgs can be confirmed.”

When it comes to discoveries, an important aspect for the collaboration and for CERN will be how they are disseminated. “The first thing we will take care of is to publish our results in a scientific journal, not in the New York Times,” declares Jenni. “Then will come the sharing of the excitement of the results with the public, and this is a very important aspect. For an experiment like ATLAS, outreach is an important activity. I think that it is crucial to involve active scientists, although scientists do not necessarily know how to deal with it. We will all have to learn how to do it together with CERN.”

ATLAS has been a pioneer in this field, with an attractive website that features video material, interactive games, press kits, regular news and more. “Inside ATLAS we have some communication plans to deal with the publication of the first results. There is already quite a lot of preparation of educational resources to be used to explain how things work. An EU co-funded project has recently received a first approval from the Commission,” continues Jenni. “All this, however, seems rather theoretical for the moment. I feel that we will have to learn how to do things for real.”

In the race for discovery at the LHC, ATLAS is not alone. The collaborations are competitors, but they are also allies, because what one detector sees will have to be confirmed by the others. “Different detectors have made different choices, giving priority to different features (calorimetry, particle-identification systems etc.). Physics will tell us who made the right choice,” confirms Jenni. “Having invested so much in this powerful multipurpose detector, it is clear that the ambition and duty of ATLAS is to exploit the LHC potential to the maximum.”
