Scientists at the D0 experiment discover new path to the top

On 8 December, scientists from the D0 experiment at Fermilab’s Tevatron announced the first evidence for top quarks produced singly, rather than in pairs. The top quark has played a prominent role in the physics programme at the Tevatron ever since it was discovered there nearly 12 years ago. Just before the discovery in 1995, D0 collaborators were already turning their attention to the electroweak production of single top quarks, with theorists suggesting that the cross-section should be large enough to observe in the Tevatron’s proton–antiproton collisions.

A top quark is expected to be produced by itself only once in every 2 × 10¹⁰ proton–antiproton collisions, through the electroweak processes shown in figure 1. Although the cross-section is not much smaller than for top-quark pair-production, the signature for single top production is easily mimicked by other background processes that occur at much higher rates.

To stand a chance of observing the electroweak process, D0 physicists had to develop sophisticated selection procedures, resulting in around 1400 candidates selected from the thousands of millions of events recorded over the past four years (corresponding to 1 fb⁻¹ of collision data). The team expected only about 60 true single top events among all these candidates, so had to exploit every piece of information to establish their presence.
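The numbers above can be tied together with a one-line yield estimate, N = σ·L, using the cross-section D0 reports later in this article; the gap between produced and selected events shows how tight the selection is.

```python
# Expected single-top yield from cross-section and integrated luminosity.
# sigma = 4.9 pb (the measured value quoted in the article);
# L = 1 fb^-1 = 1000 pb^-1 of collision data.
sigma_pb = 4.9          # single-top production cross-section [pb]
lumi_pb_inv = 1000.0    # integrated luminosity [pb^-1]

n_produced = sigma_pb * lumi_pb_inv   # single-top events produced in the data-set
n_selected = 60                       # expected signal events after selection (from the article)
efficiency = n_selected / n_produced  # implied overall selection efficiency

print(n_produced)                 # 4900.0 events produced
print(round(efficiency, 3))       # ~0.012, i.e. about 1% survive the selection
```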

The researchers used three different techniques (boosted decision trees, matrix-element-based likelihood discriminants and Bayesian neural networks) to combine many discriminating features in ways that enable single top quark events to be recognized. In this way they effectively reduced the multidimensional system to a single, powerful variable.
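None of the three D0 techniques is reproduced here, but the general idea of collapsing several variables into a single discriminant can be illustrated with a toy naive-Bayes likelihood ratio (all distributions and numbers below are invented for illustration):

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def discriminant(event, signal_pdf, background_pdf):
    """Collapse several variables into one number: the signal probability
    under a naive product-of-likelihoods model. A toy stand-in for the
    BDT / matrix-element / neural-network discriminants used by D0,
    not their actual algorithms."""
    ls = lb = 1.0
    for x, (mu_s, sig_s), (mu_b, sig_b) in zip(event, signal_pdf, background_pdf):
        ls *= gauss(x, mu_s, sig_s)   # likelihood under the signal hypothesis
        lb *= gauss(x, mu_b, sig_b)   # likelihood under the background hypothesis
    return ls / (ls + lb)             # 1 = signal-like, 0 = background-like

# Two hypothetical variables (e.g. a reconstructed mass and an angle),
# with assumed signal and background distributions (mean, width):
signal_pdf = [(170.0, 15.0), (0.6, 0.2)]
background_pdf = [(120.0, 30.0), (0.0, 0.4)]

print(discriminant([168.0, 0.55], signal_pdf, background_pdf))  # close to 1
print(discriminant([105.0, -0.1], signal_pdf, background_pdf))  # close to 0
```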

With agreement among the three measurements, the D0 team finds the cross-section for single top quark production to be 4.9 ± 1.4 pb, consistent with the Standard Model prediction (D0 Collaboration 2006). They estimate the chance of measuring this value as the result of a background fluctuation at less than 1 in 2800 (3.4σ). This result establishes the first evidence for single top quark production.
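The two quoted numbers are related by the one-sided Gaussian tail probability: a background-fluctuation chance of 1 in 2800 corresponds to about 3.4σ. A quick cross-check using only the Python standard library:

```python
from statistics import NormalDist

# One-sided p-value -> Gaussian significance (z-score).
p_value = 1 / 2800                                 # from the article
significance = NormalDist().inv_cdf(1 - p_value)   # z with tail probability p

print(round(significance, 1))   # ~3.4
```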

The analysis also constrains the magnitude of |Vtb|, an important parameter of the Standard Model’s Cabibbo–Kobayashi–Maskawa (CKM) matrix, which describes how quarks can change from one type to another. If the CKM matrix describes the intermixing of three generations of quarks – with top and bottom forming the third generation – the value of |Vtb| should be close to one. Any departure from this value could be a sign of new physics, be it a new family of quarks or some unforeseen physical process. The D0 result provides the first opportunity for a direct measurement of |Vtb| and constrains it to lie between 0.68 and 1 with a 95% probability, consistent with the presence of only three generations of quarks.

Beyond its intrinsic interest, this analysis is an important milestone in the D0 Collaboration’s continued search for the Standard Model Higgs boson. Higgs production is predicted to occur at rates even smaller than single top quark production, in the presence of substantial “irreducible” backgrounds (single top among them). In this regard, D0 is developing a refined ability to “reduce the irreducible”, exemplified by this analysis and the recent evidence for the associated production of W and Z bosons. These high-level analyses, together with a detailed understanding of the growing data-set, are becoming the backbone of D0’s search for the Higgs boson.

BEPCII makes progress towards switch-on

The BEPCII project, a major upgrade and natural extension of the Beijing Electron–Positron Collider (BEPC), has passed an important milestone with beam now circulating in the outer ring and the synchrotron radiation (SR) beam lines open to users. Obtaining colliding beams will be the next step.

BEPCII consists of two storage rings, with a new ring built inside the original BEPC ring. The two rings will cross at two points to form a collider, with one ring for electrons and one for positrons. BEPCII will operate with beam energy in the range 1.0–2.1 GeV, appropriate for charm production, and with a design luminosity of 1 × 10³³ cm⁻²s⁻¹ at 1.89 GeV. The upgraded collider will also provide improved SR performance, with higher beam energy and photon intensity than at BEPC.

Construction work on BEPCII started at the beginning of 2004. That summer, once the old devices had been removed, new hardware subsystems were installed in the injector linac; commissioning of the upgraded linac followed, demonstrating its design performance. Then, after 16 months of hard work, most of the components for the new inner storage ring had been manufactured and tested.

Installation was completed in early November 2006, with conventional magnets installed in the interaction region to enable commissioning of the outer ring with electrons for SR operation. In the meantime, improvement of the cryogenic system and field mapping of the superconducting magnets will proceed at a position away from the beam line.

Commissioning of the outer ring started on 13 November and a beam position monitor revealed the signal for the first turn of beam on the same day. Then, in the early morning of 18 November, the operators obtained circulating beam without RF and stored beam with RF. At the same time, the hardware systems were tested and debugged. Vacuum conditioning with beam followed, and with improving vacuum, orbit correction and other measures, the beam current in the storage ring and the beam lifetime were increased step by step. At the time this issue went to press, the beam current had reached 200 mA with a lifetime of 4 h at 1.89 GeV.

For SR operation, the beam energy was ramped to the required 2.5 GeV and commissioning with the SR beam lines began. The SR beams were then opened to users from 25 December. “This is a milestone of the BEPCII construction towards its final goal,” stressed Nobel laureate Tsung-Dao Lee during his recent visit to Beijing.

Commissioning with electron and positron beams in preparation for the first collisions will be carried out after one month of operation for SR users. The plan is then to install the superconducting insertion quadrupoles into the interaction region in the summer and to move the new detector BESIII into place in autumn. The first physics run of BEPCII/BESIII is scheduled to start by the end of 2007.

X-ray laser pulses light up the nano-world

An international team of scientists using the soft X-ray free-electron laser FLASH at DESY has achieved a world first by taking a high-resolution diffraction image of a non-crystalline sample with one extremely short and intense laser shot. This first successful application of “flash diffractive imaging” opens a new era in structural research.

The experiment suggests that in the near future images of nanoparticles and even large individual macromolecules – viruses or cells – may be obtained using a single ultra-short high-intensity laser pulse. This would provide new possibilities for studying the structure and dynamics of nanometre-sized particles and molecules without the crystallization required in conventional X-ray structure analysis.

In the experiment at FLASH, the researchers directed a very intense free-electron laser pulse of 32 nm wavelength and 25 fs duration at a test sample, a thin membrane into which 3 μm-wide patterns had been cut (Chapman et al. 2006). The energy of the laser pulse heated the sample up to around 60,000 K, making it vaporize. However, the team was able to record an interpretable diffraction pattern before the sample was destroyed. The image obtained from the diffraction pattern showed no discernible sign of damage, and the test object could be reconstructed to the resolution limit of the detector. Damage occurred only after the ultra-short pulse traversed the sample.

In order to take images of large molecules with atomic resolution, such experiments will have to be carried out using radiation of even shorter wavelengths, i.e. hard X-rays such as the ones that will be produced from 2009 on by the Linac Coherent Light Source (LCLS) in Stanford, or by the European X-ray Free-Electron Laser (XFEL) in Hamburg, which should begin operation in 2013. Since the method demonstrated at FLASH does not require any image-forming optic, it can be extended to these hard X-ray regimes, for which no lens currently exists.

GEM structure makes self-portrait

In 1996 Fabio Sauli at CERN introduced the gas electron multiplier (GEM) – a new idea for gas amplification in particle detection. The concept has since seen increasing use in particle physics and other applications. Recently Ronaldo Bellazzini and his team at INFN/Pisa have used a GEM-based pixel detector illuminated by ultraviolet (UV) light to produce a “self-portrait” of the GEM amplification structure.

Bellazzini uses a UV lamp to illuminate a caesium-iodide photocathode that is also the entrance window of the gas pixel detector. The light intensity is sufficiently low that the device detects only one photon at a time, each producing a single electron. The electron drifts into a single GEM hole, where it knocks further electrons from gas atoms in an avalanche effect. The avalanche initiated by the single electron is extracted, and a fine-pitch CMOS analogue pixel chip, which is also the charge-collecting electrode, provides a direct reading of the amplified charge, measuring the centre of gravity of the avalanche. If the resolution is good and the noise is low, this centre of gravity corresponds to the centre of the GEM hole.

Accumulating thousands of such events produces a map, in effect a “self-portrait”, of the GEM amplification structure, with individual dots only 50 μm apart. The charge-collecting chip has 100,000 pixels arranged in a honeycomb pattern, also at a pitch of 50 μm, giving the read-out system an intrinsic resolution of only 4 μm in response to a single primary electron.
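The centre-of-gravity step is simple to sketch. A minimal, hypothetical example (the pixel coordinates and charge values below are invented; the real chip reads 100,000 pixels):

```python
# Charge-weighted centre of gravity of an avalanche on a pixel read-out.
def centre_of_gravity(hits):
    """hits: list of (x_um, y_um, charge). Returns the (x, y) centroid
    of the collected charge, in micrometres."""
    total = sum(q for _, _, q in hits)
    x = sum(x * q for x, _, q in hits) / total
    y = sum(y * q for _, y, q in hits) / total
    return x, y

# Avalanche charge spread over a few pixels of a 50 um-pitch grid:
hits = [(50, 0, 120), (100, 0, 640), (150, 0, 180), (100, 50, 60)]
x, y = centre_of_gravity(hits)
print(round(x, 1), round(y, 1))   # 103.0 3.0 -- finer than the 50 um pitch
```

Averaging over many pixels is what lets the position resolution (a few μm) beat the pixel pitch (50 μm).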

NSCL reveals plans for $500 million upgrade

A detailed white paper published on 7 December outlines plans for a capability upgrade of the National Superconducting Cyclotron Laboratory (NSCL) located at Michigan State University.

The 415-page document gives a scientific and technical description of a proposed Isotope Science Facility (ISF) that would use a high-power heavy-ion driver linac capable of accelerating beams of all stable elements to 200 MeV/nucleon with up to 400 kW of beam power. Rare isotopes produced and separated in flight will be available as stopped, fast and re-accelerated (up to 12 MeV/nucleon) beams.

The cost of building ISF on an undeveloped site on the university’s south campus is approximately $500 million, according to NSCL officials.

The NSCL publication was followed on 8 December by the unedited prepublication release of Scientific Opportunities with a Rare-Isotope Facility in the United States, a report by the US National Academies. This declared a next-generation rare-isotope facility to be “a high priority for the United States”. The National Academies also found that a new US facility based on a heavy-ion linear accelerator would complement existing and planned international efforts.

Funded by the US National Science Foundation, NSCL is the largest nuclear-science facility on a US university campus and educates about 10% of that country’s nuclear-science doctoral students. It serves an international user community of 700 scientists from 35 countries.

3D map reveals distribution of dark matter

Making a map of something that cannot be seen directly seems almost impossible, but it has been achieved by a wide international collaboration analysing a huge set of observations gathered by the Hubble Space Telescope (HST) and several ground observatories. The result is a 3D map of the distribution of dark matter in a large volume of the universe.

Evidence for dark matter goes back to 1933, when the Swiss astrophysicist Fritz Zwicky, working at the California Institute of Technology (Caltech), deduced that the Coma cluster of galaxies would not remain bound together by gravity without the pull of an additional hidden mass. Over the decades it became clear that dark matter is much more abundant in the universe than visible matter. Recent cosmology results further suggest that most of the dark matter is not ordinary matter composed of protons and neutrons, but is of a yet-to-be-discovered non-baryonic nature (see CERN Courier May 2006 p12).

Direct evidence for dark matter being decoupled from ordinary matter was obtained last year (see CERN Courier October 2006 p9), an observation difficult to reconcile with alternative theories like Modified Newtonian Dynamics (MOND).

Until now, the ability to derive the spatial distribution of dark matter was limited to individual galaxies and clusters. The new study published in Nature goes a step further by tracing dark matter on a much wider scale on the sky and also in depth along the line of sight. The international collaboration led by Richard Massey, also from Caltech, analysed the Cosmic Evolution Survey (COSMOS), a mosaic of 575 pointings of the Advanced Camera for Surveys (ACS) aboard the HST covering almost 2 sq. deg. on the sky, about eight times the area of the Moon. Dark matter manifests itself by the gravitational deflection of light emitted by background sources. This gravitational-lensing effect slightly deforms the shape of remote galaxies. It is this subtle effect that was measured for about 500,000 galaxies in this survey and analysed statistically to derive the projected distribution of dark matter.

The mapping of the dark matter in the third dimension is achieved by carefully selecting galaxies according to distance. Their observed distortion is mainly caused by dark matter at about half this distance. Using this property, a first 3D map of dark matter could be constructed, extending from a redshift of 0 to 1. The survey volume is therefore an elongated cone, rather than a box, extending over about 7 billion light-years. Determining the distance to the galaxies by measuring their photometric redshift required follow-up observations in 15 wavelength bands by four ground-based telescopes.

The map is quite rudimentary, with little evidence for the expected filamentary or “sponge-like” structure obtained by simulations of the building-up of large-scale structures in the universe. Nevertheless, it shows some evolution from a more homogeneous to a more clumpy distribution of dark matter as expected through self-gravity, and it paves the way to future surveys that should enable this kind of study to have higher resolution and cover much greater volumes of the universe.

Further reading

R Massey et al. 2007 Nature doi:10.1038/nature05497.

Recent ISOLDE results revisit parity violation

It was 50 years ago last October that Tsung-Dao Lee and Chen Ning Yang suggested that the invariance under mirror reflection that we experience in everyday physical laws – parity symmetry – might be violated on the microscopic scale by the weak interaction (Lee and Yang 1956). They made this truly revolutionary suggestion to solve the so-called θ–τ puzzle, in which what seemed to be a single particle (now known as the K meson) had two different decay modes of apparently opposite parity. Lee and Yang formulated a description of the weak interaction that enabled parity to be violated, and were later awarded the Nobel prize for their theory.

Proving the theory

Just a few months later, in January 1957, Chien-Shiung Wu of Columbia University and collaborators Ernest Ambler, Raymond Hayward, Dale Hoppes and Ralph Hudson from the National Bureau of Standards in Washington announced that they had successfully confirmed the theory by observing parity non-conservation (PNC) in the beta-decay of a polarized sample of the radioactive ⁶⁰Co nucleus (Wu et al. 1957). PNC has since become a cornerstone of the formulation of the weak interaction and the Standard Model of particle physics, even though its origin remains unexplained.

The experiment used a modified version of a facility at the Bureau of Standards in which the spins of all the nuclei pointed in one direction, a feat that required cooling the nuclei in the presence of a magnetic field to a temperature of a few millikelvin above absolute zero. Under these conditions, the nuclei decayed under the influence of the weak force, emitting beta particles (electrons) and antineutrinos. The antineutrinos could not be observed in this system, but the effect on the electrons was measurable. The team showed that, as suggested by the theory of Lee and Yang, the number of electrons emitted in one direction with respect to the nuclear spin was significantly greater than the number emitted in the opposite direction, clearly indicating that parity was violated. Had it been conserved, equal numbers of electrons would have been observed in both directions.

Parity violation is characteristic of the weak nuclear force, but the strong nuclear force and the electromagnetic force preserve parity. Since the 1950s, these three fundamental forces in nature have been combined in a single theory – the Standard Model. This model suggests that PNC can also manifest itself in processes that are dominated by the strong and electromagnetic interaction, via the weak interaction part in the nuclear Hamiltonian – but such effects are usually minute and very difficult to observe (Adelberger et al. 1985 and Desplanques et al. 1980).

In this article we will focus on measurements of PNC effects in bound nuclear systems, where parity mixing occurs between pairs of nuclear states of the same spin as a consequence of the weak part in the Hamiltonian. A PNC effect in a bound system can be written as:

PNC effect ∝ K |⟨H_PNC⟩| / ΔE

Here ⟨H_PNC⟩ is the matrix element of the weak Hamiltonian, ΔE is the energy difference between the parity-doublet levels and K is a model-dependent amplification factor, related to the ratio between the reduced matrix element of the “normal” gamma decay and the “abnormal” (PNC-enabled) matrix element of the same multipolarity. Table 1 lists several such cases, indicating values of ⟨H_PNC⟩ and estimates of K for each transition.

Among these cases, the mixing of the levels with spin I = 8 in the ¹⁸⁰ᵐHf nucleus provides a unique opportunity to study PNC in the electromagnetic and strong sectors, owing to the very large amplification. This amplification, of around 10⁹, arises from the details of the nuclear structure, such as the proximity of the 8⁺ and 8⁻ levels to each other, and the very different structure of the 8⁻ level with respect to the sequence of positive-parity levels below, to which it decays (figure 3). In the early 1970s, Kenneth Krane and collaborators at the Los Alamos Scientific Laboratory succeeded in observing parity mixing in the decay of ¹⁸⁰ᵐHf, when they measured a left–right asymmetry of about 1% in the emission of the 501 keV gamma transition (Krane et al. 1971a and 1971b).

Revisiting the evidence

This observation has until now been the only clear-cut demonstration of this type of parity violation and, as such, a group of us felt that the case deserved revisiting using the modern radioactive-beam techniques provided by the ISOLDE facility at CERN. During an experiment in October 2005, we produced a beam of ¹⁸⁰ᵐHf nuclei in their isomeric (metastable) 8⁻ level at ISOLDE and implanted it into a magnetized iron foil at around 20 mK inside the NICOLE low-temperature ³He–⁴He dilution refrigerator (figure 2).

By detecting the 501 keV decay gamma-rays (figure 4) in two horizontal germanium detectors situated outside the NICOLE refrigerator, fore and aft (0° and 180°) of the polarization direction, we could determine the left–right asymmetry of the decay – a direct consequence, and a direct proof, of PNC. We used another detector, below the beam line at 90° to the axis of polarization, to monitor the 0°/90° ratio that provides a measure of the nuclear polarization and temperature.

The results we obtained show the parity-violating effect in the 501 keV gamma transition (figure 1) in close agreement with the previous experiments; the analysis yields an asymmetry of about 1% (Stone et al. 2007). So the present experiment re-establishes the case of ¹⁸⁰ᵐHf as the prime example of PNC in bound nuclear systems, a fitting tribute, 50 years on, to the work of the pioneering scientists.

SN1987A heralds the start of neutrino astronomy

Twenty years ago researchers observed neutrinos from the supernova SN1987A – the first detection of neutrinos from beyond our solar system. Underground detectors are now waiting to study the explosion and neutrino properties of the next nearby supernova.

In the early 1980s scientists built the first big detectors underground to search for nucleon decays. Grand unified theories (GUTs), proposed in the late 1970s, unify the strong, weak and electromagnetic interactions. They predict that quarks can be transformed into leptons and that even the lightest hadron, the proton, can decay to lighter particles, such as electrons, muons and pions. The predicted lifetime of the proton was then about 10³⁰ years, inspiring the construction of detectors weighing several thousand tonnes. The Irvine–Michigan–Brookhaven (IMB) detector in the US, which started data-taking in 1982, was a Cherenkov detector with 7000 tonnes of water viewed by 2048 5-inch photomultiplier tubes (PMTs) (figure 1). It was soon followed by the Kamiokande water Cherenkov detector in Japan, a 3000 tonne detector with 1000 20-inch PMTs, which started up in 1983 (figure 2). Unfortunately, these detectors could not detect a proton-decay signal, because the lifetime of the proton ultimately proved to be much longer than the early GUTs had indicated.

In 1984–85 the Kamiokande collaboration upgraded their detector to look for solar neutrinos. Previously, the only detector searching for solar neutrinos was the Homestake experiment of Ray Davis and colleagues, which observed a solar-neutrino flux of about one-third of that predicted by the standard solar model. This was the famous “solar-neutrino problem”, and further experiments were needed to resolve the discrepancy. To detect solar neutrinos, the Kamiokande team installed new electronics to record the timing of each PMT hit. They also constructed an anticounter to reduce gamma rays from the surrounding rock and improved the water purification to reduce the radon background. The IMB collaboration upgraded their 5-inch PMTs to 8-inch PMTs to lower the detector’s energy threshold.

Supernova!

On 23 February 1987 at 07:35 UT, while the Kamiokande detector stood ready to detect solar neutrinos, it observed neutrinos from SN1987A. The progenitor of the supernova was a blue giant in the Large Magellanic Cloud, 170,000 light-years away. The Kamiokande detector observed 11 events and the IMB detector registered eight. Researchers at the Baksan underground experiment in Russia later analysed their data for the same period and found five events. The observed neutrino burst lasted about 13 s (figure 3).

The theory of stellar evolution predicts that the final stage of a massive star (typically more than eight solar masses) is a core collapse, leaving behind a neutron star or a black hole. As the temperature and density at the centre of the star increase, nuclear fusion produces ever heavier elements. This leads finally to an iron core of about one solar mass; further nuclear fusion is prevented because iron has the largest binding energy per nucleon of all elements. When the core becomes gravitationally unstable, it collapses and triggers the supernova explosion.

The gravitational potential energy of the collapsing core sets the energy released, about 3 × 10⁵³ erg. Predictions indicated that neutrinos would carry away most of this energy, since other particles, such as photons, are easily trapped by the dense material of the star. Researchers used the energies and number of events observed by Kamiokande and IMB to estimate the energy released in neutrinos from SN1987A, which was found to agree very well with expectations. This result confirmed the fundamental mechanism of a supernova explosion.
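The quoted 3 × 10⁵³ erg can be reproduced to order of magnitude from the gravitational binding energy of the remnant, E ≈ (3/5)GM²/R for a uniform sphere. A rough sketch, assuming typical neutron-star values (M = 1.4 solar masses, R = 10 km; neither figure is given in the article):

```python
# Order-of-magnitude check of the supernova energy release, in CGS units.
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33      # solar mass [g]
M = 1.4 * M_sun       # assumed neutron-star mass [g]
R = 1.0e6             # assumed neutron-star radius: 10 km in cm

E = 0.6 * G * M ** 2 / R   # binding energy of a uniform sphere [erg]
print(f"{E:.1e} erg")       # a few times 10^53 erg
```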

There has been extensive work to simulate the explosion of a supernova, taking into account the detailed nuclear physics and with the recent addition of multi-dimensional calculations. However, no simulation has produced an explosion. Something seems to be missing and further investigation and more experimental data are needed. Although the observation of neutrinos from SN1987A confirmed the supernova scenario, the observed number of events was too small to reveal details of the explosion.

The next event

More recent underground detectors will give very valuable information when the next supernova burst occurs. The Super-Kamiokande detector has a photo-sensitive volume of 32,000 tonnes viewed by 11,129 20-inch PMTs. It can detect about 8000 neutrino events if a burst occurs at the centre of our galaxy (a distance of about 10 kpc). Super-Kamiokande should be able to measure precisely the time variation of the supernova temperature by detecting the interactions of emitted antineutrinos on free protons. Neutrino–electron scattering events, which are about 5% of the total events, should pinpoint the direction of the supernova.

The kilotonne-class liquid-scintillator detectors, LVD in the Gran Sasso National Laboratory and KamLAND in Japan, will give additional information, as they have a lower energy threshold and contain carbon. The IceCube detector, currently being built at the South Pole, can detect a supernova neutrino signal as a coherent increase in the dark rate of its PMTs.

Although the supernova rate expected in our galaxy is only one every 20–30 years, a detection would provide an enormous amount of information. Scientists are proposing megatonne-class water Cherenkov detectors to detect proton decay and investigate neutrino physics, for example CP-violation in the lepton sector. If such detectors are built, they could observe a supernova in nearby galaxies every few years.

Supernovae have occurred throughout the universe since soon after the Big Bang. The integrated flux of neutrinos from all past supernovae, known as supernova relic neutrinos (SRN), is intriguing. The expected SRN flux is a few tens per square centimetre per second. The first five years of data from Super-Kamiokande gave an upper limit on the flux about three times higher than this expectation. With improved detection techniques, it may soon be possible to observe the SRN.

The neutrino data from SN1987A also yielded results in elementary-particle physics: a limit on the mass of the neutrino of less than 20 eV/c² (which in 1987 was competitive with laboratory limits) and an upper limit on the neutrino lifetime. Future supernova data could provide something new in elementary-particle physics; for example, if the neutrino-mass hierarchy is inverted and a close supernova is detected, the energy spectrum of supernova neutrinos could reveal the hierarchy.
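The origin of such a mass limit can be sketched numerically: a neutrino of mass m and energy E lags a light signal by roughly Δt ≈ (d/c)·m²c⁴/(2E²), so a mass near the limit would smear the burst by about as much as its observed ~13 s duration. The distance and mass limit are from the article; E = 10 MeV is an assumed typical detection energy.

```python
# Arrival-time delay of a massive neutrino relative to light.
SECONDS_PER_YEAR = 3.156e7
d_over_c = 170_000 * SECONDS_PER_YEAR  # light-travel time to the LMC [s]
m = 20e-6        # neutrino mass: 20 eV expressed in MeV
E = 10.0         # assumed typical neutrino energy [MeV]

dt = d_over_c * m ** 2 / (2 * E ** 2)  # delay relative to light [s]
print(round(dt, 1))   # ~11 s, comparable to the ~13 s burst duration
```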
A conference to discuss supernova data from the past 20 years and what could be learned from a future supernova will be held at Waikoloa, Hawaii, on 23–25 February 2007.

Quadripôles: un transfert réussi vers l’industrie

Mi-novembre 2006, la 392 e et dernière masse froide d’un quadripôle principal du LHC était livrée au CERN. L’arrivée de cet aimant destiné à focaliser les faisceaux du LHC concluait une collaboration de 17 ans entre le CERN, le CEA-Saclay et l’industrie européenne.

La conception, les essais et la fabrication des aimants quadri­pôles du LHC ont été réalisés dans le cadre de la contribution exceptionnelle de la France au LHC. En 1996, le CERN, le CEA et le CNRS signaient un protocole d’accord pour le futur grand accélérateur en présence du Ministre Français de l’Education nationale et du Secrétaire d’Etat à la recherche. Au terme de cet accord, le département Dapnia du CEA-Saclay réalisait l’étude, la fabrication de trois prototypes, le lancement de la production dans l’industrie et le suivi de la fabrication des masses froides des sections droites courtes. Le CNRS prenait en charge l’étude des cryostats et de l’assemblage des sections droites courtes (p27).

En réalité, l’accord venait formaliser une collaboration entamée à la fin des années 80, reposant sur le savoir faire du CEA éprouvé avec la fabrication des quadripôles supraconducteurs de la machine HERA de DESY à Hambourg. À partir de 1989, deux prototypes de quadripôles avaient été conçus par le CEA-Saclay, dont l’un avait été testé au sein de sa section droite courte dans la première chaîne de test du LHC dès 1994. La signature de l’accord de collaboration donnait toutefois un nouvel élan à la collaboration. Le CEA et le CNRS s’engageaient sur une importante contribution en ressources humaines: 200 hommes-an étaient dévolus à quatre domaines techniques spécialisés, dont 75 hommes-an du CEA pour les masses froides des quadripôles. A la fin de la collaboration, la contribution du CEA-Saclay pour les quadripôles se sera en réalité élevée à 92,5 hommes-an.

Fin 1996, les paramètres des aimants quadripôles étaient définis. Une particularité de ces aimants tient à leur grande variété. Les 360 masses froides des arcs comptent 40 variantes et les 32 unités destinées aux régions de suppression de dispersion comptent 16 variantes. Cette diversité est due aux multiples combinaisons d’aimants correcteurs montés aux deux extrémités des quadripôles, à l’intérieur des masses froides. De surcroît, les quadripôles peuvent avoir une fonction focalisante ou défocalisante. Enfin, les interfaces vers le cryostat et vers la ligne d’alimentation en hélium liquide diffèrent également.

Cette complexité et les évolutions de la machine dans son ensemble expliquent que l’appel d’offre dans l’industrie n’ait été lancé que trois ans plus tard, fin 1999. A son terme, l’entreprise allemande ACCEL Instruments s’est vue attribuer la construction des aimants quadripôles et leur assemblage dans leur masse froide. Pour accueillir cette production, ACCEL a spécialement transformé deux immenses halls industriels désertés, à Troisdorf près de Bonn. Une fosse de huit mètres de profondeur a été creusée et aménagée afin d’assurer l’assemblage des masses froides à la verticale.

L’outillage et les procédures de fabrication avaient été développés pendant la première phase de la collaboration. Pour préparer la fabrication en série dans l’industrie, le CEA-Saclay avait en effet écrit les spécifications pour les outillages de bobinage, de frettage des ouvertures et d’assemblage des culasses ainsi que pour le montage des composants dans leur masse froide. Des méthodes de vérification avaient été développées avec le CERN. Dès avril 2001, le CEA-Saclay débutait le transfert de la technologie et de l’outillage développés pour les cinq premières masses froides.

La production d’une masse froide consiste à bobiner quatre bobines supraconductrices, puis à les fretter dans des colliers en inox qui doivent résister aux forces électromagnétiques. Les performances de l’aimant dépendent de la précision et de la qualité du bobinage et du frettage. Le bobinage doit être réalisé avec une précision de l’ordre de la vingtaine de micromètres pour le positionnement du conducteur sur une longueur de 3,2 mètres. Deux ouvertures frettées sont montées dans une culasse commune constituée de tôles poinçonnées en acier à faible teneur en carbone. Afin d’augmenter sa capacité de production, ACCEL s’est équipé d’outillages supplémentaires. L’étape la plus délicate était d’obtenir des bobines régulières avec ces nouveaux outillages. Le transfert de savoir-faire et le suivi de la production impliquaient une présence régulière des experts du CEA dans l’entreprise. Deux techniciens du CEA-Saclay ont assuré le transfert de technologie chez ACCEL. De même, le démarrage de la fabrication a été suivi par deux techniciens en permanence et un ingénieur du CEA-Saclay une semaine sur deux.

The magnets and cold masses underwent electrical and mechanical measurements after each manufacturing step. Before delivery, pressure and leak tests were carried out with dedicated equipment. A non-conformity management system was an important tool for monitoring production. Nevertheless, the delay of several weeks between the manufacture of a magnet and its cold test at CERN made error correction difficult: many cold masses were built during that interval, so any deviation had to be identified as early as possible to be corrected.

In mid-2002 the first quadrupole magnet left the factory. Tested at CERN, it showed excellent performance: whereas the required nominal current is 11,870 A, the first resistive transition (quench) occurred at 12,631 A. This first test confirmed the reliability of the design and allowed series production to proceed.
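As a quick sanity check on the figures quoted above, the margin of that first quench over the nominal operating current can be computed directly. This is a minimal sketch using only the two currents given in the text; the variable names are illustrative:

```python
# Quench margin of the first series quadrupole, from the currents quoted above.
nominal_current_a = 11_870   # required nominal operating current (A)
first_quench_a = 12_631      # current at the first resistive transition (A)

# Relative margin of the first quench above the nominal operating point
margin = (first_quench_a - nominal_current_a) / nominal_current_a
print(f"margin above nominal: {margin:.1%}")  # about 6.4%
```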

Producing such complex components was nevertheless not without difficulty. The ramp-up in production rate was partly delayed by the delivery times of components supplied by CERN and its contractors. A large share of these components (the superconducting cable, the metal for the collars and yokes, the corrector magnets, the bus-bars and the diodes) were manufactured by other firms and laboratories, and suffered quality and schedule problems. When production was at the halfway point, tests revealed magnetic-permeability values that were too high in the austenitic steel of about 10% of the collars. The decision to place the affected magnets judiciously in the machine, so that the parasitic effects cancel, limited the delay. The steel supplier improved the quality of the laminations for the subsequent batches of collars.

At the peak of production, four cold masses were produced each week. In December 2004, delivery of the 100th cold mass was celebrated. In November 2006, 10 years after the signature of the CERN–CEA collaboration agreement and six years after the contract with ACCEL, production of the cold masses for the LHC's main quadrupoles was complete.

The close collaboration between CEA-Saclay and CERN was the driving force behind this success. The two laboratories combined their expertise and know-how in a spirit of mutual trust, following well-established quality-control procedures. The technical difficulties could thus be overcome, and the innovative manufacturing technology was transferred to an industrial company that proved willing and fully capable of carrying out this complex fabrication.

SSS: a collaborative winning gamble

The last of the 474 short straight sections (SSS) for the LHC have been assembled at CERN. These beam-focusing magnet assemblies contain, among other components, the main superconducting quadrupoles, and they were developed and produced as part of the special French contribution to the LHC project. CNRS (Institut de physique nucléaire d'Orsay) designed the 136 variants of the SSS in collaboration with CERN; more than a thousand technical drawings were needed to document the project. Following the insolvency of the company in charge of production, CERN took over the assembly, showing that a laboratory can successfully carry out industrial work.

The last of the LHC's 474 short straight sections are being completed at CERN's Prévessin site. The success of this assembly work, which began four years ago, is the fruit of close cooperation between CERN and its industrial partners. It also marks the culmination of a 10-year collaboration between CERN and CNRS.

CCEsss1_01-07

The short straight sections (SSS) are the assemblies containing the LHC's main superconducting focusing quadrupoles, manufactured by the German company ACCEL, as well as the insertion quadrupoles assembled at CERN. Besides these main magnets, the SSS incorporate a wide variety of corrector magnets, instrumentation and diagnostic systems, current leads, and cryogenic and vacuum components.

Integrating all these elements within the tight space of the cryostats, while meeting the stringent specifications on thermal loads to the cryogenic system, was a challenge for the engineers and designers at CERN and CNRS. The design was further complicated by movements within the cryostat: the thermal contractions caused by temperature changes (the LHC magnets are cooled to 2 K) make the components move, yet the stability and precise geometric positioning of the assembly must nonetheless be maintained. Given the large number of units to assemble, another challenge lay in developing assembly methods on an industrial scale. Finally, a comprehensive quality-assurance plan had to be implemented to ensure that the specifications were met.

CCEsss2_01-07

After the approval of the LHC project in December 1994, the final design of the SSS began in February 1996, when CERN and the two French institutes, CEA and CNRS, signed a collaboration protocol providing for several technical implementation agreements. The first agreement, between CEA-Saclay and CERN, covered the construction of the cold masses of the superconducting quadrupoles (p25). The second, between CNRS's IN2P3 and CERN, covered the industrial design of the SSS cryostats and of all the equipment needed for their assembly, carried out by the design office of the accelerator division of the Institut de physique nucléaire (IPN) at Orsay.

Under this second agreement, CERN could draw on its competence in cryostat design and on the experience gained in building the first two SSS prototypes, tested in the first LHC test string as early as 1994. CERN was responsible for defining the main design parameters and for steering the project. CNRS, relying on its engineering resources and design office, was tasked with the detailed design of the cryostat, the design of the SSS assembly tooling, participation in building two prototypes for the second LHC test string, and the launch and follow-up of series manufacturing and assembly.

CCEsss3_01-07

The short straight sections come in a large number of variants, and their assembly is complex. The 474 units comprise 60 types of cryostat which, combined with the different cold masses, give 136 variants in total. To document such an ensemble, CNRS produced more than a thousand technical drawings and some forty documents, from calculation notes to specifications. All these documents were validated by CERN engineers and are now available in the engineering data management system.

To cope with the complexity of the project and bridge the distance between CERN and the IPN at Orsay, information technology proved indispensable for project management. CERN developed computing tools for communication, document approval and archiving, and data management that were well suited to this remote collaboration. These tools, such as EDMS (Engineering Data Management System), the CDD (CERN Drawing Directory), and routines for transferring CAD drawings and converting them to HPGL format, were used extensively. A crucial role was played by the project review meetings held throughout the collaboration, which ensured proper steering at each critical stage.

In the autumn of 2002, following the insolvency of the German company BDT, which was in charge of manufacturing and assembling the SSS, CERN took the work in-house. This major strategic decision to internalize the assembly on its own site avoided the inevitable delays that a second call for tenders would have caused. A former CERN workshop was refurbished within a few months and became operational in the autumn of 2003. While CERN took over the procurement of components manufactured by some ten European companies, a small team of engineers and technicians from the laboratory organized the workshop, planned production, developed the industrialization procedures and wrote the quality-assurance plan. The work itself, under a results-based contract, was entrusted to the ICS consortium, already in charge of assembling the cryostats of the LHC dipole magnets at CERN. Two companies were chosen for quality control: the French Institut de Soudure, to check the conformity of the welds, and the Air Liquide-40/30 consortium, to verify the leak-tightness of the cryogenic and vacuum circuits.

Assembling a short straight section essentially consists of inserting a cold mass, first fitted with thermal shields, into a vacuum vessel that thermally insulates the magnets operating at 2 K. The largest part of the work, however, is integrating the components specific to each SSS. The components to be assembled are first inspected, tested and prepared in kits according to the type of section to be built, an approach that earned the worksite the humorous nickname "Legoland". Managing stocks of more than 400 kinds of components, to be combined into 136 types of assembly, proved a real headache, justifying a dedicated logistics platform run by three people full-time. The assembly techniques commonly used are mechanical fitting, sheet-metal work, TIG and MIG welding of stainless steel and aluminium, copper/stainless-steel brazing and the soldering of superconducting cables.

Punctuating the assembly phase, the intermediate inspections and tests needed to validate the quality of the work included geometric checks, continuity and insulation tests on the magnets' electrical circuits and their instrumentation, magnet polarity tests to catch wiring errors, and weld inspections (visual and X-ray). Leak tests of the cryogenic and vacuum circuits were also carried out with helium mass-spectrometer leak detectors. More than 5 km of leak-tight welds for the cryogenic circuits were made and more than 2500 leak tests performed, allowing 73 leaks to be located and repaired. This rigorous quality-assurance plan intercepted more than 550 critical non-conformities, both during assembly and at the reception of components.

The figure, showing the SSS production rate as a function of time, clearly illustrates the learning curve leading to mastery of the assembly processes and of the organization of production: only one short straight section was assembled per month in 2003, against at least 20 per month in 2006. Assembling each SSS took between two and four weeks, depending on its complexity. At the peak of activity, 50 workers and technicians were present in the workshop.

The completion of this project marks the end of a very rich and unique experience. The 10-year duration of the collaboration, the complexity of the LHC and the cutting-edge technologies it demanded all posed challenges for technical management, resource management and cooperation between teams from two institutes that were culturally and geographically distant. The challenge of bringing the assembly in-house was met, proving that it is possible to carry out industrial work within a research laboratory itself. This repatriation of the assembly to CERN, feared at the outset, quickly showed its advantages: with immediate and permanent access to the workshop, CERN could anticipate problems and steer the work reactively, a key to its success.
