RF antennas help unravel cosmic rays

The origin of ultra-high-energy cosmic rays (UHECR) observed at energies above 10¹⁹ eV is a mystery that has stimulated much experimental and theoretical activity in astrophysics. When cosmic rays penetrate the atmosphere, they produce showers of secondary particles and corresponding radiation, which in principle yield information on the particle tracks, energy and origin of the primary cosmic rays. The CODALEMA experiment in Nançay has recently measured the radio-electric-field profiles associated with these showers on an event-by-event basis. These novel observations are directly connected to the shower’s longitudinal development, which is related to the nature and energy of the incident cosmic ray.

There have been previous partial studies of radio emission from showers, so why is this result so promising? The UHECR flux is very low (a few per square kilometre per century), so detecting their secondaries at ground level requires arrays covering thousands of square kilometres, often combined with other techniques such as the detection of fluorescence emission. However, the latter method is limited to the optical domain, where the need for moonless skies and suitable environmental conditions results in a maximum duty cycle of only about 10%. The radio-detection technique therefore offers an interesting, if challenging, alternative (or complementary) method, in which a single antenna array can provide the very large acceptance and sensitive volume needed to characterize rare events such as UHECR.
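
To see why such large areas are needed, a back-of-envelope rate estimate (taking, purely for illustration, a flux of one event per square kilometre per century and an Auger-scale area of 3000 km²) runs as follows:

```latex
N \;\approx\; \Phi\,A\,T
  \;\approx\; \frac{1~\mathrm{event}}{\mathrm{km^{2}\,century}}
  \times 3000~\mathrm{km^{2}} \times 1~\mathrm{yr}
  \;\approx\; 30~\mathrm{events~per~year}.
```

Only instruments with acceptances on this scale, whatever the technique, can accumulate a useful sample of events above 10¹⁹ eV.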

There is also another important argument for the radio technique: because the distance between radiating particles is several times smaller than typical radio wavelengths, the individual particles radiate in phase. This results in a coherent type of radio emission that dominates all other forms of radiation, with a corresponding electromagnetic radiated power proportional to the square of the deposited energy. In air, the coherent radiation builds up at frequencies up to several tens of megahertz, while in dense materials, more compact showers can give coherent radiation up to several gigahertz. Gurgen Askar’yan first suggested the production of radio emission in air showers in 1962, and some observations were reported in the 1960s and 70s (Allan 1971, Gorham and Saltzberg 2001). The electronics available at the time made the measurements unreliable, however, and researchers abandoned the technique in favour of direct ground-particle or fluorescence measurements.
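
The quadratic scaling mentioned above can be made explicit: for N charges radiating in phase, the field amplitudes add before squaring, so schematically (ignoring geometry and the detailed frequency dependence):

```latex
E_{\mathrm{tot}} \approx N\,E_{1}
\quad\Rightarrow\quad
P_{\mathrm{coh}} \propto |E_{\mathrm{tot}}|^{2} = N^{2}P_{1},
\qquad
N \propto E_{\mathrm{primary}}
\;\Rightarrow\;
P_{\mathrm{coh}} \propto E_{\mathrm{primary}}^{2},
```

since the number of shower particles grows roughly in proportion to the energy deposited by the primary.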

High-performance digital signal-processing devices have now made feasible the sampling of radio-frequency (RF) waveforms with large frequency bandwidth and high time resolution. Exploiting these new possibilities, the SUBATECH Laboratory, Nantes, and the Paris Observatory have developed the CODALEMA (Cosmic ray Detection Array with Logarithmic Electro-Magnetic Antennas) experiment on the site of the Nançay Radio Observatory.

For its first phase, CODALEMA has used some of the 144 log-periodic antennas of the decametric array of the Nançay Observatory distributed along a 600 m baseline. All the antennas are band-pass filtered (24–82 MHz) and linked, after RF signal wide-band amplification, to fast-sampling digital oscilloscopes (figure 1).

In its first running period, CODALEMA has established the appropriate conditions for the analysis of the antenna data, either in stand-alone mode or in coincidence with a set of particle detectors acting as a trigger. Figure 2 shows four cosmic-ray events identified at different zenith angles. It illustrates how the results reveal the dependence of the electric field on the distance (in metres) between the antenna and the shower impact point, at energies around 10¹⁷ eV. They show for the first time the richness of the information contained in the longitudinal shower development measured by radio detection.

First, the device is sensitive to field amplitudes down to 1 μV/m/MHz, a measurement that is free from the particle-number fluctuations encountered in particle detectors. This allows detailed analysis of the field amplitude and its dependence on the energy and nature of the incident cosmic ray. It is remarkable that this sensitivity extends to distances presumably greater than 600 m from where the shower hits the ground. The clear zenith-angle dependence of the field profile illustrates the large angular acceptance of the array, and demonstrates for the first time the sensitivity of this detection method to the development of the shower, related to the sequence of charge generation. Additionally, a Fourier-transform analysis of the signal revealed a possible frequency dependence of the signal on impact parameter, a quantity strongly connected to the physical characteristics of the air shower.

Finally, the study of the stand-alone antenna mode has clearly established that the transient character of the radio signal can be used safely to determine the arrival direction (with an accuracy of around 0.7°) and to reconstruct full event waveforms.
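
To illustrate how transient arrival times across an array constrain the arrival direction, the sketch below fits a plane-wave front to a set of antenna positions and trigger times by least squares. It is not the CODALEMA reconstruction code; the array layout and timing numbers are invented for the example.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def fit_plane_wave(positions, times):
    """Fit a plane-wave front to antenna trigger times.

    positions : (N, 2) antenna (x, y) coordinates on the ground, in metres
    times     : (N,)  signal arrival times, in seconds
    Returns the horizontal vector u along which the wavefront sweeps the
    ground, from the least-squares solution of c*(t_i - t_0) = u.(r_i - r_0);
    its modulus equals the sine of the zenith angle.
    """
    r0 = positions.mean(axis=0)
    t0 = times.mean()
    A = positions - r0               # relative antenna positions
    b = C * (times - t0)             # light-travel path differences
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

# Toy event: plane wave with zenith 40 deg, azimuth 30 deg over a 600 m array
az, zen = np.radians(30.0), np.radians(40.0)
u_true = np.sin(zen) * np.array([np.cos(az), np.sin(az)])
pos = np.array([[0.0, 0.0], [600.0, 0.0], [0.0, 600.0], [600.0, 600.0]])
t = pos @ u_true / C                 # ideal (noise-free) arrival times
u_fit = fit_plane_wave(pos, t)
print("zenith  %.2f deg" % np.degrees(np.arcsin(np.linalg.norm(u_fit))))
print("azimuth %.2f deg" % np.degrees(np.arctan2(u_fit[1], u_fit[0])))
```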

These results clearly demonstrate the interest of a complete re-investigation of the radio detection of UHECR that Askar’yan first proposed in the 1960s. Only two experiments in the world have undertaken this type of analysis with atmospheric showers: CODALEMA and the LOPES experiment, which is studying the same RF domain (Isar et al. 2006). The latter works as an extension of the triggering multi-detector set-up KASCADE-Grande in Karlsruhe. Other active experiments use dense materials such as ice, salt or lunar regolith, and therefore study higher frequency domains; these include RICE and ANITA (Antarctic), the FORTE satellite (Greenland), SALSA (Mississippi) and GLUE (NASA-Goldstone).

In addition, the CODALEMA results indicate specific features that encourage the use of this technique as a complementary method to experiments based on large ground detector arrays, such as the Pierre Auger Observatory. The CODALEMA collaboration is currently investigating this possibility. It is also considering exploiting the large zenithal acceptance for the challenging study of very inclined showers, which correspond to large slant depths in the atmosphere (or Earth). For example, high-energy neutrinos – although suppressed by the Earth’s opacity and barely accessible to other techniques – can interact at any point along such trajectories, producing τ particles and subsequent detectable “young” showers in the atmosphere. Another interesting application is the characterization of distant storms and very energetic atmospheric radiation events of currently unknown origin. An upgraded experimental set-up has been running since November 2006, consisting of 16 antennas with new active wide-bandwidth dipoles, together with 13 particle detectors to allow shower-energy determination for calibration purposes.

• The CODALEMA collaboration comprises three laboratories from the Institut National de Physique Nucléaire et de Physique des Particules (IN2P3): SUBATECH/Nantes, LPSC/Grenoble and LAL/Orsay; two laboratories from the Institut National des Sciences de l’Univers (INSU): Observatoire de Paris/LESIA and LAOB/Besançon; the LPCE/Orleans CNRS laboratory; and the private ESEO Angers Institute.

Zurich workshop faces the LHC’s precision challenge

With the imminent start-up of the LHC, particle physics is about to enter a new regime, which should provide solutions to puzzles such as the origin of electroweak symmetry-breaking and the existence of supersymmetry. The LHC will produce head-on collisions between protons or heavy ions, but at the fundamental level these come down to interactions between partons, that is, quarks and gluons. For this reason, all the interesting new reactions will be initiated essentially by quantum chromodynamic (QCD) hard scattering, and any claim for new physics will require a precise understanding of known Standard Model processes.

To prepare for the “precision challenge” at the LHC, the particle theory groups of ETH Zurich and Zurich University organized a workshop on High Precision for Hard Processes (HP2). The three-day workshop took place on the ETH campus in early September 2006, involving about 65 participants. These included 15 diploma and doctoral students, indicating that precision calculation for the LHC is a lively field that attracts many young researchers.

HP2 addressed the precision challenge with reviews of results from Fermilab’s Tevatron, expectations at the LHC and measurements of parton distributions. A few benchmark reactions, such as single-inclusive jet-production and vector-boson production, will already be accessible with very low luminosity at the LHC. These can provide precise constraints on the proton structure, which is relevant to all hadron-collider reactions.

Likewise, precision studies, such as investigating the properties of the top quark, demand a better description of the full characteristics of an event. These will include improved jet-algorithms and a better understanding of soft physics in hard interactions. Searches will often involve multiparticle final states, calling for a precise description of high-multiplicity processes.

Searches for physics beyond the Standard Model will have to cover a variety of different theoretical scenarios of electroweak symmetry-breaking. These include supersymmetry, higher-dimensional, Higgs-less or composite-Higgs models, and many other alternatives. Distinguishing models based on experimental observations could become very difficult, since signatures often look similar. The study of this variety of new models therefore calls for flexible tools that allow the prediction of all observable consequences of a new model simultaneously. With leading-order event-generator programmes now including generic new-physics scenarios, we are clearly on the way to more systematic studies.

While these leading-order studies should give a first overview of the general features of signal and background processes, allowing the design of cuts and the optimization of search strategies, they will often be insufficient when precision is required. This will be the case either in discriminating among similar models or if signals are likely to be spread out over a continuous background.

Until now, next-to-leading order (NLO) calculations have been carried out on a case-by-case basis. Several speakers presented new results at the workshop, including Higgs-plus-2-jet production through QCD processes and vector-boson fusion; top-quark-plus-1-jet production; scalar-quark effects in Higgs production; and corrections to the decay of the Higgs boson into four fermions.

All of these calculations were major, time-consuming projects, and it is becoming increasingly clear that the large number of phenomenologically relevant reactions for LHC physics calls for automated techniques for NLO computations. While generating the appropriate real and virtual Feynman diagrams can be automated, the evaluation of the one-loop diagrams poses a major bottleneck, since standard one-loop methods applied to multi-particle processes result in over-complicated and numerically unstable results. The search for an efficient and automated method for NLO calculations was a major focus of the workshop.

Techniques proposed for the automated computation of one-loop virtual corrections to multi-particle processes range from purely numerical to fully analytical methods. In the purely numerical approaches, one searches for isolated, potentially singular contributions to the loop integrals at the level of the integrand, and subtracts them using universal subtraction factors. Semi-numerical techniques aim to perform a partial simplification of the one-loop integrals to a not-necessarily-minimal basis, which ensures maximum numerical stability. The purely analytical methods aim for an expression in a minimal basis. The workshop heard of much progress in this direction.

A substantial number of talks addressed the application of twistor-space techniques. Originally proposed as a new method to understand better the mathematical structure of tree-level amplitudes, the twistor-space formulation has proven fruitful for simplifying loop amplitudes. The twistor-based coefficients are a crucial ingredient in reconstructing one-loop amplitudes from their cuts. In supersymmetric theories, this procedure yields the full one-loop amplitude for any process. In the Standard Model, however, the rational parts of the amplitudes escape the cut construction.

Fortunately, this is not a major difficulty. Exploiting the detailed analytical properties of the amplitudes or generalizing the unitarity-cut method from four dimensions to general dimensions, the rational parts can also be calculated in a systematic manner. Applications of these new tools range from the phenomenology of Higgs bosons and one-loop multi-parton amplitudes to loop corrections in supergravity amplitudes.

For a number of benchmark processes, typically of low multiplicity, even NLO accuracy is not sufficient. The workshop heard progress reports on next-to-next-to-leading-order (NNLO) calculations of the Drell–Yan process and of three-jet event shapes in e⁺e⁻ annihilation. New methods to perform NNLO calculations and to predict the singularity structure of QCD amplitudes at NNLO and beyond are paving the way for further progress in this area.

Very often, particular terms become large at any order in perturbation theory, necessitating an all-order resummation. For the leading logarithmic corrections to arbitrary processes, this can be performed using parton showers. Over the last few years we have seen major progress in this area, with NLO corrections being included into the parton showers on a process-by-process basis. Additionally, a number of important hard processes have been included recently in the MC@NLO event-generating program. There have also been suggestions for new implementations of parton showers that aim at more efficient systematic methods for the inclusion of the NLO corrections.

Sub-leading corrections need to be determined on a case-by-case basis, and speakers reported on new results for Higgs and W-pair production. On the more formal side, these resummation approaches can be used to predict dominant terms at three loops, and to obtain an improved understanding of universal soft behaviour and of the high-energy limit of QCD. While resummation approaches have long been considered independent of higher-order calculations, the workshop clearly illustrated that both areas can have a fruitful interchange of ideas and methods.

In summarizing the highlights from the conference, Zvi Bern from the University of California at Los Angeles emphasized that theory is taking up the LHC’s precision challenge. Progress on many frontiers of high-precision calculations for hard processes will soon yield a variety of improved results for reactions at the LHC, providing experimental groups with the best possible tools for precision studies and new physics searches.

The next HP2 meeting will be in Buenos Aires, Argentina, in early 2009, when there should be plenty of discussion on the first data from the LHC.

Histoire de la radioactivité

by René Bimbot, Vuibert. Paperback ISBN 9782711771943, €35.

In his Histoire de la radioactivité, René Bimbot addresses a very wide audience in a limpid style, and his book reads easily, all the more so because numerous graphs and illustrations aid comprehension. Along the way, the reader can also linger over boxes explaining some simple notions of physics, accessible at the level of the final years of secondary school, which, at the cost of a minimal effort, allow a deeper understanding of the subject.

Short profiles present, in a few paragraphs, the researchers who wrote the most remarkable pages of this history or who are closely associated with it; this is the case, successively, for Röntgen, Becquerel, Marie Curie, Pierre Curie, J J Thomson, Planck, Einstein, Rutherford, Bohr, de Broglie, Chadwick, Fermi, Yukawa, Frédéric and Irène Joliot, Lawrence, Gentner, Seaborg and Charpak.

Too often, under various pretexts, physicists present a caricature of the historical development of their favourite subject; here, on the contrary, René Bimbot takes care to follow in great detail the development of the ideas, bringing out the mutual influences of the people who played the leading roles in the discoveries concerning radioactivity.

Curiously, this book seems to fill a gap: a quick search shows that there was no work in the language of Becquerel treating the subject completely. In English, one notes the recent book by G I Brown (2003), Invisible Rays: A History of Radioactivity, which covers the same topics. The author played a very active role in some of the celebrations of the various notable physics centenaries of the past decade, good preparation for the writing of his Histoire.

The book comprises three parts, entitled «De la radioactivité naturelle au noyau de l’atome» (from natural radioactivity to the atomic nucleus), «De la radioactivité artificielle à l’énergie de fission» (from artificial radioactivity to fission energy) and «Rayonnements et radioactivité aujourd’hui» (radiation and radioactivity today).

The seven chapters of the first part deal with the early work of Becquerel, then of Marie and Pierre Curie, and then guide the reader through the upheavals in physics that marked the beginning of the 20th century, up to Gamow’s elucidation of alpha radioactivity in 1928 (explained by the tunnel effect allowed by quantum mechanics) and Fermi’s theory of beta radioactivity in 1933. In solving the puzzles posed by radioactivity, nuclear physicists enriched physics with two new forces, the strong force and the weak force. They thus bequeathed to their successors in particle physics two essential subjects whose theoretical elucidation took several decades, culminating in electroweak unification and quantum chromodynamics.

The second part deals, in seven chapters, with the related subject of the positron and antimatter, then with so-called artificial radioactivity and the various applications of radioactivity. Among these, René Bimbot naturally highlights the ever-growing role in nuclear medicine of the many radionuclides for medical use and of imaging, before moving on to the field of dating, in particular by the carbon-14 method. He devotes three chapters to fission, to nuclear weapons, to the various nuclear-power technologies (without evading the Chernobyl accident) and to the question of waste.

The third, shorter part of the book discusses detectors and then the two contrasting faces of radiation: its dangers, but also its benefits in radiotherapy. The final chapter addresses rarer manifestations of radioactivity, such as electron capture, the emission of one or two protons and also of a post-beta neutron. René Bimbot also stresses that there remain radioactivities to be discovered beyond that of heavy ions (such as carbon-14), and that the increasing sensitivity of detection methods promises the discovery of ultra-weak radioactivities of nuclides hitherto considered stable. This chapter also considers beta decay at the level of quarks, after the less elementary levels of the nucleus and the proton presented earlier.

One may regret the omission of double beta decay, which would have been of particular interest to readers of the CERN Courier. Moreover, an analytical index of the subjects treated would have usefully complemented the index of the scientists cited in the book.

René Bimbot’s book thus convincingly fulfils the objective announced by its title, and it will be of particular interest to readers seeking a historical perspective on a subject they already know.

CMS: a super solenoid is ready for business

Assembly of the solenoid

For seven years, Point 5 on the LHC has been the site of intense activity, as the CMS detector has taken shape above ground at the same time as excavation of the experimental cavern below. Last year saw an important step in the preparations on the surface, as the huge CMS superconducting solenoid – the S in CMS – was cooled down, turned on and tested.

The CMS coil is the largest thin solenoid, in terms of stored energy, ever constructed. With a length of 12.5 m and an inner diameter of 6 m, it weighs 220 tonnes and delivers a maximum magnetic field of 4 T. A segmented 12 500 t iron yoke provides the path for the magnetic flux return. Such a complex device necessarily requires extensive tests to bring it into stable operation – a major goal of the CMS Magnet Test and Cosmic Challenge (MTCC) that took place in two phases between July and November in 2006.
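
To put the stored energy in perspective, a rough estimate treating the coil as a long solenoid with a uniform 4 T field in its 6 m bore (an approximation that ignores end effects) gives:

```latex
E \;\approx\; \frac{B^{2}}{2\mu_{0}}\,\pi r^{2}\ell
  \;=\; \frac{(4~\mathrm{T})^{2}}{2\times 4\pi\times10^{-7}~\mathrm{H/m}}
        \times \pi\,(3~\mathrm{m})^{2}\times 12.5~\mathrm{m}
  \;\approx\; 2.3~\mathrm{GJ},
```

consistent with the roughly 2.5 GJ implied by the discharge figures quoted later in this article.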

From the start, the idea was to assemble and test the CMS magnet – and the whole detector structure – on the surface prior to lowering it 90 m below ground to its final position. The solenoid consists of five modules that make up a single cylinder (figure 1), while the yoke comprises 11 large pieces that form a barrel with two endcaps. There are six endcap disks and five barrel wheels, and their weight varies from 400 t for the lightest to 1920 t for the central wheel, which includes the coil and its cryostat.

The CMS solenoid has several innovative features compared with previous magnets used in particle-physics experiments. These are necessary to cope with the high number of ampere-turns needed to generate the 4 T field – 46.5 MA around a 6 m diameter. The most distinctive feature is the four-layer coil winding, reinforced to withstand the huge forces at play. The niobium–titanium conductor is in the form of Rutherford cable co-extruded with pure aluminium and mechanically reinforced with aluminium alloy (figure 2). The layers of this self-supporting conductor bear 70% of the magnetic stress of 130 MPa, while the cylindrical support structure, or mandrel, takes the remaining 30%.

Constructing the coil has been a tour de force in international collaboration, involving suppliers in several countries. The basic element, the superconducting wire, originated with Luvata in Finland, and passed to Switzerland, where Brugg Kabelwerk made the Rutherford cable and Nexans co-extruded it with pure aluminium. The cable then went to Techmeta in France for electron-beam welding onto two sections of high-strength aluminium alloy, to allow the conductor to support the high magnetic stress. Finally, ASG Superconductors in Italy wound the coils for the five sections of the solenoid, which travelled individually by sea, river and land to Point 5 for assembly into a single coil. The division into sections, and the chosen outer diameter of 7.2 m, ensured that transport could be by road without widening or rebuilding.

A model of the CMS solenoid coil

By August 2005 the solenoid was ready to be inserted into the cryostat that keeps it at its operating temperature of 4.5 K (figure 3). Cooling requires a helium refrigeration plant with a capacity of 800 W at 4.5 K and 4500 W in the range 60–80 K. The cryoplant was first commissioned with a temporary heat load to simulate the coil and its cryostat, and then early in 2006 the real coil was ready for cool-down. In an exceptionally smooth operation the temperature of the 220 t cold mass was lowered to 4.5 K over 28 days.

The next stage was to close the magnet yoke in preparation for the MTCC. The massive elements of the yoke move on heavy-duty air pads with grease pads for the final 100 mm of approach. Once an element touches the appropriate stop it is pre-stressed with 80 t to the adjacent element to ensure a good contact before switching on the magnet. To assure good alignment, a precise reference network of some 70 points was set up in the assembly hall, with the result that all elements could be aligned to within 1 mm of the ideal coil axis. The first closure of the whole yoke took some 30 days, and was completed on 24 July (CERN Courier September 2006 cover and p7). The MTCC could now begin.

Testing the magnet took place in two phases, with the initial tests in August 2006 and further tests and field mapping in October. The cosmic challenge, to test detectors and data-acquisition systems with cosmic rays, took place simultaneously. Each step in current ended with a fast discharge into external dump-resistor banks. Depending on the current level at the time of the fast discharge, it could take up to three days to re-cool the coil.

A key requirement for any superconducting magnet system is protection against the high thermal gradients that occur in the coil if the system switches suddenly from superconducting to normal (resistive) conduction, with a sudden loss of magnetic field and release of stored energy – a quench. The aim is to dissipate the energy into as large a part of the cold mass as possible. For this reason the CMS solenoid is coupled inductively to its external mandrel, so that in the case of a quench eddy currents in the mandrel heat up the whole coil, dissipating the energy throughout the whole cold mass.

The tests showed that when the magnet discharges, the dump resistance warms up by as much as 240 °C. At the same time the internal electrical resistance of the coil increases, reaching as much as 0.1 Ω after a fast discharge at 19 kA.

The tests also showed that after a fast discharge at 19 kA the average temperature of the whole cold mass rises to 70 K, with a maximum temperature difference of 32.3 K measured between the warmest part, on the inside of the central section of the coil, and the coldest part, on the outside of the mandrel. It then takes about two hours for the temperature to equalize across the whole coil. About half of the total energy (1250 MJ) dissipates as heat in the external dump resistor, which takes less than two hours to return to its normal temperature.

Monitoring the magnet’s mechanical parameters was also an important feature of the tests, for example to check the coil shrinkage and the stresses on the coil-supporting tie rods during cool-down. The measured values proved to be in excellent agreement with calculations. Powering the cold mass step-by-step also allowed measurements of any misalignment of the coil. These showed that the axial displacement of the coil’s geometric centre is less than 0.4 mm, indicating a magnetic misalignment of less than 2 mm in the positive z direction.

A major goal of Phase II of the MTCC was to map and reconstruct the central field volume with 5 × 10⁻⁴ precision. The measurements took place in three zones, with flux-loop measurements in the steel yoke, check-point measurements near the yoke elements, and a detailed scan of the entire central volume of the detector – essentially the whole space inside the hadron calorimeter.

The solenoid

Measuring the average magnetic flux density in key regions of the yoke by an integration technique involved 22 flux loops of 405 turns installed around selected plates. The flux loops enclosed areas of 0.3–1.58 m² on the barrel wheels and 0.48–1.1 m² on the endcap disks. The flux loops measure the variations of the magnetic flux induced in the steel when the field in the solenoid is changed during the fast (190 s time constant) discharge. A system of 76 3D B-sensors developed at NIKHEF measured the field on the steel–air interface of the disks and wheels to adjust the 3D magnetic model and reconstruct the field inside the iron yoke elements, which are part of the muon absorbers.
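
The principle of the integration technique is simply Faraday’s law: during the discharge each N-turn loop sees an induced voltage proportional to the rate of change of the enclosed flux, so integrating that voltage over the discharge gives the change in average flux density in the steel (the numbers of turns and areas are those quoted above):

```latex
V(t) = -N\,\frac{\mathrm{d}\Phi}{\mathrm{d}t}
\quad\Rightarrow\quad
\Delta\langle B\rangle
  = \frac{\Delta\Phi}{A}
  = -\frac{1}{N A}\int V(t)\,\mathrm{d}t,
\qquad N = 405,\; A \simeq 0.3\text{–}1.6~\mathrm{m^{2}}.
```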

A special R&D programme with sample disks made of the CMS yoke steel from different melts investigated whether the measurements of the average magnetic flux density in the yoke plates could be made with an accuracy of a few per cent using flux loops. These studies showed that the contribution of eddy currents to the voltages induced in the test flux loop is negligible.

The precise measurement of the magnetic field in the tracking volume inside the CMS coil used a field-mapper designed and produced at Fermilab. This incorporated 10 more 3D B-sensors, developed at NIKHEF and calibrated at CERN to 5 × 10⁻⁴ precision for a nominal field of 4 T. To map a cylindrical volume inside the coil, this instrument moved along the rails installed inside the hadronic barrel calorimeter, stopping in 5 cm steps at points where the sensor arms could be rotated through 360°, and at predefined angles with azimuth steps of 7.5°. Figure 4 shows the final position of the mapper before closure of the positive endcap. It mapped a cylindrical volume 1.724 m in radius and 7 m long.

The CMS magnet has six NMR probes near the inner wall of the superconducting coil cryostat to monitor the magnetic field. These were also used in the field-mapping to measure the field on the coil axis and on the cylindrical surface of the measured volume.

The actual field-mapping in October involved a series of measurements at 0 T, 2 T, 3 T, 3.8 T (twice to study systematics), 3.5 T and 4 T. The flux-loop measurements were made during fast discharges of the coil at various current values. While the detailed analysis of the data is still ongoing, preliminary studies are very encouraging. The field distribution behaves very much as the simulation predicted – though more detailed simulation of the extra iron in the feet of CMS is necessary to account for it fully.

The solid steel yoke

The azimuthal component of the field is nominally zero, but as the plot shows it takes on small values with a sinusoidal dependence on the rotation angle. This is now fully understood as coming from small tilts of the plane in which the mapper moves with respect to the nominal z axis of the magnetic field; this couples the magnetic field components and also gives rise to the small variations seen in the radial component. In addition, the team now understands some even smaller variations in the sinusoidal behaviour of the azimuthal field as a function of the z step following a careful survey of the tilts induced on the mapper arms as it traversed the length of the coil on its (almost) straight rails.

The electrical tests of the solenoid have demonstrated its excellent performance, as well as the functioning of its ancillary systems and its readiness for smooth operation. The detailed mapping was equally successful, and final analysis is now under way to provide the best possible parameterization of the field for the analysis of real data in autumn 2007. As soon as the tests were over, the huge sections of yoke were pulled apart again, and the descent to the cavern could at last begin.

Many institutes participating in CMS took part in the design, construction and procurement of the magnet, as members of the CMS Coil Collaboration, including CEA Saclay, ETH Zurich, Fermilab, INFN Genoa, ITEP Moscow, the University of Wisconsin and CERN. 

J-PARC linac accelerates hydrogen ions up to 181 MeV design value

On 24 January the linac for the Japan Proton Accelerator Research Complex (J-PARC) successfully accelerated a beam of negative hydrogen ions up to 181 MeV, the design energy for Phase I of the project. The acceleration to the full energy came three months earlier than scheduled.

J-PARC, which is a joint project between the High Energy Accelerator Research Organization (KEK) and the Japan Atomic Energy Agency (JAEA), is being built at Tokai, approximately 120 km north of Tokyo. The accelerator system will comprise a 400 MeV proton linac (181 MeV at the first stage of Phase I), a 3 GeV, 25 Hz Rapid-Cycling Synchrotron (RCS), and a 50 GeV Main Ring Synchrotron. The RCS provides the Materials and Life Science Experimental Facility with a 1 MW beam to generate pulsed muons and pulsed spallation neutrons. Every 3 s, the beam from the RCS is injected four times into the Main Ring, where it is ramped up to 50 GeV (40 GeV at Phase I). Fast extraction then provides a beam for neutrino production and slow extraction sends beam to the Hadron Experimental Facility (HDF). The neutrinos travel to the Super-Kamiokande detector located 295 km to the west, while the slowly extracted beam will produce secondary beams for hyper-nuclei experiments, rare-decay experiments with kaons, hadron-spectroscopy experiments and so on, or will serve primary beam experiments in the HDF.
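
As a back-of-envelope illustration of what the 1 MW beam at 3 GeV and 25 Hz corresponds to (using only the numbers quoted above):

```latex
\frac{1~\mathrm{MW}}{25~\mathrm{Hz}} = 40~\mathrm{kJ~per~pulse},
\qquad
\frac{40~\mathrm{kJ}}{3~\mathrm{GeV}\times 1.6\times10^{-19}~\mathrm{J/eV}}
\;\approx\; 8\times10^{13}~\mathrm{protons~per~pulse}.
```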

Construction of J-PARC started in April 2001, and beam commissioning began some five years later. The radio-frequency quadrupole (RFQ) linac accelerated beam up to 3 MeV on 20 November 2006, the very day that the beam commissioning started. A month later, on 19 December, the team accelerated the beam up to 20 MeV using the first tank of the drift-tube linac (DTL), and up to 50 MeV using the second and third tanks the next day. Then at midnight on 19 January all of the 30 separated-type DTL (SDTL) cavities, which are driven by 15 klystrons, were ready for acceleration up to 181 MeV. First, the commissioning team performed a phase scan with each pair of the SDTL cavities driven by one klystron before finally completing the scan through 15 pairs on 24 January.

In each scan the team measured the beam energy by the time-of-flight method. For the initial beam study, the peak current, the beam-pulse length and the repetition rate were set at 5 mA, 20 μs and 2.5 Hz, respectively, to avoid possible damage to accelerator components should something go wrong at high beam power. During the commissioning, both the RF power source and the cavity system proved to be very stable. This stability and the very accurate alignment of all of the magnets, especially in the drift-tube linac, were the two major factors that allowed rapid tuning of the linac, three months ahead of schedule.
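
To illustrate the time-of-flight method, the sketch below converts a flight time measured over a known distance into a kinetic energy using relativistic kinematics and the H⁻ (approximately proton) rest mass; the 10 m path length is an invented number, not a J-PARC parameter.

```python
import math

C = 299_792_458.0          # speed of light, m/s
M_HMINUS = 939.29          # approximate H- rest energy, MeV (proton + 2 electrons)

def kinetic_energy_from_tof(path_m, dt_s, rest_energy_mev=M_HMINUS):
    """Kinetic energy (MeV) of a particle crossing path_m metres in dt_s seconds."""
    beta = path_m / (C * dt_s)
    if not 0.0 < beta < 1.0:
        raise ValueError("unphysical velocity")
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * rest_energy_mev

# Hypothetical example: the flight time a 181 MeV beam would give over 10 m
gamma = 1.0 + 181.0 / M_HMINUS
beta = math.sqrt(1.0 - 1.0 / gamma**2)
dt = 10.0 / (beta * C)
print(f"flight time over 10 m: {dt*1e9:.2f} ns")
print(f"reconstructed energy : {kinetic_energy_from_tof(10.0, dt):.1f} MeV")
```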

Klystrons drive the J-PARC DTL, but it also has quadrupole electromagnets, which are variable-focusing elements. To meet the conflicting requirements of these two systems, the researchers chose an RF acceleration frequency of 324 MHz rather than the widely used 350 MHz; the compact quadrupole electromagnets were developed with the full use of electroforming and wire-cutting techniques; and industry developed the 324 MHz klystrons with a pulsed power of 3 MW (500 μs and 50 Hz) in close collaboration with the J-PARC linac team.

The combination of the 3 MeV RFQ linac and the 324 MHz RF source is now being considered as the best choice for the front end of the proton linac by many future projects, at Fermilab, the ISIS spallation neutron source in the UK, the Facility for Antiproton and Ion Research at GSI and the Chinese Spallation Neutron Source. This is partly because 324 or 325 MHz, 3 MW pulsed klystrons are now available, and partly because the frequency is a quarter of the L-band frequency that would be used in a future superconducting International Linear Collider.

The RCS beam commissioning will start in autumn 2007, while the beam commissioning of the Main Ring and the muon and neutron production targets together with their beam transports will start in May 2008. By the end of 2008, the complex will be ready for the J-PARC users. The success of the linac beam commissioning earlier than scheduled is encouraging news.

ALICE’s TPC arrives at experiment cavern…

In early January ALICE’s time projection chamber (TPC) moved 300 m from the assembly hall to the experiment cavern, taking four days to complete the journey. This 5 m wide, 5 m diameter cylinder weighs 8 tonnes and is extremely fragile.

The first steps included lifting the TPC with an overhead crane from the cleanroom in the assembly hall and positioning it on four hydraulic jacks, which raised it to a height of 80 cm. A flatbed truck then slid gently under the structure and carefully carried it to the entrance of the cavern, making sure not to tilt it by more than 2°. The next step was to lower the cylinder 50 m into the ALICE cavern. This proved challenging, with just 10 cm of leeway between the delicate TPC and the shaft walls. Finally, a gantry crane moved the TPC close to its final position within the solenoid magnet, where work will begin on installing the internal tracking system.

The TPC is made of very light, fragile carbon-fibre. The surface structure, or field cage, is covered with 30,000 mylar strips secured with the utmost precision. The two endcaps carry the electronic read-out channels, which are connected by several thousand flat cables to two service support wheels that provide support for the electrical, electronic and gas-supply systems. In May, the TPC will be tested in its underground location using cosmic rays.

…while ATLAS toroid takes a steady trip

The first End-Cap Toroid (ECT) for the ATLAS experiment at the LHC has begun the last stage of its journey to the underground cavern. Now that the assembly of the cold mass, integration in the vacuum vessel and connection to the vacuum pumps, cryogenic lines and current leads are all complete, the toroid will undergo a cooling test on the surface before being lowered into the cavern.

The 5 m wide, 11 m diameter, 240 t structure is one of two similar ECTs, the last large magnets to be installed inside ATLAS. Moving at about 1 km/h on a special transport trailer, it left the assembly hall on the Meyrin site in preparation for cold-testing at a nearby outdoor location. The transport operation was extremely delicate: the slightest wrong turn or movement could have made the tall structure sway at an angle large enough to damage the fragile parts inside. The toroid cold mass is suspended inside the vacuum vessel by four gravity rods, and tipping the ECT at too large an angle could have damaged these rods.

During the surface test, the ECT is being cooled to 80 K using the cryogenic plant in a nearby building. Tests will check for cold leaks in the cooling circuits and verify the electrical insulation of the coils under thermal stress. During the 300–80–300 K thermal cycling, all of the crucial components as well as the magnet’s instrumentation will be thoroughly checked to make sure that it will function properly once installed underground.

The test will last until mid-March. The toroid will then be lowered into the ATLAS cavern in early June for final commissioning, when it will be cooled to 4.5 K using the ATLAS cryogenic plant and charged up to the nominal current of 20.5 kA. The second ECT is scheduled for lowering in early July, just in time for closure of the LHC beam pipe in August.

U70 proton synchrotron goes ahead with stochastic extraction

Beam current and spill current

The new advanced slow stochastic extraction (SSE) system at the U70, the 70 GeV proton synchrotron at the Institute for High Energy Physics (IHEP) in Protvino, has operated successfully during normal running in 2006. The aim is to use the technique to produce longer, more uniform spills than can be achieved with the standard extraction, which uses magnetic optics to move particles to the transverse resonance.

The first feasibility tests of SSE at the U70 took place in late 2004. These tests yielded natural stochastic spills, driven by applied radio-frequency noise whose power spectrum was kept invariant throughout the extraction time. Such spills inherently had no flat-top in their DC content, however, and so were not useful for users. Since then, the beam physicists and engineers at IHEP have continued their efforts towards an operational SSE scheme and have developed some sophisticated dedicated circuitry, which they beam-tested during runs in 2005–2006.

The core of the new system consists of a feedback loop that modulates the amplitude of the operational noise in response to the spill current signal, which is monitored by a beam-loss monitor located downstream of the electrostatic septum deflector. Being a DC-coupled feedback system with a finite base-band bandwidth, it is designed both to flatten and to smooth the stochastic spills.
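
The control idea of such a loop can be illustrated with a toy model: an integral controller raises or lowers the amplitude of the extraction noise so that a simulated spill current tracks a flat set-point. This is only a schematic of the concept, not the IHEP circuitry; all gains, units and signals are invented.

```python
import random

def simulate_spill(n_steps=3000, dt=1e-3, setpoint=1.0, gain=0.5):
    """Toy DC-coupled integral feedback: the amplitude of the extraction
    noise is adjusted so that the simulated spill current stays near a
    flat set-point (all units and gains are arbitrary)."""
    noise_amp = 0.0            # drive-noise amplitude, raised by the loop
    stored = 1000.0            # circulating beam remaining in the machine
    spills = []
    for _ in range(n_steps):
        # beam extracted in this step: proportional to the noise amplitude and
        # to the remaining beam, with a random fluctuation standing in for the
        # stochastic nature of the resonance-extraction process
        spill = max(noise_amp * stored * dt * (1.0 + 0.3 * random.gauss(0.0, 1.0)), 0.0)
        stored -= spill
        # integral feedback on the measured spill current vs. the set-point;
        # the gain is kept small enough for the loop to remain stable
        noise_amp = max(noise_amp + gain * (setpoint * dt - spill), 0.0)
        spills.append(spill / dt)  # instantaneous spill current
    return spills

spill_current = simulate_spill()
mean_last_second = sum(spill_current[-1000:]) / 1000.0
print(f"mean spill current over the last second: {mean_last_second:.3f} (target 1.0)")
```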

Distribution of the spill ripple magnitude and the amplitude Fourier spectrum

The team has now used this system at the U70. The figures show that it has achieved the primary design goal of obtaining low-ripple, flat-topped spills lasting 2–3 s, with noticeable progress in the quality of the slow spills. The persistent AC ripple observed in the past in the extracted current now shows up as a random signal. It turns out that this cannot be suppressed by the feedback control used, owing to the limited base-band bandwidth of the third-order transverse-resonance transfer function involved in the overall closed-loop gain product.

The new SSE scheme routinely serviced the U70 during the entire run of 2006, yielding slow spills 1.1 s long, and exhibited relatively robust and reliable behaviour consistent with the design aims. Further improvements of the SSE set-up promise a better functioning of the U70 for external fixed-target experiments in the near future.

Gravitational waves could probe inflation

“In the beginning was the Word,” opens the gospel of John, and this finds some resonance in the possibility of detecting gravitational waves from the Big Bang itself at frequencies typical of sound waves. Careful listening to this background signal would probe the inflation phase of the very early universe.

Astronomy is a science based on photons. First limited to visible light and then broadening to include radio waves, the field of investigation has exploded during recent decades. Windows on the sky at new wavelengths have opened one by one thanks to the use of satellites to avoid absorption by the Earth’s atmosphere. However, at any wavelength, the quest towards the beginning of time stops abruptly 380,000 years after the Big Bang, when the emission of the cosmic microwave background (CMB) occurred. This is when the universe became transparent and there is no hope of looking further back in time via electromagnetic radiation. Although earlier events like inflation may leave some imprint in the map of the CMB (see CERN Courier May 2006 p12), a direct messenger from the very early universe must be of a different nature.

Primordial neutrinos from the time of Big Bang nucleosynthesis would be a good candidate for such a messenger, but their very low energy and interaction cross-section make them currently undetectable. Gravitational waves are another candidate attracting a great deal of interest as a possible witness of the very early universe. Einstein’s general theory of relativity predicts that these distortions of space–time propagate with the speed of light. Although studying the change of period of tight double-pulsar systems has demonstrated their existence (see CERN Courier March 2004 p12), direct detection is still to be achieved, despite the great effort invested in detecting these subtle disturbances of space–time.

A theoretical study by Juan Garcia-Bellido and Daniel Figueroa from the Universidad Autonoma de Madrid, Spain, sheds new light on the expected amplitude and spectral properties of primordial gravitational waves. The study focuses on the violent reheating of the universe after hybrid inflation. According to the researchers, many extensions of the Standard Model of particle physics, both string-inspired and supersymmetric, predict this process.

They suggest that such a process would lead to the formation of high-energy bubble-like structures, which would collide and generate a stochastic background of gravitational waves. Taking into account the expected downshift of this radiation by 24 orders of magnitude due to the cosmic expansion since that time, their calculation predicts a maximum intensity of gravitational waves at frequencies from about 1 Hz for a low-scale inflation model up to about 10 MHz for a high-scale model.

This frequency range is too high to be accessible to the future Laser Interferometer Space Antenna (LISA), a joint mission of ESA and NASA, but there is hope of detecting the expected signal with other projects, on the ground with the Advanced Laser Interferometer Gravitational-Wave Observatory or in space with NASA’s proposed Big Bang Observer. There is no doubt that the detection of gravitational waves from the first 10⁻³⁴ s of the universe would be a new milestone in science, allowing physicists to confront their models with observations.

Further reading

J Garcia-Bellido and D G Figueroa 2007 Phys. Rev. Lett. in press [See also http://arxiv.org/abs/astro-ph/0701014].

The EEE Project: big science goes to school

In May 2004, a major webcast linked CERN and high schools all over Italy to inaugurate the Extreme Energy Events (EEE) Project. Launched on the occasion of the visit to CERN of the Italian Minister of Education, University and Research, the project is the initiative of Antonino Zichichi from Bologna University and CERN.

What is the main idea behind the project?

This project is meant to be the most extensive experiment to detect muon showers induced by extremely energetic primary cosmic rays (protons or nuclei) interacting in the atmosphere. Ultimately, it will cover a million square kilometres of Italian and Swiss territory. It would have been very expensive to implement such a large project without involving existing structures, namely schools all over Italy and part of Switzerland. This “economic” strategy also has the advantage of bringing advanced physics research to the heart of the new generation of students.

How will the experiment detect cosmic-ray showers?

The EEE telescopes, distributed over an immense area, will be tracking devices, capable of reconstructing the trajectories of the charged particles traversing them. These particles are the secondary cosmic rays produced in the showers, and are mostly muons at sea level. The project is based on a very advanced detector unit: the multigap resistive plate chamber (MRPC) (Cerron-Zeballos et al. 1996). An EEE telescope comprises three layers of MRPCs. We have developed these chambers for the ALICE time-of-flight detector at CERN’s LHC (Akindinov et al. 2004). Their performance in terms of detection efficiency and time resolution is outstanding.
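
The tracking principle of a three-layer telescope can be sketched as a straight-line fit through the hit positions recorded in each MRPC plane; the layer spacing and hit coordinates below are invented purely for illustration.

```python
import numpy as np

def fit_track(z_layers, x_hits, y_hits):
    """Least-squares straight line through hits (x_i, y_i) measured at heights z_i.

    Returns (zenith, azimuth) in degrees for the fitted track direction."""
    z = np.asarray(z_layers, float)
    A = np.vstack([z, np.ones_like(z)]).T            # fit x = sx*z + bx, y = sy*z + by
    (sx, _), *_ = np.linalg.lstsq(A, np.asarray(x_hits, float), rcond=None)
    (sy, _), *_ = np.linalg.lstsq(A, np.asarray(y_hits, float), rcond=None)
    zenith = np.degrees(np.arctan(np.hypot(sx, sy)))  # angle from the vertical
    azimuth = np.degrees(np.arctan2(sy, sx)) % 360.0
    return zenith, azimuth

# Hypothetical telescope: three MRPC planes 50 cm apart, hit positions in metres
z_planes = [0.0, 0.5, 1.0]
x_hits = [0.02, 0.11, 0.20]
y_hits = [0.00, 0.05, 0.10]
print("zenith %.1f deg, azimuth %.1f deg" % fit_track(z_planes, x_hits, y_hits))
```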

However, the EEE Project also aims to bring science into high schools (Zichichi 2004). This is why the plan was for all of the MRPCs to be built by teams of school pupils supervised by their teachers at CERN or in the nearest laboratory (located in the closest university or research institute). After the MRPC construction phase, school pupils participate in the installation, testing and start-up of the EEE telescope in their school, then in its maintenance and data-acquisition, and later in the analysis of the data. Of course the scientific and technical staff of the universities and research institutes collaborating in the project coordinate and guide everything.

The telescopes will be coupled to GPS units and interconnected via a network. A dedicated PC will locally acquire the MRPC signals produced by each telescope and will then transfer them to the largest Italian computer centre in Bologna for analysis using Grid middleware.

How much does the project cost and how is it funded?

The cost is minimal because we install our detectors at existing structures (schools). The EEE Project was funded in 2005 by the Italian Ministry of Education, University and Research, and by Italian research institutes such as INFN and the Enrico Fermi Centre. The cost of an EEE unit (that is, a complete telescope, including PC, laboratory equipment, cables, gas system, etc.) is about €50,000. CERN, the World Federation of Scientists, the Italian National Research Council and many Italian universities are also participating in the project. Owing to the success of the project and to its undoubted impact on education and research potential, we expect more funding in the coming years.

What is the status of the project? How many schools are involved so far and what is the next step?

In one year, pupils from more than 20 high schools have built more than 70 MRPCs at CERN. Nine pilot sites are equipped with EEE telescopes: at CERN, at the INFN Gran Sasso Laboratory, at the INFN Frascati Laboratory and in the INFN sections in Bologna (central Italy), Cagliari (Sardinia, central-western Italy), Catania (Sicily, southern Italy), Lecce (south-east Italy) and Turin (north-west Italy). The remaining MRPCs are currently moving from Geneva to Italy for the other high schools that are involved so far. We foresee that all of these telescopes should soon be collecting data. Meanwhile the construction of other MRPCs at CERN continues, thanks to a new wave of pupils from other Italian high schools. More than 50 schools are already queuing up to be part of the EEE Project.

The next stage of the project is to continue expanding, increasing its coverage and involving as many high schools as possible in this frontier experimental research in fundamental physics.

What do you believe the project contributes to education?

The direct involvement of young pupils in the project is the most efficient way to contribute to their learning while doing advanced research in physics. The pupils will be personally involved in advanced research and will acquire a deeper knowledge of particle and astroparticle physics, experimental tools, data-acquisition systems, software, networks, etc. They will gain direct access to the data and to the working methods typical of modern research work.

How does EEE differ from schools projects in other countries?

When I started to speak about the project I knew of no other proposals. Now some educational cosmic-ray projects have been proposed in other countries. The detectors are groups of scintillation counters, typically on the school’s roof, and not in the building as with the EEE telescopes. These projects don’t use tracking devices.

What will the project contribute to research?

There are short- and long-range time coincidences between close (within the same city) and distant telescopes, and the tracking capabilities of the telescopes will determine with good precision the direction of the incoming primary cosmic ray. Therefore, the EEE Project can study not only large showers of muons originating from a common vertex, but also correlations between separated showers that might be produced by bundles of primaries. The project thus allows a large variety of studies, from measuring the local muon flux in a single telescope, to detecting extensive air showers producing time correlations in the same metropolitan area, to searching for large-scale correlations between showers detected in telescopes tens, hundreds or thousands of kilometres apart. When complete – that is, equipped with at least 100 telescopes – the EEE Project will compete strongly with other high-energy cosmic-ray experiments searching for extreme-energy extended air showers.
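
A minimal sketch of how GPS-time-stamped event lists from two telescopes could be scanned for coincidences (the time stamps and the 100 μs window are invented, not EEE parameters):

```python
def find_coincidences(times_a, times_b, window_s=1e-4):
    """Return pairs of event times, one from each sorted list, closer than window_s.

    times_a, times_b : sorted lists of GPS time stamps in seconds."""
    pairs, j = [], 0
    for t in times_a:
        while j < len(times_b) and times_b[j] < t - window_s:
            j += 1                                   # skip events too early to match
        k = j
        while k < len(times_b) and times_b[k] <= t + window_s:
            pairs.append((t, times_b[k]))            # event falls inside the window
            k += 1
    return pairs

# Invented time stamps from two telescopes in the same city (seconds)
school_1 = [12.000010, 13.5, 20.123456]
school_2 = [12.000050, 18.0, 20.123500]
print(find_coincidences(school_1, school_2))
```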
