STEP ’09 sets new records around the world

After months of preparation and two intensive weeks of continuous operation in June, the LHC experiments celebrated the achievement of a new set of goals aimed at demonstrating full readiness for data-taking when collisions start later this year. The Scale Testing for the Experiment Programme ’09 (STEP ’09) was designed to stress the Worldwide LHC Computing Grid (WLCG), the global computing Grid that will support the experiments as they exploit the new particle collider. The WLCG combines the computing power of more than 140 computer centres in a collaboration spanning 33 countries.

While there have been several large-scale data-processing tests in recent years, this was the first production demonstration to involve all of the key elements from data-taking through to analysis. Records were set in data-taking throughput, in data import and export rates between the various Grid sites, and in the sheer number of analysis, simulation and reprocessing jobs. The ATLAS experiment ran close to 1 million analysis jobs and achieved 6 GB/s of Grid traffic – the equivalent of a DVD’s worth of data a second, sustained over long periods. This result coincides with the transition of Grids into long-term sustainable e-infrastructures that will be of fundamental importance to projects with lifetimes as long as that of the LHC.

With the restart of the LHC only months away, there will be a large increase in the number of Grid users, from several hundred unique users today to several thousand when data-taking and analysis commence. This will happen only through significant streamlining of operations and the simplification of end-users’ interaction with the Grid. STEP ’09 involved massive-scale testing of end-user analysis scenarios, including “community-support” infrastructures, whereby the community is trained and enabled to be largely self-supporting, backed by a core of Grid and application experts.

Telescopes pin down location of cosmic accelerator

Teams using imaging atmospheric Cherenkov telescopes to detect very high-energy gamma rays have joined forces with astronomers to reveal the precise location of particle acceleration in the nearby giant radio galaxy Messier 87 (M87). Collaborations on the High Energy Stereoscopic System (HESS), the Major Atmospheric Gamma-Ray Imaging Cherenkov (MAGIC) project and the Very Energetic Radiation Imaging Telescope Array System (VERITAS) have worked together with a team at the Very Long Baseline Array (VLBA) radio telescope in an unprecedented, co-ordinated, 120-hour observational campaign. Their simultaneous observations at the lowest and highest ends of the electromagnetic spectrum indicate that the active galactic nucleus in M87 accelerates charged particles to very high energies in the immediate vicinity of the central black hole (V A Acciari et al. 2009).

M87 is a giant radio galaxy, 54 million light-years from Earth, with a jet structure – a huge outflow from the central region, which is probably fuelled by accretion of matter onto a massive black hole. In the jet, charged particles can be accelerated to very high velocities, with the inevitable accompanying production of high-energy gamma rays. The first indications of very high-energy gamma radiation from M87 were found in 1998 with the High-Energy Gamma-Ray Astronomy (HEGRA) telescope array – the predecessor of HESS and MAGIC. These observations were confirmed by HESS in 2006 and revealed fast variability of the gamma-ray flux on a timescale of a few days, implying an exceptionally compact gamma-ray source.

To pinpoint the source more closely, HESS, MAGIC and VERITAS jointly observed M87 from January to May 2008, collecting 120 hours’ worth of data. During this time the galaxy underwent two major outbursts of very high-energy gamma-ray emission. Over the same period, high-resolution radio observations by the 43 GHz M87 Monitoring Team at the VLBA, a system of radio telescopes spanning the US, indicated a strong increase of the flux from the innermost core of M87 in the immediate vicinity of the central black hole. The combination of observations at the two extremes of the electromagnetic spectrum indicates that the site of the high-energy gamma-ray emission, and hence of the particle acceleration, in M87 must lie close to the black hole.

NuTeV anomaly supports new effect in bound nucleons

A new theoretical calculation of the effects of the nuclear medium may account for the “NuTeV anomaly”, a puzzling experimental result that disagreed with the Standard Model. The solution may lie with the isovector nuclear force generated by excess neutrons or protons in iron, which produces a subtle change in the quark structure of all of the nucleons.

The NuTeV anomaly arose when the Neutrinos at the Tevatron (NuTeV) collaboration at Fermilab measured the ratio of neutral-current to charged-current reactions in the collisions of high-energy neutrinos (and antineutrinos) with a large steel target (Zeller et al. 2002). The measurements gave a value for the electroweak parameter sin²θW that was three standard deviations higher than predicted by the Standard Model. When analysing the data, however, the collaboration had to make a correction to compensate for the unequal numbers of protons and neutrons in the iron nuclei in the steel target. In the analysis, the effect of the extra neutrons was removed by subtracting the structure functions of a comparable number of free neutrons from those of the iron nucleus, assuming that the protons and neutrons bound inside the iron nucleus are identical to free protons and neutrons.
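
As a rough sketch of how sin²θW enters (neglecting the various small corrections that the collaboration applied, and assuming a perfectly isoscalar target), the neutral- and charged-current rates can be combined in the Paschos–Wolfenstein form

R^{-} \;=\; \frac{\sigma^{\nu}_{\mathrm{NC}} - \sigma^{\bar\nu}_{\mathrm{NC}}}{\sigma^{\nu}_{\mathrm{CC}} - \sigma^{\bar\nu}_{\mathrm{CC}}} \;\simeq\; \frac{1}{2} - \sin^{2}\theta_{W} ,

so any unaccounted-for difference between the proton and neutron content of the target, or between bound and free nucleons, feeds directly into the extracted value of sin²θW.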

Changes in structure functions in bound nucleons are well known through the effect discovered by the European Muon Collaboration (EMC). Now theorists from Tokai University, the University of Washington and Jefferson Lab have revealed a novel isovector EMC effect, arising from a proton or neutron excess. This effect implies an additional correction, of a sign and magnitude that are essentially model independent, which removes at least half of the NuTeV anomaly (Cloët et al. 2009). Moreover, when the new effect is combined with the well known correction for charge symmetry violation in the nucleon itself, the NuTeV data turn out to be in excellent agreement with the Standard Model.

The NuTeV data may be seen as providing crucial evidence for a conceptual change in the understanding of nuclear structure, in which the quark structure of the bound nucleon is fundamentally modified by the medium. Independent experimental confirmation of the isovector EMC effect could be provided by charged-current studies on heavy nuclei at a future electron-ion collider and in parity-violating deep-inelastic scattering experiments at Jefferson Lab following the 12 GeV upgrade.

Galactic positrons are not from dark matter

A new study on the propagation of positrons in the galaxy suggests that dark-matter annihilation or decay is not required to account for gamma-ray observations by ESA’s INTEGRAL satellite. It shows that the observed characteristics of the positron emission can be fully accounted for by β+ decay of radionuclei produced by nucleosynthesis in supernova explosions and in the winds of massive stars.

One of the main successes of the INTEGRAL gamma-ray satellite is its unprecedented characterization and mapping of positron annihilation in the galaxy. Early results from INTEGRAL’s spectrometer showed that the 511 keV emission line from electron–positron annihilation is mainly emitted from a circular region that corresponds roughly to the central bulge of the Milky Way. As the galactic-disc emission could not be clearly detected, the data implied a bulge-to-disc ratio at least three times higher than for the production sites of positrons by β+ decay in supernova ejecta.

Dark matter soon emerged as an explanation for the positron excess in the bulge, compared with the distribution expected from type Ia supernovae (CERN Courier November 2004 p13). Researchers then realized, however, that the range of masses for the required lightweight dark-matter particle was severely limited by the deduced maximum energy of the positrons (CERN Courier December 2006 p14). Nevertheless, over recent years, more than 100 papers have been published on exotic dark-matter candidates that could explain the positron excess in the galactic bulge. These include new axions, superconducting strings, Q balls, sterile neutrinos, millicharged fermions, unstable branons and many more.

This proliferation of exotic ideas should stop with the recent publication of a paper in Physical Review Letters by Richard Lingenfelter and Richard Rothschild from the University of California San Diego and their colleague James Higdon at the Keck Science Center. Their letter is based on a detailed study that they published in the Astrophysical Journal, where they demonstrate that the assumption that positrons cannot propagate over large distances in the galaxy is wrong. They further show that positrons from the decay of the radionuclei ⁵⁶Ni, ⁴⁴Ti and ²⁶Al produced in supernova ejecta will preferentially annihilate in the denser environment of the galactic bulge, where most of the molecular clouds are concentrated.

Lingenfelter and colleagues therefore argue that the observed bulge excess can be fully accounted for by identified astrophysical sources of positrons, mainly the supernova ejecta and the strong winds of massive stars. The easy propagation of the positrons before annihilation in the dense envelopes of molecular clouds also explains the high positronium fraction (94% ± 4%) deduced from the ratio of the 511 keV line flux to the three-photon continuum emission at lower energies. The study further suggests that the observed asymmetry of the positron annihilation between one side of the galactic plane and the other (CERN Courier March 2008 p12) could just result from the asymmetric distribution of the inner spiral arms of the galaxy as seen from the Earth.

In view of these results, it seems that the excitement about a possible dark-matter signal in the INTEGRAL measurements was premature and built on shaky ground. With the recent arguments against other dark-matter claims, this elusive matter looks even darker than ever (Cosmic leptons challenge dark-matter detection).

ICE-cool beams just keep on going

Initial Cooling Experiment

The quality of a charged particle beam is characterized by the product of its radius and divergence – the emittance – and by the momentum spread. Together they define the part of the phase space that is occupied by the particles in an accelerator. In 1966 Gersh Budker proposed a method that would allow the compression, or “cooling”, of the occupied phase space in stored proton beams. His idea of electron cooling was based on the interaction of a monochromatic and well directed electron beam with the heavier protons circulating over a certain distance in a section of a storage ring. The electrons are produced continuously in an electron gun, accelerated electrostatically to a velocity equal to the average velocity of the circulating beam and then inflected into the beam. Both beams overlap for a distance, over which the cooling takes place, and then the electrons are separated from the ion beam and directed onto a collector.
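
For readers who like a formula, the transverse emittance referred to here can be written, in one plane and as a schematic rms definition, as the area occupied in position–divergence phase space,

\varepsilon_{x} \;=\; \sqrt{\langle x^{2}\rangle\,\langle x'^{2}\rangle \;-\; \langle x\,x'\rangle^{2}} ,

so that cooling means shrinking the beam size and divergence together, along with the momentum spread Δp/p.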

The first successful demonstration of electron cooling took place in 1974 at the proton storage ring NAP-M at what is now the Budker Institute of Nuclear Physics (BINP) in Novosibirsk. A few years later CERN and Fermilab built dedicated facilities to study the cooling process, which was a prerequisite for the accumulation of antiprotons for the proposed conversion of proton accelerators to proton–antiproton colliders. The Initial Cooling Experiment (ICE) at CERN became operational in 1977 with the goal of determining which of two cooling methods would be more appropriate for high-energy antiprotons: electron cooling or the technique proposed by Simon van der Meer at CERN, namely, stochastic cooling. The tests on electron cooling took place in 1979 (see box).

From ICE to LEAR

As is well known, CERN chose stochastic cooling for the Antiproton Accumulator that was used to feed the SPS when operating as a proton–antiproton collider. However, the request by physicists for a programme with low-energy antiprotons allowed the ICE electron cooler to continue, with a new lease of life. Thirty years and two reincarnations later, essentially the same device is now used routinely to cool and deliver low-energy antiprotons to experiments on CERN’s Antiproton Decelerator (AD).

The first electron cooling

In its first reincarnation, the ICE cooler was used on the Low Energy Antiproton Ring (LEAR). This decelerator ring was built to deliver intensities of a few thousand million (10⁹) antiprotons in an ultra-slow extraction mode to up to three experiments simultaneously over many hours. Operation in LEAR required a static vacuum level of less than 10⁻¹¹ torr, which meant that the cooler needed a major upgrade of its vacuum system. The high gas load coming from the cathode and collector regions of the cooler had made its operation on ICE very problematic and the best obtainable vacuum was of the order of 10⁻¹⁰ torr. Higher pumping speeds and a careful choice of materials were therefore needed if there was to be any significant improvement in the vacuum.

A team from CERN and Karlsruhe carried out an extensive study of various vacuum techniques between 1981 and 1984, resulting in a new design for the complete vacuum envelope, which was built using high-quality AISI 316LN stainless steel. In addition, the whole system was designed to be bakeable at 300 °C in situ for 24 hours, requiring permanent jackets to provide the necessary thermal insulation. The use of non-evaporable getter (NEG) strips developed for the Large Electron–Positron Collider provided an increase in pumping speed and three such modules were initially installed on the cooler. The choice of NEGs was evident as space limitations excluded any other type of pumping system, such as cryopumps or sputter ion pumps.

With this hurdle overcome, preparations started for the integration of the cooling device with LEAR. To fit into one of the 8 m long straight sections of the machine, the interaction length of the cooler had to be reduced by half. Luckily the drift solenoid had been designed in two equal parts, so removing one half was not a problem. The high-voltage and control systems of the device were also completely refurbished and a dedicated equipment building was erected close to the LEAR ring. The installation of the cooler took place during the summer of 1987, followed by the conditioning of the cathode and further tests to monitor the evolution of the LEAR vacuum in the presence of the electron beam. By the autumn of 1987 the cooler was ready to cool its first beam. The first cooling tests took place on a 50 MeV proton beam injected directly from Linac 1 and the initial results confirmed all expectations.

After protons the attention turned to antiprotons and the use of electron instead of stochastic cooling to improve the duty cycle of the deceleration in LEAR. To deliver high-quality antiproton beams to the different experiments in the South Hall, the operators applied stochastic cooling after injection at 609 MeV/c and then at various plateaus during the deceleration process. It would normally take around 20 minutes to obtain a “cold” beam at 100 MeV/c, the lowest momentum in LEAR. The use of electron cooling reduced this time to 5 minutes as cooling was needed for only 10 seconds on each of the intermediate plateaus, compared with 5 minutes per plateau with stochastic cooling. Hardware modifications required to render the operation of the cooler as reliable and effective as possible included the replacement of the collector with one that had a better collection efficiency (>99.99%), a new control system to synchronize the power supplies for the cooler with the LEAR magnetic cycle, and the implementation of a transverse feedback system (or “damper”) to counteract the coherent instabilities observed with such dense particle beams.

Apart from being the first cooler to be used routinely for accelerator operations, this apparatus was also the first to demonstrate the cooling and stacking of ions. In 1989 a machine experiment was devoted to studies on O⁶⁺ and O⁸⁺ ions coming from Linac 1. Applying electron cooling during the longitudinal stacking process increased the intensity by a factor of 20. Later these ions were accelerated to an energy of 408 MeV/u and extracted to an experiment measuring the distribution of dose with depth in types of plastic equivalent to human tissue.

The years of operation on LEAR also allowed detailed studies of the cooling process. A full investigation into the influence of the machine’s optical parameters demonstrated that cooling was not effective over the whole radius of the electron beam and that having a finite value of the dispersion function in the cooling section could enhance the process significantly. Before these studies it was believed that a circulating ion beam with transverse dimensions comparable to the electron beam size would produce stronger cooling.

LEIR ring

In a separate study the electron beam was neutralized by accumulating positively charged ions using electrostatic traps placed at either end of the cooling section. By neutralizing the space charge of the electron beam, the induced drift velocity of the electrons would become negligible and hence the equilibrium emittances of the ion beam would be reduced further. Even though a neutralization factor of more than 90% could readily be obtained, it proved to be very difficult to stabilize this very high level of neutralization. Secondary electrons produced in the collector would be accelerated out of the collector region and oscillate back and forth between the collector and the gun. At each passage through the cooling section they would excite the trapped ions causing an abrupt deneutralization.

Another important modification to the cooler was the development of a variable-current electron gun. The gun inherited from ICE was of the resonant type and offered little operational flexibility. The new gun was of the adiabatic type with the peculiarity that it had been designed to operate in a relatively low magnetic field – a prerequisite for its integration in LEAR. Online control of the electron beam intensity was possible by simply varying the voltage difference between the cathode and the “grid” electrode.

Towards the end of the antiproton programme on LEAR, the cooler was paving the way for the conversion of this ring to the Low Energy Ion Ring (LEIR), which would cool and accumulate lead ions for CERN’s new big accelerator, the LHC. A series of machine experiments using lead ions with various charge states (52+ to 55+) not only demonstrated the feasibility of the proposed scheme, but also brought to light an anomalously high recombination rate between the cooling electrons and the Pb⁵³⁺ ions (which had initially been the proposed charge state), leading to lifetimes that were too short for cooling and stacking in LEAR. It was decided to use Pb⁵⁴⁺ ions instead, as they are produced in quantities equal to those of the 53+ charge state.

On to the AD

After 10 years on LEAR, the cooler was moved to the AD in 1998, where it continues to provide cold antiprotons for the “trap” experiments in their quest to produce large quantities of antihydrogen. Recently the AD team attempted a novel deceleration technique using electron cooling. The idea is to ramp the cooler and the main magnetic field of the AD simultaneously to a lower-energy plateau. This allows the antiproton beam to be kept cold throughout the deceleration process, avoiding the adiabatic blow-up that all beams experience when their energy is reduced. The first tests were very modest, decelerating 3.5 × 10⁷ antiprotons from 46.5 to 43.4 MeV, but future experiments will concentrate on decelerating the beam below 5.3 MeV.

The experience gained with the upgraded ICE cooler on LEAR provided the stepping stones for the design of a new state-of-the-art cooler for the I-LHC project to provide ions for the LHC. This is the first of a new generation of coolers incorporating all of the recent developments in electron cooling technology (adiabatic expansion, electrostatic bend, variable-density electron beam, high perveance and “pancake” solenoid structure) for the cooling and accumulation of heavy-ion beams. High perveance, or intensity, is necessary to rapidly reduce the phase-space dimensions of a newly injected “hot” beam, while variable density helps to efficiently cool particles with large betatron oscillations and at the same time improve the lifetime of the cooled stack. Adiabatic expansion also enhances the cooling rate because it reduces the transverse temperature of the electron beam by a factor proportional to the ratio of the longitudinal magnetic field between the gun and the cooling section.
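
Schematically, and assuming that the electrons follow the magnetic field lines adiabatically so that their magnetic moment is conserved, the transverse electron temperature scales with the longitudinal field,

T_{\perp,\mathrm{cool}} \;\simeq\; T_{\perp,\mathrm{gun}}\,\frac{B_{\mathrm{cool}}}{B_{\mathrm{gun}}} ,

so expanding the beam from a high-field gun solenoid into a lower-field cooling solenoid reduces the transverse temperature by the ratio of the two fields.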

The new cooler, built in collaboration with BINP, was commissioned at the end of 2005 and has since been routinely used to provide high-brightness lead-ion beams required for the LHC. In parallel there have been studies to determine the influence of the cooler parameters (electron beam intensity, density distribution, size) on the lifetime and maximum accumulated current of the ions.

Electron cooling will certainly be around at CERN for quite a few more years. With the AD antiproton physics programme extended until 2016, the original ICE cooler will be nearly 40 years old when it finally retires. If the Extra Low ENergy Antiproton (ELENA) ring comes to life, it will require the design of a new cooler with an energy range of 50 to 300 eV to cool and ultimately decelerate antiprotons to only 100 keV. The possibility of polarized antiprotons at high energy in the AD will also require either an upgrade of the present cooler or the construction of a new one capable of generating a high-current electron beam at 300 keV. Of course the LEIR electron cooler will continue to deliver lead ions for the LHC and, with a renewed interest for a fixed-target ion programme, other ion species could also find themselves being cooled and stacked in LEIR.

Cosmic leptons challenge dark-matter detection

Recent measurements of cosmic-ray leptons – electrons and positrons – have generated a buzz because they might point to unknown astrophysical or exotic cosmic phenomena. A new measurement of the cosmic-ray positron fraction, e⁺/(e⁺ + e⁻), by the satellite-borne PAMELA detector shows an unambiguous rise between 10 GeV and 100 GeV. This confirms previous claims by the High-Energy Antimatter Telescope (HEAT) and AMS-01 collaborations (figure 1). At the same time, the Advanced Thin Ionization Calorimeter (ATIC), Fermi Gamma-Ray Telescope and HESS collaborations have published new results on the sum e⁺ + e⁻ at higher energies, up to a few tera-electron-volts. Although there are still discrepancies between these three experiments, they could indicate the presence of a feature in the energy spectrum of e⁺ + e⁻ between 600 GeV and 1 TeV. Whether it is a bold peak, as ATIC claims, or a more shy bump, as the Fermi data indicate, is still unclear (figure 2). Further work and crosschecks are necessary to reach a definite answer. Another issue concerns whether this feature arises from electrons only or from both electrons and positrons.

There is nevertheless the hint of a signal in this energy range, which is quite challenging to reproduce with conventional cosmic-ray models. A workshop held in Paris in May, “Testing Astroparticle with the New GeV/TeV Observations. Positrons And electRons: Identifying the Sources (TANGO in PARIS)”, provided the opportunity to discuss and confront the possible interpretations of these results.

Conventional cosmic-ray production

The current understanding is that most cosmic rays are produced in the remnants of supernovae – what is left after the cataclysmic ends to the lives of many stars. Some cosmic-ray species (positrons, antiprotons, boron etc.) do not exist in stars but are instead produced by the spallation reaction of other cosmic rays with the interstellar medium. Once made, cosmic rays diffuse in the galactic magnetic field; they lose energy, are convected and eventually reach Earth.

Even taking into account the uncertainties underlying state-of-the-art cosmic-ray transport modelling, it is not possible to reproduce the PAMELA data, as figure 1 shows (T Delahaye et al. 2009). One solution is that the model for standard astrophysical positrons is mistaken in some way. For instance, the source distribution in the galaxy might be more complex than generally believed and positron production by the spallation of cosmic-ray protons on interstellar matter might be higher than expected. Such an effect could arise from a local over-density of proton sources (the spiral arms) or of interstellar matter around supernova remnants. However, in these models, it is difficult at the same time not to over-produce other cosmic-ray species, such as antiprotons or boron.

Another solution is that supernovae and spallating cosmic rays are not the only significant producers of high-energy charged particles, and that other astrophysical objects also contribute. As electrons and positrons lose a lot of energy as they propagate in the galaxy, one single nearby source could explain the observed feature. Pulsars seem to be a good candidate for such an effect because they may produce electrons and positrons in equal numbers, thus enriching the surrounding positron fraction. Unfortunately, the way that pulsars could produce electron–positron pairs and release them in the galaxy is not yet clear – making predictions difficult. Nevertheless, recent observations from Fermi have revealed that pulsars are more numerous than expected, so there is a high chance that we are missing many of them. Hence explaining the PAMELA/ATIC feature with pulsars is feasible.

The most exciting solution would be that these excesses arise from the effects of dark matter, so allowing a first insight into physics beyond the Standard Model. Indeed, in such a scenario, the mass of our galaxy would be dominated by new non-standard particles, which would annihilate or decay into standard particles, contributing to the cosmic-ray flux.

While it is extremely appealing, the dark-matter solution is puzzling. The natural way to agree with constraints from cosmology (freeze-out of the dark-matter particles in the early universe) is to have a new particle with mass and couplings of the order of the electroweak scale. If this particle annihilates or decays into Standard Model particles, the corresponding cosmic-ray production rate would be too small to reproduce features as significant as the ones seen by PAMELA, ATIC, Fermi and HESS. To account for them, the dark-matter signal must be magnified with respect to the standard picture by a factor ranging from 100 to 1000, depending on the model. This is well known, and the models achieve it either through a particle-physics effect (for dark-matter particles with masses typically larger than a few tera-electron-volts) or through local enhancements of the signal caused by dark-matter substructures.
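
To see why such a large factor is needed, note that in the simplest annihilation picture the positron source term per unit volume scales roughly as

q(E) \;\propto\; B\,\langle\sigma v\rangle \left(\frac{\rho_{\chi}}{m_{\chi}}\right)^{2} \frac{\mathrm{d}N_{e^{+}}}{\mathrm{d}E} ,

where ρχ is the local dark-matter density, mχ the particle mass, ⟨σv⟩ the annihilation cross-section (fixed near 3 × 10⁻²⁶ cm³/s by the relic abundance in the standard freeze-out picture) and B the “boost” factor of 100–1000 mentioned above. This is a schematic scaling rather than the detailed calculation used in the papers cited.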

Trouble appears when confronting this interpretation with channels where corresponding excesses should appear, such as cosmic antiprotons and photons. PAMELA recently published fresh measurements of the antiproton flux up to 100 GeV (figure 3), which show no specific feature. Antiprotons are interesting because the theoretical uncertainty associated with the background estimate is lower than for that of positrons – and most models with new physics expect annihilations or decays of dark matter to produce antiprotons. It is therefore possible to put an upper limit to the signal enhancement necessary to explain the leptonic data (Donato et al. 2009). It eventually appears that the antiproton data are incompatible with the large enhancements that are required by leptons for conventional dark-matter candidates.

The only way out is to have either a very heavy particle (of mass larger than 10 TeV) or to suppress the hadronic annihilation or decay modes of the dark-matter particle. In the first case, an excess of antiprotons should appear in future higher-energy data; in the second, no hadrons are produced by this so-called “leptophilic” dark matter. In both cases the properties of the new particle are different from those usually expected. Within minimal supersymmetric dark-matter models, for instance, large masses imply a loss of naturalness and direct electron/positron production in the annihilation is suppressed. In addition, when models that survive the antiproton constraints are confronted with photon observations, the net tightens even more. Indeed, all of these electrons and positrons should also be produced in places where large magnetic fields are present (e.g. at the galactic centre) and consequently produce sizable radio emission, which is in general above the measured values (at least in the most standard galactic models).

The previous considerations assume a particle-physics type enhancement – i.e. an overall enhancement of the production of exotic cosmic leptons – regardless of the location in the galaxy. However, one could ask if these cosmic-ray features are the same everywhere in the galaxy. An interesting possibility is that a nearby clump of dark matter is responsible for some local excesses (Brun et al. 2009). In this case, the antiproton constraints may be less stringent and the ones from photon observations are totally avoided. The main feature responsible for the local lepton anomalies would then be a nearby (closer than a few kiloparsecs), bright clump. (As electrons and positrons do not propagate over large distances, just one massive clump could contribute sufficiently). In fact, dark-matter haloes are expected to form by successive mergers of small structures. Large haloes, such as the one of our galaxy, should contain a lot of smaller subhaloes (up to 20% of the total halo mass). Large numerical simulations can model the formation of these structures and calculate the probability of finding a configuration that fulfils the requirements to account for the lepton excesses in a halo of the size of the Milky Way. Unfortunately, this probability is found to be extremely low; usually fewer than 1% of the simulations exhibit such a favourable scenario. If such a clump does exist, however, the gamma-ray satellite Fermi has enough sensitivity to detect the associated gamma-ray emission.

Epilogue?

It is definitely possible to reproduce the observed cosmic-ray data with the help of dark-matter signals. Within this hypothesis, however, there will always be some tension between the different channels and observables or quite a high level of fine tuning. It could be that we are circling the properties of dark-matter particles but it is more likely that the bulk of the observed leptons come from a nearby astrophysical source that produces a large fraction of electron–positron pairs. In this case, the signal would constitute an additional background for indirect searches for dark matter through lepton channels that had not previously been accounted for.

A big step forward will be the measurement of the small anisotropy in the arrival directions of the cosmic-ray leptons, if any. If it is observed and points towards a known pulsar, then the conclusion will be clear. It is also urgent to separate electrons from positrons at higher energies and to increase statistics in all channels. Future results from PAMELA, and especially AMS-02, on leptons and also on fluxes of all nuclei will be of great help in feeding the cosmic-ray propagation models. The indirect searches for dark matter through charged channels can then continue, in particular looking for fine structure in the spectra. It will then be interesting (and challenging) to interpret future data and weigh them against results from the LHC and direct-detection experiments.

Whatever the nature of the source, we might be witnessing the first direct observation of a nearby source of cosmic rays with energies in the range of giga- to tera-electron-volts. These are exciting times and we might have to wait a little longer for the solution to this cosmic puzzle. The answer(s) will certainly come from a convergence of information from different messengers. Thanks to its large field of view, the Fermi telescope should reveal something about a nearby source, should it be a pulsar or something more exotic. Eventually, future large neutrino and gamma-ray observatories (such as KM3NeT and the Cherenkov Telescope Array) will certainly offer a great opportunity to take a deeper look into this brainteaser.

• The presentation slides and videos of the TANGO talks are available at http://irfu.cea.fr/Meetings/TANGOinPARIS.

A small experiment with a vast amount of potential

While most of the LHC experiments are on a grand scale, the subdetectors for TOTEM, which stands for TOTal cross-section, Elastic scattering and diffraction dissociation Measurement at the LHC, are no longer than 3 m, although they extend over more than 440 m. Despite these modest dimensions, TOTEM’s potential resides in making some unique observations. In addition to the precise measurement of the proton–proton interaction cross-section, TOTEM’s physics programme will focus on the in-depth study of the proton’s structure by looking at elastic scattering over a large range of momentum transfer. Many details of the processes that are closely linked to proton structure and low-energy QCD remain poorly understood, so TOTEM will investigate a comprehensive menu of diffractive processes – the latter partly in co-operation with the CMS experiment, which is located at the same interaction point on the LHC.

The measurement of the total proton–proton interaction cross-section with a luminosity-independent method requires a detailed study of elastic scattering down to small values of the squared four-momentum transfer, together with the measurement of the total inelastic rate. Early measurements at CERN’s Intersecting Storage Rings (ISR), which were confirmed at CERN’s SppS collider and at the Tevatron at Fermilab, revealed that the proton–proton interaction probability increases with collider energy. However, the exact form of this growth with energy remains a delicate and unresolved issue. A precise measurement of the total cross-section at the world’s highest-energy collider should discriminate between the different theoretical models that describe the energy dependence. The value of the total cross-section at LHC energies is also important for the interpretation of cosmic-ray air showers. All of the LHC experiments will use TOTEM’s measurement to calibrate their luminosity monitors, in order to calculate the probability of measuring rare events.
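
In outline (a standard sketch rather than TOTEM's full analysis chain), the optical theorem allows the total cross-section and the luminosity to be extracted together from the measured elastic and inelastic rates,

\sigma_{\mathrm{tot}} \;=\; \frac{16\pi}{1+\rho^{2}}\,\frac{(\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t)\big|_{t=0}}{N_{\mathrm{el}} + N_{\mathrm{inel}}} ,
\qquad
\mathcal{L} \;=\; \frac{1+\rho^{2}}{16\pi}\,\frac{(N_{\mathrm{el}} + N_{\mathrm{inel}})^{2}}{(\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t)\big|_{t=0}} ,

where ρ is the ratio of the real to the imaginary part of the forward elastic amplitude; hence the need to track elastic protons down to very small |t| as well as to count the full inelastic rate.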

Sophisticated detectors

The study of physics processes in the region close to the particle beam, which is complementary to the programmes of the LHC general-purpose experiments, requires appropriate detectors. In the case of elastic and (most) diffractive events, intact protons in the final state need to be detected at a small angle relative to the beam line, so special proton detectors must be inserted into the vacuum beam pipe of the LHC. The TOTEM Collaboration had to invest heavily in the design of sophisticated detectors characterized by a high acceptance for particles produced in the busy region close to the beam pipes. All three subdetectors – the Roman Pots and two particle telescopes, T1 and T2 – will detect charged particles emitted in the proton–proton collisions at interaction point 5 (IP5) on the LHC and will have trigger capabilities to allow an online selection of specific events.

The Roman Pots are special movable devices that are inserted directly into the beam pipe by bellows, which are compressed as the pots are pushed towards the beams circulating inside the vacuum pipe. They are called “Roman” because they were first used by a group of Italian physicists from Rome, in the early 1970s, to study similar physics at the Intersecting Storage Rings, the world’s first high-energy proton–proton collider. They are known as “Pots” because the vessels that house the delicate detectors, which can localize the trajectory of protons passing within 1 mm of the beam (with a precision of around 20 μm), are shaped like a vase.

In the TOTEM experiment, there are four Roman Pot stations, each composed of two units, separated by a distance of a few metres. Each unit consists of two pots in the vertical plane, which approach the beam from above and below, and one pot that moves horizontally. They are placed on both sides of the interaction point, at distances of 147 m and 220 m.

The proton detectors in the Roman Pots are silicon devices designed by Vladimir Eremin, Nikolai Egorov and Gennaro Ruggiero with the specific objective of reducing the insensitive area at the edge facing the beam to only 50 μm. This can be compared with a dead area typically more than 10 times larger for silicon detectors currently used elsewhere. High efficiency up to the physical border of the detector is an essential feature to maximize the experiment’s acceptance for protons scattered elastically or diffractively at polar angles down to a few microradians at the interaction point. Radiation-hardness studies indicate that this edgeless detector remains fully efficient up to a fluence of about 1.5 × 10¹⁴ protons/cm².

The inelastic rate is measured by the telescopes T1 and T2. These are two charged-particle trackers situated close to the beam pipes in the CMS cavern at distances of about 10.5 m and 13.5 m on either side of the interaction point; indeed, T1 is within the CMS end-cap. By providing a full azimuthal coverage around the beam line, these telescopes will be able to reconstruct the tracks of charged particles coming from the proton–proton collisions and so allow the determination of the primary interaction vertex.

Each T1 tracker is made up of five subdetector planes perpendicular to the beam line. Each plane consists of six cathode-strip chambers (CSCs) – multiwire proportional chambers filled with a gas mixture, with cathode layers segmented into parallel strips. The advantages of this kind of detector are that it utilizes a well proven technology, provides a simultaneous measurement of three spatial co-ordinates (one from the anode wire plane and two from the cathode-strip planes) and uses a safe gas mixture (Ar/CO₂/CF₄). As T1 is installed in a high-radiation environment, the chambers have been tested in the gamma-irradiation facility at CERN. They have shown stable performance at doses several times higher than those expected for the design running conditions and exposure time. Tests with cosmic rays and muon beams have also shown the expected performance.

The T2 tracking chambers are based on the gas electron multiplier (GEM) technology, invented by Fabio Sauli and Leszek Ropelewski at CERN, which combines a good spatial resolution with a high rate capability and a good resistance to radiation. In each T2 arm, 20 semi-circular GEM planes, with overlapping regions, are interleaved on both sides of the beam vacuum chamber to form 10 detector planes with full azimuthal coverage. In GEM detectors, in contrast to CSCs, the signal is collected on thin polyimide foils covered by a thin layer of copper on both sides. These foils, densely pierced and contained between two electrodes, are able to achieve high amplification and performance. GEM technology was chosen for T2 for the radiation hardness of the chambers and the flexibility of the read-out geometry. The read-out plane in the T2 chambers has been designed with strips that give a good resolution on the pseudo-rapidity co-ordinate, while pads give the phi co-ordinate for tracking and trigger purposes. Assembled “quarters” were tested with cosmic rays before the installation at IP5 and precommissioning tests have shown a good efficiency and resolution, matching the expected values.

The read-out of all TOTEM sub-systems is based on the custom-developed digital Very Forward ATLAS–TOTEM (VFAT) chip, which also contains trigger capability. The data acquisition (DAQ) system is designed to be compatible with the CMS DAQ to make common data-taking possible at a later stage.

The collaboration has recently completed the installation of the Roman Pot stations at 220 m and the subdetector T2. T1 is going to be installed in autumn. In the future two more Roman Pot stations will be put in place at 147 m. The first measurements of the LHC luminosity and individual cross-sections will be performed by TOTEM as soon as the LHC collider becomes operational. The collaboration is looking forward to having adequate data to carry out their first new physics analyses and to having results to announce in 2010.

• The TOTEM Collaboration has about 100 members from 10 institutions in seven countries. Karsten Eggert from CERN is the spokesperson; Angelo Scribano, from the University of Siena and INFN Pisa, is the chair of the Collaboration Board; and Ernst Rademacher from CERN is the technical co-ordinator.

Steven Weinberg: master builder of the Standard Model

It was no surprise that the audience arrived early in CERN’s Globe of Science and Innovation for the colloquium on 7 July. Steven Weinberg is well known for his work on the Standard Model of particle physics and for his skill in writing carefully crafted books about particle physics and cosmology. His life in physics, like that of CERN, has spanned more than 50 years of discoveries and breakthroughs. In 1979 he received the Nobel Prize in Physics together with Sheldon Glashow and Abdus Salam, for “contributions to the theory of the unified weak and electromagnetic interaction between elementary particles, including inter alia the prediction of the weak neutral current”. The latter had already been discovered at CERN by the Gargamelle collaboration in 1973. A decade later, in 1983, the UA1 and UA2 experiments at CERN were to discover the intermediate bosons, W and Z, with the masses predicted by the electroweak theory.

Weinberg first visited Europe after graduating from Cornell University, just as the provisional CERN became the fully fledged European Organization for Nuclear Research in 1954. Following advice from Dick Dalitz on where a young theorist should go for a study year in Europe, he joined the Institute of Theoretical Physics in Copenhagen, which at the time was home to CERN’s nascent theory group. He returned to Europe for a second year in 1961, this time at Imperial College, London, and in July 1962 he visited CERN’s Meyrin site for the first time, to attend the 11th “Rochester” Conference on high-energy physics. It was, he recalls, the beginning of an extraordinarily exciting period that extended until the mid-1980s. “There was a wonderful interplay between theory and experiment, with current algebra, electroweak theory and then QCD – and the brilliant experiments at CERN.” To the discoveries of neutral currents and the W and Z, he adds the success of the Large Electron–Positron collider in showing how many types of quarks and leptons there are. By the end of the 1980s, “so many things became clear that had seemed murky”, he explains, adding that at last “you could give a rationally organized course in particle physics”. The Standard Model of particle physics had arrived.

From particle physics to cosmology

Since then, he feels that the field of particle physics has not been so exciting. “The discovery of neutrino mass is the only new thing,” he says, pointing out that even this is not so new because the first signs were already there in the late 1960s in the results from Ray Davis’s solar-neutrino experiment. Instead, the past 20 years or so have been marked by what Weinberg acknowledges as “heroic efforts” to go beyond the Standard Model, for example with string theory. In his view, while these ideas are more mathematically profound than the Standard Model, they have little contact with observation. The problem facing particle physics is that “the Standard Model worked too well!”.

Back in the 1960s, Weinberg threaded his way through the theoretical jungle, reaching his unified description of weak and electromagnetic interactions in terms of an exact but spontaneously broken symmetry in 1967. This is the work for which he received the Nobel prize in 1979 and for which he is known far and wide (Weinberg 1967). He tells its story with his characteristic eloquence in the acceptance speech he gave in Stockholm (Weinberg 1979). Less universally well known is his work on chiral dynamics and effective field theories, in which he takes pride because he developed a point of view that became widely accepted. It resulted from some 15 years of work that took him from current algebra to effective field theory, with around 20 significant published papers. Together these form “a coherent body of work that changed the way people look at things”, Weinberg explains, and which has relevance to areas from low-energy hadron theories to superconductivity and gravitation. “I’m very proud of that,” he adds.

Weinberg often writes papers because he is trying to learn something. “Therefore they’re unimportant papers,” he comments. By contrast the books for which he is well known in the physics community represent the final crystallization of what he taught himself in a subject over the years, for example in the masterful three volumes on The Quantum Theory of Fields (CERN Courier May 2000 p37) and most recently Cosmology (CERN Courier May 2009 p43). He says that he never sees the books as an end in themselves – it is a bonus if they are valuable to others and he will be pleased if they become classics. Non-physicists – and probably many aspiring physicists – are no doubt more familiar with his lucid writing for the general public, for example in The First Three Minutes, which became a classic in science writing in the 1970s. Aficionados will be looking forward to his next publication, Lake Views: This World and the Universe (Harvard University Press), a compilation of essays that he has written on a wide variety of topics, from cosmology to religion.

Towards asymptotic safety

In line with his own experience in the particle physics of the 1960s, Weinberg believes that aspiring physicists should choose fields that are “messy and confusing”. Ten years ago he would have recommended students to go into cosmology. “It’s still having a wonderful run,” he says, “and it will continue to be exciting…but with the LHC, maybe it’s time for particle physics again.” His advice now would be to master both subjects – with the aid of his books, of course.

Weinberg’s current work continues to reflect his interest in both particle physics and cosmology. One aspect that he is pursuing concerns cosmological applications of “asymptotic safety” – that is, the idea of a theory that is safe from having its couplings blow up asymptotically, rather akin to the requirement of renormalization. This is leading to an approach to general relativity at very high energies that he feels is starting to look promising; the goal is an asymptotically safe quantum field theory of gravity with no problems at infinite energy. He presented these ideas in his colloquium at CERN on “The quantum theory of fields: effective or fundamental”. Beginning with a look at the fluctuating popularity of quantum field theory, he went on to pose the question: is quantum field theory fundamental or does it arise from some deeper theory, such as string theory? His recent work suggests that perhaps it is possible to have a quantum theory of gravity without strings. “I don’t want to discourage string theorists,” he says, “but maybe the world is what we’ve always known: the Standard Model and general relativity.”

Looking forward to the restart of the LHC and to the physics results to come, Weinberg acknowledges that he expects it to reveal the Higgs boson. “I have a stake in that,” he admits, referring to his 1967 paper on electroweak unification, which contained the first serious prediction of the essential scalar boson as a real particle. “The real hope is to restore the exciting environment of particle physics that we remember from the 1960s and 1970s,” he says.

• For the video of Weinberg’s colloquium at CERN, see http://cdsweb.cern.ch/record/1188567/.

Gargamelle: the tale of a giant discovery

Example of the leptonic neutral current

On 3 September 1973 the Gargamelle collaboration published two papers in the same issue of Physics Letters, revealing the first evidence for weak neutral currents – weak interactions that involve no exchange of electric charge between the particles concerned. These were important observations in support of the theory for the unification of the electromagnetic and weak forces, for which Sheldon Glashow, Abdus Salam and Steven Weinberg were to receive the Nobel Prize in Physics in 1979. Their theory became a pillar of today’s Standard Model of particles and their interactions, but in the early 1970s it was not so clear that this was the correct approach, nor that the observation of neutral currents was a done deal.

The story of the discovery has been told in many places by many people, including in the pages of CERN Courier, notably by Don Perkins in the commemorative issue for Willibald Jentschke, who was CERN’s director-general at the time of the discovery, and more recently in the issue that celebrated CERN’s 50th anniversary, in an article by Dieter Haidt, another key member of the Gargamelle Collaboration (CERN Courier October 2004 p21).

The huge bubble chamber, named Gargamelle after the giantess created 400 years earlier in the imagination of François Rabelais, took its first pictures in December 1970 and a study of neutrino interactions soon started under the leadership of André Lagarrigue. The first main quest, triggered by recent hints from SLAC of nucleon structure in terms of “partons”, was to search for evidence of the hard-scattering of muon-neutrinos (and antineutrinos) off nucleons in the 18 tonnes of liquid Freon inside Gargamelle. Charged-current (CC) events in which the neutrino transformed into a muon would be the key. So the collaboration, spread over seven institutes in six European countries, set to work on gathering photographs of neutrino and antineutrino interactions and analysing them for CC events to measure cross-sections and structure functions.

The priorities changed in March 1972, however, when the collaboration saw first hints that hadronic neutral currents might exist. It was then that they decided to make a two-prong attack in the search for neutral-current (NC) candidates. One line would be to seek out potential leptonic NC events, involving the interaction with an electron in the liquid; the other to find hadronic neutral currents in which the neutrino scattered from a hadron (proton or neutron). In both cases the neutrino enters invisibly, as usual, interacts and then moves on, again invisibly. The signal would be a single electron for the leptonic case, while for hadronic neutral currents the event would contain only hadrons and no lepton (figures 1 and 2).

Neutral Current Event

The leptonic NC channel was particularly interesting because previous neutrino experiments had shown that the background was very small and also because Martin Veltman and his student Gerard ’t Hooft had recently demonstrated that electroweak theory was renormalizable. ’t Hooft was able to calculate exactly the cross-sections for NC interactions involving only leptons, with the input of a single free parameter, sin²θW, where θW is the Weinberg angle. Theorists at CERN – Mary K Gaillard, Jacques Prentki and Bruno Zumino – encouraged the Gargamelle Collaboration to hunt down both types of neutral current.
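
Schematically, the tree-level electroweak prediction for these purely leptonic channels depends on that single parameter through

\sigma(\nu_{\mu} e \to \nu_{\mu} e) \;=\; \frac{2 G_{F}^{2} m_{e} E_{\nu}}{\pi}\left(g_{L}^{2} + \tfrac{1}{3}g_{R}^{2}\right) ,
\qquad
\sigma(\bar\nu_{\mu} e \to \bar\nu_{\mu} e) \;=\; \frac{2 G_{F}^{2} m_{e} E_{\nu}}{\pi}\left(\tfrac{1}{3}g_{L}^{2} + g_{R}^{2}\right) ,

with g_L = -1/2 + sin²θW and g_R = sin²θW (a textbook form written here in modern notation, not that of the original calculations).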

Such leptonic NC interactions would, however, be extremely rare. By contrast hadronic NC events would be more common but it was not yet clear how the theory worked for quarks. In this case the process was not easy to calculate, although Weinberg published some estimates during 1972. In addition there was the problem of a background coming from neutrons that are produced in CC interactions in the surrounding material and could imitate a neutral current signal.

Over the following year various teams carefully measured and analysed candidate events from film produced previously in several runs. The first example of a single-electron event was found in December 1972 by Franz-Josef Hasert, a postgraduate student at Aachen. Fortunately he realized that an event marked by a scanner as “muon plus gamma ray” was in fact something more interesting: the clear signature of an electronic NC interaction written in the tracks of an electron knocked into motion by the punch of the unseen projectile (figure 1). This was a “gold-plated” event because it was found in the muon-antineutrino film in which any background is extremely small. Its discovery gave the collaboration a tremendous boost, strengthening the results that were beginning to roll in from the analyses of the hadronic NC events. However it was only one event, while by March 1973 there were as many as 166 hadronic NC candidates (102 neutrino events and 64 antineutrino events) although the question of the neutron background still hung over their interpretation.

Members of the team then began a final assault on the neutron background, which was finally conquered three months later, as Haidt and Perkins describe in their articles in CERN Courier. On 19 July 1973, Paul Musset presented the results of both hadronic and leptonic analyses in a seminar at CERN. The paper on the electron event had already been received by Physics Letters on 2 July (F J Hasert et al. 1973a); the paper on the hadronic events followed on 23 July (F J Hasert et al. 1973b). They were published together on 3 September.

Weak Neutral Current papers

It was an iconoclastic discovery, leaving many unconvinced. This was mainly because of the stringent limits on strangeness-changing neutral currents and the lack of understanding of the new electroweak theory. Gargamelle continued to accumulate data and, by the summer of 1974, after the well known controversy described by Haidt and Perkins, several experiments in the US had confirmed the discovery. From this time on the scientific community recognized that the Gargamelle Collaboration had discovered both leptonic and hadronic neutral currents.

Thirty-six years later the European Physical Society (EPS) has decided to award its 2009 High Energy and Particle Physics Prize to the Gargamelle Collaboration for the “Observation of weak neutral currents” (Prize time in Krakow at EPS HEPPP 2009). However, it somewhat confounded the collaboration by citing only the authors of the hadronic neutral-current paper, thus neglecting the contributions of the five who signed the leptonic paper but not the hadronic one (Charles Baltay, Helmut Faissner, Michel Jaffre, Jacques Lemonne and James Pinfold). Though the collaboration is honoured to receive the prize, its members feel that the award should not rewrite history. They feel, and rightly so, that the two papers were of equal importance in the discovery of neutral currents. Also, like many other physicists and the EPS prize committee, they feel that it was perhaps the greatest discovery of CERN. The prize was collected on behalf of the collaboration at the EPS HEP 2009 Conference in Krakow by Antonino Pullia and Jean-Pierre Vialle. Sometime in September the medal will be attached to the Gargamelle chamber, which now stands in CERN’s grounds, and a reunion dinner for the collaboration will follow.

The Strangest Man: The Hidden Life of Paul Dirac, Quantum Genius

by Graham Farmelo, Faber. Hardback ISBN 9780571222780, £22.50.

On 13 November 1995 the president of the Royal Society, Sir Michael Atiyah, unveiled a plaque in the nave of Westminster Abbey in London commemorating the life of Paul Dirac. Speaking at the ceremony, Stephen Hawking summed up Dirac’s life: “Dirac has done more than anyone this century, with the exception of Einstein, to advance physics and change our picture of the universe.” The plaque depicted Dirac’s equation in a compact relativistic form and the man himself would no doubt have appreciated its terse style. At the time of his passing in 1984 Dirac ranked among the greatest physicists of all time. With the publication of Graham Farmelo’s book The Strangest Man, we have an account of Dirac’s life that is a tour de force.

Dirac’s Swiss father, Charles, taught French at the Merchant Venturers’ Technical College in Bristol and married Florence Holten in 1899. They had three children, Beatrice, Reginald (who committed suicide in 1924) and Paul, who was born on 8 August 1902 – the same year that Einstein started work at the patent office in Bern, two years after Planck had initiated the quantum theory of matter and light. This was the start of the modern era in which classical physics was revolutionized by two great advances – special relativity and quantum mechanics.

Dirac’s early years were overshadowed by his domineering father and a browbeaten, needy mother. “I never knew love nor affection when I was a child,” Dirac once remarked. Certainly, his difficult childhood seems to have deeply influenced the development of his “strange” character. Farmelo also explores another explanation for Dirac’s introversion, literality, rigid behaviour patterns and egocentricity: perhaps Dirac, like his father, was autistic. Nonetheless, in his thirties, Dirac met and married Manci Balázs, an extroverted and passionate woman – his “antiparticle”. Farmelo’s candid and sympathetic account of the couple’s improbable life together makes compelling reading. Yet, according to Farmelo, Dirac only cried once in his life, and that was when Einstein died.

Dirac’s seminal contribution to physics was the unification of Heisenberg and Schrödinger’s quantum mechanics with Einstein’s special relativity, which allowed him to write down a relativistic equation for the electron – the famous Dirac equation. With it he revealed the concept of spin and predicted the existence of antiparticles, subsequently discovered in studies of cosmic rays. In 1933, aged 31, he shared the Nobel prize with Schrödinger.

Dirac was also the creator of quantum electrodynamics and one of the chief architects of quantum-field theory. For him, mathematical beauty and physical argument were instruments for discovery that, if used fearlessly, would lead to unexpected but valid conclusions. Perhaps the single contribution that best illustrates Dirac’s courage is his work on the magnetic monopole, the existence of which would explain the quantization of electric charge. The monopole’s story is still far from complete and more revelations could be forthcoming.

Farmelo succeeds brilliantly in unifying all of the shadowy and contradictory perspectives of Dirac’s character with his life as a scientific genius, and creates a complete picture of the man who played a leading role in the growth of modern physics. The book reveals how Dirac, although aloof and unworldly, was deeply affected by the turbulent and troubled history of the 20th century.
