
ASACUSA measures antiproton mass with record precision

The Japanese-European ASACUSA team at CERN has measured the antiproton-to-electron mass ratio to record-breaking accuracy. The answer is 1836.152674, with an error margin of 5 in the last decimal place, which is equivalent to measuring the distance between Paris and London to within 1 mm. The corresponding ratio for the proton is 1836.15267261, so the new result shows that the mass of the antiproton is the same as that of the proton to nine significant figures (Hori 2006). This precision has been achieved using the “frequency comb” technique, development of which earned John Hall and Theodor Hänsch the Nobel prize in 2005.
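The Paris-London analogy can be checked with a line of arithmetic (the ~343 km distance is an illustrative round figure, not from the article):

```python
# Relative precision of the quoted ratio: an uncertainty of 5 in the
# last (sixth) decimal place, i.e. 0.000005 on a value of about 1836.15.
ratio = 1836.15
uncertainty = 5e-6
rel = uncertainty / ratio              # ~2.7e-9 relative precision

# Scaled to the ~343 km Paris-London distance (illustrative figure):
distance_m = 343e3
equivalent = rel * distance_m          # ~0.0009 m, i.e. about 1 mm
print(f"relative precision {rel:.2e} -> {equivalent * 1e3:.2f} mm")
```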

In the ASACUSA experiment, samples of antiprotonic helium – an atom with an antiproton and an electron orbiting a normal helium nucleus – were produced using CERN’s Antiproton Decelerator facility, and irradiated with a tunable laser beam, the frequency of which could be measured very precisely with the Hall-Hänsch frequency-comb technique. The laser beam could be tuned to one of several characteristic frequencies of the antiprotonic atoms, each frequency corresponding to an atomic transition of the antiproton. Since these frequencies were determined by the properties of the antiproton, the ratio of the antiproton mass to the electron mass could then be calculated from the measured values.

The results can also be combined with an earlier high-precision measurement of the antiproton’s cyclotron frequency (which determines the curvature of its path in a magnetic field). This shows that there is no difference between the proton and antiproton charges either, apart from the sign. Still more precise experiments are planned with the optical comb, and may soon give a margin of error for the antiproton even smaller than the best obtained for the proton itself, which is currently about five times smaller. Surprisingly, the antiproton may soon be known better than the proton.

Quasar jets could be powerful accelerators

Infrared observations of the quasar 3C 273 by the Spitzer Space Telescope are giving new insight into the physics at play in its large-scale jet. The new images, combined with complementary radio, optical and X-ray observations, reveal two distinct spectral components. There is evidence that the component emitting X-rays is also producing synchrotron radiation, implying that ultra-energetic particles are continually accelerated all along the jet.

Back in 1963, 3C 273, an apparently faint star with associated radio emission, was found to be at a cosmological distance (redshift of 0.158). This implied that this “quasi-star” – the first identified quasar – was about 100,000 billion times more luminous than the Sun. It is also the brightest of this class of extreme active galactic nuclei, which completely outshine their host galaxies. At the time of discovery, deep optical images of 3C 273 revealed a faint jet with an extension of about 100,000 light-years ending at the exact position of a second radio source.

More than 40 years later, the detailed structure of this jet has been studied by NASA’s three great observatories: in visible and ultraviolet light by the Hubble Space Telescope, in X-rays by the Chandra Observatory and now also in the infrared by the Spitzer Space Telescope. This, together with radio observations by the Very Large Array (VLA), enables this powerful jet to be studied across the whole electromagnetic spectrum.

A team led by Y Uchiyama at Yale University has now shown that the overall spectrum of individual bright features in the jet contains two distinct spectral components. The first component extends from the radio to the infrared and the second becomes dominant in the visible up to the X-rays. While the low-energy component is undoubtedly of synchrotron origin (electron radiation in a magnetic field), the nature of the second component is uncertain.

The strong X-ray emission of quasar jets detected by Chandra was thought to be due to inverse-Compton radiation produced by electrons scattering off the cosmic microwave background (CMB) photons. This model requires a strong bulk velocity of the jet flow to enhance relativistically the CMB photon field as seen from the electrons. The new finding that the visible and ultraviolet emission is apparently related to the X-ray spectral component places further constraints on this model, which already conflicts with the observed radio and optical polarization.

It now seems more likely that the high-energy component is also of a synchrotron nature. This would require a distinct electron acceleration process all along the jet, which could occur in a region of velocity shear between a central spine of the jet flow and an outer sheath, as suggested by S Jester at Fermilab and collaborators. In such conditions, according to Uchiyama et al., protons could also be accelerated to ultra-high energies of up to 10¹⁹ eV, enabling proton-synchrotron X-ray emission. This would mean that quasar jets are powerful particle accelerators producing extragalactic cosmic-ray protons with energies of 10¹⁶–10¹⁹ eV.

Further reading

S Jester et al. (in press) Astrophysical Journal http://arxiv.org/abs/astro-ph/0605529.

Y Uchiyama et al. (in press) Astrophysical Journal http://arxiv.org/abs/astro-ph/0605530.

The future’s bright for the Pierre Auger Observatory

Water tanks

In 1938, Pierre Auger and colleagues in Paris discovered that showers of cosmic rays can extend over wide areas when they recorded simultaneous events in detectors placed about 30 m apart. Nearly 70 years later, on the pampas of western Argentina, a cosmic-ray observatory bearing Auger’s name is studying extensive air showers over a much wider area, many times the size of Paris itself. These showers are generated by particles with far higher energies than any man-made accelerator can reach, and they continue to challenge our understanding.

The nature and origin of the highest-energy cosmic rays remain obscure. Above 10¹⁹ eV (10 EeV) the rate of particles falling on the Earth’s atmosphere is about one per square kilometre per year, while at 100 EeV, where a small number of particles may have been identified, it falls to less than one per square kilometre per century. Thus detectors must be deployed over vast areas to accumulate useful numbers of events. Remarkably, this approach is practical because the cosmic rays generate giant cascades, or air showers, with more than 10¹⁰ particles at shower maximum for a 10 EeV primary cosmic ray. Some of the shower particles reach ground level, where they are spread over about 20 km². The particles also produce fluorescence light through the excitation of atmospheric nitrogen, which provides an alternative and powerful means of detecting the showers and useful complementary information.
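These fluxes set the scale of the detector area needed. A back-of-the-envelope sketch (the ~3000 km² array area is my round figure for the completed observatory, not a number from the article):

```python
# Event rates for a surface array of roughly the completed observatory's size.
area_km2 = 3000.0           # assumed array area, order of magnitude only

rate_10eev = 1.0            # events per km^2 per year above 10 EeV
rate_100eev = 1.0 / 100.0   # upper bound: < 1 per km^2 per century at 100 EeV

per_year_10eev = area_km2 * rate_10eev     # ~3000 events per year
per_year_100eev = area_km2 * rate_100eev   # at most a few tens per year
print(per_year_10eev, per_year_100eev)
```

Even at this vast scale, the events above the expected spectral steepening arrive only a handful at a time, which is why every extra year of exposure matters.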

The strategy behind the design of the Pierre Auger Observatory is to study showers through detecting not only the particles, with an array of 1600 water Cherenkov detectors, but also the fluorescence light, using four stations, each with six telescopes overlooking the particle detectors. The water tanks are used to measure the energy flow of electrons, photons and muons in the air showers, while the faint light emitted isotropically as the shower moves through the atmosphere can be detected with the fluorescence telescopes. The observatory is now around 70% complete and has been taking data for more than two years.

An event detected

Unlike previous observatories for ultra-high-energy cosmic rays, the Pierre Auger Observatory combines the potential of high statistics from the water tanks, which are on nearly all of the time, with the power of a calorimetric energy determination from the fluorescence devices; it has become known as a hybrid detector. Figure 2 shows an event in which signals are seen in water tanks and in a fluorescence detector.

Alone, the signals from the tanks can be related to the energy of the primary cosmic ray by making assumptions about hadronic interactions. Our limited knowledge about such interactions at the relevant energies will be enhanced by the LHC at CERN, particularly from the forward-physics projects that are being prepared. For now, the energy transferred into the leading particle, the multiplicity and the cross-section for the interaction must all be estimated, while the sparse information on pion-nucleus collisions can be boosted by fixed-target experiments. Additionally, assumptions about the mass of the primary particle must be made, as an iron nucleus, for example, will yield a smaller number of particles at ground level than a proton. With the fluorescence technique, however, these problems can be finessed, and the primary energy can be deduced rather directly.

Locations of 1113 water tanks

Figure 3 shows the layout of the Pierre Auger Observatory on 31 March, by which time 1113 tanks had been deployed with all but five of them filled with 12 tonnes of pure water; 953 are fitted with electronics and are fully operational. Three of the four fluorescence stations are taking data; the building for the fourth is under construction and telescopes will be installed there in late 2006. When completed, the area covered will be about 30 times bigger than Paris.

All the water tanks operate in an autonomous mode: three 9-inch photomultipliers view each volume of water (10 m² × 1.2 m) with a trigger rate set at about 20 Hz. The tank signals are calibrated in units of vertical equivalent muons, each of which gives a summed signal of around 300 photoelectrons. The time of each trigger, determined using a GPS receiver, is sent to a central computer using a purpose-built radio system coupled to a microwave link. The computer is used to find detectors clustered in space and time in the manner expected for an air shower, and when a grouping is identified a signal is sent to each detector requesting that other data be transmitted. Currently about 50 events are recorded every hour above a threshold below 1 EeV, with about two events a day from primaries with energies above 10 EeV. Solar cells provide 10 W for the electronics of each tank. Once in position and operational, as in the example in figure 1, these detectors need little attention except replacing the batteries every four years.
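The clustering step amounts to checking that the GPS trigger times are consistent with a plane shower front crossing the array at the speed of light; for three tanks the front's direction follows exactly. A minimal sketch with made-up tank positions and an injected direction (not Auger code or data):

```python
import math

C = 299_792_458.0  # speed of light in m/s

# Hypothetical tank positions in metres (real Auger tanks sit on a 1.5 km grid).
TANKS = [(0.0, 0.0), (1500.0, 0.0), (0.0, 1500.0)]

def front_times(u, v, t0):
    """GPS trigger times for a plane shower front with direction cosines (u, v)."""
    return [t0 + (u * x + v * y) / C for (x, y) in TANKS]

def fit_direction(times):
    """Exact three-tank solution for the front's direction cosines and zenith angle."""
    t1, t2, t3 = times
    u = C * (t2 - t1) / (TANKS[1][0] - TANKS[0][0])
    v = C * (t3 - t1) / (TANKS[2][1] - TANKS[0][1])
    w = math.sqrt(max(0.0, 1.0 - u * u - v * v))   # vertical direction cosine
    return u, v, math.degrees(math.acos(w))

# Inject a front with a known direction and recover it from the trigger times.
u, v, zenith = fit_direction(front_times(0.3, 0.2, t0=1.0))
print(f"u = {u:.3f}, v = {v:.3f}, zenith = {zenith:.1f} deg")
```

With more than three triggered tanks the same relation is over-determined, which is what lets the central computer reject chance coincidences.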

The 440-photomultiplier camera

A single fluorescence station contains six telescopes, each fitted with a camera that collects light falling on an 11 m² mirror (see figure 4). The camera has 440 hexagonal photomultipliers (40 mm across), each viewing a different part of the sky. Nitrogen fluorescence at wavelengths of 300-400 nm is observed through a filter, which also keeps out dust. Schmidt optics eliminate coma aberration, achieving a spot size of 0.5°. The trigger for these detectors requires that one of a pre-defined set of patterns is recognized within a group of five photomultiplier pixels. Trigger details are then transmitted to the computer that records the water-tank information. Data from the fluorescence signal about the plane in space that contains the shower direction are combined with the time at which water tanks are struck to define the core position and direction with high precision. The core can be located to about 60 m while the direction is obtained to within around 0.5°. This accuracy is much higher than is possible with either detector type alone. Large showers are sometimes seen in stereo by two fluorescence detectors and a few tri-ocular triggers have been obtained: such events allow the accuracy of event reconstruction to be cross-checked.

A major goal of the Pierre Auger Observatory is to make a reliable measurement of the cosmic-ray energy spectrum above 10 EeV. In particular, the researchers aim to answer the question as to whether or not the spectrum steepens above around 50 EeV, as predicted by Kenneth Greisen, Georgi Zatsepin and Vadim Kuzmin shortly after the discovery of the 2.7 K microwave background. The point is that for protons above this energy the microwave radiation is seen Doppler-shifted to the extent that the Δ⁺ resonance is excited. This reaction drains a proton of energy so rapidly that for a proton to be detected at 100 EeV, the cosmic-ray source is expected to lie within 50 Mpc. If there are heavy nuclei in the primary beam, they will be fragmented through photo-disintegration, with the diffuse infrared photon field as important as the 2.7 K radiation. The shape of the observed spectrum will also, of course, reflect the spectrum at the sources.
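The energy scale of this steepening can be sketched with standard two-body kinematics (a textbook estimate, not taken from the article): a head-on collision between a proton of energy E_p and a CMB photon of energy ε excites the Δ⁺ when

```latex
s \simeq m_p^2 + 4E_p\,\epsilon \;\geq\; m_\Delta^2
\quad\Longrightarrow\quad
E_p \;\gtrsim\; \frac{m_\Delta^2 - m_p^2}{4\epsilon}
= \frac{(1.232\,\mathrm{GeV})^2 - (0.938\,\mathrm{GeV})^2}{4 \times \left(6\times 10^{-4}\,\mathrm{eV}\right)}
\;\approx\; 3\times 10^{20}\,\mathrm{eV}.
```

Averaging over photon directions and the high-energy tail of the 2.7 K Planck spectrum brings the effective steepening down to the ~50 EeV region quoted above.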

A further key quantity at the highest energies is the mass of the primary cosmic-ray particles

To determine the spectrum, the Auger Collaboration aim to collect as many events as possible with the surface detectors and to measure the energy of a sub-sample using the fluorescence detectors. The hybrid event shown in figure 2 serves as an example, though in this case reconstruction of the fluorescence light curve, and hence the shape of the cascade, was somewhat simplified because the shower axis was nearly at right-angles to the direction of view. At other orientations the received light is a mixture of fluorescence and Cherenkov radiation arising from high-energy electrons traversing the air. The latter is a particularly serious problem when the trajectory of the shower is towards a telescope.

The data reported so far are from an exposure of 1750 km² sr yr, slightly larger than that achieved by the Akeno Giant Air Shower Array (AGASA) group in Japan. The energy spectrum has been derived from around 3500 events above 3 EeV. Above this energy, the full geometrical area of the detector, defined by the layout of the water tanks, is sensitive so that determination of the flux of events is relatively straightforward. The calibration of the tank signals against the fluorescence detector currently contains relatively large systematic uncertainties (about 30% at 3 EeV and around 50% near 100 EeV), which arise from statistical limitations and uncertainties in the fluorescence yield. The former issue will improve as more data are analysed, while the absolute value of the fluorescence yield is being measured in accelerator laboratories by a small team from the collaboration. Just as with a calorimeter operating in a particle-physics detector, missing energy must be taken into account: although the estimate of this is model- and mass-dependent, the systematic uncertainty in the correction is understood at the 10% level.

Air-shower energy spectrum

The spectrum from the Auger data is shown in figure 5, where it is compared with those from AGASA, HiRes in the US, and the Yakutsk Extensive Air Shower Array in Russia. The general form is similar but, even allowing for the systematic uncertainties still present, it appears that at the highest energies significantly fewer events are seen than expected from the AGASA analysis. The claim of the HiRes team that the spectrum steepens at the highest energies can neither be confirmed nor denied with the present exposure. One event was recorded in April 2004 for which the fluorescence reconstruction gives an energy greater than 140 EeV, but the particle array was small at that date and the shower core fell outside of the fiducial area. Details of the spectrum will be greatly clarified with the data that have been accumulated since June 2005.

A further key quantity at the highest energies is the mass of the primary cosmic-ray particles. This is a significant challenge because of the uncertainties in our understanding of hadronic interactions. Showers produced by iron nuclei will contain more muons than those produced by protons, but the magnitude of the difference is relatively small, muons are expensive to identify and model predictions are uncertain. A practical approach is to study the change of the depth of shower maximum as a function of energy. Again it is necessary to make comparisons with models, but here an additional variable – the magnitude of the fluctuation in the depth of shower maximum – is probably less sensitive to details of different models. The fluctuation in the position of shower maximum is smaller for iron nuclei than for protons. To increase statistics, it is desirable to find a parameter measurable with the surface detectors, so the researchers are exploiting both the fall-off of signal size with distance, and features of the time structure of the tank signals measured with 25 ns flash ADCs.

The collaboration has also developed techniques to search for the photon flux that is expected if the highest-energy cosmic rays arise from the decay of super-heavy relic particles, such as cryptons or wimpzillas, which some theorists speculate were produced in the early universe. On average, showers generated by photons are expected to have maxima deeper in the atmosphere by around 200 g/cm². However, account has to be taken of the orientation of the photon with respect to the Earth’s magnetic field, as conversion into an electron pair is possible and this makes the shower maximum occur higher in the atmosphere. The Landau-Pomeranchuk-Migdal effect must also be accounted for as it leads to significant fluctuations in the shower maximum. A first study has established an upper limit of 16% on the photon fraction above 10 EeV with only 29 events. This limit is not yet very discriminatory, but the technique has significant potential and the result has been submitted to Astroparticle Physics.

A 30 EeV event

Another goal is to search for anisotropies in arrival directions, with the detection of point sources being the “Holy Grail”. Claims of significant effects at high energies have never been confirmed by independent work with higher statistics. So far, the analysis of data from the Pierre Auger Observatory repeats that story. A search for the anisotropy near 1 EeV associated with the galactic centre, claimed by the Adelaide group in a re-analysis of material from the Sydney array and by the AGASA group, has failed to confirm it. Searches at the highest energies have so far been similarly unrewarding.

The Pierre Auger Collaboration is developing the study of inclined events, and showers with zenith angles above 85° have been seen. This was expected as they had been detected long ago with much smaller arrays, but the richness of the new data is impressive. Figure 6 shows an event at about 88° with 31 detectors, and even the present array is too small to contain it. A preliminary estimate of its energy is around 30 EeV. An understanding of these events will lead to additional aperture for collection of the highest-energy particles and also give additional routes to understanding the mass composition. Further, these events form the background against which a neutrino flux might be detectable. There is an exciting future ahead.

COMPASS homes in on the nucleon spin

The COMPASS spectrometer

The concept for the COmmon Muon and Proton Apparatus for Structure and Spectroscopy (COMPASS) experiment first appeared on paper in a proposal submitted to CERN in 1996. A decade later, COMPASS has reached maturity, and is again taking data after the shut-down of most of the CERN accelerator complex during 2005. The year-long break provided the opportunity to carry out important upgrades to the experiment’s spectrometer, and the configuration is now very close to the one first envisaged 10 years ago. The first years of running have in the meantime already shed important light on our understanding of spin in the proton and neutron.

The goal of the COMPASS experiment is to investigate hadron structure and spectroscopy, both of which are manifestations of non-perturbative quantum chromodynamics (QCD). At large scales, QCD appears as a simple and elegant theory. However, when it comes to hadrons, it is difficult to link some of their fundamental properties to quarks and gluons. Questions such as “How is the proton spin carried by its constituents?” and “Do exotics, non-qqbar mesons or non-qqq baryons exist?” still do not have clear answers. In this article we will focus on the contribution of COMPASS to the problem of nucleon spin, as it follows in the footsteps of earlier experiments at CERN.

Investigations of the spin structure of the nucleon are best performed by measuring double spin asymmetries in the deep inelastic scattering (DIS) of polarized leptons (electrons or muons) on polarized proton and neutron targets. These measurements allow the spin-dependent structure function g1(x) for the proton and for the neutron to be extracted.

The first measurements of polarized electron-proton scattering were performed at SLAC in the 1980s by the E80 and E130 Collaborations, and yielded results that were consistent with the Ellis-Jaffe sum rule. The comparison with the Bjorken sum rule is particularly important, but could not be performed at the time as the SLAC experiments did not measure the neutron. Derived as early as 1966 using current algebra tools, this sum rule relates the difference of the first moments of g1 for the proton and the neutron to GA/GV, that is, to fundamental constants of the weak interaction.
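For reference, the sum rule states (at leading order, before perturbative QCD corrections; the ratio written GA/GV in the text appears here as gA/gV):

```latex
\int_0^1 \left[\, g_1^{p}(x) - g_1^{n}(x) \,\right] \mathrm{d}x
\;=\; \frac{1}{6}\left|\frac{g_A}{g_V}\right| ,
```

where gA/gV ≈ 1.26 is measured in neutron beta decay, which is why testing the rule requires data on both the proton and the neutron.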

A breakthrough occurred when the European Muon Collaboration (EMC) at CERN extended these measurements to a much larger kinematic range. Using a polarized muon beam with an energy 10 times higher than at SLAC, and the largest solid polarized target ever built (about 2 litres), in 1988 the collaboration reported a significant violation of the Ellis-Jaffe sum rule for the proton. In the context of the quark-parton model this implied that the total contribution of the quark spins to the proton spin is small – a major surprise that soon came to be known as the “spin crisis”. Soon after, the Spin Muon Collaboration (SMC) experiment was proposed to CERN, with the aim of improving the measurement of g1 for the proton and performing the same measurement with a polarized deuteron target.

Early results

SMC soon achieved a major accomplishment with the first measurement of g1 for the deuteron in 1992. The result, when combined with the EMC result, was in agreement with the Bjorken sum rule, and implied that the Ellis-Jaffe sum rule was also violated for the neutron. This result was particularly important because the first evidence from a competing experiment at SLAC (E142) was quite different, which suggested that either the EMC result or the Bjorken sum rule was wrong. Given the extremely sound theoretical foundations of the Bjorken sum rule, the obvious inference was that the experimental finding at CERN was wrong.

Preliminary data from COMPASS

However, this was not the case. Both the EMC and the SMC experiments were right, and the original discrepancy with E142 turned out to be mostly driven by higher-order QCD corrections. So it was already safe to conclude in 1993 that the spin crisis was a well-established phenomenon for both the proton and the neutron, and that it occurred within the boundaries given by the Bjorken sum rule.

The SMC experiment also provided another important result, determining for the first time the separate contributions of the valence and sea quarks to the nucleon spin via semi-inclusive DIS measurements. Given the large range in x covered by the measurement, the polarized quark distributions could be integrated to obtain the first moments, Δq = ∫₀¹ Δq(x)dx, with resulting values Δuv = 0.77±0.10±0.08, Δdv = -0.52±0.14±0.09 and Δq̄ = 0.01±0.04±0.03 (Adeva et al. 1998). The polarization of the strange sea could not be accessed as this requires full particle identification, which the SMC spectrometer could not provide.

Several experiments at SLAC (E143, E154, E155, E155x), and more recently HERMES at HERA, have confirmed the results from SMC on the structure functions g1. The HERMES Collaboration has also recently reported results on the strange sea polarization.

All these measurements accurately determine ΔΣ, the contribution of both valence and sea quark spins to the nucleon spin, to be only about 20%. However, it was already clear in the mid-1990s that a better understanding of nucleon spin structure demanded separate measurements of the missing contributions, i.e. the gluon polarization ΔG/G and the orbital angular momentum of both the quarks and the gluons. In particular, several theoretical analyses suggested a large contribution from ΔG as a solution to the spin crisis.
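The “missing contributions” are the remaining terms of the standard nucleon helicity sum rule (written here for orientation; the decomposition is not spelled out in the article):

```latex
\frac{1}{2}
\;=\; \frac{1}{2}\,\Delta\Sigma \;+\; \Delta G \;+\; L_q \;+\; L_g ,
```

so with ΔΣ ≈ 0.2 the bulk of the nucleon spin must come from the gluon polarization ΔG and/or the orbital angular momenta L_q and L_g.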

Direct measurements of the gluon polarization

Progress required a new experimental approach, namely semi-inclusive DIS with the identification of the hadrons in the current jet, because the determination both of Δq and Δq̄ and of ΔG requires a flavour-tagging procedure to identify the struck parton. A suggestion to isolate the photon-gluon fusion (PGF) process and measure ΔG directly had been put forward several years previously, and implied measuring the cross-section asymmetry of open charm in DIS. A new experiment, with full hadron identification and calorimetry, therefore seemed to be necessary.

At the same time, transversity, an interesting new physics case for semi-inclusive DIS measurements, was also developing rapidly. To specify the quark state completely at the twist-two level, it was realized that the transverse spin distribution ΔTq(x) has to be added to the momentum distribution q(x) and to the helicity distribution Δq(x). The ΔTq(x) distribution is difficult to measure because, owing to its chiral-odd nature, it cannot be measured in inclusive DIS processes. A possible way to access ΔTq(x) is via the Collins asymmetry, that is, an azimuthal asymmetry of the final hadron with respect to the direction of the transversely polarized quark.

Today transversity is a big issue and is a major part of the programme of many experiments. Originally the idea was much debated in the US, where it was largely responsible for the Spin Project at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. In Europe, a few enthusiasts met at a workshop in March 1993, organized by the late Roger Hess of the University of Geneva, and set down the case for a proposal (called HELP). This was submitted to CERN in the autumn, but was not accepted. However, the physics case was not given up, and, together with the measurement of ΔG, it became one of the important goals laid down in the proposal for a new experiment, COMPASS.

The COMPASS two-stage spectrometer, with particle identification and calorimetry, and the capability to handle a muon beam rate of 10⁸ s⁻¹, was proposed for hall 888 at CERN, after completion of the SMC experiment. Submitted in March 1996, the proposal was fully approved in October 1998, with the first physics run in 2002.

The COMPASS spectrometer

The ambitious goals of the COMPASS experiment required an entirely new spectrometer, making use of state-of-the-art detector technology and data-acquisition systems. A huge step forward had to be made in statistical accuracy and particle identification. In comparison with the SMC experiment, the incident muon flux was increased by a factor of five and a deuteron target material (⁶LiD) with a dilution factor roughly two times better was chosen, together accounting for a 20-fold improvement with respect to SMC. The angular acceptance for particles produced at the primary vertex was increased from ±70 mrad to ±180 mrad by a new superconducting target magnet system. The 3 m long COMPASS solenoid, with a 60 cm diameter, provides a magnetic field with a homogeneity of ±3 × 10⁻⁵ over the 1.3 m long target volume; it is being used for the first time in the 2006 run. Oppositely polarized target sections permit a direct measurement of the cross-section asymmetry.
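The combined factor of 20 follows from a standard statistical scaling argument (my gloss, not spelled out in the article): the error on a measured asymmetry varies as 1/(f·P·√N), so the figure of merit grows linearly with the muon flux Φ but quadratically with the dilution factor f:

```latex
\frac{\mathrm{FOM}_{\mathrm{COMPASS}}}{\mathrm{FOM}_{\mathrm{SMC}}}
\;=\;
\frac{\Phi_{\mathrm{COMPASS}}}{\Phi_{\mathrm{SMC}}}
\times
\left(\frac{f_{\mathrm{COMPASS}}}{f_{\mathrm{SMC}}}\right)^{2}
\;=\; 5 \times 2^2 \;=\; 20 .
```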

The large acceptance requires a two-stage spectrometer. Figure 1 shows an artist’s view of the apparatus, which, including the beam detection, fills the 100 m long experimental hall. Particles with large angles and relatively low momentum are detected in the first stage, while the fast, more central particles are analysed in the second stage, which comprises a spectrometer magnet with a stronger field and a smaller gap.

A ring imaging Cherenkov (RICH) detector provides charged particle identification. This has 116 UV-reflective mirrors forming upper and lower spherical surfaces, which focus the photons onto the upper and lower photon detectors. These detectors are multiwire proportional chambers (MWPCs) with CsI photocathodes and 80,000 pixelized read-out channels. With an area of 5.3 m², they represent the largest such system ever deployed. A new dead-time-less APV-based read-out system for the MWPCs will be in operation for the 2006 run, while the central quarter of the system has been replaced by multi-anode photomultipliers, each with an individual lens system. This technique, which has developed enormously since the RICH was originally designed, will considerably improve background rejection and rate capability.

Preliminary Collins asymmetries

Photons, and therefore also neutral pions, are analysed in two electromagnetic lead-glass calorimeters, one in each spectrometer stage. The larger one, ECAL1, is being used for the first time in the 2006 run. Two hadron calorimeters reinforce particle identification and support the formation of the trigger, which is based on an array of scintillator hodoscopes and is formed in the 500 ns following the interaction. For the detector control, the supervisory control and data-acquisition system selected for the LHC was chosen. Here, as in many other areas, COMPASS has done pioneering work that will benefit the LHC.

The high particle rates in COMPASS present a real challenge for the central particle tracking, as conventional tracking detectors would suffer from big inefficiencies. COMPASS has therefore turned to novel technologies, using micromesh gaseous structure (Micromegas) and gaseous electron-multiplier (GEM) techniques in large sizes and in large quantities for the first time. Both techniques are based on the concept of minimizing the distance that positive ions can travel by confining the gas amplification region to 50-100 μm. The Micromegas technology is based on an idea by Nobel Prize winner Georges Charpak, while the GEM is a development by Fabio Sauli’s group at CERN. Both detector types have been operating for three years in the intense muon beam of COMPASS without any sign of deterioration. Another new concept, to be employed when COMPASS uses a hadron beam, concerns cold silicon-strip detectors, which reduce the ageing effect to a large extent.

To complete the overall detector assembly, precise timing information is provided by scintillating fibre trackers placed throughout the spectrometer close to the beam region. In the more peripheral region multiwire proportional chambers, drift chambers, drift tubes and straw chambers perform the large-angle particle tracking.

To cope with the high data rate arising from 250,000 read-out channels at a trigger rate of up to 20 kHz, the data acquisition has also had to enter new territory. Once the trigger is formed, the data are taken from the memory of custom-made front-end electronics, transferred to the event-building computers and stored on tape, at a rate of about 5 TB/day. Data storing and handling represent a challenge in themselves. The offline system is dealing with a raw data size of 400 TB/year, and once again COMPASS has been the guinea pig for the future experiments at the LHC. In the first three years of operation 20 billion events have been put on tape and processed several times.

The first important results have already been obtained from the huge amount of data collected by COMPASS. The g1 structure function of the deuteron has been measured with unprecedented accuracy in the low-x region, improving by at least a factor of six the precision of the SMC measurement (Ageev et al. 2005). Essential data for g1 come from SLAC and HERA (and recently from Jefferson Lab), but the CERN experiments are unique at low x, giving an invaluable contribution to the evaluation of the first moment of g1 and thus ΔΣ, which requires the data to be extrapolated to x = 0. The Q² evolution of g1 also contains important information on ΔG. Here the COMPASS data have a particular impact, since they lie at the high-Q² end of the available experimental information. The new preliminary COMPASS data are shown in figure 2, together with the SMC data and the result of a recent QCD fit to the world data set comprising 230 data points. Recent fits now suggest rather small values for ΔG.

Competition breeds innovation

Direct measurements of ΔG are particularly important. In this field COMPASS is in competition with the experiments at RHIC, which look at the cross-section asymmetry of prompt photons or π⁰s produced in collisions between polarized protons to estimate ΔG. Three independent measurements have been performed by COMPASS using the cross-section asymmetry of (i) open-charm production (detecting either D or D* charmed mesons); (ii) high-pT hadron pairs in DIS events (Q² > 1 GeV²); and (iii) high-pT hadron pairs in photoproduction (Q² < 1 GeV²). In all these processes, the photon-gluon fusion (PGF) contribution is important, but the background is different. COMPASS is unique in the open-charm measurement. The high-pT hadron-pairs method was invented within the COMPASS Collaboration while setting up the experiment, and has already been applied to estimate ΔG by HERMES (all Q²) and SMC (Q² > 1 GeV²).

The COMPASS results (Ageev et al. 2006) are shown in figure 3 together with the results from the other collaborations and next-to-leading-order QCD fits corresponding to a first moment of ΔG at Q² = 3 GeV² of 2.5, 0.6 and 0.2 for the maximum, standard and minimum scenarios, respectively. Small values for ΔG are favoured, and method (iii) from COMPASS now provides fairly precise information.

Another prime objective of COMPASS is the investigation of transverse spin effects. The transversity distributions are difficult to measure because they can be obtained from the transverse spin asymmetries only after unfolding the Collins effect. This requires a global analysis of transverse spin asymmetries of several identified hadrons produced in semi-inclusive DIS, as well as the analysis of spin asymmetries in e⁺e⁻ → 2 hadrons, as currently measured by the BELLE Collaboration. In this worldwide effort, COMPASS has provided the first asymmetry data for the deuteron. The measured asymmetries are very small (Alexakhin et al. 2005; figure 4). Taking into account the fact that the HERMES Collaboration has measured non-zero Collins asymmetries on a transversely polarized proton target, the COMPASS result very likely points to a cancellation between proton and neutron, much as in the longitudinal case, where g1 for the neutron has the opposite sign to g1 for the proton. Further investigations of transverse-spin effects are related to ongoing measurements of the Sivers asymmetry, the two-hadron interference function and the Λ polarization transfer.

The search continues

The COMPASS analysis group is currently investigating many more physics channels. The wealth of data allows for the search for new states in quasi-real photoproduction; the recently announced pentaquark states Θ⁺(1530) and Ξ(1860) have been looked for here, but so far with negative results. Very large samples of Λ hyperons allow the study of reaction mechanisms and polarization transfer. In a similar way, the measurement of the spin density matrix of vector mesons (ϕ, ρ, ρ′) provides stringent tests of reaction mechanisms, such as s-channel helicity conservation; a phase-shift analysis of the π⁺π⁺π⁻π⁻ system has recently begun.

Running-in the new spectrometer has taken some time. Because the COMPASS polarized-target magnet has not been available, the experiment has until now used the SMC target system, which has a much smaller acceptance. Physics data were collected in 2002, 2003 and 2004, doubling the amount of data each year thanks to several improvements in the apparatus. This should again be the case for the 2006 run, when the new COMPASS polarized-target magnet will be used for the first time.

COMPASS is scheduled to take data until the end of the decade both for its hadron programme and for its muon programme with a polarized proton target. An Expression of Interest has been put forward for a new experimental programme, based on an upgraded COMPASS spectrometer (COMPASS-II) and an even higher beam flux. The emphasis of the future programme will be on the still unknown orbital angular momentum of the partons inside the nucleon. This will be addressed in two different ways, first by the measurement of generalized parton distributions in deeply virtual Compton scattering and in hard exclusive meson production processes, and second by a precise determination of the first moments of the transversity distributions that are linked to the orbital angular momentum via the Bakker-Leader-Trueman sum rule. Of course, an important part of the COMPASS-II programme will still be spectroscopy, where many open questions remain.

Let there be axions

One of the biggest mysteries of science is the nature of dark matter, which first became apparent as astronomer Fritz Zwicky’s “dunkle Materie” in 1933. The two leading particle candidates for this “missing matter” are weakly interacting massive particles (WIMPs) and axions – hypothesized uncharged particles that have a very small but unknown mass, which barely interact with other particles. To bring together the widespread axion community, the Integrated Large Infrastructure for Astroparticle Science (ILIAS), the CERN Axion Solar Telescope (CAST) collaboration and CERN have organized a series of training workshops on current axion research, including open discussions between theorists and experimentalists. The first two of these were held at CERN in November and at the University of Patras in Greece, in May. This article highlights the presentations at both meetings.

The idea of the axion has been around for some 30 years, proposed as a solution to the strong charge-parity (CP) problem in quantum chromodynamics (QCD), the theory of strong interactions. According to the basic field equations of QCD, strong interactions should violate CP symmetry, rather as weak interactions do. However, strong interactions show no sign of CP violation. In 1977, Roberto Peccei and Helen Quinn suggested that to restore CP conservation in strong interactions, a new symmetry must be present, compensating the original CP-violating term in QCD almost exactly – to at least one part in 10¹⁰. The spontaneous breakdown of this symmetry gives rise to the so-called axion field proposed by Steven Weinberg and Frank Wilczek, and the associated pseudo-scalar particle – the axion. Appropriately, Peccei, from the University of California Los Angeles, gave the first lecture of the workshop series and described the theoretical raison d’être of the Peccei-Quinn symmetry.
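
For reference, the CP-violating term in question can be written in the standard textbook form (not taken from the lectures themselves):

```latex
% CP-violating theta-term of the QCD Lagrangian:
\mathcal{L}_{\theta} \;=\; \bar{\theta}\,\frac{g_s^{2}}{32\pi^{2}}\,
G^{a}_{\mu\nu}\,\tilde{G}^{a\,\mu\nu},
\qquad
\tilde{G}^{a\,\mu\nu} \;\equiv\; \tfrac{1}{2}\,\varepsilon^{\mu\nu\rho\sigma}\,G^{a}_{\rho\sigma}
```

A rough textbook estimate gives a neutron electric dipole moment of order dₙ ∼ θ̄ × 10⁻¹⁶ e cm, so the experimental limits on dₙ translate into θ̄ ≲ 10⁻¹⁰ – the "one part in 10¹⁰" mentioned above.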

Evidence for strong CP violation should in particular appear as an electric dipole moment (EDM) of the neutron, but this has not yet been detected. Instead, we know from a high-precision measurement using polarized ultracold neutrons at the Institut Laue-Langevin (ILL) in Grenoble that the neutron EDM is at least some 10 orders of magnitude below expectation. Peter Geltenbort of ILL presented the recently announced limit of 3 × 10⁻²⁶ e cm. This is part of a series of experiments started by Nobel laureates Norman Ramsey and Edward Purcell in the 1950s, which continues today with the ambitious goal of reaching 10⁻²⁸ e cm by the end of the decade. Other proposed neutron-EDM experiments include those at the Paul Scherrer Institut and at the Spallation Neutron Source in Oak Ridge, with goals of 10⁻²⁷ e cm and 10⁻²⁸ e cm, respectively. A new technique with the deuteron may provide the route to the next sensitivity scale, reaching 10⁻²⁹ e cm, as Yannis Semertzidis of Brookhaven explained.

Stars and dark matter

CP violation seems to be necessary to explain the survival of matter at the expense of antimatter after the Big Bang. Thus the creation of relic axions shortly after the dawn of time could have been enormous, perhaps amounting to some six times more in mass than ordinary matter. In addition to the scenario of relic axions, Georg Raffelt, an axion pioneer from the Max Planck Institute, introduced the connections between astrophysics and axions, with the stars as axion sources as his central topic. The effect of such an energy-loss channel on stellar physics provides constraints on the interaction strength of axions with ordinary particles. The Sun, our best known star, should be a strong axion source in the sky, allowing a direct search for these almost-invisible particles.

This is precisely the objective of the CAST helioscope at CERN, which searches for solar axions using a recycled LHC test dipole magnet pointing at the Sun for some three hours a day. The signal of solar axions would be an excess of X-rays detected during solar tracking. While relic axions are expected to move slowly, at about 300 km/s, those escaping from the solar core are highly relativistic: their mean energy of about 4 keV is far above any plausible axion rest mass. CAST is the first helioscope ever built with an imaging X-ray optical system, whose working principle was explained by Peter Friedrich from the Max-Planck-Institut für extraterrestrische Physik and Regina Soufli from Lawrence Livermore National Laboratory (LLNL) in their lectures on X-ray optics. For axion detection, the X-ray optics act as a concentrator to enhance the signal-to-noise ratio by focusing the converted solar X-rays into a small spot on a CCD chip or a micromesh gaseous structure (Micromegas), as developed by Yannis Giomataris and Georges Charpak. CAST has been taking data since the end of 2002 and has already published first results.
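
The sensitivity of such a helioscope rests on the standard expression (a textbook sketch in natural units, not the collaboration's detailed formula) for coherent axion-photon conversion in a transverse magnetic field B over a length L in vacuum:

```latex
% Axion-to-photon conversion probability in a transverse field B of length L:
P_{a\to\gamma} \;=\;
\left(\frac{g_{a\gamma}\,B\,L}{2}\right)^{2}
\left[\frac{\sin\!\left(qL/2\right)}{qL/2}\right]^{2},
\qquad
q \;\simeq\; \frac{m_a^{2}}{2E_a}
```

Here g_aγ is the axion-photon coupling and q the axion-photon momentum transfer; for qL ≪ 1, i.e. for sufficiently small axion masses, the conversion is fully coherent over the magnet length.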

The possible existence of axions in the universe means that they are a candidate for (very) cold dark matter, as another axion pioneer, Pierre Sikivie from the University of Florida, explained. He also described the technique that he invented in 1983 for detecting axions. The idea is that axions in the galactic halo may be resonantly converted to microwave photons in a cavity permeated by a strong magnetic field. The expected signals are extremely weak, measured in yoctowatts, or 10⁻²⁴ W. The same holds for the solar axions inside the CAST magnet, even though their energies of a few kilo-electron-volts (keV) are several orders of magnitude higher. The process depends on various parameters, such as the magnetic-field strength and extent, the plasma density, the (unpredictable) axion rest mass and the photon polarization – all of which provide the multiparameter space in which axion hunters search for their quarry.
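
Schematically, the signal power expected on resonance in Sikivie's cavity scheme scales as follows (numerical factors and mode details omitted; a standard sketch rather than Sikivie's exact expression):

```latex
% Haloscope signal power on resonance (schematic scaling):
P_{\mathrm{sig}} \;\propto\;
g_{a\gamma}^{2}\,\frac{\rho_{a}}{m_{a}}\,B^{2}\,V\,C\,Q
```

Here ρ_a is the local halo axion density, B the magnetic field, V the cavity volume, C a dimensionless form factor of the cavity mode and Q the loaded quality factor; with realistic parameters this lands at the yoctowatt scale quoted above.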

Sikivie also described the search for relic axions at LLNL, the topic of the CERN seminar at the start of the first workshop, presented by Karl van Bibber from LLNL. The Axion Dark Matter eXperiment (ADMX), which uses a microwave cavity to look for axionic dark matter as proposed by Sikivie, has been taking data for a decade. It is now undergoing an upgrade to use near-quantum-limited SQUID amplifiers. In his review, van Bibber also described CARRACK, a similar experiment in Kyoto, which uses a Rydberg-atom single-quantum detector as the back-end of the experiment.

The axion, together with the Higgs boson – another so-far undetected particle required by theory – may contribute not only to dark matter but also to dark energy, as Metin Arik from Istanbul explained. This leads to the question of why the dark-energy density is so small.

Light polarization

Giovanni Cantatore presented the Polarizzazione del Vuoto con LASer (PVLAS) experiment at the INFN Legnaro National Laboratory, which has recently caused a stir in the axion community. In a recent paper in Physical Review Letters, the PVLAS collaboration reports that a magnetic field can be used to rotate the polarization of light in a vacuum. The detected rotation is extremely small, about 0.00001°. The slight twist in the polarization, the result of photons of a given polarization disappearing from the beam, could suggest the existence of a new light neutral boson, as the signal strength observed by PVLAS is much larger than would be expected on the basis of quantum electrodynamics alone.

The particle suggested by PVLAS is not exactly the expected axion; its coupling to two photons is so strong that experiments searching for axions, such as CAST, should have seen many such particles coming from astrophysical sources. It would need peculiar properties not to conflict with current astrophysical observations, but there is no fundamental reason barring it from having such properties. Eduard Masso from the University of Barcelona reviewed the theoretical motivation for axions and the importance of an axion-like coupling to photons, and addressed the apparent conflict between the PVLAS result and the bounds from CAST and from astrophysics.

Andreas Ringwald from DESY pointed out that the possible interpretation of the PVLAS anomaly in terms of the production of an axion-like particle has triggered a re-examination of the astrophysical constraints. Models exist in which the production of axion-like particles in stars is suppressed compared with the production in a vacuum. In these models, the bounds derived from the age of stars or from CAST may be relaxed by some orders of magnitude. The workshop participants agreed unanimously that the PVLAS result needs direct confirmation of the particle hypothesis with laboratory-based experiments.

Semertzidis spoke about a PVLAS-type experiment that was performed at Brookhaven more than 15 years ago, with most of the PVLAS collaborators as major players. They also observed large signals, which, however, they attributed to motion of the laser light at the magnet rotation frequency. He went on to suggest that laser motion at the magnet rotation frequency might also produce signals at the second harmonic that would look like axion signals. The PVLAS collaboration has spent five years looking for a systematic artefact that might explain their observations, and plans to attempt to settle the question in a new photon-regeneration experiment. Here, any particles produced from photons in a first magnet would propagate into a second magnet that is blocked to photons, where they would convert back into photons.

Detection of such regenerated photons would provide a very robust confirmation of the particle interpretation of the PVLAS result, and similar regeneration experiments are in preparation elsewhere. Keith Baker presented the plans by the Hampton University-Jefferson Lab collaboration to use the world’s highest-power tunable free-electron laser (FEL) in the LIght Pseudoscalar-Scalar Particle Search (LIPSS) experiment, which will run during the coming months. As Ringwald pointed out, there are a number of experiments based either on photon-polarization or on photon-regeneration measurements that should soon exceed the sensitivity of PVLAS. At DESY, there is a proposal to exploit the photon beam from the Free-electron LASer in Hamburg (FLASH) for the Axion Production at the FEL (APFEL) experiment, which will take advantage of unique properties of the FLASH beam. The available photon energies (around 40 eV) are just in the range where photon regeneration is most sensitive to masses in the milli-electron-volt range. In addition, the tuning possibilities of FLASH will allow a mass determination, and the pulsed nature of the photon beam allows noise reduction by timing.

Two linked experiments to search for axions proposed by a team from CERN and several other institutes are also well advanced. These were presented by Pierre Pugnat from CERN, who explained how this approach allows for simultaneous investigations of the magneto-optical properties of the quantum vacuum and of photon regeneration. The team could start next year to check the PVLAS result. The two experiments are integrated in the same LHC superconducting dipole magnet and so can provide solid results via mutual cross-checks.

Carlo Rizzo from Université Paul Sabatier in Toulouse presented a different detection concept, the Biréfringence Magnétique du Vide experiment at the Laboratoire National des Champs Magnétiques Pulsés in Toulouse. The goal is to study quantum-vacuum magnetism, and the experiment will be in operation this summer to test the PVLAS result.

Frank Avignone from South Carolina reviewed possibilities that go beyond the current experimental searches for axions, such as the use of coherent Bragg-Primakoff conversion in single crystals, coherence issues in vacuum and gas-filled magnetic helioscopes, and novel proposals to detect hadronic axions with suppressed electromagnetic couplings. Emmanuel Paschos of the University of Dortmund addressed possible coherence phenomena in low-energy axion scattering and their potential use for axion detection. This could be an important application of light-sensitive detectors used in underground dark-matter experiments, where they may allow the first low-energy axion searches, as reported by Klemens Rottler from the University of Tübingen and the CRESST dark-matter experiment. After all, the solar-axion energy range below 0.5-1 keV remains a challenging new territory.

From the Sun and beyond

The signatures of axions are not confined to the solar system, and there were a number of interesting presentations on searches for axions or axion-like particles with telescopes on the ground or in orbit. A cosmologically interesting topic concerns axion-photon conversion induced by intergalactic magnetic fields, which offers an alternative explanation for the dimming of distant supernovae, without the need for cosmic acceleration. However, the same mechanism would cause excessive spectral distortion of the cosmic microwave background (CMB). Alessandro Mirizzi of Bari concluded that, owing to the spectral shape of the CMB, photon-axion oscillation can play only a relatively minor role in supernova dimming. Nevertheless, a combined analysis of all the observables affected by photon-axion oscillations would be required to give a final verdict on this model.

In related work, Damien Hutsemékers from the University of Liège has investigated the potential for photon-axion conversion within a magnetic field over cosmological distances, as it can affect the polarization of light from distant objects such as quasars. He reported on the remarkable observation, using the ESO telescopes in Chile, of alignments of quasar polarization vectors that might be due to axion-like particles along the line of sight.

Rizzo also discussed potential axion signatures in astrophysical observations, presenting an impressive movie. He reported that axion and quantum-vacuum effects have been studied in the double neutron-star system J0737-3039. Astrophysical observations of such effects will be possible in 2007 with ESA’s XMM-Newton and NASA’s GLAST telescopes in orbit.

Coming nearer to Earth, Hooman Davoudiasl from the University of Wisconsin-Madison showed that solar-axion conversion to photons in the Earth’s magnetosphere can produce an X-ray flux, with average energy of about 4 keV, which is measurable on the dark side of the Earth. (The low strength of the Earth’s magnetic field is compensated for by a large magnetized volume.) The signal has distinct features: a flux of X-rays coming from the dark Earth, pointing back to the core of the Sun, with a thermal distribution characteristic of the solar core, and orbital as well as annual modulations. For axion masses less than 10⁻⁴ eV, a low-Earth-orbit X-ray telescope could probe the axion-photon coupling well below the current laboratory bounds with a few days of data-taking. Participants also discussed whether axion-photon oscillations inside solar magnetic fields could be sufficient to explain the enhanced X-ray emission from regions such as sunspots.

Another possibility is the detection of the radiative decay of massive axions, predicted in extra-dimensional models, which drastically change their mass, lifetime and detection prospects, as Emilian Dudas from Ecole Polytechnique argued. In this context, Juhani Huovelin from Helsinki Observatory presented space-borne X-ray observations of the Sun and the sky background with ESA’s SMART-1, the first European mission to the Moon, which began operation in 2004 and will continue data-taking until September 2006. The important instruments on board for axion research are an X-ray camera from the CCLRC Rutherford Appleton Laboratory in the UK, and the X-ray Solar Monitor (XSM) from the University of Helsinki. The XSM measures solar X-ray spectra with high time resolution in the 1-20 keV energy range.

Extensive data have already been accumulated, including a series of lengthy observations of the X-ray Sun during quiescence and flares, as well as various observations of the background sky. Preliminary analysis of the data indicates possible residual emission at several intervals in the 2-10 keV range after fitting known solar and sky-background emission components. A future, more refined analysis will show whether the residual emission is statistically significant, and possibly related to X-rays from the decay of gravitationally trapped massive axions. The NASA solar mission RHESSI has also entered this kind of research, with the aim of detecting the same sort of particles near the surface of the Sun, as we published with Luigi di Lella at CERN five years ago. SMART-1 and RHESSI use the Moon and the Sun respectively to block out the background sky, thereby creating a large fiducial volume in which to search for axion radiative decay. The 1 m³ DRIFT detector operating in the Boulby Mine in the UK provides a similar capability in an underground experiment, as Eirini Tziaferi and Neil Spooner from Sheffield explained.

The friendly atmosphere of the two workshops saw plenty of fruitful discussions in which new ideas could emerge. For example, Ringwald has recently suggested a laboratory photon-regeneration experiment with X-rays. It seems that the ESRF in Grenoble offers one of the best opportunities worldwide for such an experiment, with photon energies in the 3-70 keV range. Also, as Sikivie highlighted, there is strong scientific interest in building a next-generation microwave cavity embedded in a large-bore superconducting solenoid to detect galactic-halo axions. CERN, for example, together with several collaborating institutes, could build a microwave cavity of around 1 m³ integrated inside an 8-10 T magnetic field.

The workshop participants unanimously concluded with a call to CERN to become a focal point for axion physics. There will be more ideas and new results by the next workshop in June 2007 in Patras.

Ettore Majorana: genius and mystery

Ettore Majorana was born in Sicily in 1906. An extremely gifted physicist, he was a member of Enrico Fermi’s famous group in Rome in the 1930s, before mysteriously disappearing in March 1938.

The great Sicilian writer Leonardo Sciascia was convinced that Majorana decided to disappear because he foresaw that nuclear forces would lead to nuclear explosives a million times more powerful than conventional bombs – like those that would later destroy Hiroshima and Nagasaki. Sciascia came to visit me at Erice, where we discussed this topic for several days. I tried to change his mind, but there was no hope. He was too absorbed by an idea that, for a writer, was simply too appealing. In retrospect, after years of reflection on our meetings, I believe that one of my assertions about Majorana’s genius actually corroborated Sciascia’s idea. At one point in our conversations I assured Sciascia that it would have been nearly impossible – given the state of physics in those days – for a physicist to foresee that a heavy nucleus could be broken to trigger the chain reaction of nuclear fission. Impossible for what Enrico Fermi called first-rank physicists, those who were making important inventions and discoveries, I suggested, but not for geniuses such as Majorana. Maybe this information convinced Sciascia that his idea about Majorana was not just probable, but actually true – a truth that the disappearance further corroborated.

There are also those who think Majorana’s disappearance was related to spiritual faith and that he retreated to a monastery. This perspective on Majorana as a believer comes from his confessor, Monsignor Riccieri, whom I met when he came from Catania to Trapani as bishop. Remarking on the disappearance, Riccieri told me that Majorana had experienced “mystical crises” and that, in his opinion, suicide in the sea was to be excluded. Bound by the sanctity of the confessional, he could tell me no more. After the establishment of the Erice Centre, which bears Majorana’s name, I had the privilege of meeting Majorana’s entire family. No one ever believed it was suicide. Majorana was an enthusiastic and devout Catholic and, moreover, he withdrew his savings from the bank a week before his disappearance. The hypothesis shared by his family and others who had the privilege of knowing him (Fermi’s wife Laura was one of the few) is that he withdrew to a monastery.

Laura Fermi recalls that when Majorana disappeared, Enrico Fermi said to his wife, “Ettore was too intelligent. If he has decided to disappear, no-one will be able to find him. Nevertheless, we have to consider all possibilities.” In fact, Fermi even tried to get Benito Mussolini himself to support the search. On that occasion (in Rome in 1938), Fermi said: “There are several categories of scientists in the world; those of second or third rank do their best but never get very far. Then there is the first rank, those who make important discoveries, fundamental to scientific progress. But then there are the geniuses, like Galilei and Newton. Majorana was one of these.”

A genius, however, who looked on his own work as completely banal: once a problem was solved, Majorana did his best to leave no trace of his own brilliance. This can be witnessed in the stories of the neutron discovery and the hypothesis of the neutrinos that bear his name, as recalled below by Emilio Segré and Giancarlo Wick (on the neutron) and by Bruno Pontecorvo (on neutrinos). Majorana’s comprehension of the physics of his time had a completeness that few others in the world could match.

Oppenheimer’s recollections

Memories of Majorana had nearly faded when, in 1962, the International School of Physics was established in Geneva, with a branch in Erice. It was the first of the 150 schools that now form the Centre for Scientific Culture, which today bears Majorana’s name. It is in this context that an important physicist of the 20th century, Robert Oppenheimer, told me of his knowledge of Majorana.

After having suffered heavy repercussions for his opposition to the development of weapons even stronger than those that destroyed Hiroshima and Nagasaki, Oppenheimer had decided to get back to physics while visiting the biggest laboratories at the frontiers of scientific knowledge. This is how he came to be at CERN, the largest European laboratory for subnuclear physics.

At this time, many illustrious physicists participated in a ceremony that dedicated the Erice School to Majorana. I myself – at the time very young – was entrusted with the task of speaking about the Majorana neutrinos. Oppenheimer wanted to voice his appreciation for how the Erice School and the Centre for Scientific Culture had been named. He knew of Majorana’s exceptional contributions to physics from the papers he had read, as any physicist could do at any time. What would have remained unknown was the episode he told me as a testimony to Fermi’s exceptional opinion of Majorana. Oppenheimer recounted the following episode from the time of the Manhattan Project, which in the course of only four years transformed the scientific discovery of nuclear fission into a weapon of war.

There were three critical turning points during the project, and during the executive meeting to address the first of these crises, Fermi turned to Eugene Wigner and said: “If only Ettore were here.” The project seemed to have reached a dead end in the second crisis, during which Fermi exclaimed once more: “This calls for Ettore!” Other than the project director himself (Oppenheimer), three people were in attendance at these meetings: two scientists (Fermi and Wigner) and a military general. After the “top secret” meeting, the general asked Wigner who this “Ettore” was, and he replied: “Majorana”. The general asked where Majorana was, so that he could try to bring him to America. Wigner replied: “Unfortunately, he disappeared many years ago.”

By the end of the 1920s, physics had identified three fundamental particles: the photon (the quantum of light), the electron (needed to make atoms) and the proton (an essential component of the atomic nucleus). These three particles alone, however, left the atomic nucleus shrouded in mystery: no-one could understand how multiple protons could stick together in a single atomic nucleus. Every proton has an electric charge, and like charges repel each other. A fourth particle was needed, heavy like the proton but without electric charge. This was the neutron, but no-one knew it at the time.

Then Frédéric Joliot and Irène Curie discovered a neutral particle that can enter matter and expel a proton. Their conclusion was that it must be a photon, because at the time it was the only known particle with no charge. Majorana had a different explanation, as Emilio Segré and Giancarlo Wick recounted on different occasions, including during visits to Erice. (Both Segré and Wick were enthusiasts for what the school and the centre had become in only a few years, all under the name of the young physicist whom Fermi considered a genius alongside Galilei and Newton.) Majorana had explained to Fermi why the particle discovered by Joliot and Curie had to be as heavy as a proton, even while being electrically neutral: to move a proton requires something as heavy as the proton, so a fourth particle must exist – a proton with no charge. And so was born the correct interpretation of what Joliot and Curie had discovered in France: the existence of a particle that is as heavy as a proton but without electrical charge. This particle is the indispensable neutron. Without neutrons, atomic nuclei could not exist.

Fermi told Majorana to publish his interpretation of the French discovery right away. Majorana, true to his belief that everything that can be understood is banal, did not bother to do so. The discovery of the neutron is in fact justly attributed to James Chadwick for his experiments with beryllium in 1932.

Majorana’s neutrinos

Today, Majorana is particularly well known for his ideas about neutrinos. Bruno Pontecorvo, the “father” of neutrino oscillations, recalls the origin of Majorana neutrinos in the following way: Dirac discovers his famous equation describing the evolution of the electron; Majorana goes to Fermi to point out a fundamental detail: “I have found a representation where all the Dirac γ matrices are real. In this representation it is possible to have a real spinor that describes a particle identical to its antiparticle.”

The Dirac equation needs four components to describe the evolution in space and time of the simplest of particles, the electron; it is like saying that it takes four wheels (like a car) to move through space and time. Majorana jotted down a new equation: for a chargeless particle like the neutrino, which is similar to the electron except for its lack of charge, only two components are needed to describe its movement in space-time – as if it uses two wheels (like a motorcycle). “Brilliant,” said Fermi, “Write it up and publish it.” Remembering what happened with the neutron discovery, Fermi wrote the article himself and submitted the work under Majorana’s name to the prestigious scientific journal Il Nuovo Cimento (Majorana 1937). Without Fermi’s initiative, we would know nothing about the Majorana spinors and Majorana neutrinos.
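
Pontecorvo’s remark can be phrased compactly in modern notation (a standard textbook statement, not Majorana’s original wording): in a representation where the Dirac equation has real coefficients, a spinor may consistently be required to equal its own charge conjugate,

```latex
% Majorana (reality) condition on a spinor:
\psi \;=\; \psi^{c} \;\equiv\; C\,\bar{\psi}^{\,T}
```

where C is the charge-conjugation matrix. The condition identifies particle with antiparticle and halves the number of independent components – the “two wheels” of the analogy – and only an electrically neutral particle such as the neutrino can satisfy it.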

The great theorist John Bell conducted a rigorous comparison of Dirac’s and Majorana’s “neutrinos” in the first year of the Erice Subnuclear Physics School. The detailed version can be found in the chapter that opens the 12 volumes published to celebrate Majorana’s centenary. These volumes describe the highlights leading up to the greatest synthesis of scientific thought of all time, which we physicists call the Standard Model. This model has already pushed the frontiers of physics well beyond what it first promised, so the goal now is the Standard Model and beyond.

Today we know that three types of neutrinos exist. The first controls the combustion of the Sun’s nuclear engine and keeps it from overheating. One of the dreams of today’s physicists is to prove the existence of Majorana’s hypothetical neutral particles, which are needed in grand unification theory. This is something that no-one could have imagined in the 1930s. And no-one could have imagined the three conceptual bases needed for the Standard Model and beyond.

Particles with arbitrary spin

In 1932 the study of particles with arbitrary spin was considered a pure mathematical curiosity, and Majorana’s paper on the subject remained virtually unknown despite being full of remarkable new ideas (Majorana 1932). Today, three-quarters of a century later, this mathematical curiosity of 1932 still represents a powerful source of new ideas. Indeed, the paper contains the first hints of supersymmetry, of the spin–mass correlation and of spontaneous symmetry breaking (SSB) – three fundamental concepts underpinning the Standard Model and beyond. This means that our current conceptual understanding of the fundamental laws of nature was already present in Majorana’s attempt to describe particles with arbitrary spin in a relativistically invariant way.

Majorana starts with the simplest representation of the Lorentz group, which is infinite-dimensional. In this representation, states with integer spin (bosons) and half-integer spin (fermions) are treated equally. In other words, the relativistic description of particle states allows bosons and fermions to exist on an equal footing. These two fundamental sets of states are the first hint of supersymmetry.

Another remarkable novelty is the correlation between spin and mass. The mass eigenvalues are given by a relation of the type m = m0/(J + 1/2), where m0 is a given constant and J is the spin. The mass decreases with increasing spin – the opposite of what would emerge, many decades later, from the study of the strong interactions between baryons and mesons (now known as Regge trajectories). As a consequence of this description of particle states with arbitrary spin, the paper also predicts the existence of imaginary mass eigenvalues. We know today that the only way to introduce real masses without destroying the theoretical description of nature is through the mechanism of SSB, which could not exist without imaginary masses.
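Written out explicitly, the spectrum implied by Majorana’s relation (same symbols as in the text) falls as the spin grows:

```latex
m(J) = \frac{m_0}{J + \tfrac{1}{2}} ,
\qquad
m(0) = 2m_0 , \quad
m\!\left(\tfrac{1}{2}\right) = m_0 , \quad
m(1) = \tfrac{2}{3}\,m_0 , \ \dots
```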

In addition to these three important ideas, the paper contributed to a further development: the formidable relation between spin and statistics, which would later lead to the discovery of another invariance law valid for all quantized relativistic field theories, the celebrated PCT theorem.

Majorana’s paper shows first of all that the relativistic description of a particle state allows the existence of integer and half-integer spin values. However, it was already known that the electron must obey the Pauli exclusion principle and that it has half-integer spin. Thus the problem arose of understanding whether the Pauli principle is valid for all half-integer spins. If so, it would be necessary to identify the properties that characterize the two classes of particles, now known as fermions (half-integer spin) and bosons (integer spin). The first of these properties is statistical in nature, governing groups of identical fermions and groups of identical bosons. We now know that a fundamental distinction exists, and that the anticommutation relations for fermions and the commutation relations for bosons are the basis for the statistical laws governing them.
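In standard second-quantized form (textbook notation, not taken from the papers cited below), the distinction reads:

```latex
% bosons: creation and annihilation operators obey commutation relations
[\,a_i, a_j^{\dagger}\,] = \delta_{ij} , \qquad [\,a_i, a_j\,] = 0 ;
% fermions: anticommutation relations
\{\,b_i, b_j^{\dagger}\,\} = \delta_{ij} , \qquad \{\,b_i, b_j\,\} = 0 ,
% whence (b_i^{\dagger})^2 = 0: two identical fermions cannot occupy
% the same state, which is the Pauli exclusion principle.
```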

The spin–statistics theorem has a long and interesting history, the main players of which are some of the most distinguished theorists of the 20th century. The first contribution to the study of the correlation between spin and statistics came from Markus Fierz, with a paper investigating the case of general spin for free fields (Fierz 1939). A year later Wolfgang Pauli followed with his paper “On the Connection Between Spin and Statistics” (Pauli 1940). The first proofs obtained using only the general properties of relativistic quantum field theory, including microscopic causality (also known as local commutativity), are due to Gerhart Lüders and Bruno Zumino, and to N Burgoyne (Lüders and Zumino 1958; Burgoyne 1958). Another important contribution, clarifying the connection between spin and statistics, came three years later with the work of G F Dell’Antonio (Dell’Antonio 1961).

It cannot be accidental that the first suggestion of the existence of the PCT invariance law came from the same people engaged in the study of the spin–statistics theorem, Lüders and Zumino. These two outstanding theoretical physicists suggested that if a relativistic quantum field theory obeys the space-inversion invariance law, called parity (P), it must also be invariant under the product of charge conjugation (particle–antiparticle) and time inversion, CT. It is in this form that the theorem was proved by Lüders in 1954 (Lüders 1954). A year later Pauli proved that PCT invariance is a universal law, valid for all relativistic quantum field theories (Pauli 1955).

This paper closed a cycle started by Pauli in 1940 with his work on spin and statistics, in which he had already proved what is now considered the classical PCT invariance, derived using free, non-interacting fields. The validity of PCT invariance for quantum field theories was obtained in 1951 by Julian Schwinger, a great admirer of Majorana (Schwinger 1951). It is interesting to read what Arthur Wightman, another of Majorana’s enthusiastic supporters, wrote about this paper by Schwinger: “Readers of this paper did not generally recognize that it stated or proved the PCT theorem” (Wightman 1964). Much the same could be said of those who, reading Majorana’s paper on arbitrary spins, have failed to recognize the imprint of the original ideas discussed in this short review of the genius of Majorana.

CMS closes up for magnet test and cosmic challenge

After many years of hard work and long hours, the team building the CMS detector at CERN will get the chance to test the giant magnet in the final stage of commissioning, together with pieces of all the sub-detectors, in the magnet test and cosmic challenge (MTCC). Over the past year one set of end-cap disks has been completely equipped with its muon detection system, the end-cap hadron calorimeters have been put in place, and most of the barrel muon detectors have been installed and commissioned. In mid-July the CMS superconducting coil, the barrel rings containing pieces of the inner detectors, and the semi-equipped end-caps were pushed together to be tested for the first time. CMS is unique among the experiments for the Large Hadron Collider (LHC) in that it is being assembled on the surface, at intersection point 5 in Cessy, France.

Meanwhile, the MTCC offers the collaboration a great opportunity to understand parts of the detector that have already been installed, including the hadronic calorimeter (HCAL – all of which is installed, although only one section is being operated during the test), two supermodules of the electromagnetic calorimeter (ECAL) and some pieces of the prototype tracker, as well as the muon chambers. These sub-detectors will be read out using the real data-acquisition system when cosmic muons are detected during the cosmic challenge. This “slice testing” will continue for two months to ensure the correct alignment and synchronization of the detectors, as well as to confirm that the event builder works as expected and that the software is flexible enough to accommodate any changes needed.

Step one: cooling the solenoid

The gigantic CMS solenoid has already been cooled to its operating temperature of 4.5 K, reached on 25 February after cooling began 23 days earlier. The magnet will be operated at this temperature throughout the summer while the magnet team commissions it and tests all the systems. The huge coil consists of 14.5 tonnes of niobium–titanium superconducting cable embedded in 74 tonnes of aluminium, with 126 tonnes of high-mechanical-strength aluminium alloy and 9 tonnes of insulation. The low temperature is extremely well contained inside the outer covering, enabling engineers to stand within the solenoid during cool-down (figure 1).

After cooling the magnet, the next step was to turn on the current. Before that could happen, however, the yoke had to be closed to channel the return flux. The CMS Magnet and Infrastructure Group tested the first low currents to check the control and safety systems before the yoke was closed. Final tests were then made on all auxiliary systems, including the cryogenics and electronics, together with fine-tuning of the control system and the power supply. During commissioning the current will be raised from 1 kA to 19.5 kA, at which point the magnet reaches its nominal field.

Step two: the hadronic calorimeter

At the beginning of March the first half of the barrel HCAL was tested using a radioactive source before insertion into the solenoid in early April (figure 2). The second HCAL half-barrel was inserted a month later. Both operations involved moving the HCAL pieces from their storage alcoves on an air-pad system and then sliding the halves onto rails welded to the inside of the solenoid. The HCAL comprises layers of brass interleaved with plastic scintillators embedded with wavelength-shifting optical fibres. The light is read out via hybrid photodiodes.

Step three: the ECAL supermodules

Two of the 36 supermodules that make up the barrel ECAL have been installed specifically for the MTCC, using a rotatable insertion device known as the “squirrel cage”. This is no easy task, as each supermodule weighs more than 3 tonnes and is very delicate (figure 3). Inside each one lie 1700 lead-tungstate crystals inserted into glass-fibre “alveolar” structures. Scintillation light produced in the crystals by incident electrons and photons is detected by avalanche photodiodes glued to the back of the crystals. The supermodules also contain the associated front-end electronics, laser-monitoring and cooling systems.

Step four: the prototype tracker

During the early hours of 19 May, a special climate- and humidity-controlled truck transported a prototype particle tracker to the CMS site. To avoid shocks to the delicate parts, the truck travelled at a maximum speed of 10 km/h. Once it arrived, crews hoisted the 2-tonne apparatus up 10 m to the opening of the solenoid (figure 4). Surveyors then aligned the pieces, using the two ECAL supermodules for reference. While the prototype tracker is equipped with 2 m2 of silicon sensors, the real tracker will comprise 16,000 strip sensors and about 900 pixel sensors – around 200 m2 of silicon in all.

Once the MTCC is complete, CMS will be pulled apart in preparation for lowering the sections into the cavern, due to start in October/November. Before the tracker is removed, however, an important test will be made of the procedures for removing and replacing one of the supermodules while the tracker remains inside. CMS plans to install all 36 supermodules into the HCAL on the surface before the real tracker is inserted, but if the schedule slips some supermodules may need to be installed underground with the tracker in situ. This is a demanding task, as the two pieces must not touch and there is only about a centimetre of clearance between them.

After the MTCC is complete, the team will remove the tracker and the ECAL, close CMS and map the magnetic field. This will show how uniform the field is and where there are discrepancies, which can then be incorporated into the experiment’s operating software.

The MTCC provides a unique opportunity for the CMS collaboration to test installation procedures and to commission a large fraction of the detector, including the data-acquisition and online-monitoring system. The lessons learned will be invaluable for the final push – the full installation of all elements for start-up in late summer 2007.

Pisa pushes new frontiers

The Pisa meetings on Frontier Detectors for Frontier Physics (FD4FP) began 25 years ago as a small gathering in Tirrenia, near the INFN Pisa laboratory. This year more than 300 participants from 21 countries attended the 10th in the series, held on Elba on 21–27 May. Lello Stefanini, chair of the FD4FP executive board, reminded the audience in his opening address that, after beginning in Pisa, the meeting moved to Castiglione della Pescaia in 1983 and 1986, and finally settled in La Biodola on Elba in 1989. This year it attracted about 200 selected contributions. As detector development and construction take place in close collaboration with industry, hi-tech firms from all over the world displayed their products alongside the presentations, interacting directly with researchers.

To mark the 10th meeting, there were two main modifications to the schedule. The first was a special session on experiments for the Large Hadron Collider (LHC) at CERN. Following an introduction by Michelangelo Mangano of CERN and John Carr of Centre de Physique des Particules de Marseille to the future of high-energy and astroparticle physics, a number of speakers described the main features and the status of the LHC detectors. After years of detector R&D and construction, four large devices are becoming reality and beginning to take data with cosmic rays in preparation for the real beams.

In a second innovation, day two of the meeting included a round table on strategies for future accelerators, chaired by Albrecht Wagner, chair of the International Committee for Future Accelerators and director of DESY. Fermilab’s Jim Strait showed how the laboratory is running the Tevatron while broadening both its neutrino programme (with the MINOS and NOvA experiments) and its R&D effort for a future International Linear Collider (ILC).

During the round table, Jos Engelen of CERN stressed the importance at CERN of R&D for the Compact Linear Collider (CLIC) and the Super-LHC studies, while keeping the laboratory’s main focus on the start-up and the exploitation of the physics capabilities of the long-awaited proton–proton collider, the LHC. Barry Barish, director of the ILC Global Design Effort, outlined how a three-pronged approach – with facilities for neutrino physics, an exploratory high-energy frontier with proton–proton colliders and a precision high-energy frontier with e+e- colliders – would address most of the open problems in particle physics. He also showed how the efforts of the large community gathered to design a baseline ILC are making progress towards a Reference Design Report, to be available by the end of 2006.

Atsuto Suzuki of KEK showed impressive results from the KEK-B facility and progress with the Japan Proton Accelerator Research Complex; the multipurpose accelerator complex is on schedule to provide beams to users by 2008. At the same time, the Japanese community is fully involved in R&D for the ILC, to be ready either to participate in an early-built ILC or to upgrade the KEK-B facility. Finally in the round table, Roberto Petronzio, INFN president and chair of the Funding Agencies for the Linear Collider (FALC) committee, presented the INFN’s strategies for future accelerators, aimed mainly at e+e- colliders (low and high energy), high-intensity radioactive beams for nuclear physics, and the exploitation of hadron beams for medical applications. He clearly indicated that synergy among the different projects is key to this approach. As chair of FALC, Petronzio later reported on recent discussions aimed at harmonizing and optimizing the human and financial resources of high-energy physics in the near future.

The round-table session ended with a discussion, with several questions and comments raised from the floor, mainly aimed at understanding how the field will be able to widen support for its projects and fulfil its promises with the resources available. There was a consensus that a successful start-up of the LHC will be a testbed for the capability of the particle-physics community, and might embolden it to seek even more ambitious goals.

The remaining sessions followed the established tradition, covering all aspects of design, development and running of detectors for high-energy physics. Topics ranged from calorimetry to gas detectors, from solid-state devices to electronics, from particle identification to devices designed for astroparticle physics and cosmology, in so many presentations that only a few aspects can be highlighted here.

Silicon took the lion’s share of the contributions. Over the years, the use of this material has extended from tracking detectors to calorimetry and particle identification, because the only limitation seems to be our own imagination. What was once used to miniaturize particle detectors and make them more compact is now the basis of large-scale devices, for example the trackers for the LHC experiments and for the Gamma-ray Large Area Space Telescope. The need to reduce further the amount of material used, together with the requirements of the next generation of colliders, demands an even closer integration of electronics and detectors, so more groups are now involved in developing and understanding monolithic devices.

Moving away from planar devices, 3D silicon detectors, first presented at the ninth FD4FP meeting, are now better understood and seem able to provide detectors that are almost free from dead zones. Also at the ninth meeting, Valery Saveliev of Obninsk State University proposed silicon photomultipliers (SiPMs). The past three years have seen a number of groups developing detectors based on this original R&D concept, so it is not surprising that several contributions presented results on SiPMs or on devices based on similar concepts, which are now being built by several firms around the world.

Several contributions focused on gas-based detectors, showing that they have a future beyond the LHC, both on the ground and in space. Richard Wigmans of Texas Tech University presented results from the dual-readout calorimeter project, DREAM. Measuring the electromagnetic and hadronic components of a hadron shower separately seems the best route to a precise determination of its energy. It will be interesting to see whether an experiment will translate this R&D into a full-scale detector in the near future.

The growing application of high-energy-physics techniques in other fields of research (mainly, but not exclusively, medicine and biology) was well covered in a dedicated session of posters and presentations. Reports on two field studies at archaeological sites – the Aquileia port near Udine and the Traiano and Claudio ports near Rome airport – showed an intriguing use of muons, detected by scintillating fibres, for underground mapping.

Three young participants received the 2006 FD4FP Young Physicist Award for their work and presentations. Nicola Cesca, who holds a fellowship at the University of Ferrara, reported on the semiconductor small-animal scanner SiliPET; Bilge Demirkoz, a graduate student at Oxford University, presented the ATLAS Semiconductor Tracker; and Judith McCarron, a graduate student at Edinburgh University, presented tests and results of the hybrid photon detectors for the ring-imaging Cherenkov detectors of the LHCb experiment at the LHC.

The Cosmic Landscape. String Theory and the Illusion of Intelligent Design

by Leonard Susskind, Little, Brown and Company. Hardback ISBN 0316155799, $24.95.

In some theoretical physics institutes, uttering the words “cosmic landscape” may give you the feeling of walking into a lion’s den. Leonard Susskind courageously takes upon himself the task of educating the general public on a very controversial subject – the scientific view on the notion of intelligent design. The ancestral questions of “Why are we here?”, “Why is the universe hospitable to life as we know it?” and “What is the meaning of the universe?”, are earnestly addressed from an original point of view.

Darwin taught us that, according to the theory of evolution, our existence in itself has no special meaning; we are the consequence of random mutation and selection, or survival of the fittest. This is a baffling turn of the Copernican screw, which puts us even farther away from the centre of the universe. We live in the age of bacteria and we are nothing but part of the tail in the distribution of possible living organisms here on Earth.

A possible counter to this reasoning is the notion of a benevolent intelligence that designed the laws of nature so that our existence would be possible. According to Susskind, this is a mirage. Using current versions of string theory and cosmology he provides yet another turn of the Copernican screw. A good aphorism for this book can be found on p347 – the basic organizing principle of biology and cosmology is “a landscape of possibilities populated by a megaverse of actualities”. This may sound arcane, but the book gives a consistent picture based on recent scientific results that support this view. This is no paradigm shift, but an intellectual earthquake.

The author masterfully avoids the temptation to give a detailed account of our understanding of particle physics and cosmology. Instead, he provides an impressionistic, but more than adequate, description of the theories that have inspired us over the past 30 years, some verified experimentally (such as the Standard Model) and some more speculative (such as string theory). A more accurate description may have kept many readers away from the book, yet enough information is given to grasp the gist of the argument.

The main theme is the understanding of the cosmological constant – Albert Einstein’s brainchild, which he later called the biggest blunder of his life – the numerical value of which has been measured by recent astronomical observations. The value of the universal repulsion represented by this constant simply boggles the imagination. In natural units (Planckian units, as explained in the book) it is a zero, followed by 119 zeroes after the decimal point and then a one. Fine-tuning at this level cannot be explained by any symmetry or any other known argument. It is a fine-tuning of 120 orders of magnitude – something to make strong men quail.
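The counting of zeroes can be checked mechanically. This small sketch, assuming the book’s round figure of 10^-120 in Planck units, writes the number out in fixed-point form and counts the zeroes:

```python
from decimal import Decimal

# The book's figure for the cosmological constant in Planck units.
lam = Decimal(10) ** -120

# Expand to fixed-point notation and count the zeroes that follow
# the decimal point before the first significant digit.
fraction = format(lam, "f").split(".")[1]
leading_zeros = len(fraction) - len(fraction.lstrip("0"))

print(leading_zeros)  # 119 zeroes, then a one
```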

We could appeal to the anthropic principle, but this is often taken as synonymous with the theory of intelligent design. Susskind avoids this temptation by turning to our best bet yet for unifying, or rather making compatible, quantum mechanics and general relativity: string theory. Work by Bousso, Polchinski and others implies that string theory contains a bewildering variety of possible ground states for the universe. In recent counts the number is a one followed by 500 zeroes – an almost unimaginably big number – and most of these universes are not hospitable to bacteria, or to us. However, the number is so big that it can perfectly well accommodate some pockets where life as we know it is possible. No need, then, to fine-tune; the range of possibilities is so large that all we need is a procedure efficient enough to turn possibilities into actualities.

This is the megaverse provided by eternal inflation. The laws of physics allow for a universe far bigger than we have imagined so far, and as it evolves it creates different branches which, among other properties, contain different laws of physics – sometimes those that allow our existence.

This is radical, hard to swallow, and against all the myths that the properties of our observed and observable universe can be calculated by an ultimate theory from very few inputs – but it is remarkably consistent.

The topics analysed in this book are deep – it deals with many of the questions that humans have posed for millennia. It is refreshing to find a hard-nosed scientist coming out to address such controversial questions in the public glare, without fearing the religious or philosophical groups (or even worse, his colleagues), who for quite some time have monopolized the discussion.
Despite the difficult questions raised, whenever unfamiliar concepts are introduced the text is punctuated with humour drawn from the author’s personal experience, which lets the reader recover their breath. Some will find the arguments convincing, some will find them irritating, but few will remain indifferent.

Letters to a Young Mathematician. The Art of Mentoring

By Ian Stewart, Basic Books. Hardback ISBN 0465082319, £13.99 ($22.95).

“Our society consumes an awful lot of mathematics, but it all happens behind the scenes […] You want your car navigation system to give you directions without your having to do the math yourself. You want your phone to work without your having to understand signal processing and error-correcting codes.”

Letters to a Young Mathematician is a collection of letters addressed to “Meg”, an imaginary young woman who shows an interest in mathematics while still at high school. The author follows Meg through to university, giving her advice and talking to her about mathematics and its relation to society, family, work and careers. In this way, the reader learns more about the raison d’être of mathematics: its applications in everyday life, its past, its present and its future.

I particularly liked the first half of the book in which the author talks about himself, his personal experience and his motivations for becoming a mathematician. In these first letters, the reader can really feel the author’s enthusiasm and share with him the wonder of discovering mathematics everywhere, for example in the way roads are designed, in the sea’s waves or in the colours that we see.

The narration then becomes more abstract, and therefore less close to the reader. Here, references to mathematicians of the past too often replace the author’s personal experience, which makes for slower reading.

However, it never strays too far, and links with real life can be found throughout the book. Often this is done by showing how apparently abstract mathematical formulae are used in physics, and hence in technology or computing.

The book is inspiring and full of interesting information without being boring. I wish a similar collection of letters could be written to a young physicist!
