A new laboratory inaugurated in May at the University of Sussex, UK, aims to shed light on matter-antimatter asymmetry by measuring the electric dipole moment of the neutron. The new Centre for the Measurement of Particle Electric Dipole Moments has been created thanks to a £1.7 million (€2.6 million) award from the UK’s Joint Infrastructure Fund.
A quarter of a century ago, particle physicists were accustomed to making a major discovery more or less every year. Some of the greatest drama was provided by the highly unexpected announcement of the J/psi particle in 1974 – the “November Revolution”. After the initial amazement had died down, physicists learned that nature contained a fourth “charm” quark that augmented the traditional up-down-strange triplet.
However, even before this discovery, Makoto Kobayashi and Toshihide Maskawa had pointed out that more than four quarks were possible. To explain the mysterious violation of CP symmetry, they had suggested that six types of quark could be present. And so it came to be.
But in 1974, many physicists had trouble digesting a fourth quark, and initially paid little attention to a call for any more. In the newly extended four-quark picture, the charm quark was paired with the strange quark to form a second doublet, a heavier counterpart of the up-down pair which make up the protons and neutrons of stable nuclear matter. These two quark doublets could also be associated with two then-known electrically charged weakly interacting particles (leptons), the muon and the electron. The muon, being heavy, went with the heavier charm-strange quark pair, while the electron was associated with the lighter doublet.
Then, in the mid-1970s, a group led by Martin Perl working at the SPEAR electron-positron collider at the Stanford Linear Accelerator Center (SLAC) discovered a third electrically charged lepton, the tau, which was much heavier than the muon. It took time for the idea to be accepted, but a third charged lepton called for a third doublet of quarks, making the sextet suggested by Kobayashi and Maskawa.
Talk of heavier quarks became commonplace, and the shadowy third doublet was called either “top” and “bottom” (reflecting the “up” and “down” of everyday quarks) or “truth” and “beauty” by the more romantic. (The “top” label has now eclipsed “truth”, but “beauty” and “bottom” are both widely used. In either case the labels of the third quark doublet can be abbreviated to “t” and “b”.)
The first awareness that leptons could be served in different ways had come with Lederman, Schwartz and Steinberger’s 1962 neutrino beam experiment at Brookhaven, which showed that neutrinos (electrically neutral leptons) come in two kinds – one associated with electrons, the other with muons. (Describing this discovery in the March 1963 issue of Scientific American, Lederman wrote: “These days, the discovery of a new elementary particle is scarcely news…”)
Neutrino beams became a new physics tool, and one idea was to use them to uncover the W and Z carrier particles of the weak force, which are analogous to the photon carrier of electromagnetism. The signature of a W, the charged carrier of the weak force, would then be a pair of oppositely charged leptons (electrons or muons). This search soon bypassed the need for a neutrino beam altogether, and focused instead on the production of charged lepton pairs at high energy.
The upsilon
The W was not found via this route, but in the late 1960s Lederman’s team at Brookhaven uncovered a bump in the spectrum of muon pairs. The reason for this mysterious effect was not immediately clear, but Lederman’s curiosity was aroused. Physicists had learned that charged lepton pairs (either muons or electrons) could be a pointer to other photon-like particles, created in the annihilation of quarks and antiquarks deep inside subnuclear processes.
In this way, Sam Ting codiscovered the J/psi particle at Brookhaven in 1974. This, with a parallel experiment by Burt Richter at SLAC’s SPEAR ring, catalysed the November revolution. The J/psi is a tightly bound charmed quark-antiquark pair.
Having missed out on the J/psi discovery, which earned a Nobel prize for Richter and Ting, Lederman diligently continued his study of muon pairs, this time in a new energy domain using the Fermilab synchrotron. An initial sighting near 6 GeV triggered a discovery action plan, with the Greek letter upsilon being reserved for a possible new particle. This signal went away, but in 1977, muon pairs began to accumulate at 9.5 GeV. This time it was a new particle.
The upsilons (on close examination, there were several of them) are b quark-antiquark pairs, in the same way that the J/psi is made up of a charmed quark and antiquark. Unlike the J/psi, the upsilon was not discovered via the electron-positron annihilation route, as in 1977 no electron-positron collider had enough energy. Nearest in energy was the DORIS machine at DESY, Hamburg, and following the upsilon news the DORIS beam energy was turbocharged in a crash programme. By the following summer, the PLUTO and DASP detectors at DORIS had seen their first upsilons.
The lightest upsilons are tightly bound and therefore cannot decay into pairs of B particles carrying their component b quarks. The next collider to arrive on the B physics scene was Cornell’s CESR electron-positron collider, which came into operation in 1979, and with its higher energy could reveal a full array of upsilon resonances. By the following year, the CLEO and CUSB detectors at CESR were seeing the first B particles (containing the b quark) produced via the decay of the heaviest (4S) upsilon, the first resonance above the threshold for B production.
For the next few years, these detectors, together with the ARGUS detector at DORIS, were major players in B physics, which went on to become mainstream science at major machines all over the world. In a major effort, the spectroscopy of B particles took shape, and the parameters of the b quark were documented.
In the 1970s, the economical SPEAR electron-positron collider at SLAC showed how effective these machines could be. Bigger colliders – PETRA and PEP – were proposed at DESY and SLAC, respectively. Even bigger was the TRISTAN machine at the Japanese KEK laboratory. Unfortunately, the energy ranges covered by these machines were not as rich in discovery potential as SPEAR’s had been.
B-factories
The next chapter in the B physics saga is being written by a pair of new high-intensity machines which mass-produce B particles – PEP-II at SLAC and KEKB at KEK. Proposed in the mid-1990s, these machines are now making their first precision measurements, and provide the right conditions to explore CP violation in a new setting – B physics. The wheel has turned full circle – after providing the first indication that a third generation of quarks exists, CP violation is now being measured using those quarks.
Eclipsed by the new B-factories after making a decade of milestone contributions, notably with a line of CLEO detectors, the CESR machine at Cornell bowed out of B physics in 2001. The physics of b quarks is far from being fully understood, and major mysteries remain. For the future, B physics will continue to be a major focus, notably at Fermilab’s Tevatron collider and with the LHCb detector at CERN’s LHC collider. (The “top” or t quark, the companion of the b quark in the third quark doublet, and the heaviest quark of all, was discovered at Fermilab in 1995, giving that laboratory a proprietary interest in the heaviest quark pair.)
Further information
Twenty years
The 20th anniversary of B physics in 1997 was marked by a symposium at the Illinois Institute of Technology in Chicago, US. The proceedings of this meeting, edited by Ray Burnstein, Daniel Kaplan and Howard Rubin, and published in the American Institute of Physics Conference Proceedings Series (Volume 424), provide a valuable pointer to early developments in B physics. Its title – “Twenty Beautiful Years of Bottom Physics” – underlines the continuing confusion about what to call the fifth (b) quark.
Exploration of the internal structure of the nucleon has reached a new stage. The international community of physicists studying hadron structure with electromagnetic probes has identified its main goals for the near and mid-term future. This was the conclusion of the European Workshop on the QCD Structure of the Nucleon (QCD-N’02), which was held at the splendid Castello Estense in Ferrara, Italy, in April. Some 120 theorists and experimentalists reached a remarkable level of agreement on the hot new topics in the field, and on the avenues and strategies that need to be followed to unravel the inner structure of hadrons. This consensus has been translated into the “Declaration of Ferrara”, which has already been signed by many scientists interested in the future of hadronic physics.
Paola Ferretti Dalpiaz of Ferrara, Enzo De Sanctis of Frascati and Wolf-Dieter Nowak of DESY chaired the workshop, at which some 60 presentations focused on the issues of new distributions and fragmentation functions, generalized parton distributions and exclusive reactions, diffraction, nuclear effects and lattice quantum chromodynamics (QCD). A special session and a panel discussion, led by Peter Kroll of Wuppertal and Dirk Ryckbosch, spokesman of DESY’s HERMES experiment, were devoted to future facilities and measurements.
Several speakers emphasized the fact that the fundamental question of the origin of hadronic matter calls for a better understanding of the phenomenon of confinement in strong interactions. After all, only about 2% of the mass of the nucleon can be assigned to current quark masses, which are expected to be explained by the Higgs mechanism. The major part of the mass of hadrons is likely to originate from massless gluons – in other words from binding effects of strong interactions. The inner structure of nucleons, which make up most of the visible matter in the universe, as well as that of other hadrons, is still not understood from first principles in terms of quark and gluon degrees of freedom as described by the underlying quantum field theory, QCD.
The spin of the nucleon is a key issue in the investigation of its structure. It has been confirmed that the nucleon’s quark and antiquark constituents carry only 20-30% of its longitudinal spin. The rest is provided by the polarization of gluons and by the orbital angular momenta of quarks and gluons. First indications of the sign and size of the gluon polarization have been seen by the HERMES experiment, and precision measurements are on the way from COMPASS at CERN, and in the US from RHIC-spin at Brookhaven and E-161 at SLAC.

However, another fundamental piece of the puzzle is still missing – to complete our knowledge of nucleon spin we have to consider a situation where the nucleon spin is oriented perpendicularly to the direction of its motion. The associated distribution function is dubbed “transversity”, and it has recently been the subject of major theoretical and experimental efforts. The difficulty in measuring transversity is related to its unusual spin property (it is a chirally odd function), which requires a second object with a similarly unusual spin property to appear in any observable. Transversity can be measured by looking at final-state hadrons in semi-inclusive experiments (where particles in the final state are studied), which involves additional fragmentation functions that describe certain spin-related aspects of hadronization. These new fragmentation functions are not only indispensable tools for the extraction of transversity, but are also of interest in themselves, since they bear witness to how confinement comes about.

The use of the nuclear medium as a “fermiometer” to understand the scales and dynamics of the hadronization of quarks was pointed out at the workshop. Information from the clean process of lepton-nucleus scattering is also needed for a better understanding of the hadron yields and spectra produced in heavy-ion reactions.
The complexity of the task of mapping transversity requires the interplay of different and highly complementary measurements. A strong combined effort in this direction was presented as a major part of the programme of HERMES, COMPASS, RHIC-spin and BELLE at Japan’s KEK laboratory.
Physicists have realized in the last few years that there is more to learn about nucleon structure from exclusive reactions. The advent of new fundamental and conceptual ideas, the so-called generalized parton distributions (GPDs), has triggered enormous theoretical and experimental activity. GPDs provide a unified description of exclusive (where all produced particles are studied in conjunction with the incident particle) and inclusive (averaged over all final states) hard reactions. Moreover, the formalism of GPDs has a sound basis in QCD and relies on formal factorization theorems.
Discussion on GPDs was initiated by the exciting possibility of accessing the orbital angular momentum contribution to the nucleon’s spin, which is so far completely unknown. A qualitatively new feature of GPDs is that they allow insight into the transverse structure of the nucleon. Inclusive deep inelastic scattering processes probe the ordinary momentum distribution of the nucleon. Exclusive reactions, on the other hand, allow the distribution to be probed as a function of the distance of a quark or gluon from the centre of mass of the nucleon. In other words, this effectively enables nucleon tomography, since by combining information from different measurements the nucleon can be scanned in transverse slices.
Encouraging first results along these lines from the H1, ZEUS and HERMES experiments at DESY, and from CLAS at Jefferson Lab in the US, were presented at the workshop. These are in general agreement with recent GPD-based expectations, and demonstrate the feasibility of proposed future measurements. Moreover, the emerging interest in linking the classical diffractive description of exclusive processes at high energy with the GPD description of exclusive processes will require precise measurements over a broad kinematic range.
Moments of the transversity distribution and of the generalized parton distributions are among the quantities amenable to lattice gauge calculations. For example, results on the lowest moment of transversity, the so-called tensor charge, were reported. Strongly QCD-inspired instanton model calculations also provide verifiable predictions. These are very promising ways to link phenomenological observations to first-principles theoretical considerations.
The Declaration of Ferrara
After presentations of current exploratory studies and discussions of new ideas, workshop participants concluded that future dedicated facilities are needed. This view of a large community of European hadron-structure researchers found expression in the Declaration of Ferrara.
It has become clear that in-depth studies of the energy, momentum and spin-dependence of exclusive cross-sections cannot be performed with the current generation of accelerators and spectrometers, but require substantial advances in experimental facilities and techniques. This is mostly due to luminosity, duty-cycle and kinematic resolution limits, since the most interesting reactions are rare and have large backgrounds. High experimental precision can only be accomplished with luminosities of at least 10^35 cm^-2 s^-1, requiring an accelerator with a duty cycle of 10% or more. Beam energies of 25-50 GeV are needed to cover a kinematic range suitable for extracting cross-sections and their scale-dependence in exclusive measurements. For non-exclusive studies of hadron structure, the optimal beam-energy range is 50-100 GeV. The highest possible polarization of beam and targets is required in both cases. Large-acceptance detector systems with high-rate capabilities and with a mass resolution of the order of a third of the pion mass, essential for the measurement of exclusive channels, will also lead to a new quality of meson and baryon spectroscopy in lepton-nucleon scattering.
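A rough yield estimate shows why those luminosity and duty-cycle figures are decisive. The sketch below (Python) is purely illustrative: the luminosity and duty cycle are the workshop’s requirements, while the 1 nb cross-section and 10% detection efficiency are assumptions chosen only to set the scale.

# Back-of-the-envelope event yield for a rare exclusive channel.
# Luminosity and duty cycle are the workshop's figures; the
# cross-section and efficiency are illustrative assumptions only.
L_inst = 1e35      # instantaneous luminosity, cm^-2 s^-1
sigma = 1e-33      # assumed exclusive cross-section: 1 nb, in cm^2
duty = 0.10        # accelerator duty cycle
eff = 0.10         # assumed overall detection efficiency
year = 3.15e7      # seconds per year

events = L_inst * sigma * duty * eff * year
print(f"events per year: {events:.1e}")   # ~3e7 under these assumptions

An order of magnitude less luminosity would leave only a few million events per year before any kinematic binning, which is the sense in which the rarest exclusive reactions drive the requirement.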
Several scenarios for a new generation of precision experiments to study QCD with electromagnetic probes are being studied in Europe and the US. Collider and fixed-target options are fully complementary, since the former cover a wider kinematic range and the latter provide considerably higher luminosities. The fixed-target option, favoured by European scientists, can be technically realized in a cost-effective way based upon the technology being developed for future linear colliders, especially if existing or projected infrastructure can be used.
The conclusion of the workshop was that in order to keep Europe’s leading role in studying hadron structure and QCD at all scales with energetic electromagnetic probes, a new fixed-target facility with a high duty cycle providing polarized beams in the energy range 25-100 GeV is needed.
The universe around us is nothing like it looks. Stars make up less than 1% of the matter in the universe, and all the gas and other forms of baryonic matter account for less than 5%. We know little about the other 95%, except that it is probably divided into about 35% cold dark matter and about 60% dark energy.
Dark energy reveals itself through the recently discovered acceleration of the universe’s expansion, observed in studies of type 1a supernovae. For the past eight years, a series of symposia has been held in Southern California to hear the latest developments in this field of particle cosmology. It was at the 1998 meeting that the two teams that observed the accelerating universe first made a joint announcement of these important results.
Our understanding of these phenomena is perhaps most advanced for the particle physics of dark matter. The best motivated and best understood candidate for particle dark matter comes from supersymmetry (SUSY).
This theory gives a “semi-natural” explanation of the amount of dark matter in the universe, which would take the form of weakly interacting massive particles (WIMPs); the parameters are constrained by data from CERN’s LEP experiments and elsewhere. The interplay between proposed dark-matter detectors and the direct observation of SUSY particles at CERN’s forthcoming Large Hadron Collider (LHC) reveals a strong connection between collider particle physics and astroparticle physics.
The meeting included a thorough discussion of the current search for SUSY dark matter and of future detectors. The DAMA experiment at Italy’s Gran Sasso underground laboratory continues to claim a SUSY signal on the basis of an observed annual modulation. However, there are now three experiments – Edelweiss at Modane in France, ZEPLIN I at Boulby in the UK, and CDMS I at Stanford in the US – that cut deeply into the region allowed by DAMA. These experiments all use some form of background discrimination.
A joint analysis of the CDMS I and DAMA data was claimed to exclude a WIMP origin of the DAMA signal at the 98% confidence level, even assuming that none of the CDMS I events is neutron-induced. The DAMA group disputes this claim, however. The DAMA experiment is being upgraded, and it is hoped that the dispute will be resolved soon. Current predictions for SUSY WIMP detection rates are nearly all well below the DAMA sensitivity, as was discussed extensively at the meeting.
Bigger machines
It was generally agreed that a new generation of much larger detectors will be needed to provide a clean detection of the SUSY WIMP signal. Several discriminating detectors in the 10-30 kg mass range are being constructed or upgraded, such as CDMS II, Edelweiss and ZEPLIN II. To provide a clear study of the WIMP signal, detectors with a target mass of around 1 tonne will be needed, and there are preliminary studies of possible detectors for this mass range. It is truly remarkable that detectors of this great sensitivity are being developed.
Dark energy
The issue of the origin of dark energy is more complex and possibly much more obscure. After the pioneering work of the two teams working on type 1a supernovae, there are projects for two impressive detectors that will try to identify the equation of state of the dark energy.
The SNAP satellite would observe type 1a supernovae out to a redshift of around z = 1.5. The other possibility is to study type 1a supernovae from the ground using a large “dark matter” telescope in Chile called the Large Synoptic Survey Telescope (LSST). It may be that both of these methods will be needed to unravel the equation of state and demonstrate that the effect is either due to a cosmological constant or some other elementary particle-like source.
In one of the most interesting talks at the meeting, Paul Steinhardt of Princeton discussed the impact of an accelerating universe on the old question of whether the universe may be cyclic in time. It is possible that an accelerating universe could wipe out the entropy of the universe over a long time, and then, if the equation of state of the dark energy permits, the universe might contract to a “big crunch”. According to this viewpoint, the accelerating state of the universe is actually required, rather than being a bizarre add-on to a Friedmann universe, as conventional wisdom would have it.
There was considerable discussion of the possibility of self-interacting, warm and hot dark matter (in light of recent claims for the observation of double-beta decay). None of these issues was clarified at the meeting.
During the course of the Southern California meetings, a much clearer picture of the bulk components of the universe has emerged, but we have yet to find any evidence of what this stuff really is. Hopefully this will change as the new WIMP detectors underground and new detectors in space start taking data and the LHC is turned on. The next symposium will be held in February 2004 in Marina del Rey.
In the mid-1970s Al Maschke of the US Brookhaven National Laboratory suggested that heavy-ion beams, rather than laser beams, could be used as a driver to implode inertial-fusion targets for the commercial generation of electrical power. The beams would deliver the kinetic energy that would heat the surface of a capsule containing deuterium and tritium, with the resulting ablation driving a compression that causes nuclear fusion. Heavy ions have the advantage that energy deposition is simpler with them than with photons, while much of the accelerator technology necessary has already been demonstrated to have long life, a sufficiently high pulse repetition rate and high electrical efficiency.
In the US, researchers from three laboratories – Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL) and Princeton Plasma Physics Laboratory (PPPL) – have formed the US Heavy-Ion Fusion Virtual National Laboratory (HIF-VNL) to coordinate their work on heavy-ion fusion.
Heavy-ion fusion research is carried out in several laboratories around the world. As well as the HIF-VNL in the US, a high-space-charge electron ring at the University of Maryland will study intense beam physics. Researchers from the Naval Research Laboratory, the Mission Research Corporation, the University of Michigan, the Massachusetts Institute of Technology, the Sandia National Laboratory and the Stanford Linear Accelerator Center are also involved in the US heavy-ion fusion programme. Other efforts aimed at both accelerator physics and studying the interaction of heavy ions with hot matter exist at GSI in Germany; the Tokyo Institute of Technology, RIKEN, Utsunomiya University and Osaka’s Institute of Laser Engineering in Japan; Orsay in France; and the Russian Institute for Theoretical and Experimental Physics. This article describes the progress and plans of the HIF-VNL programme.
Induction linac drivers
For its driver accelerator the US programme has chosen an induction linac – a linear accelerator that accelerates ions by changing the strength of a magnetic field in magnetic material encircling the beams. The induction cores of such linacs have high efficiency at the high beam currents that fusion demands, and their cost is relatively low. Moreover, electrical losses in induction cores remain effectively constant as the beam current increases, while more and more energy goes into the beam, since essentially the same electric accelerating field is produced. And because linacs are one-pass machines, they are able to stably transport the extremely intense beams that are necessary to implode the target; around 1-7 MJ must be deposited in approximately 10 ns, corresponding to around 10^16 ions of mass 200 amu at 3 GeV.
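A quick consistency check (Python, using only the numbers quoted above and standard constants) confirms that these figures hang together:

# Check: 1e16 ions of 3 GeV each should deposit an energy in the
# quoted 1-7 MJ range, over a pulse of roughly 10 ns.
eV = 1.602e-19                # joules per electronvolt
energy_per_ion = 3e9 * eV     # one 3 GeV ion, in joules
n_ions = 1e16
total_energy = n_ions * energy_per_ion
print(f"beam energy: {total_energy/1e6:.1f} MJ")    # ~4.8 MJ
print(f"peak power:  {total_energy/10e-9:.1e} W")   # ~5e14 W over 10 ns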
Quadrupole focusing field limits make it very uneconomical to transport such a large amount of charge in a single beam. The approach being followed is therefore to transport multiple (100-200) beams in parallel through a common set of induction cores that would encircle the beam array. Beams from a multiple-beam injector would enter the linac at an energy of about 2 MeV. They would then be accelerated over a few kilometres to a few GeV. Electrostatic quadrupoles would be used for focusing up to about 100 MeV in some designs (figure 1); thereafter focusing would be done by arrays of superconducting quadrupoles.
At the end of the accelerator a coherent “velocity tilt” would be applied, accelerating the tail of the beam more than the head to produce longitudinal compression by a factor of 10-20, shortening the pulse to around 10 ns. The beams would then pass through a final focusing system and be transported through the target chamber to the target. The pulse repetition rate would be about 5 Hz, with the clearing of target debris, molten salt and gas from the chamber being the limiting factor. The challenge is to maintain very low emittance, both transverse and longitudinal, in the presence of the beams’ high space charge, so that the beams will focus to a spot a few millimetres across at the end of the driver.
Beam dynamics in this accelerator are determined largely by space charge – the space-charge depression of the betatron phase advance per lattice period is approximately a factor of 10, so that space charge nearly cancels out external focusing forces. The beams act like non-neutral plasmas, exhibiting normal modes and instabilities not found in single-particle dynamics. Therefore self-consistent particle-in-cell (PIC) time-domain simulations are the main tools used to calculate beam behaviour. In the target chamber the problem is complicated by the need to shield the chamber walls from neutrons, radiation and target debris. Designs include sheets and crossed jets of the neutron-thick molten salt FLiBe (a salt of fluorine, lithium and beryllium) in the target chamber, shielding the walls from these target products. Beams pass through spaces between the jets. FLiBe vapour can then neutralize the beam, helping focusing, but will also strip beam ions, and under some conditions can lead to plasma instabilities. Photoionization of the salt also provides a copious source of electrons near the target after ignition. Modelling in the target chamber therefore requires multispecies, 3D, time-dependent electromagnetic simulation with fully self-consistent space charge.
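The size of that space-charge effect can be illustrated with the standard smooth-focusing (KV) envelope relation for a matched round beam. The sketch below (Python) is only a toy model – the parameter values are illustrative, chosen to reproduce the factor-of-ten tune depression quoted above, and are no substitute for the self-consistent PIC simulations the programme actually uses.

import math

# Toy smooth-focusing model of space-charge tune depression.
# k0 is the undepressed betatron wavenumber; K (dimensionless
# perveance) and eps (emittance) are illustrative, not HCX values.
k0 = 1.0
K = 0.99
eps = 0.1

# Matched radius from the KV envelope equation
#   k0^2 a = K/a + eps^2/a^3,  solved as a quadratic in a^2.
a2 = (K + math.sqrt(K**2 + 4.0 * k0**2 * eps**2)) / (2.0 * k0**2)

# Single-particle motion inside the uniform beam: k^2 = k0^2 - K/a^2.
k = math.sqrt(k0**2 - K / a2)
print(f"matched radius = {math.sqrt(a2):.2f}, depression k/k0 = {k/k0:.2f}")
# -> k/k0 = 0.10: external focusing is almost entirely cancelled.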
Experiments at LBNL in the 1980s and 1990s showed that space-charge-dominated beams are stable, and can be accelerated and compressed in an induction accelerator, combined and focused to a spot. A quarter-turn experiment at LLNL demonstrated the bending of an intense beam. These were scaled experiments at up to 1 MeV, with currents of a few tens of milliamps or less. Dimensionless physics parameters were designed to be in the same range as in a driver, so that physics tests were valid. The HIF-VNL programme is now moving to experiments with currents similar to those of a driver beam at low energy (0.1-1 A), so that effects dependent on the beam’s electrostatic potential can be studied. The programme currently focuses on three experimental thrusts: the High Current Experiment (HCX); a Neutralized Transport Experiment (NTX); and experiments exploring a new “minibeamlet” injector concept.
The HCX at LBNL saw its first beam in January and is in the process of commissioning. Its main mission is to investigate the optimum aperture for transporting an intense high-current beam. Since induction cores must encircle the whole array of beams in a driver, the selection of the transverse aperture allotted to each beam can significantly affect driver cost, and therefore design optimization. The HCX programme will investigate the influence on beam propagation and brightness of a range of physics – image forces, mismatch of the beam envelope to the focusing system (which through a resonance can pump ions into a halo), and gas and electrons produced by scraping halo. The HCX is a single-beam experiment, using a drifting 0.2-0.5 A, 1.0-1.8 MeV beam of K+ ions. Since the beam potential is about 2 kV, beam space charge has a strong effect on electron orbits.
The experiment currently consists of an injector followed by 10 electrostatic quadrupoles. At least four magnetic quadrupoles will be added to study the production and orbits of electrons produced intentionally by beam scraping. Next year up to 30 more quadrupoles will be added to continue the dynamic aperture studies. Finally a small induction core will be used to explore the longitudinal confinement of the beam by tailoring the induction pulse.
Princeton is making a plasma source for the NTX, which is currently at the design stage and will also be at LBNL. Starting in 2003, it will investigate beam physics in the final focus system along with intentional neutralization of the beam after the final lenses. The NTX will consist of a 400 keV injector followed by a four-quadrupole focusing system. A plasma source downstream will study various neutralization methods that could counteract the space charge of the beam, producing a smaller spot at the target. The effects of both a small plasma source upstream of the target chamber and bulk plasma in the target chamber – as would be produced by photoionization of FLiBe – will be investigated. Another important area of interest for the NTX is the correction of aberrations in the final lens system. This is a well known problem for beams with negligible space charge, but the effects of space charge in the HIF case are significant and the prescription for aberration correction is not well understood.
A new concept for an intense beam injector is in the design stage, with an experimental test planned for 2004. The Child-Langmuir law shows that, because the voltage a large diode can hold off increases only as (approximately) the square root of its size, the achievable current density of the beam increases as the current and radius of the beam decrease. Therefore it is theoretically possible to make the injector more compact – a very important feature for a multibeam accelerator – by forming each of the accelerated beams from small, very bright beamlets, which merge later in the injector to form a single beam. Arranging the beamlets to match parameters for the downstream lattice can also eliminate the transverse blow-up of the beam in the matching section. The experiment will merge around 100 high-current-density beamlets of 12 mm radius near the end of the injector to make a single 0.5 A, 1.6 MeV heavy-ion beam. 3D PIC simulations project the emittance to be similar to that of beams from a standard large diode. The beamlet-merging idea has already been used in neutral-beam-heating accelerators for tokamaks, but in that case protons were used and emittance was not important.
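For reference, the Child-Langmuir space-charge limit for a planar diode of gap d and voltage V is the standard result

\[
J \;=\; \frac{4\epsilon_0}{9}\sqrt{\frac{2q}{m}}\;\frac{V^{3/2}}{d^{2}},
\]

so, assuming (as the argument above implies) that the breakdown-limited voltage of a large diode grows only as roughly \(V \propto \sqrt{d}\), the current density scales as \(J \propto d^{-5/4}\): many small, bright beamlets deliver more current density than one large diode.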
Once the experiments are completed, the programme will be ready for a source-to-target experiment that integrates all the beam manipulations needed in a driver. A workshop held in October 2001 chose the overall design parameters for an Integrated Beam Experiment (IBX). The IBX is envisioned as a single-beam experiment, possibly upgradable to more beams, with final energy in the 10-20 MeV range. It will incorporate almost all of the physics of the driver (and much engineering at full scale), the exceptions being in the areas of beam-target physics, multiple-beam interactions and high-energy effects such as self-magnetic and inductive effects. In particular, the experiment will be able to study longitudinal beam dynamics, including wave motion on the beam, halo formation and beam heating over intermediate transport lengths, the bending of space-charge-dominated beams, and self-consistent final drift compression, final focus and neutralization.
Looking ahead, the HIF-VNL envisions an Integrated Research Experiment of a few hundred MeV that carries arrays of multiple beams all the way to the target and is capable of beam-target studies. All this is taking the accelerator aspects of heavy-ion fusion research in the US from exploration of the concept through to proof of principle.
The Karlsruhe Forschungszentrum is the home of several of the most active theorists and Monte Carlo modellers in cosmic-ray physics. It is also the site of the densely instrumented KASCADE cosmic-ray air-shower array, which has produced some of the most definitive data for energies below 100 PeV (1 PeV = 10^15 eV). Karlsruhe was therefore an obvious venue for the Needs from Accelerator Experiments for the Understanding of High-Energy Extensive Air Showers (NEEDS) workshop, organized by Hans Bluemer, Andreas Haungs and Heinigerd Rebel of Karlsruhe, and Lawrence Jones of the University of Michigan.
Physicists generally understand that cosmic rays with energies up to about 1 PeV are produced by shock acceleration in supernovae. At energies up to around 10^14 eV, their composition is similar to that of stars, with minor and well understood differences – for example additional lithium, beryllium and boron from spallation of carbon nuclei on interstellar nuclei. The differential spectrum falls steeply with energy, as about E^-2.7. At higher energies, however, it is unclear what the acceleration mechanism is – it is difficult to provide the required energy from supernova shocks. Furthermore, there are indications that the composition changes, with heavier primaries becoming relatively more abundant. There is also a change in the slope of the spectrum, steepening to about E^-3 at about 3 PeV. This corresponds to the momentum range where particles may escape confinement in the microgauss-level galactic magnetic fields.
Astrophysicists are interested in learning about the sources, composition and energy spectrum of the cosmic rays extending to energies above 10^20 eV. Below about 10^14 eV, the spectrum and composition are well known from direct observation with sophisticated detectors flown on balloons and earth satellites. However, at energies above 1 PeV, the flux is only about 100 particles per square metre per steradian per year – too low for useful direct observation. Consequently, everything we know at such energies is based on ground-level observations of air showers of electrons and photons with coincident hadrons and muons. Properties such as their densities, radial distributions, energy distributions and dependence on depth in the atmosphere can be interpreted in terms of the primary cosmic-ray energies and nuclear mass numbers.
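That flux figure follows from integrating the power-law spectrum quoted above. The sketch below (Python) uses a commonly quoted all-particle normalization at 1 GeV, which is an assumption of this estimate, so the result should only be trusted to order of magnitude:

# Integral flux above 1 PeV from a single power law dN/dE = k * E^-gamma.
k = 1.8e4        # assumed dN/dE at 1 GeV, in m^-2 s^-1 sr^-1 GeV^-1
gamma = 2.7      # differential spectral index below the knee
E0 = 1e6         # threshold: 1 PeV, in GeV

flux = k * E0**(1.0 - gamma) / (gamma - 1.0)   # m^-2 s^-1 sr^-1 above E0
per_year = flux * 3.15e7
print(f"above 1 PeV: ~{per_year:.0f} per m^2 per sr per year")
# -> a few tens: the same order as the ~100 quoted in the text.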
Such interpretations of ground-level observations are heavily dependent on Monte Carlo simulations of the primary interaction in the upper atmosphere and the evolution of the resulting particle cascade. The cascade is dominated by lower-energy phenomena that are reasonably well understood. However, the primary and early subsequent interactions involve energies up through the PeV range, and existing Monte Carlos are almost entirely based upon data from fixed-target accelerator experiments below 1 TeV. A sense of the confusion that currently exists is clear on a plot showing the average of the logarithm of the nuclear mass number of the primary cosmic rays versus energy (figure 1).
Small angle measurements
Fermilab’s Tevatron Collider provides proton-antiproton collisions at a centre of mass energy approaching 2 TeV, equivalent to a cosmic ray of about 2 PeV incident on a stationary proton. Brookhaven’s Relativistic Heavy Ion Collider (RHIC) provides energies of more than 100 GeV per nucleon in beam-beam collisions of nuclei. For example, a nitrogen-nitrogen collision at RHIC is equivalent to a 5 × 10^14 eV cosmic-ray nitrogen nucleus incident on an air nucleus. CERN’s Large Hadron Collider (LHC), with collisions of 14 TeV in the centre of mass, will provide energies equivalent to a proton of about 10^17 eV incident on a stationary proton. The LHC will also produce nucleus-nucleus collisions.
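These equivalences follow from relativistic kinematics: a collider of centre-of-mass energy E_cm reaches the same invariant energy as a fixed-target collision on a nucleon of mass m_p at laboratory energy

\[
E_{\mathrm{lab}} \simeq \frac{E_{\mathrm{cm}}^{2}}{2m_{p}c^{2}},
\qquad
\text{Tevatron: } \frac{(2\ \mathrm{TeV})^{2}}{2\times 0.938\ \mathrm{GeV}} \approx 2\times10^{15}\ \mathrm{eV},
\qquad
\text{LHC: } \frac{(14\ \mathrm{TeV})^{2}}{2\times 0.938\ \mathrm{GeV}} \approx 1\times10^{17}\ \mathrm{eV}.
\]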
The current generation of colliders plus the LHC will, in principle, be able to provide the data for the refinement of Monte Carlo models to provide a less ambiguous interpretation of cosmic-ray air-shower data. However, most accelerator studies are made with detectors that do not cover angles within one or two degrees of the beamline. Since it is within such small angles that most of the final-state energy flow occurs, this is the region that dominates air-shower observables. About 80% of the final-state energy flow in the Tevatron, for example, is estimated to be within a 28 mrad cone centred on the beam. For the LHC, this figure is 95%.
If a 2 PeV primary proton collides with an air nucleus and continues with half its initial energy, acquiring 200 MeV/c transverse momentum in the collision, it makes an angle of only 0.2 µrad with its initial direction. The equivalent Tevatron process is a TeV proton colliding with an antiproton and scattering at an angle of 0.4 mrad, well within the cone that is unobserved by detectors. This is a typical final state of interest in the calculation of air-shower development, and it is here that measurements are needed. The average value and distribution of the inelasticity (1 minus the fraction of the incident energy carried by the most energetic final-state hadron) in a nucleon-nucleon collision are also quite uncertain, and vary among current Monte Carlo models. A highly inelastic interaction of a high-energy cosmic-ray proton could produce ground-level observables indistinguishable from those from a low-inelasticity first interaction of a heavier primary nucleus of the same energy.
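Both angles quoted here are just the small-angle ratio of transverse to longitudinal momentum:

\[
\theta \approx \frac{p_{T}}{p_{\parallel}}:
\qquad
\frac{0.2\ \mathrm{GeV}/c}{10^{6}\ \mathrm{GeV}/c} = 0.2\ \mu\mathrm{rad}
\quad\text{(1 PeV proton in air)},
\qquad
\frac{0.2\ \mathrm{GeV}/c}{500\ \mathrm{GeV}/c} = 0.4\ \mathrm{mrad}
\quad\text{(Tevatron)}.
\]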
The Karlsruhe group has developed CORSIKA, an elegant Monte Carlo code for simulating air showers. One input to this code is the physics of the first interaction of the primary cosmic ray with an air nucleus, and several codes have been developed for that simulation. It is here that the problems arise.
Markus Risse, Gerd Schatz and Andreas Haungs from Karlsruhe, and Johannes Knapp and Markus Roth from Leeds, among others, discussed results from KASCADE, from air-shower and emulsion experiments at mountain elevations, and from other observations, citing their comparisons with various Monte Carlo models. In general, none of the models fit the data as well as could be hoped for. It was encouraging, however, to learn that the models have been tuned recently to improve their agreement with data. Eugene Loh of Utah discussed events of over 10^20 eV – the highest energies observed – seen with the Fly’s Eye technique, and Oscar Saavedra of Turin discussed unusual cosmic-ray events observed at 5200 m on Bolivia’s Mount Chacaltaya.
The status of Monte Carlo models and their varying degrees of success in simulating observations was discussed by Dieter Heck and Sergej Ostapchenko of Karlsruhe, Ralph Engel and Todor Stanev of the University of Delaware, Hannes Jung of Lund, Jean-Noel Capdevielle of the College de France, and Giuseppe Battistoni of Milan. Following these discussions, the accelerator experiments relevant to these questions were described. Speakers included Andrei Rostovtsev and Martin Erdmann of DESY, Damian Bucher and Johannes Ranft who discussed Brookhaven’s RHIC, Valeria Tano of Fermilab, and Stefan Tapprogge and Aris Angelis who discussed projects in preparation for the LHC. Lower-energy fixed-target experiments at the three laboratories, with beams of 5-120 GeV, were also discussed. Kai Zuber and Giles Barr presented CERN’s HARP experiment. Brett Fadem of Iowa State University discussed Brookhaven’s E941, and Carl Rosenfeld of the University of South Carolina presented Fermilab’s main injector particle production (MIPP) experiment, all of which measure particle production cross-sections that are valuable for high-energy cosmic-ray work.
Priority list
A primary objective of the workshop was to develop a priority list of desired accelerator measurements that could be used to reconstruct cosmic-ray interactions in Monte Carlo models much more accurately than is currently possible. These would significantly improve the interpretation of cosmic-ray observations. The highest priority is to obtain inclusive final-state spectra for protons, neutrons, charged pions, neutral pions and charged kaons from proton-proton (or proton-antiproton) interactions over the range 0.1 < x < 1.0, where x is the ratio of the longitudinal momentum of the final-state hadron to its kinematic maximum. These data would be desirable over the energy ranges spanned by RHIC, the Tevatron and the LHC. Similar data from RHIC and the LHC would be desirable from proton-nitrogen collisions, representing proton collisions with air nuclei. Inclusive final-state data from nucleus-nucleus collisions would also be useful. The primary cosmic rays of interest range up to iron, so data from iron-nitrogen collisions would be very interesting. Total cross-sections and total inelastic cross-sections for proton-proton, proton-nucleus and nucleus-nucleus collisions are highly desired, particularly for nitrogen. Pion-proton and pion-nucleus inclusive final-state data would be useful, although such measurements are limited to fixed-target sub-TeV energies for the foreseeable future. The lower-energy data from the HARP, MIPP and E941 experiments will also be valuable for tuning CORSIKA and other Monte Carlos that model the atmospheric cascade.
Loh’s contribution to the priority list concerns the use of the Earth’s atmosphere as a calorimeter. The air scintillation technique used in the Fly’s Eye detectors is quite well understood, but the fraction of the total energy of the incident cosmic ray that does not appear as ionization is based on educated guesswork. It would be very useful to know better what fraction of the total incident energy is invisible to the air scintillation observations, taking the form of neutrinos, high-energy muons that lose most of their energy in the earth and nuclear binding energy, for example.
The final item on the list came from the Karlsruhe group, who would like to see spectra dependent on centrality (how close to head-on the collisions are). Such data would make microscopic knowledge of interaction mechanisms possible, rather than the currently available data averaged over all impact parameters.
Although not on this accelerator priority list, Saavedra, Jones and others noted the desirability of locating an air-shower detector array with the complexity and sophistication of KASCADE at high mountain elevations.
The workshop concluded that two elements are of primary importance: stronger links between the accelerator and cosmic-ray high-energy communities, and a commitment on the part of cosmic-ray physicists to contribute actively to accelerator experiments.
Those who took part in the NEEDS workshop believe that they have taken a step towards realizing these goals.
The Sun shines, just as described by our best theories of its thermonuclear furnace. Neutrinos oscillate – at least, electron-neutrinos change into another type. These are the main conclusions of the neutral-current (NC) results from the Sudbury Neutrino Observatory (SNO) in the Creighton Mine, Ontario, Canada.
In 1985, Herb Chen from the University of California, Irvine, first pointed out that heavy water offered a direct approach to solving the “solar neutrino problem” – the deficit between the number of solar neutrinos detected on Earth and the flux predicted by the standard solar models. The discrepancy raised the possibility that the electron-neutrinos emitted by the Sun changed to another type (muon or tau) somewhere between emission and detection. Chen realized that an experiment was needed to detect all neutrino types equally, and that the way to do this was through NC interactions between neutrinos and nuclei. (Elastic scattering, or ES, between neutrinos and electrons is possible but more complicated because there are contributions from both NC and charged-current, or CC, reactions for electron-neutrinos, but not for the other species.) He proposed that heavy water would be the ideal detection medium. The NC reaction, due to all neutrinos, simply splits the deuterium nucleus into a proton and a neutron, while a CC reaction, due only to electron-neutrinos, changes the neutron into a proton accompanied by an electron. In both cases, the heavy water acts as both target and detector. The neutrons released in the NC reaction can be detected through the 6.25 MeV gamma ray released when they are captured by deuterium. The gamma rays and the electrons produced in the CC reaction are observed through the Cerenkov radiation they create in the water.
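In symbols, the three detection channels in heavy water are:

\[
\begin{aligned}
\mathrm{CC}:&\quad \nu_{e} + d \to p + p + e^{-},\\
\mathrm{NC}:&\quad \nu_{x} + d \to p + n + \nu_{x} \quad (x = e, \mu, \tau),\\
\mathrm{ES}:&\quad \nu_{x} + e^{-} \to \nu_{x} + e^{-}.
\end{aligned}
\]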
Chen’s proposal led directly to the construction of the SNO, based on 1000 tonnes of heavy water, although sadly Chen himself did not live to see the detector he had envisioned. Last year the SNO collaboration published the first results from the CC and ES reactions. When combined with data from other detectors, these results provided strong evidence that neutrinos change type, or oscillate. Chen’s dream has now been realized, and the first results from the NC interactions of solar neutrinos in heavy water have been announced. SNO has unambiguous evidence for neutrino oscillation in data from a single detector.
To detect the NC reactions, the SNO team looks for the Cerenkov light from the gamma rays from neutron capture. There are many background signals, in particular from daughter products of the natural uranium and thorium decay chains, which produce free neutrons through photodisintegration of deuterium. As Chen realized, the very nucleus that makes the detector effective is also the cause of its biggest background problem. The vessel containing the heavy water is therefore surrounded by 7000 tonnes of light water, to absorb gamma rays and neutrons from radioactivity in the surrounding rock. In addition, the SNO collaboration has developed a water purification system that reduces concentrations of elements from the uranium and thorium decay chains to a million times lower than those in natural water, which means impurity levels less than 10^-14 g/g for the heavy water and less than 10^-13 g/g for the light water.
So far the team has applied detailed analysis to data taken between November 1999 and May 2001. They use NC reactions to measure the total flux of solar boron-8 neutrinos (to which SNO is sensitive), which they find to be 5.09 +0.44/-0.43 (statistical) +0.46/-0.43 (systematic) × 10^6 cm^-2 s^-1. This is completely consistent with standard solar models – there are no missing solar neutrinos. The measurements for the CC and ES reactions, in contrast, lead to the fluxes of electron-neutrinos and neutrinos of other types from the boron-8 decays in the Sun. The electron-neutrino component of the flux is found to be 1.76 ± 0.05 (statistical) ± 0.09 (systematic) × 10^6 cm^-2 s^-1, while the non-electron-neutrino component is about twice as large, at 3.41 ± 0.45 (statistical) +0.48/-0.45 (systematic) × 10^6 cm^-2 s^-1, or 5.3 standard deviations above zero. This is compelling evidence that about two-thirds of the electron-neutrinos from the Sun do indeed change to another type or “flavour” before they are detected.
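The channels hang together arithmetically: the flavour components measured through the CC and ES reactions should add up to the NC total, and within errors they do,

\[
\phi_{e} + \phi_{\mu\tau} = (1.76 + 3.41)\times10^{6} = 5.17\times10^{6}\ \mathrm{cm^{-2}s^{-1}}
\;\approx\; \phi_{\mathrm{NC}} = 5.09\times10^{6}\ \mathrm{cm^{-2}s^{-1}}.
\]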
The challenge now is to discover more about the precise mechanism that mixes the different neutrino flavours and makes them oscillate from one type to another. SNO has already begun this further exploration through a first measurement of day and night energy spectra for solar neutrinos. Travel through the Earth might alter the spectrum according to certain theories of neutrino mixing, through enhancement by matter. The SNO finds a night-day asymmetry for electron-neutrinos of 7.0 ± 4.9 (statistical) +1.3/-1.2 (systematic) %. A global fit to these SNO data and those from other experiments, in terms of oscillations between two flavours, limits possible theories by strongly favouring a solution with large mixing angles. The “missing” solar neutrinos may no longer be missing, but they are providing a means to learn more about the particles themselves.
Further reading

Q R Ahmad et al. nucl-ex/0204008 and nucl-ex/0204009 at http://www.arxiv.org/.
Cosmic-ray particles with the highest energies could give us clues about the mass of the relic particles from the Big Bang with the lowest energies – neutrinos – according to recent research. Big Bang cosmology predicts the existence of a background gas of free photons and neutrinos. The measured cosmic microwave background radiation supports the applicability of standard cosmology back to approximately 100,000 years after the Big Bang. A measurement of the relic neutrinos, which are nearly as abundant as the relic photons, could provide a new window to earlier times when the universe was just 1 s old. Since neutrinos interact only weakly, however, relic neutrinos have not yet been detected directly in laboratory experiments.
A recently proposed possibility for detecting relic neutrinos indirectly is based on so-called Z-bursts resulting from the resonant annihilation of ultrahigh-energy cosmic neutrinos with relic neutrinos into Z bosons, mediators of the weak interaction. On resonance, the corresponding cross-section is enhanced by several orders of magnitude. If neutrinos have non-vanishing masses – for which there is convincing evidence in view of the observation of neutrino oscillations (see “Direct evidence seen for oscillations”) – the respective resonance energies, in the rest system of the relic neutrinos, correspond to around 4 × 10^21 eV.
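The resonance condition is simply that the centre-of-mass energy of the annihilating pair equals the Z mass; for a relic neutrino essentially at rest this gives

\[
E_{\nu}^{\mathrm{res}} \;=\; \frac{M_{Z}^{2}}{2m_{\nu}}
\;\approx\; 4.2\times10^{21}\ \mathrm{eV}\left(\frac{1\ \mathrm{eV}}{m_{\nu}}\right),
\]

with M_Z = 91.2 GeV.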
Such resonance energies are, for neutrino masses in the 1 eV range, remarkably close to the energies of the highest-energy cosmic rays observed at Earth by means of air-shower detectors such as the Akeno Giant Air Shower Array (AGASA) in Japan. Indeed, it has been argued recently that ultrahigh-energy cosmic rays above the predicted Greisen-Zatsepin-Kuzmin (GZK) cut-off around 4 × 10^19 eV are mainly protons from Z-bursts. This would possibly solve one of the outstanding problems of ultrahigh-energy cosmic-ray physics – the observation of cosmic rays with energies above the GZK cut-off – in an elegant and economical way without invoking new physics beyond the Standard Model, other than neutrino masses.
The GZK puzzle hinges on the fact that nucleons with super-GZK energies have a short attenuation length of about 50 Mpc, due to inelastic interactions with the cosmic microwave background, while plausible astrophysical sources for those energetic particles are much farther away.
Ultrahigh-energy neutrinos produced at cosmological distances, on the other hand, can reach our cosmological neighbourhood unattenuated and their resonant annihilation with relic neutrinos could result in the observed cosmic rays of the highest energies.
The energy spectrum of the highest-energy cosmic rays depends critically on neutrino mass if they are indeed produced via Z-bursts. From a comparison of the predicted spectrum with the observed one, the required mass of the heaviest neutrino can therefore be inferred. It turns out to lie in the range 0.04-0.76 eV, which compares favourably with current experimental indications. The required ultrahigh-energy cosmic neutrinos should be observed in the near future by existing neutrino telescopes, such as AMANDA at the South Pole, and by cosmic-ray air-shower detectors currently under construction, such as the Pierre Auger Observatory. If they are not, the Z-burst hypothesis for the origin of the highest-energy cosmic rays will be ruled out.
In the 90 years since the discovery of cosmic rays, there have been many theories about their origin, but little experimental evidence for actual sources. Now two groups have evidence for sources of cosmic rays in two distinct energy regions – up to about 10^15 eV, and ultrahigh energies around 10^20 eV.
The majority of cosmic rays are protons with energies less than around 10^15 eV. In 1949 Enrico Fermi suggested that these particles could be accelerated in moving magnetic clouds, as in the shock waves surrounding a supernova. Direct evidence for acceleration in a supernova remnant has been found, but only for cosmic-ray electrons. Now, however, a team working with data from the CANGAROO II (Collaboration of Australia and Nippon for a GAmma Ray Observatory in the Outback) cosmic-ray telescope in Woomera, South Australia, has evidence that points to a specific supernova remnant as an accelerator of cosmic protons. CANGAROO II is a 10 m air Cerenkov telescope, which detects the electromagnetic showers created when gamma rays with energies of around 10^12 eV strike the atmosphere. The team has analysed data from one of the intrinsically brightest sources of gamma rays in our galaxy, the supernova remnant RX J1713.7-3946. They found that the measured energy spectrum does not fit with models in which the gamma rays are emitted by accelerated electrons. However, it agrees well with the assumption that the gamma rays come from the decays of neutral pions, presumably produced by the interaction of high-energy protons accelerated in the supernova remnant.
The possible sources of ultrahigh-energy cosmic rays, with energies around 10^20 eV, present a different problem. While undoubtedly exotic, the sources must also be relatively nearby, as otherwise the cosmic rays would lose energy through interactions with the cosmic microwave background radiation (the so-called GZK cut-off, after Greisen, Zatsepin and Kuzmin). Nearby dead quasars – or quasar remnants – which contain spinning supermassive black holes at their centre are one possibility, for which there is now some observational evidence. A team from Princeton University and NASA Goddard Space Flight Center has searched a catalogue of several thousand galaxies for those likely to be suitable quasar remnants nearer than 50 Mpc (or about 160 million light-years, the GZK cut-off for 10^20 eV) and found a sample of 12 candidates. They then looked for correlations with the arrival directions for high-energy events from AGASA (Akeno Giant Air Shower Array) in Japan. Their results indicate a non-random correlation between three galaxies and 34 events with energies greater than 4 × 10^19 eV; for 7 events with energies greater than 10^20 eV, the correlation is less clear. The team does not yet know if the black holes in these galaxies are spinning – a necessary condition for the proposed cosmic-ray accelerator – but they do suggest further studies of nearby possible quasar remnants, particularly with the Auger Observatory.
Astrophysics, particle physics, cosmology and fundamental physics in space have much in common to bring their practitioners together. CERN and the European Southern Observatory (ESO) have already held several joint symposia, and in 2000 a workshop organized by CERN and the European Space Agency (ESA) was held in Geneva. In March this year, some 200 scientists travelled to ESO’s headquarters near Munich for the first symposium to be hosted by all three organizations.
Following an introduction from ESO’s director-general, Catherine Cesarsky, the global properties and evolution of the universe took centre stage. These have been studied in terms of a few parameters whose values have been broadly established in a remarkably short time. The universe is flat with a critical density (Ω = 1), but baryons constitute only about 5%. Dark non-baryonic matter accounts for 25% of the overall density, and about 70% is “dark energy” with a negative pressure accelerating the expansion of the universe. All these contributions to the overall density should be precisely known within a decade. The apparent concordance of the parameters describing the universe obtained through very different measurements is already impressive, and leads to the question of why these parameters have the values they do.
Some speakers were even tempted to raise the anthropic principle, although this tenacious myth is neither quantitative nor falsifiable, and does not teach us anything new. Nevertheless, it has gathered new momentum within a framework where many different universes could have been born, or even within a single universe where widely differing domains could exist and where we happen to live in the one domain providing for our needs.
The global properties of the universe were covered in the opening sessions. Neil Turok of Cambridge spoke about the very early universe – past and future – since the universe he proposed has a succession of Big Bangs and Big Crunches. After discussing the different contributions to W, Turok stressed that each measurement has little value on its own unless it is assessed within a particular theoretical framework, and that we should keep challenging the framework.
An explanation for many observed features is found in the standard inflationary universe scenario, which Turok challenged with a model of colliding branes. He called for an open-minded approach to such ideas, and for further tests of inflation such as the polarization of the cosmic microwave background (CMB) and the observation of a background of long-wavelength gravitational waves.
Paolo de Bernardis of the University of Rome reviewed CMB properties from an experimental point of view. The key result is the flatness of the universe, but the observation of peaks in the angular power spectrum of the CMB is what has allowed the baryonic density to be pinned down to 5%, and has shown that the temperature fluctuations are scale-independent. De Bernardis stressed, however, that measurements are currently restricted to a very limited coverage of the sky. This will be much extended with NASA’s MAP and ESA’s Planck missions, which will also measure CMB polarization.
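The chain of reasoning from peaks to parameters is standard CMB phenomenology (the rule of thumb below is a common approximation, not a result from the talk): the first acoustic peak sits near the multipole

$$ \ell_1 \approx \frac{200}{\sqrt{\Omega_{\rm tot}}}, $$

so a peak observed at ℓ ≈ 200 – an angular scale of about one degree – signals Ω_tot ≈ 1, a flat universe, while the relative heights of the odd and even peaks are what pin down the baryon density.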
A key feature is the current acceleration of the universe’s expansion, direct evidence for which comes from observations of supernovae. ESO’s Bruno Leibundgut showed how the observation of 27 low-redshift type Ia supernovae has built confidence that they are reliable standard candles. This has allowed observations of 54 high-redshift ones to be interpreted as fainter than expected from standard Hubble expansion. Taken at face value, this could result from a vacuum energy density of 0.7, which accelerates expansion today but would have led to a deceleration in the past, when the matter density was higher. Leibundgut added a cautionary note, however, saying that much remains to be understood about the systematic uncertainties of type Ia supernovae. Much larger statistics, probing to higher redshift, are needed, as is a better understanding of their explosion mechanism.
Yannick Mellier of the Observatoire de Paris reviewed the complementary determination of the matter density via weak gravitational lensing. There is good agreement on the matter and vacuum densities between six different teams. When combined with the analysis of the CMB, the matter density is pinned down to around 0.3 and the vacuum density to around 0.7. It is puzzling that the vacuum energy density should be so tiny compared with the Planck scale or the electroweak breaking scale.
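The link between the 0.3/0.7 split and acceleration is one line of Friedmann cosmology (a textbook calculation, not one shown at the meeting): the present deceleration parameter is

$$ q_0 = \tfrac{1}{2}\Omega_m - \Omega_\Lambda \approx 0.15 - 0.70 = -0.55, $$

negative, hence acceleration now. But because the matter term grows as (1 + z)³ towards the past while the vacuum term stays constant, q changes sign around z ≈ 0.7, giving exactly the earlier deceleration Leibundgut described.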
Dark matter
Direct searches for dark matter were reviewed by Charling Tao of Marseille. After recalling the first hints provided by the rotation curves of galaxies, interpreted as being due to a halo of baryonic matter, she explained how massive compact halo objects identified through gravitational lensing are too few to account for the effect. Underground experiments have looked for weakly interacting massive particles, so far to no avail. Concluding the discussion of exotic dark matter candidates was Georg Raffelt of the Max Planck Institute (MPI) in Munich, who described the CERN axion solar telescope, CAST.
With their tiny but non-zero masses, neutrinos provide the first clear departure from the Standard Model of particle physics, but are no longer expected to provide an appreciable contribution to dark matter. Pilar Hernandez of CERN reviewed the status of neutrino physics. Oscillations with mass-squared differences of the order of 10⁻³–10⁻⁵ eV² and maximal mixing are now favoured.
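For orientation, these numbers translate via the textbook two-flavour oscillation probability (the formula is standard; the worked figures are mine):

$$ P(\nu_\alpha \to \nu_\beta) = \sin^2 2\theta \,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[{\rm eV}^2]\;L\,[{\rm km}]}{E\,[{\rm GeV}]}\right), $$

where “maximal mixing” means sin² 2θ ≈ 1, and Δm² ≈ 10⁻³ eV² produces a full oscillation over L/E of order 10³ km/GeV – the atmospheric-neutrino regime.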
Reviews of the Standard Model of particle physics and the exciting prospects for research at CERN’s forthcoming Large Hadron Collider were the subject of many presentations. Antonio Pich of Valencia spoke of low-energy Standard Model tests, while CERN’s Fabiola Gianotti covered high-energy tests. John Ellis of CERN went beyond the Standard Model, raising the possibility of extra dimensions at short distances as a rival to supersymmetry.
Studying the extreme
Edward van den Heuvel of Amsterdam reviewed gamma-ray bursts (GRBs). A beautiful example of serendipity, GRBs were discovered while looking for something else – nuclear tests in the atmosphere. Now known as the most powerful cosmic explosions, they occur at the level of one per day with energy output reaching that of up to a million supernovae. They last from seconds to minutes and appear at random over the sky – just as expected for sources at cosmological distances, since their X-ray and visible afterglows are associated with highly redshifted host galaxies. Many efforts are underway to observe and study GRBs with redshifts up to z = 12.
Remaining with the extreme, Heinrich Völk of Heidelberg discussed very-high-energy gamma rays, and Alan Watson of Leeds talked about the highest-energy cosmic rays. The observation of cosmic rays above about 10¹¹ GeV is a puzzle since, as Greisen, Zatsepin and Kuzmin pointed out, the CMB should make space opaque to them. Understanding their origin will rely on observations with the Auger Observatory and later with ESA’s Extreme Universe Space Observatory.
Francis Halzen from the University of Wisconsin-Madison talked about high-energy neutrinos from astrophysical sources. Detection at rates appropriate for meaningful study demands very large detectors such as the 1 km³ IceCube detector being installed deep under Antarctic ice, and underwater detectors such as ANTARES.
One session was devoted to massive objects. It now seems likely that massive black holes at the centre of galaxies fuel the massive energy output of quasars, equivalent to 10¹²–10¹⁵ suns, and that all galaxies were once active. Quasar density peaks at a redshift of z = 2. In closer galaxies, the presence of a relatively quiet black hole of millions of solar masses is inferred from the swift motion of stars around a dark centre. The central part of our Milky Way, for example, has been observed to within 3 light years of the centre, where stars circle at velocities up to 1500 km/s. Interpretations more exotic than the presence of a giant black hole are not expected to hold.
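The mass inference is simple Keplerian dynamics (a textbook relation; none of the figures here are from the session): a star circling at speed v at radius r encloses a mass

$$ M \approx \frac{v^2 r}{G}, $$

so measured speeds of order 10³ km/s at the smallest resolved radii around a galactic nucleus translate directly into the millions of solar masses quoted.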
Gravitational waves
Bernhard Schutz of the MPI in Potsdam gave a review of gravitational wave sources, while Karsten Danzmann of the MPI in Hannover discussed experimental searches. Gravitational waves carry huge energies but interact very feebly, crossing the universe almost unperturbed. Ground-based detectors, sensitive to frequencies above 10 Hz, are complementary to detectors in space, which will look for frequencies below 0.1 Hz. Both should be sensitive to strain amplitudes below 10⁻²². In his talk, Stefano Vitale of Trento discussed several ESA fundamental physics missions, including SMART-2, due to fly in 2006 as a test mission for the ambitious LISA gravitational wave interferometer.
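To make 10⁻²² concrete (an illustration assuming a 4 km arm length, typical of a LIGO-class ground instrument and not a figure from the talks): the strain is the fractional change in the separation of the test masses,

$$ h = \frac{\Delta L}{L} \quad\Rightarrow\quad \Delta L \approx 10^{-22} \times 4\,{\rm km} = 4 \times 10^{-19}\,{\rm m}, $$

some four orders of magnitude smaller than an atomic nucleus – hence the extraordinary interferometry and noise control required.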
Other forthcoming space experiments include the ESA-NASA STEP mission, which will test the equivalence principle to six orders of magnitude better than the present limit. Roberto Battiston of Perugia described Alpha Magnetic Spectrometer (AMS) findings on the properties of the cosmic-ray flux, and showed how future AMS missions could bring the limit on the anti-helium to helium ratio down from 10⁻⁶ to around 10⁻⁹.
Planetary systems
Ewine van Dishoeck of Leiden discussed the formation of stars and planetary systems from large clouds of gas and dust, saying that around 15% of stars have a disk from which planets could form. Solar system formation would take some 100 million years. This field will soon see major developments with new tools such as the Atacama Large Millimetre Array, ESA’s Infrared Space Observatory, NASA’s Space Infrared Telescope Facility and the Next Generation Space Telescope.
Michel Mayor of Geneva recalled that more than 80 extrasolar planets have already been seen, some of them with masses as low as 50 times the mass of the Earth. He discussed how planets are found through radial velocity surveys and planetary transits, the latter giving direct evidence for gaseous giants like Jupiter. The diversity of planets observed was not anticipated – some have very short periods, elongated orbits or very large masses, up to 10 times the mass of Jupiter. Michael Perryman of ESA showed how missions under study could make the search for Earth-like planets possible, saying these may be as numerous as one per thousand stars. Many conditions, however, would need to be satisfied to make life possible. Perryman outlined a “habitable zone” requiring the presence of a Jupiter-type planet as a protection against meteorites, and stressed that it would need to exist over billions of years.
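The bias of the observed sample towards massive, short-period planets follows directly from the radial-velocity technique (a textbook relation; the example values are mine): for a circular orbit, the star’s reflex velocity has semi-amplitude

$$ K = \left(\frac{2\pi G}{P}\right)^{1/3}\frac{m_p \sin i}{(M_\ast + m_p)^{2/3}}, $$

about 12.5 m/s for a Jupiter twin around a Sun-like star (P ≈ 12 years), but over 100 m/s for the same planet in a 4-day orbit – so “hot Jupiters” are by far the easiest catch for spectrographs with metres-per-second precision.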
The meeting drew to a close with presentations about future directions at CERN, ESA and ESO. Exciting projects are being completed, are under construction or are at the planning stage in all three organizations. The closing lecture was given by Martin Rees of Cambridge, who brought the symposium to a brilliant finale with the conclusion that we live in exciting times.