Announced at the recent Neutrino 2000 meeting in Sudbury, Canada, were the first results from the K2K long-distance neutrino beam experiment in Japan. For the first time, synthetic neutrinos made in a physics laboratory are seen to disappear.
In the K2K study, neutrinos (of the muon-like variety) generated at the Japanese KEK laboratory are directed towards the Superkamiokande underground detector 250 km away. In the detector, 22.5 kilotonnes of water are monitored by sensitive photomultipliers to pick up the tiny flashes of light produced by particle interactions.
The experiment, which began running last year, was able to announce at Sudbury that 17 neutrino counts had been picked up. The pulses of parent protons at the source accelerator can be used to clock the arrival of the neutrinos, so the results are essentially free of spurious background.
About 29 neutrino counts were expected, assuming the neutrinos despatched from the KEK laboratory arrived unscathed at Superkamiokande. Such a deficiency, if it continues to be seen, implies that something happens to the particles along their 250 km flight path.
This is not a surprise. In 1998, initial results from Superkamiokande on muon signals generated by neutrinos produced via cosmic-ray collisions in the atmosphere showed that the signal from muon-like neutrinos arriving from the atmosphere directly above the detector was very different from the signal arriving from below.
This is not a result of absorption in the Earth – 99.9999…% of neutrinos pass through the Earth as though it were not there. The effect was interpreted as neutrino metamorphosis – “oscillations” – as the particles passed through the planet.
Neutrinos come in three varieties – electron, muon and tau – according to the particles that they are associated with. For a long time, physicists thought that these neutrino allegiances were immutable – a neutrino produced in an electron environment would remain an electron-like neutrino for ever.
However, this is only so if the neutrinos have no mass and travel at the speed of light. If the particles do have a tiny mass, they can, in principle, switch their electron/muon/tau allegiance en route. The 1998 atmospheric neutrino effects seen in Superkamiokande provided the first firm evidence for such neutrino oscillations. These Superkamiokande data have now been consolidated, while other evidence has also appeared.
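In the simplest two-flavour picture (a textbook formula, quoted here for orientation rather than as part of the K2K analysis), the probability that a muon-type neutrino of energy E survives unchanged over a baseline L is

$$P(\nu_\mu \to \nu_\mu) = 1 - \sin^2 2\theta\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),$$

where θ is the mixing angle and Δm² is the difference of the squared neutrino masses. A count of 17 events against an expectation of about 29 thus translates directly into a survival probability well below unity over the 250 km KEK–Kamioka baseline.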
With neutrino oscillations, neutrinos of a certain type are more likely to change into neutrinos of another type than to interact with matter. If the viewing detector is sensitive only to neutrinos of a certain type, then some neutrinos disappear from view.
Such neutrino disappearance also correlates with the long-observed dearth of neutrino signals from the Sun, where measurements using a variety of detectors (including Superkamiokande) give only a fraction of the number of expected electron-type solar neutrinos. The Superkamiokande detector is sensitive to electron-like and muon-like neutrinos. However, if the parent KEK muon-like neutrinos change into tau-like neutrinos, Superkamiokande would not see them. This is the interpretation of the initial deficiency logged by the experiment in Japan. However, these are only the first results to appear from the K2K experiment, and it usually takes a long time to assemble reliable neutrino data.
The transmutation of muon-like into tau-like neutrinos is good news for long-distance neutrino experiments now under construction. The MINOS experiment in the US will send neutrinos 730 km from Fermilab to a detector in the Soudan mine, Minnesota, while neutrinos from CERN will be despatched towards new detectors in the Italian Gran Sasso underground laboratory, also 730 km distant. The detectors in these projects hope to pick up signs of tau-like neutrinos not present when the beams left the parent laboratories.
Because of the scant affinity of neutrinos for matter, intercepting them in a detector is always a challenge. The DONUT experiment at Fermilab recently presented possible evidence for the production of tau-like neutrinos in particle interactions.
The K2K experiment is a collaboration involving Japan, Korea and the US.
Now coming into action for physics is CERN’s new Antiproton Decelerator (AD), opening another chapter of CERN’s tradition of physics with antiprotons. With the AD, the focus switches from exploiting beams of antiprotons to capturing the precious nuclear antiparticles.
When CERN’s low-energy antiproton ring (LEAR) was closed in 1996 after more than 10 years of operation, it had supplied 1.3 × 10^14 antiprotons – enough to supply about 10 000 particles to everyone on the planet, but representing a theoretical accumulation of only 0.2 ng of antimatter.
Although LEAR slowed down the particle beams supplied by CERN’s antiproton factory from 26 GeV by a factor of about 10 (itself no mean feat), its antiprotons were nevertheless still moving extremely fast. A particle with 100 MeV/c momentum corresponds to a temperature of billions of degrees.
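As a rough order-of-magnitude check (our arithmetic, not a figure from the article), the kinetic energy of an antiproton with momentum 100 MeV/c is

$$E_{\rm kin} \simeq \frac{(pc)^2}{2m_{\bar p}c^{2}} = \frac{(100\ \mathrm{MeV})^{2}}{2\times938\ \mathrm{MeV}} \approx 5\ \mathrm{MeV}, \qquad T \sim \frac{E_{\rm kin}}{k_B} \approx 6\times10^{10}\ \mathrm{K},$$

that is, tens of billions of degrees.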
Of all LEAR’s antiprotons, just a few were privileged to be selected and eventually cooled down to temperatures approaching absolute zero. The techniques learned in this work opened up substantial economies for antiparticles – probably one of the rarest, and therefore most expensive, commodities in the world.
Cooling antiprotons is a tricky business. They quickly annihilate with ordinary matter such as liquid helium, the conventional ultra-refrigeration medium. Instead, antiprotons have to be supercooled by a gas of electrons (negatively charged antiprotons can peacefully coexist with electrons).
In this way the TRAP Bonn-Harvard-Seoul collaboration was able to stack several thousand ultracold antiprotons at a time. Antiprotons cooled to such a low energy by the electrons were locked in a shallow trap using electric and magnetic fields to contain the valuable antiparticles. Meanwhile a large electromagnetic well was opened alongside to receive a fresh batch of antiprotons, which were then similarly cooled. The energies of the individual antiparticles were then just one ten-millionth of what they were in LEAR.
Interesting antiproton physics thus became feasible using a less ambitious antiproton source. This is the motivation behind CERN’s new AD, which supplies antiprotons to several experiments – ATRAP (son of TRAP at LEAR), ATHENA and ASACUSA.
One ultimate physics objective at LEAR was to isolate a lone antiproton and study it carefully. Gradually reducing the electromagnetic “depth” of its snare, the TRAP team spilled out excess antiparticles until just a single antiproton survived.
Like any other captive electrically charged particle, an antiproton orbits in a magnetic field – the principle of the cyclotron. Comparing the frequencies of this rotation for an antiproton and a proton gives a direct comparison of the masses of the particle and its antiparticle.
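The relation behind this comparison is elementary (recapped here for illustration): a particle of charge q and mass m in a magnetic field B circulates at the cyclotron frequency

$$\omega_c = \frac{|q|\,B}{m},$$

so, with a proton and an antiproton held in the same field, the ratio of the two measured frequencies gives the ratio of their charge-to-mass values – and hence, assuming exactly opposite charges, of their masses.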
The TRAP team at LEAR was able to ascertain that the proton and antiproton masses are equal with increasing precision, eventually to just one part in 10 billion. Making a measurement to such astonishing accuracy is equivalent to fixing the position of an object on the surface of the Earth to within a few millimetres.
This is by far (a factor of a million) the most incisive comparison yet of proton and antiproton properties. According to the fundamental theorems of physics, a particle and an antiparticle should be exactly equal and opposite so that their scalar quantities, like mass, are the same, but quantum numbers, like electric charge, should have opposite signs.
The major objective of ATRAP and ATHENA at the new AD is to synthesize and study antihydrogen – the simplest electrically neutral atoms of antimatter, each made up of a positron orbiting a lone antiproton.
Antihydrogen was first produced by experiment PS210 at LEAR in 1995. Synthesizing atomic antimatter was a major achievement, but no measurements were made – the antihydrogen was too hot and dissociated quickly into its component positrons and antiprotons.
Using electromagnetic traps, ATRAP and ATHENA aim to collect supercold antihydrogen that can be stored for further study. Comparing the properties of this antihydrogen with hydrogen under the same conditions will provide a much more stringent test of whether matter and antimatter behave in exactly the same way.
ASACUSA uses antiprotons for collision and annihilation studies, particularly to form exotic atoms, in which the negatively charged antiproton is captured in a target atom, replacing the electron of everyday atoms.
In the early 1970s, evidence that the masses of strongly interacting particles increased with their internal angular momentum led the Japanese theorist Yoichiro Nambu to propose that the quarks inside these particles are “tied” together by strings. Today the string theories that emerged from this idea are being examined as candidates for the ultimate theory of nature.
Meanwhile, we have learned that the strong interactions are instead described by quantum chromodynamics (QCD), the field theory in which quarks interact through a “colour” force carried by gluons. Although it is therefore not fundamentally a string theory, numerical simulations of QCD (“lattice QCD”; July p23) have demonstrated that Nambu’s conjecture was essentially correct: in chromodynamics, a string-like chromoelectric flux tube forms between distant static colour charges. This leads to quark confinement – the potential energy between a quark and the other quarks to which it is tied increases linearly with the distance between them. The phenomenon of confinement is the most novel and spectacular prediction of QCD – unlike anything seen before.
The ideal experimental test of this new feature of QCD would be to study the flux tube of figure 1 directly by anchoring a quark and antiquark several femtometres apart and examining the flux tube between them. In such ideal circumstances, one of the characteristics of the gluonic flux tube would be the model-independent spectrum shown in figure 2. The excitation energy is π/r because the flux tube’s mass is entirely due to its stored energy. The two lowest-lying, longest-wavelength vibrations have identical energies because the flux tube can oscillate in either of the two equivalent directions perpendicular to its length.
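In the simplest string picture this spectrum follows directly (a model-level sketch, in units where ħ = c = 1): a flux tube of length r with fixed ends supports transverse standing waves with frequencies

$$\omega_n = \frac{n\pi}{r}, \qquad n = 1, 2, 3, \ldots,$$

each doubly degenerate because the tube can vibrate in either of the two directions perpendicular to its axis – the π/r level spacing and two-fold degeneracy referred to above.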
Particles with gluons
Such a direct examination of the flux tube is, of course, not possible, so in reality we must be content with systems in which the quarks move. Fortunately, we know both from general principles and from lattice QCD that an approximation that treats the motion of the quarks and the vibration of the flux tube independently works quite well – at least down to the mass of the charm quark.
To extend the flux-tube picture to yet lighter quarks requires models, but the most important properties of this system are determined by the model-independent features described above. In particular, in a mass region around 2 GeV/c² a new kind of particle must exist in which the gluonic degree of freedom of mesons is excited.
The smoking gun characteristic of these new states is that the vibrational quantum numbers of the string, when added to those of the quarks, can produce a total angular momentum, J, a total parity (or mirror-inversion symmetry), P, and a total charge conjugation (or quark-antiquark interchange) symmetry, C, not allowed for ordinary quark-antiquark states. These unusual J^PC combinations, like 0^+-, 1^-+ and 2^+-, are called exotic and the states are referred to as exotic hybrid mesons.
Not only general considerations and flux-tube models but also first-principles lattice QCD calculations require that these states exist in this mass region, while also demonstrating that the levels and their orderings will provide experimental information on the mechanism that causes the colour flux tube to form. Moreover, tantalizing experimental evidence has appeared over the past several years for exotic hybrids as well as for gluonic excitations with no quarks at all (glueballs).
For the past two years a group of 80 physicists from 25 institutions in seven countries has been working on the design of the definitive experiment to map out the spectrum of these new states required by the QCD confinement mechanism. This experiment is part of the planned 12 GeV upgrade of the CEBAF complex at Jefferson Lab, Newport News, Virginia.
Photon beams are expected to be particularly favourable for the production of the exotic hybrids. The reason is that the photon sometimes behaves as a vector meson (a quark-antiquark state with the quark spins parallel, giving a total quark spin of S = 1). When the flux tube in this S = 1 system is excited to the levels shown in figure 2, both ordinary and exotic JPC are possible. In contrast, when the quark spins are antiparallel (S = 0), as in pion or kaon probes, the exotic combinations are not generated.
To date, most meson spectroscopy has been done with incident pion, kaon or proton probes. High-flux photon beams of sufficient quality and energy have not been available, so there are virtually no data on the photoproduction of mesons below a mass of 3 GeV/c². Thus experimenters have not been able to search for exotic hybrids precisely where they are expected to be found.
The detector in Jefferson Lab’s new Hall D is optimized for incident photons in the 8-9 GeV energy range in order to access the desired meson mass range. A solenoidal spectrometer allows for the measurement of charged particles with excellent efficiency and momentum determination, while at the same time containing the shower of unwanted electron-positron pairs associated with the photon beam.
Photons will be produced using the “coherent bremsstrahlung” technique, whereby a fine electron beam from the CEBAF accelerator is passed through a wafer-thin diamond crystal. At special orientations of the crystal its atoms can be made to recoil together from the radiating electron, boosting the emission at particular photon energies and yielding linearly polarized photons.
With the planned photon fluxes of 10^7 per second and the continuous CEBAF beam, the experiment will accumulate statistics during the first year of operation that will exceed extant data with pions by at least an order of magnitude. With this detector, the high statistics and the linear polarization information, it will be possible to map out the full spectrum of these gluonic excitations.
A committee chaired by David Cassel (Cornell) and consisting of Frank Close (Rutherford), John Domingo (Jefferson), William Dunwoodie (SLAC), Donald Geesaman (Argonne), David Hitlin (Caltech), Martin Olsson (Wisconsin) and Glenn Young (Oak Ridge) reviewed the project in December 1999. It concluded that the project is “well suited for definitive searches for exotic states that are required according to our current understanding of QCD.” The committee further pointed out that, because of the exceptional quality of the beams at Jefferson Lab, the laboratory is uniquely suited for carrying out such studies.
To achieve the required photon energy and flux with coherent bremsstrahlung a 12 GeV electron beam is required. Figure 4 shows the current CEBAF complex with the existing three experimental halls (A, B and C) and the planned Hall D. The addition of state-of-the-art accelerating units (“cryomodules”) in the linear sections of the accelerator, along with upgrading the arc magnets, will increase the electron energy from the current maximum of 5.5 GeV to 12 GeV.
When the spectrum and decay modes of these gluonic excitations have been mapped out, we will have made a giant step forward in understanding one of the most important new phenomena discovered in the 20th century – quark confinement.
Experiments at CERN’s low-energy antiproton ring (LEAR), closed in 1996, brought many very-high-precision and sometimes surprising antiproton results. Some continue to appear, the latest being the apparent independence of the antiproton-nucleus annihilation rate at very low energy from the size of the target nucleus. Clearly antiproton annihilation is a mysterious business.
Antiproton-nucleus annihilation was measured at LEAR by the OBELIX experiment at very low antiproton momenta, down to 40 MeV/c. This momentum seems quite large with respect to the characteristic momentum in particle-antiparticle systems bound by electromagnetic attraction (Coulomb force). For proton-antiproton, this is of the order of 4 MeV/c. Nevertheless, this attraction appears to be important and can even affect the annihilation rate.
In fact, in this energy range, Bethe’s usual 1/v law is replaced by a 1/v² one, where v is the relative velocity of the interacting particles.
This 1/v² regime was predicted in 1948 by Wigner and is well known in atomic physics. In nuclear physics, in contrast, one usually encounters electromagnetic repulsion between protons, which gives rise to an exponential decrease of the reaction rate at low energies, a phenomenon that is particularly important in nuclear astrophysics.
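In formula form (a recap of the behaviour just described, not additional data): an exothermic reaction between neutral particles follows Bethe's law, whereas Coulomb attraction in the initial state changes the power,

$$\sigma_{\rm ann} \propto \frac{1}{v} \;\Rightarrow\; \sigma_{\rm ann}\,v \to {\rm const}, \qquad \sigma_{\rm ann} \propto \frac{1}{v^{2}} \;\Rightarrow\; \sigma_{\rm ann}\,v^{2} \to {\rm const},$$

which is why the cross-sections described below are plotted multiplied by v²: a flat trend at low momentum signals the 1/v² regime.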
The OBELIX experiment, for the first time, investigated with very high precision the behaviour of the reaction rate in a system with Coulomb attraction. In the figure, the measured antiproton-proton, antiproton-deuteron and antiproton-helium annihilation cross-sections are presented as a function of antiproton momentum.
These cross-sections are multiplied by the square of the relative velocity. For the proton-antiproton system, the situation is very clear: one can see that the product tends to a constant value with decreasing antiproton momentum. For a 1/v behaviour, this product should tend to zero. For the deuteron and helium cases, the analysis is more complicated.
This change of regime is instructive but not really unexpected. The most interesting observation comes from the comparison of the values of these three cross-sections. At high energies they are quite different – the antiproton-nucleus annihilation cross-sections are several times that for antiproton-proton. Surprisingly, at low antiproton momentum, the antiproton-deuteron and antiproton-helium annihilation cross-sections drop to the antiproton-proton level or even below it.
An accurate analysis of these annihilations shows that this is not a kinematic effect; it is a direct result of the dynamics of the antiproton-nucleus interaction.
This was confirmed independently by another LEAR experiment – PS207 – which measured, for the first time, the shift and the broadening of the antiproton-deuteron atomic ground state. This extremely difficult experiment showed that the width of this level, entirely determined by the annihilation process, is approximately the same for antiproton-proton and antiproton-deuteron atoms.
A geometrical picture of annihilation would suggest that the probability of this process should increase with the number of possible annihilating partners – the number of nucleons in nuclei. However, these experiments demonstrate clearly that this is not the case.
To understand the mystery, these experiments should be continued at lower energies and with heavier nuclei, not only to understand the dynamics of the annihilation process but also to measure the cross-sections.
This knowledge would be important, in particular for astrophysicists, who search for antimatter in the universe and need to know about the properties of low-energy matter-antimatter interaction. CERN’s antiproton decelerator (AD), currently starting operations, will be a powerful tool in obtaining this precious antimatter information.
The possibility of observing solar neutrinos began to be discussed seriously following Holmgren and Johnston’s experimental discovery in 1958 that the cross-section for the production of beryllium-7 by the fusion of helium-3 and helium-4 was more than a thousand times as large as had been previously believed. This led to Willy Fowler and Al Cameron suggesting that boron-8 might be produced in the Sun in sufficient quantities (from beryllium-7 and protons) to produce an observable flux of high-energy neutrinos from boron-8 beta decay.
Looking inside the Sun
We begin our story in 1964, when we published back-to-back papers in Physical Review Letters, arguing that it was possible to build a 100 000 gallon detector of perchloroethylene that would measure the solar neutrino capture rate on chlorine. Our motivation was to use neutrinos to look into the interior of the Sun and thereby test directly the theory of stellar evolution and nuclear energy generation in stars. The particular development that made us realize that the experiment could be done was the demonstration by John Bahcall, in late 1963, that the principal neutrino absorption cross-section on chlorine was 20 times as large as had been previously calculated, owing to a super-allowed nuclear transition to an excited state of argon.
If you have a good idea today, you are likely to require many committees, many years and many people to get the project from concept to observation. The situation was very different in 1964.
As Ray Davis was a member of the Brookhaven chemistry department, we presented our case to Dick Dodson, who was
chairman of the Brookhaven chemistry department, and to laboratory director Maurice Goldhaber. Dodson was excited about the possibility of supporting a fundamental new direction within the chemistry department. Goldhaber, on the other hand, was sceptical about all astrophysical calculations, but was intrigued by the nuclear physics of the neutrino analogue transition. Following only a few weeks of consideration, the project received the required backing from Brookhaven, and Dodson and Davis visited the Atomic Energy Commission (AEC) to inform the people in the chemistry division of the plans to begin a solar neutrino experiment. The way was paved by Charlie Lauritsen and Fowler, who had strong scientific and personal connections with the AEC as a result of their wartime work. The project received a warm welcome at the AEC.
A small team, comprising Davis, Don Harmer (on leave from Georgia Tech) and John Galvin (a technician who worked part-time on the experiment), designed and built the experiment. Kenneth Hoffman, a young engineer, provided expert advice on technical questions. The money came from Brookhaven’s chemistry budget. Neither of us remembers a formal proposal ever being written to a funding agency. The total capital expenditure to excavate the cavity in the Homestake Gold Mine in South Dakota, build the tank and purchase the liquid was $0.6 million (in 1965).
During 1964-1967, Fred Reines and his group worked on three solar neutrino experiments in which recoil electrons produced by neutrino interactions would be detected by observing the associated light in an organic scintillator. Two of the experiments, which exploited the elastic scattering of neutrinos by electrons, were actually performed and led to a higher than predicted upper limit on the boron-8 solar neutrino flux. The third, which was planned to detect neutrinos absorbed by lithium-7, was abandoned after the initial chlorine results showed that the solar neutrino flux was low.
These experiments introduced the technology of organic scintillators into the arena of solar neutrino research, a technique that will only finally be used in 2001 when the BOREXINO detector begins to detect low-energy solar neutrinos. Also during this period, Bahcall investigated the properties of neutrino-electron scattering and showed that the forward peaking from boron-8 neutrinos is large – a feature that was incorporated 25 years later in the Kamiokande (and later SuperKamiokande) water Cherenkov detectors.
The first results from the chlorine experiment were published in Physical Review Letters in 1968, again in a back-to-back comparison between measurements and standard predictions. The initial results have been remarkably robust; the conflict between chlorine measurements and standard solar model predictions has lasted over three decades.
The main improvement has been in the slow reduction of the uncertainties in both the experiment and the theory. The efficiency of the Homestake chlorine experiment was tested by recovering carrier solutions, by producing argon-37 in the tank with neutron sources and by recovering chlorine-36 inserted in a tank of perchloroethylene. The solar model was verified by comparison with precise helioseismological measurements.
For more than 20 years the best estimates for the observational result and for the theoretical prediction have remained essentially constant. The discrepancy between the standard solar model prediction and the chlorine observation became widely known as “the solar neutrino problem”.
Very few people worked on solar neutrinos during 1968-1988. The chlorine experiment was the only solar neutrino experiment to provide data in these two decades. It is not easy for us to explain why this was the case; we certainly tried hard to interest others in doing different experiments and we gave many joint presentations. Each of us had one principal collaborator during this long period – Bruce Cleveland (experimental) and Roger Ulrich (solar models).
A large effort to develop a chlorine experiment in the Soviet Union was led by George Zatsepin, but it was delayed by the difficulties of creating a suitable underground site for the detector. Eventually the effort was converted into a successful gallium detector, SAGE, led by Vladimir Gavrin and Tom Bowles, which gave its first results in 1990.
Oscillations proposed
Only one year after the first (1968) chlorine results were published, Vladimir Gribov and Bruno Pontecorvo proposed that the explanation of the solar neutrino problem was that neutrinos oscillated between the state in which they were created and a state that was more difficult to detect. This explanation, which is the consensus view today, was widely disbelieved by nearly all of the particle physicists whom we talked to in those days.
In the form in which solar neutrino oscillations were originally proposed by Gribov and Pontecorvo, the process required that the mixing angles between neutrino states should be much larger than the quark mixing angles, something that most theoretical physicists believed, at that time, was unlikely. Ironically, a flood of particle theory papers explained, more or less “naturally”, the large neutrino mixing angle that was decisively demonstrated 30 years later in the SuperKamiokande atmospheric neutrino experiment.
One of the most crucial events for early solar neutrino research occurred in 1968 while we were relaxing after a swim at the CalTech pool. Gordon Garmire (now a principal scientist with the Chandra X-ray satellite) came up to Davis, introduced himself and said that he had heard about the chlorine experiment. He suggested that it might be possible to reduce significantly the background by using pulse risetime discrimination, a technique used for proportional counters in space experiments. The desired fast-rising pulses from argon-37 Auger electrons are different from the slower-rising pulses from a background gamma or cosmic ray.
Davis went back to Brookhaven and asked the local electronic experts if it would be possible to implement this technique for the very small counters that he used. The initial answer was that the available amplifiers were not fast enough to be used for this purpose with the small solar neutrino counters. However, within about a year, three first-class Brookhaven electronic engineers, Veljko Radeca, Bob Chase and Lee Rogers, were able to build electronics fast enough to be used to measure the risetime in Davis’s counters.
This “swimming-pool” improvement was crucial for the success of the chlorine experiment and the subsequent radiochemical gallium solar neutrino experiments – SAGE, GALLEX and GNO. Measurements of the risetime as well as the pulse energy greatly reduce the background for radiochemical experiments. The backgrounds can be as low as one event in three months.
In 1978, after a decade of disagreement between the Homestake neutrino experiment and standard solar model predictions, it was clear that the subject had reached an impasse and a new experiment was required. The chlorine experiment is, according to standard solar model predictions, sensitive primarily to neutrinos from a rare fusion reaction that involves boron-8 neutrinos. These are produced in only 2 of every 10^4 terminations of the basic proton-proton fusion chain. In early 1978 there was a conference of interested scientists at Brookhaven to discuss what to do next. The consensus was that we needed an experiment that was sensitive to the low-energy neutrinos from the fundamental proton-proton reaction.
The only remotely practical possibility appeared to be another radiochemical experiment, this time with gallium-71 (instead of chlorine-37) as the target. However, a gallium experiment (originally proposed by Russian theorist V A Kuzmin in 1965) was expensive – we needed about three times the world’s annual production of gallium to do a useful experiment.
Gallium push
In an effort to generate enthusiasm for a gallium experiment, we wrote another Physical Review Letters paper, this time with a number of interested experimental colleagues. We argued that a gallium detector was feasible and that a gallium measurement, which would be sensitive to the fundamental proton-proton neutrinos, would distinguish between broad classes of explanations for the discrepancy between prediction and observation in the chlorine-37 experiment. Over the next five or six years, the idea was reviewed a number of times in the US, always very favourably. A blue-ribbon panel headed by Glenn Seaborg enthusiastically endorsed both the experimental proposal and the theoretical justification.
To our great frustration and disappointment, the gallium experiment was never funded in the US, although many of the experimental ideas that gave rise to the Russian experiment (SAGE) and the German-French-Italian-Israeli-US experiment (GALLEX) largely originated at Brookhaven. Physicists strongly supported the experiment and said that the money should come out of an astronomy budget; astronomers said it was great physics and should be supported by the physicists. The US Department of Energy (DOE) could not get the nuclear physics and the particle physics sections to agree on who had the financial responsibility. In a desperate effort to break the deadlock, Bahcall was even the principal investigator of a largely Brookhaven proposal to the US National Science Foundation (which did not support proposals from DOE laboratories). A pilot experiment was performed with 1.3 tons of gallium by an international collaboration (Brookhaven, Pennsylvania, MPI Heidelberg, IAS Princeton and the Weizmann Institute), which developed the extraction scheme and the counters eventually used in the GALLEX full-scale experiment.
In strong contrast with what happened in the US, Moissey Markov, head of the Nuclear Physics Division of the Russian Academy of Sciences, helped to establish a neutrino laboratory within the Institute for Nuclear Research, participated in the founding of the Baksan neutrino observatory, and was instrumental in securing 60 tons of gallium free for Russian scientists for the duration of a solar neutrino experiment.
The SAGE Russian-US gallium experiment went ahead under the leadership of Gavrin, Zatsepin (Institute for Nuclear Research, Russia) and Bowles (Los Alamos), while the mostly European experiment (GALLEX) was led by Till Kirsten (Max Planck Institute, Germany). Both had a strong but not primary US participation.
The two gallium experiments were performed during the 1990s and gave very similar results, providing the first experimental indication of the presence of proton-proton neutrinos. Both experiments were tested by measuring the neutrino rate from an intense laboratory radioactive source.
There were two dramatic developments in the solar neutrino saga, one theoretical and one experimental, before the gallium experiments produced observational results. In 1985 two Russian physicists proposed an imaginative solution to the solar neutrino problem that built on the earlier work of Gribov and Pontecorvo and, more directly, the insightful investigation by Lincoln Wolfenstein (Carnegie Mellon).
Alexei Smirnov and Stanislav Mikheyev showed that, if neutrinos have masses in a relatively wide range, then a resonance phenomenon in matter (now universally known as the MSW effect) could efficiently convert many of the electron-type neutrinos created in the interior of the Sun to more difficult to detect muon and tau neutrinos. The MSW effect can work for
small or large neutrino mixing angles. Because of the elegance of the theory and the possibility of explaining the experimental results with small mixing angles (analogous to what happens in the quark sector), physicists immediately began to be more sympathetic to particle physics solutions to the solar neutrino problem. More importantly, they became enthusiasts for new solar neutrino experiments.
Big breakthrough
The next big breakthrough also came from an unanticipated direction. The Kamiokande water Cherenkov detector was developed to study proton decay in a mine in the Japanese Alps and set an important lower limit on the proton lifetime. In the late 1980s the detector was converted by its Japanese founders, Masatoshi Koshiba and Yoji Totsuka, together with some US colleagues, Gene Beier and Al Mann of the University of Pennsylvania, to be sensitive to the lower energy events expected from solar neutrinos.
With incredible foresight, these experimentalists completed their revisions to make the detector sensitive to solar neutrinos in late 1986, just in time to observe the neutrinos from Supernova 1987a emitted 170 000 years earlier. (Supernova and solar neutrinos have similar energies – about 10 MeV – much less than the energies relevant for proton decay.) In 1996 a much larger water Cherenkov detector (with 50 000 tons of pure water) began operating in Japan under the leadership of Yoji Totsuka, Kenzo Nakamura, Yoichiro Suzuki (from Japan), and Jim Stone and Hank Sobel (from the US).
So far, five experiments have detected solar neutrinos in approximately the numbers (within a factor of two or three) and in the energy range (less than 15 MeV) predicted by the standard solar model. This is a remarkable achievement for solar theory, because the boron-8 neutrinos that are observed primarily in three of these experiments (chlorine, Kamiokande and its successor SuperKamiokande) depend on approximately the 25th power of the central temperature. The same set of nuclear fusion reactions that are hypothesized to produce the solar luminosity also give rise to solar neutrinos. Therefore, these experiments establish empirically that the Sun shines by nuclear fusion reactions among light elements in essentially the way described by solar models.
Nevertheless, all of the experiments disagree quantitatively with the combined predictions of the standard solar model and the standard theory of electroweak interactions (which implies that nothing much happens to the neutrinos after they are created). The disagreements are such that they appear to require some new physics that changes the energy spectrum of the neutrinos from different fusion sources.
Solar neutrino research today is very different from how it was three decades ago. The primary goal now is to understand the neutrino physics, which is a prerequisite for making more accurate tests of the neutrino predictions of solar models. Solar neutrino experiments today are all large international collaborations, each typically involving of the order of 100 physicists. Nearly all of the new experiments are electronic, not radiochemical, and the latest generation of experiments measure typically several thousand events per year (with reasonable energy resolution), compared with typically 25-50 per year for the radiochemical experiments (which have no energy resolution, only an energy threshold).
Solar neutrino experiments are currently being carried out in Japan (SuperKamiokande in the Japanese Alps), in Canada (SNO, which uses a kiloton of heavy water in Sudbury, Ontario), in Italy (BOREXINO, ICARUS and GNO, each sensitive to a different energy range and all operating in the Gran Sasso Underground Laboratory), in Russia (SAGE in the Caucasus region) and in the US (Homestake chlorine experiment). The SAGE, chlorine and GNO experiments are radiochemical; the others are electronic.
Since 1985 the chlorine experiment has been operated by the University of Pennsylvania under the joint leadership of Ken Lande and Davis. Lande and Paul Wildenhain have introduced major improvements to the extraction and measurement systems, making the chlorine experiment a valuable source of new precision data.
The most challenging and important frontier for solar neutrino research is to develop experiments that can measure the energies of individual low-energy neutrinos from the basic proton-proton reaction, which constitutes (we believe) more than 90% of the solar neutrino flux.
Solar neutrino research is a community activity. Hundreds of experimentalists have collaborated to carry out difficult, beautiful measurements of the elusive neutrinos. Hundreds of researchers have helped to refine the solar model predictions, measuring accurate nuclear and solar parameters and calculating input data such as opacities and equation of state.
Special mention
Three people have played special roles. Hans Bethe was the architect of the theory of nuclear fusion reactions in stars, as well as our mentor and hero. Willy Fowler was a powerful and enthusiastic supporter of each new step and his keen physical insight motivated much of what was done in solar neutrino research. Bruno Pontecorvo opened everyone’s eyes with his original insights, including his early discussion of the advantages of using chlorine as a neutrino detector and his suggestion that neutrino oscillations might be important.
Over the next decade, neutrino astronomy will move beyond our cosmic neighborhood and, we hope, will detect distant sources. The most likely candidates now appear to be gamma-ray bursts. If the standard fireball picture is correct and if gamma-ray bursts produce the observed highest-energy cosmic rays, then very-high-energy (10^15 eV) neutrinos should be observable with a km² detector. Experiments that are capable of detecting neutrinos from gamma-ray bursts are being developed at the South Pole (AMANDA and ICECUBE), in the Mediterranean Sea (ANTARES, NESTOR) and even in space.
Looking back on the beginnings of solar neutrino astronomy, one lesson appears clear: if you can measure something new with reasonable accuracy, then you have a chance to discover something important. The history of astronomy shows that it is very likely that what you will discover will not be what you were looking for. It helps to be lucky.
Further information
A version of this article originally appeared as a Millennium Essay (J N Bahcall and R Davis Jr 2000 Publications of the Astronomical Society of the Pacific 112 429). Copyright 2000, Astronomical Society of the Pacific, reproduced with permission of the editors.
Further reading
J N Bahcall and R Davis Jr 1976 Solar Neutrinos: a scientific puzzle Science 191 264.
J N Bahcall and R Davis Jr 1982 An account of the development of the solar neutrino problem Essays in Nuclear Astrophysics ed.
C A Barnes, D D Clayton and D N Schramm (Cambridge University Press) 243. (This article is also reprinted in Neutrino Astrophysics by J N Bahcall (Cambridge University Press, 1989).)
At the smallest possible scales, physics calculations are extremely complicated. This is the dilemma facing particle physicists.
Lattice field theories were originally proposed by 1982 Nobel laureate Ken Wilson as a means of tackling quantum chromodynamics (QCD) – the theory of strong interactions – at low energies, where calculations based on traditional perturbation theory fail.
The lattice formulation replaces the familiar continuous Minkowski space-time with a discrete Euclidean version, where space-time points are separated by a finite distance – the lattice spacing. In this way results can be obtained by simulations, but the computing power required is huge, requiring special supercomputers.
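By way of illustration only – a minimal sketch, not one of the production codes discussed here – the same idea can be shown for the simplest possible "lattice field theory": the Euclidean path integral of a harmonic oscillator on a one-dimensional lattice, sampled with a Metropolis Monte Carlo update. All names and parameter values in the sketch are our own choices.

```python
import numpy as np

# Toy illustration of a Euclidean lattice simulation: the path integral of a
# harmonic oscillator on a one-dimensional periodic "time" lattice, updated
# with the Metropolis algorithm.  All parameter values are illustrative.
N = 100                      # number of lattice sites
a = 0.5                      # lattice spacing
m, omega = 1.0, 1.0          # mass and frequency (natural units)
n_sweeps, n_therm, step = 5000, 500, 0.7
rng = np.random.default_rng(0)

def local_action(x, i):
    """Part of the discretized Euclidean action that involves site i."""
    left, right = x[(i - 1) % N], x[(i + 1) % N]
    kinetic = 0.5 * m * ((x[i] - left) ** 2 + (right - x[i]) ** 2) / a
    potential = 0.5 * m * omega ** 2 * a * x[i] ** 2
    return kinetic + potential

x = np.zeros(N)
x2_samples = []
for sweep in range(n_sweeps):
    for i in range(N):                        # local Metropolis update at each site
        old, s_old = x[i], local_action(x, i)
        x[i] = old + rng.uniform(-step, step)
        dS = local_action(x, i) - s_old
        if dS > 0 and rng.random() >= np.exp(-dS):
            x[i] = old                        # reject the proposed change
    if sweep >= n_therm:
        x2_samples.append(np.mean(x ** 2))    # measure <x^2> on this configuration

# The estimate should lie close to the continuum value 1/(2*m*omega) = 0.5,
# up to discretization effects that vanish as the lattice spacing a shrinks.
print("lattice estimate of <x^2>:", np.mean(x2_samples))
```

Real lattice-QCD programs work in the same spirit, but in four dimensions, with SU(3) gauge links and quark fields rather than a single real variable per site – which is why the computing requirements are so severe.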
This methodology has been applied extensively to QCD: recent years have witnessed increasingly accurate calculations of many quantities, such as particle masses (including those of glueballs and hybrids) and form factors for weak decays, as well as quark masses and the strong (inter-quark) coupling constant. These results provide important pointers to future progress.
The romantic Ringberg Castle, with its panoramic view of the Bavarian Tegernsee, was the scene of a recent workshop entitled Current Theoretical Problems in Lattice Field Theory, where physicists from Europe, the US and Japan discussed and assessed recent progress in this increasingly important area of research.
Obstacles removed
Despite the many successes of lattice QCD, there are stubborn areas where little progress has been made. For instance, until recently it was thought that the lattice formulation was incompatible with the concept of a single left-handed fermion (such as the Standard Model neutrino). The notion of chirality plays a key role in both the strongly and weakly interacting sectors of the Standard Model. Furthermore, weak decays like that of a kaon into two pions have been studied on the lattice with only limited success.
A non-perturbative treatment of such processes is highly desirable, because they are required for our theoretical understanding of direct CP violation and the longstanding problem of explaining isospin selection rules in weak decays. However, there have been impressive theoretical advances in both of these areas, which were discussed at the Ringberg workshop.
Gian Carlo Rossi (Rome II) gave a general introduction to lattice calculations of K → ππ decays. By the early 1990s, all attempts to study this process on the lattice had been abandoned, because it was realized that the necessary physical quantity cannot be obtained from the correlation functions computed on the lattice. This Maiani-Testa No-go theorem was analysed in great detail by Chris Sachrajda (Southampton). Laurent Lellouch (Annecy) then described how the theorem can be circumvented by treating the decay in a finite volume, where the energy spectrum of the two-pion final state is not continuous, in turn violating one of the conditions for the No-go theorem to apply.
Furthermore, the transition amplitude in finite volume can be related to the physical decay rate. An implementation of this method in a real computer simulation requires lattice sizes of about 5-7 fm. This stretches the capacities of current supercomputers to the limit, but a calculation will certainly be feasible with the next generation of machines.
Guido Martinelli (Rome I) approached the decay from a different angle by relating it to the conceptually simpler kaon-pion transition. This strategy has been known for some time, and recent work concentrated on the final-state interactions between the two pions. The inclusion of these effects may influence theoretical predictions for measurements of direct CP violation. Given recent experimental progress in this sector, this is surely of great importance.
Many lattice theorists’ hopes of being able to study the electroweak sector of the Standard Model had been frustrated by another famous No-go theorem, this time by Nielsen and Ninomiya. This states that chiral symmetry cannot be realized on the lattice, which, for instance, makes it impossible to treat neutrinos in a lattice simulation.
Recently it has been shown how the Nielsen-Ninomiya theorem could be sidestepped: a chiral fermion (such as a neutrino) can be put on the lattice provided that its discretized Dirac operator satisfies the so-called Ginsparg-Wilson relation. Several solutions to this relation have been constructed, and the most widely used are known in the trade as “Domain Wall” and “Overlap” fermions.
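For reference (a standard formula, quoted here for orientation rather than as a workshop result), the Ginsparg-Wilson relation requires the lattice Dirac operator D at lattice spacing a to satisfy

$$\gamma_5 D + D\gamma_5 = a\,D\gamma_5 D,$$

which reduces to the usual continuum condition $\{\gamma_5, D\} = 0$ as $a \to 0$, so an exact (lattice-modified) chiral symmetry can coexist with the Nielsen-Ninomiya theorem at finite lattice spacing.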
At Ringberg, Pilar Hernández (CERN) examined whether these solutions can be implemented efficiently in computer simulations. Obviously these more technical aspects have to be investigated before one can embark on more ambitious projects. Hernández concluded that the computational cost of the two formulations is comparable, but substantially higher than for conventional lattice fermions. In particular, her results indicate that the numerical effort needed to preserve chiral symmetry by simulating Domain Wall fermions is far greater than previously thought. This point was further explored during an open discussion session led by Karl Jansen (CERN) and Tassos Vladikas (Rome II). A conclusion was that conventional lattice fermions appear quite sufficient to address many – if not all – of the problems in applied lattice QCD.
As well as yielding hard numerical results, the preservation of chiral symmetry on the lattice has also been exploited in the study of more formal aspects of quantum field theories. Oliver Bär (DESY) presented recent work on global anomalies, which can now be analysed in a rigorous, non-perturbative way using the lattice framework. SU(2) gauge theory coupled to one massless, left-handed neutrino thereby leads to the lattice analogue of the famous Witten anomaly. Further work on anomalies was presented by Hiroshi Suzuki (Trieste), while Yigal Shamir (Tel Aviv) reviewed a different approach to lattice chiral gauge theories based on gauge fixing.
Among other topics discussed at Ringberg was the issue of non-perturbative renormalization, with contributions from Roberto Petronzio (Rome II), Steve Sharpe (Seattle) and Rainer Sommer (Zeuthen). The problem is to relate quantities (for example form factors and decay constants) computed on the lattice to their continuum counterparts via non-perturbatively defined renormalization factors. Such a procedure avoids the use of lattice perturbation theory, which is known to converge only very slowly.
The successful implementation of non-perturbative renormalization for a large class of operators removes a major uncertainty in lattice calculations. Furthermore, talks by Antonio Grassi, Roberto Frezzotti (both Milan) and Stefan Sint (Rome II) discussed recent work on QCD with an additional mass term which is expected to protect against quark zero modes. It is hoped that this will help in the simulation of smaller quark masses.
Many other contributions, covering for example two-dimensional models, Nahm dualities and the bosonization of lattice fermions, could also lead to further progress. The variety of topics discussed at the workshop underlines that lattice field theory is a very active research area with many innovative ideas. Progress in understanding how nature works on the smallest possible scale depends on such theoretical and conceptual advances as well as sheer computer power.
The Ringberg meeting was organized by Martin Lüscher (CERN), Erhard Seiler and Peter Weisz (MPI Munich).
Directions for lattice computing
Quantum physics calculations are not easy. Most students, after having worked through the solutions of the Schrödinger equation for the hydrogen atom, take the rest of quantum mechanics on trust. Likewise, quantum electrodynamics is demonstrated with a few easy examples involving colliding electrons. This tradition of difficult calculation continues, and is even accentuated, by the physics of the quarks and gluons inside subnuclear particles.
Quantum chromodynamics – the candidate theory of quarks and gluons – can only be handled using powerful computers, and, even then, drastic assumptions must be made to make the calculations tractable. For example, a discrete lattice (several fm across) has to replace the space-time continuum. Normally only the valence quarks, which give the particle its quantum number assignment, can be taken into account (the quenched approximation), and the myriad of accompanying virtual quarks and antiquarks have to be neglected.
The benchmark of lattice QCD is the calculation of particle masses, where encouraging results are being achieved, but physicists are still far from being able to explain the observed spectrum of particle masses. Future progress in understanding subnuclear particles and their interactions advances in step with available computer power.
To point the way forward, the European Committee for Future Accelerators recently set up a panel (chaired by Chris Sachrajda of Southampton) to assess both the computing resources required for this work and the scientific opportunities that would be opened up. The panel’s main conclusions were:
* The future research programme using lattice simulations is a very rich one, investigating problems of central importance for the development of our understanding of particle physics. The programme includes detailed (unquenched) computations of non-perturbative QCD effects in hadronic weak decays, studies of hadronic structure, investigations of the quark-gluon plasma, exploratory studies of the non-perturbative structure of supersymmetric gauge theories, studies of subtle aspects of hadronic spectroscopy, and much more.
* The European lattice community is large and very strong, with experience and expertise in applying numerical methods to a wide range of physics problems. For more than 10 years it has organized itself into international collaborations when appropriate, and these will form the foundation for any future European project. Increased coordination is necessary in preparation for the 10 Tflops generation of machines.
* Future strategy must be driven by the requirements of the physics research programme. We conclude that it is both realistic and necessary to aim for machines of the order of 10 Tflops processing power by 2003. As a general guide, such machines will enable results to be obtained in unquenched simulations with similar precision to those currently found in quenched ones.
* It will be important to preserve the diversity and breadth of the physics programme, which will require a number of large machines as well as a range of smaller ones.
* The lattice community should remain alert to all technical possibilities in realizing its research programme. However, the panel concludes that it is unlikely to be possible to procure a 10 Tflops machine commercially at a reasonable price by 2003, and hence recognizes the central importance of the apeNEXT project to the future of European lattice physics.
Physics was born with ancient feet firmly on the ground, but late in the 19th century the term “astrophysics” crept into use to define the newer quest to understand extra-terrestrial mechanisms as well as terrestrial ones.
At the turn of the millennium a new dictionary term, “cosmophysics”, might have been coined to describe the quest to understand the universe at large as well as its individual components.
In the past 20 years, as the mechanisms of the Big Bang have become increasingly understood, particle physics and cosmology have become inextricably linked. At the same time, new developments in space technology have enabled new experiments, such as AMS and GLAST, to be carried aloft, high above the stifling blanket of the Earth’s atmosphere. These provide new observations and measurements that have increased our understanding considerably.
As well as the physics involved, these studies call for a range of technological expertise to mount precision experiments under harsh conditions.
This development was underlined in a workshop entitled “Fundamental Physics in Space”, which was organized jointly by CERN and the European Space Agency (ESA), and held at CERN on 5-7 April. Although laboratory and space physics are developing along several parallel avenues, the meeting provided a valuable but rare opportunity for laboratory and space physicists to compare notes and discuss topics of common interest.
The workshop, which was initiated by the Board of the Joint Astrophysics Division of the European Physical Society and the European Astronomical Society, followed on from the May 1998 decision by both ESA and CERN to create working groups to study joint activities.
Opening the summary talks at the meeting, CERN director-general Luciano Maiani pointed to the growing overlap between particle and space physics. Recently, both subjects have underlined the important role played by the most invisible aspect of the universe – the vacuum.
ESA director-general Antonio Rodotà recalled the pioneering work carried out in the 1970s that pointed out the need for opening up the physics exploration of space, particularly for precision measurements and the deeper exploration of gravity, which now provide cornerstone missions for the new millennium.
Cosmology is flourishing
What Chandrasekhar once called “the graveyard of astronomy” is now a flourishing field, commented Lodewijk Woltjer of the St Michel Observatory and former ESO director-general, as he commenced his summary of the cosmology sessions. Indeed, judging by the wealth of science presented and the number of new instruments in the pipeline, the field’s future looks bright.
Type Ia supernovae have long been used as “standard candles” to measure distances in space. A measure of their apparent luminosity gives a measure of their distance. However, the method is prone to many errors and different teams can get very different results. Gustav Tammann of Basle explained how, with new corrections for decline rate and colour, the Hubble constant becomes 59 ± 7 km/s per megaparsec, corresponding to the universe being 17 billion years old. “Photometry with the Hubble Space Telescope is working at the limits of what is possible, the main problem being the background,” he said.
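The quoted age is essentially the Hubble time (a back-of-the-envelope check of the numbers, ignoring the detailed expansion history, and using 1 Mpc ≈ 3.09 × 10^19 km):

$$t_H = \frac{1}{H_0} \approx \frac{3.09\times10^{19}\ \mathrm{km}}{59\ \mathrm{km\,s^{-1}}} \approx 5.2\times10^{17}\ \mathrm{s} \approx 17\ \mathrm{billion\ years}.$$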
Another cosmological parameter is omega, the ratio of matter in the universe to the critical level needed to halt the expansion of the universe. The inflation model of the Big Bang predicts that omega should be exactly equal to one – that the universe is “flat”.
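Explicitly (standard definitions, given for orientation): the critical density set by the present expansion rate, and the density parameter, are

$$\rho_c = \frac{3H_0^2}{8\pi G}, \qquad \Omega = \frac{\rho}{\rho_c},$$

so Ω = 1 corresponds to a spatially flat universe, while Ω < 1 means there is too little density to halt the expansion.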
Woltjer summarized current results that suggest that contributions of both radiating and dark matter to the density of the universe give an omega of around one-third. A non-zero cosmological constant or some new form of energy would be needed to make up the difference.
Sidney Bludman from DESY and Penn State showed how quintessence, or negative pressure, could solve this problem. A consequence would be to increase the expansion of the universe with time – an accelerating universe. A non-zero cosmological constant has also been suggested by studies of distant supernovae.
This picture was confirmed by Jean-Loup Puget of Orsay in his round-up of observations of the cosmic microwave background radiation. Results from the Boomerang experiment announced earlier this year give omega as one (±0.3) and suggest a non-zero cosmological constant. Puget looked forward to results from ESA’s Planck satellite, which is due to be launched in 2007.
“That the universe is expanding forever seems to have a certain philosophical appeal for some people,” said Woltjer. “However, I have never really understood this, because our fate won’t be very much different!”
Imaging dark matter
The most exciting cosmology news was that gravitational lensing has now really come of age. Cosmic shear raised a significant amount of interest. Peter Schneider of Bonn showed how gravitational weak lensing can reveal the invisible. His team has discovered a “dark clump” of 10^15 solar masses (assuming a redshift of 0.8) with no optical counterpart, which he believes is the first-ever lensing-detected dark matter cluster.
Schneider was waiting for the results of infrared observations of the region. If confirmed, this technique will have enormous implications for cosmology. “The future is very encouraging,” he said. Indeed, he announced another area where gravitational weak lensing is showing results – measuring the effect of lensing across a large field can help to map the dark matter making up so-called galactic halos. Observations by the Sloan Digital Sky Survey have shown no sign of halo truncation at distances of up to 150 kpc. In fact, says Schneider, galaxies probably don’t really have halos of dark matter at all; what is seen is just a correlation between the galaxies’ positions and the overall large-scale dark matter distribution.
This view is supported by Carlos Frenk of Durham. With the Virgo Consortium, he has carried out simulations of the evolution of matter and dark matter in the universe. His modelling shows dark matter evolving in enormous filaments with galaxies forming at high-density nodes.
Woltjer reminded participants that a lot of assumptions are made before carrying out such simulations, in particular regarding the relationship between gas, dust and stellar objects. “We are still a long way from constructing the universe from first principles,” he commented.
Frenk was optimistic about the future. “Enormous progress has been made in instrumentation over recent years. If the 1980s belonged to the theorists, then the late 1990s most certainly belonged to the experimentalists,” he said.
Future telescopes
The next-generation space telescope (NGST), still on the drawing board, should contribute. At redshifts greater than 5, only 5% of stars have formed. However, “this is a very interesting fraction of stars,” said Frenk. He believes that the NGST will detect primeval galaxies at redshifts of up to 10.
Peter Shaver of ESO reviewed the recent progress in detection techniques, in particular for observations of the first galaxies and quasars. The discovery of the Lyman alpha break in the spectrum of high-redshift objects has caused a revolution over the last five years or so, enabling more and more high-redshift galaxies to be recognized. “We are closing in on the reionization epoch,” he said. In his opinion, the furthest galaxy discovered to date is at a redshift of 5.74. He believes that claims for galaxies at a redshift of 6.68 are yet to be proven.
With the NGST it will be interesting to look at the evolution of galaxies at high redshift, and also at the quasar epoch around redshift 2. NGST will be launched in around 2010. Another useful tool for studying early galaxies, which is to be launched in 2007, is ESA’s Far Infrared and Submillimetre Telescope (FIRST), explained Reinhard Genzel of Garching. These space observations will be paralleled by ALMA, the ground-based millimetre/submillimetre array.
Moving on to another type of radiation altogether, Martin Huber from ESTEC summarized the session on gravitational-wave astronomy. Gravitational waves are ideal probes of the universe because they interact very weakly and carry huge energies. Their existence has long been confirmed indirectly by measurements of the energy loss from binary star systems. However, they have never been detected directly.
Besides the classic resonance detectors, current ground-based detectors include the GEO 600 and TAMA interferometers. The next generation of ground-based detectors, Virgo and LIGO, will improve in sensitivity by a factor of 10. “I am confident that we will detect gravity waves within the next decade,” said Huber. “However, it will be very difficult to pinpoint the sources.”
Most of the sources within the frequency range of the next detectors will be transient. Bernard Schutz of the Einstein Institute, Potsdam, explained that the ideal sources are compact, such as black holes, and repeating, such as rotating binary systems. Ground-based detectors can only observe at frequencies above about 1 Hz because the Earth’s background noise cannot be screened out. Events in this frequency range are rare or weak, such as supernova collapses and compact binary spin-down.
The future is the ESA cornerstone mission, LISA, which is to be built jointly with NASA. This interferometer in space will observe in the low-frequency window below 1 Hz, where emission occurs from many known strong sources, such as massive black holes and compact binary star systems.
An afternoon gravitation session served as a public presentation of the mission. Karsten Danzmann of Hannover gave a taster of the physics to come. “More than 90% of the universe is dark,” he said. “If part of the dark matter clumps, then gravitational wave detectors may be the only way to see it directly.”
Another exciting area is the stochastic gravitational wave background. “Just as the cosmic microwave background radiation shows us the universe when it was 300 000 years old, a gravitational wave background would be a picture of the Big Bang itself – when the universe was perhaps just 10^-24 s old,” said Danzmann. The planned LISA launch date is in 2010. “It is a completely new field,” said Huber. “We should expect the unexpected.”
The other session on gravitation showed how space experiments could really test the physics of gravity. In particular, Pierre Touboul of ONERA and Nicholas Lockerbie of Glasgow talked about two new satellite experiments that are planned to test the equivalence principle, or the universality of free fall. The French team is working on μSCOPE, which is to be launched in around 2003. It hopes to test the equivalence principle to 1 part in 10^15 – an improvement of three orders of magnitude on current experiments. The ESA/NASA STEP mission could be launched in around 2005 and will test to 1 part in 10^18. “String theory gives a natural explanation of why gravity is dynamic without assuming it,” said Thibault Damour of IHES, Bures-sur-Yvette. “In theory, not only is space not rigid but there are also coupling constants that imply a violation of the equivalence principle.”
Accelerators in the sky
There is apparently no end to the mysteries of the heavens – our lifelong acquaintance with puny, everyday mechanisms makes us ill-equipped to understand the mighty forces at work in the depths of the universe.
New telescopes peering into the depths of space from fresh vantage points reveal sources pumping out energy at unimaginable rates. Many of these, whatever they emit and however they are seen, are poorly understood and can be conveniently grouped under the heading “extreme sources”. In his summary, P L Biermann of Bonn said: “The sky contains all this and a lot more.”
Jewels in the intense source crown are the mysterious gamma-ray bursts – now an everyday occurrence. Attempts to explain how so much energy can be released focus on extremely relativistic fireballs. Other fireballs – active galactic nuclei, black holes, etc – are also held to be responsible for X- and gamma-ray fireworks.
While electromagnetic radiation points back to its source, cosmic rays, tangled by intergalactic magnetic fields, do not reveal where they come from. The tip of the mystery cosmic-ray iceberg is now 24 cosmic-ray events that, in principle, should never be seen – their energy is beyond that “allowed” by interactions with the all-pervading cosmic microwave background. How can such extreme energies be produced, and how do the particles elude the background radiation?
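The “allowed” energy in question is the Greisen–Zatsepin–Kuzmin (GZK) cutoff, above which a proton can produce pions on cosmic microwave background photons and so bleeds away energy over long flight paths. A rough, illustrative threshold estimate – assuming a head-on collision with a typical CMB photon, which is a deliberate simplification – lands near 10^20 eV, which is why events above it are so puzzling.

```python
# Order-of-magnitude sketch of the GZK cutoff: the proton energy above which
# photopion production on cosmic microwave background photons opens up,
# p + gamma -> p + pi. A head-on collision and a "typical" CMB photon energy
# are simplifying assumptions used here purely for illustration.

M_PROTON_EV = 938.3e6          # proton mass, eV
M_PION_EV = 135.0e6            # neutral pion mass, eV
K_BOLTZMANN_EV = 8.617e-5      # eV per kelvin
T_CMB = 2.73                   # kelvin

# Mean CMB photon energy is roughly 2.7 kT.
e_photon = 2.7 * K_BOLTZMANN_EV * T_CMB

# Threshold from s >= (m_p + m_pi)^2 with s ~ m_p^2 + 4*E_p*E_gamma (head-on):
e_threshold = M_PION_EV * (2.0 * M_PROTON_EV + M_PION_EV) / (4.0 * e_photon)
print(f"{e_threshold:.1e} eV")   # ~1e20 eV - hence the surprise at events above it
```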
Cosmic rays – once the point of entry for particle physics – are now a new point of departure. The universe has to contain “radiogalaxy hot spots” – cosmic accelerators larger than a typical galaxy, to whirl charged particles to such “astronomical” velocities.
That most of the universe is composed of invisible dark matter is perhaps the ultimate physics paradox. Attempts to uncover dark matter and to resolve this paradox are a major theme in astrophysics research, both theoretical and experimental.
At the CERN/ESA meeting, Alvaro de Rujula of CERN summarized the dark matter sessions, where direct searches for exotic particles, such as axions (“aaxions”, according to de Rujula), have yet to turn up positive evidence. More promising is the area of gravitational lensing. Objects can be invisible but still exert a gravitational pull, which can disturb visible light in transit.
One specialist area is gravitational microlensing, which looks for the effects of otherwise invisible objects as they cross the line of sight to a more distant luminous object. Interpreting this mass of results is still difficult, but de Rujula suggested that, while massive astrophysical compact halo objects (MACHOs) are out of favour as dark matter, weakly interacting massive particles (WIMPs) are coming in.
The DAMA (sodium iodide) detector at Gran Sasso has reported an annual signal variation that has been interpreted as possible evidence for galactic WIMP particles. Such a signal is not seen by the Cryogenic Dark Matter Search (CDMS) experiment at Stanford using silicon and germanium sensors.
This part of the programme also covered neutrino astronomy. As well as providing a new window on the universe, neutrino astrophysics has offered evidence for neutrino mixing, and therefore for non-zero neutrino mass. A new understanding of neutrinos would shed fresh light on the basic interactions of nature.
The limited seasonal and diurnal variation in solar neutrino signals provides important limits on neutrino-mixing mechanisms. The big Superkamiokande detector in Japan dominates the world data on extra-terrestrial neutrinos and has now intercepted 17 terrestrial neutrinos fired from the KEK laboratory, some 250 km distant – the first time that terrestrial neutrinos have been tracked over such a long path.
Extra-terrestrial neutrino physics “has a long past and a brilliant future”, ventured de Rujula.
In particle physics the continual demands to handle and analyse increased data rates and to attain greater precision provide fertile ground for detector innovation.
Michel Spiro of Saclay, chairman of CERN’s LEP Experiments Committee, summarized the session covering the use and potential use in space experiments of instrumentation developed for high-energy physics.
Innovations in instrumentation
Detectors in space “see” X-ray and gamma radiation before it is absorbed by the atmosphere. Highly sensitive cryogenic X-ray detectors will be a useful new addition to the sensor armoury. The massive R&D programmes for the major experiments at CERN’s future LHC collider have already yielded an impressive array of techniques – pixel detectors as “eyes” and scintillators for energy measurement – which could go on to find applications in space. Time projection chambers are another means of providing remarkable images of physics beyond the atmosphere.
As well as the detectors, read-out mechanisms too are developing quickly. Sensors and chips can be dissociated and exploit complementary technologies. Photomultiplier technology has received considerable impetus from experiments studying neutrinos.
The LHC experiments are also blazing new trails in data acquisition and handling (see “Grid” feature) and in semiconductor technology.
Spiro highlighted several new flagship space-borne experiments exploiting particle physics know-how – the AMS detector for the Space Station and the GLAST telescope, which is due for launch in 2005 – while the SuperNova/Acceleration Probe (SNAP) and Extreme Universe Space Observatory (EUSO) proposals could continue this tradition.
Gert Viertel of ETH Zurich summarized current space-borne instrumentation. Here the requirement for very high timing accuracy has driven the development of precise atomic clocks. Pixel detectors already have a distinguished track record of astronomical measurements. Superconducting tunnel junctions are poised to begin a new chapter of space research.
Away from the detectors, the highly successful GEANT simulation software developed for particle physics is finding increasing use in astrophysics and astronomy.
While particle physics is a fertile breeding ground for new detector technology, it is not the only variable in the equation. Space-borne experiments, which require years of fruitful operation with minimal or no manual maintenance and intervention, have their own special requirements.
This new contact between particle physicists and cosmophysicists is already paying dividends on the instrumentation front. CERN’s “recognized experiment” status now covers a range of studies that do not use accelerator beams, but ensure that the laboratory remains a focal point of this physics. At the start of the millennium, the rapidly maturing field of cosmophysics is poised to make a major contribution to our knowledge and understanding of the universe.
A nucleus is like oranges stacked in a bag – with discernible “fruit” or nucleons (protons and neutrons) and spaces in between. Crush the bag and the oranges dissolve into juice, which fills the reduced space. A pip that once belonged to a particular orange is now free to move anywhere. In the same way, when a nucleus is crushed, it dissolves into a plasma of quarks and gluons. These, once imprisoned inside a particular nucleon, are free to move inside a much larger volume.
In ordinary nuclear matter a nucleus consists of nucleons with a vacuum between them. Each nucleon has a volume of about 2 fm^3 and contains three valence quarks together with a cloud of gluons – the carriers of the strong nuclear force that binds the quarks in the nucleon and the nucleons in the nucleus.
In physics a phase diagram shows the boundaries between different types of the same substance, such as steam, water and ice, depicting where boiling and freezing occur. Boiling and freezing are very dependent on external conditions, such as pressure and temperature. For nuclear matter the phase diagram shows the boundary between normal nuclear matter, composed of nucleons, and the quark-gluon plasma (QGP).
Normal nuclear matter is situated at temperature zero and a baryochemical potential (a measure of the nucleon density) of about 765 MeV. The nucleon density is about 0.145 fm^-3 and the energy density is about 0.135 GeV fm^-3.
Compressing nuclear matter, so that the nucleons start to interpenetrate and overlap (by at least 3%), makes the intervening vacuum disappear. Each nucleon dissolves and its constituent quarks and gluons are free to move inside a larger volume, which has become very dense by compression. A new state of matter is now formed – the deconfined QGP – with a critical nucleon density and an energy density of greater than 0.72 fm^-3 and 0.7 GeV fm^-3 respectively. Liberating quarks at zero temperature therefore requires matter and energy densities at least five times as large as those of normal nuclear matter.
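A quick arithmetic check of the factor-of-five statement, using only the densities quoted above:

```python
# Quick check of the "at least five times" statement, using the densities
# quoted in the text for normal nuclear matter and for the deconfinement
# threshold at zero temperature.

NUCLEON_DENSITY_NORMAL = 0.145   # fm^-3
ENERGY_DENSITY_NORMAL = 0.135    # GeV fm^-3

NUCLEON_DENSITY_CRITICAL = 0.72  # fm^-3
ENERGY_DENSITY_CRITICAL = 0.7    # GeV fm^-3

print(NUCLEON_DENSITY_CRITICAL / NUCLEON_DENSITY_NORMAL)  # ~5.0
print(ENERGY_DENSITY_CRITICAL / ENERGY_DENSITY_NORMAL)    # ~5.2
```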
How can this be achieved? The only known way is by compressing and heating nuclear matter, by slamming a very-high-energy beam of nuclei onto fixed-target nuclei, or by bringing two counter-rotating nuclear beams into collision. This was the objective of the heavy-ion fixed-target programme at CERN’s SPS and Brookhaven’s AGS accelerators, and it will also be the aim of the upcoming RHIC and LHC colliders at Brookhaven and CERN respectively.
Theoretical statistical models have been used to analyse and evaluate the data from nucleus-nucleus interactions. Such models produce a very satisfactory representation of the experimental data, verifying that the statistical approach is applicable. However, little fundamental insight is gained into the actual dynamics of the collision.
The important objective is to ascertain where on the phase diagram the original thermal source (fireball) is situated – in the domain corresponding to a gas of nucleons or in that of the QGP. For this a simple statistical analysis is inadequate. Only if interactions among the multitude of emitted particles are taken into consideration can the description of a possible change of phase into the QGP be envisaged.
Statistical approach
The Statistical Bootstrap Model (SBM), introduced by Rolf Hagedorn at CERN some 35 years ago, is a statistical approach that incorporates the effects of interactions in a self-consistent way. A recent development and extension of this model, the so-called SSBM (S for strangeness), can define the phase diagram and the limits of nuclear matter, as shown in the diagram (Kapoyannis, Ktorides and Panagiotou, in press). This boundary incorporates the largest possible and physically meaningful value of the critical temperature at zero baryochemical potential (T0 = 183 MeV), so as to have the largest possible nucleon domain and avoid over-optimistic interpretations.
SSBM-based analysis of data from the NA35 experiment at CERN’s SPS has shown compelling evidence that head-on collisions of even light nuclei, such as sulphur-32, at 200 GeV/nucleon have attained the critical conditions, thereby allowing us to probe the deconfined quark state.
The overall situation resulting from the data analysis is depicted in the diagram. It is found that the sulphur-sulphur interaction is situated 76% inside the QGP domain, beyond the nucleon phase, while the proton-antiproton collision, measured in the UA5 experiment at CERN, is well within the nucleon region, as expected.
A second indication that the QGP phase has been reached in the sulphur-sulphur collisions is the substantial excess of pion (entropy) production, the explanation for which calls for a contribution of at least 30% from a high-entropy phase, such as that of the QGP.
A third important sign is the achievement of thermal and chemical equilibrium conditions (equilibrium between the different quark species produced) in the initial stage of the nuclear interaction, materializing at a temperature of at least 177 MeV, a baryochemical potential of more than 252 MeV and a strangeness saturation (specifying the relative strange-quark production in chemical equilibrium) close to unity. Thermal and chemical equilibrium conditions are expected and required for the transition to the QGP. Finally, the energy density created in these interactions is at least 2 GeV fm^-3 – well above the critical value for deconfinement of about 1 GeV fm^-3.
These observations, together with several other intriguing clues, give rather definitive indications that the door to the quark-gluon plasma has been opened by the SPS heavy-ion programme at CERN.
Scientific achievements at CERN’s ISOLDE radioactive beam facility, following its move to the laboratory’s PS booster in 1992, were the subject of a weekend meeting at CERN in March. About 100 physicists from around the world discussed the range of ISOLDE research, from exotic nuclei far from stability to nuclear astrophysics, and from fundamental symmetries to solid-state physics and biomedical applications.
The opening presentations covered the technology of radioactive beam production. Orsay’s Michel de Saint Simon and Helge Ravn of CERN each reviewed a high-resolution isotope separator designed to provide good isobar separation and consequently contaminant-free beams of proton-rich or neutron-rich isotopes for experiments starting this year. Jyväskylä’s Arto Nieminen then explained how the cooling and bunching of low energy radioactive ion beams has been achieved using gas-filled radiofrequency quadrupoles, both at ISOLDE and at his home university. This offers a new way of improving beam quality for future experiments.
REX-ISOLDE
One of the most exciting new developments for the near future is REX-ISOLDE, a post-accelerator for the ISOLDE facility that will accelerate radioactive ions up to 2.2 MeV/nucleon. Dieter Habs of Munich, spokesperson of the first experiment at REX-ISOLDE, presented the physics outlook. REX-ISOLDE’s most novel and challenging aspect is “charge breeding”. This is achieved in two stages: the required isobars are separated and bunched in a Penning trap (REXTRAP), before being ejected into an electron-beam ion source (EBIS) for charge-state breeding. The first tests at REXTRAP and EBIS have already demonstrated the expected performance with more than 20% transmission and excellent overall stability. Commissioning experiments are scheduled to begin in the autumn of this year.
The technical aspects having been covered, the workshop then moved on to research topics, with theoretical overviews from Alfredo Poves of Madrid and Paul-Gerhard Reinhard of Erlangen. Among the topics for investigation are core nuclear-physics issues such as neutron halo systems, in which a nucleus consists of a central core with one or more loosely bound neutrons orbiting it, and the mapping of the so-called neutron dripline of neutron-rich nuclei at the limit of stability. In astrophysics, for example, ISOLDE’s potential for elucidating the rapid neutron capture (r-process) and rapid proton capture (rp-process) paths of nucleosynthesis was emphasized.
Turning to experimental nuclear structure, Karsten Riisager of Aarhus summarized the current understanding of neutron halos, where many experiments on neutron-rich lithium and beryllium ions at ISOLDE have contributed significantly. He also outlined future opportunities at REX-ISOLDE for looking beyond the driplines using low-energy nuclear reactions, extending work already done at ISOLDE. On the proton-rich side of the valley of stability, experiments employing beta-delayed multiparticle decays have probed highly unbound states of light nuclei. On the neutron-rich side, recent decay studies of very neutron-rich sodium and aluminium isotopes, reported by Jyväskylä’s Saara Nummela, have provided new evidence for shell inversion at N = 21 and suggested the disappearance of the “magicity” of the neutron number N = 20, which had long been regarded as a good magic number in the valley of stability. This reordering of the nuclear shell structure is believed to result from the strong modification of effective nucleon interactions in nuclei very far from the valley of stability.
The physics of ground-state properties of exotic nuclei was also the subject of much attention. Munich’s Georg Bollen reported on recent progress on mass measurements by the ISOLTRAP and MISTRAL experiments, where a new era in high precision is dawning. Frank Herfurth of CERN described new data on the mass of the short-lived argon-33 isotope with a precision of 10^-7, exceeding all previous measurements. Rainer Neugart of Mainz gave an extensive review of a highly successful ISOLDE programme investigating ground-state properties of nuclei obtained by optical laser spectroscopy. He cited as an example a recent experiment that has provided information on moments and charge radii of neon isotopes from the proton-halo nucleus neon-17 up to the highly neutron-rich neon-28.
Nathal Severijns of Leuven gave a lively presentation covering fundamental physics beyond the Standard Model. After discussing ongoing work at the low-temperature nuclear orientation facility at ISOLDE (NICOLE), which allows the angular dependence of radioactive decay to be measured, he turned his attention to future opportunities for studying scalar contributions in the weak interaction by employing a special ion trap measuring technique that is under development at Leuven and ISOLDE.
Solid-state physics at ISOLDE was reviewed by Manfred Deicher of Konstanz. Experiments at ISOLDE can be classified into three groups: those exploiting the emission angle of radiation from implanted radioactive isotopes; hyperfine interaction spectroscopy; and labelling with radioactive isotopes. One advantage of the emission channeling method is that suitable isotopes of nearly all chemical elements exist. This has already proved useful for a large number of semiconductors. Information on the lattice sites of copper and erbium in silicon, for example, has been obtained, providing a view into their structural and dynamical properties. High-Tc superconductors of the mercury family have also been investigated with a hyperfine spectroscopy technique. The oxygen content in the charge reservoir layer was found to determine the critical temperature.
In the 1990s the availability of many different radioactive isotopes as pure ion beams at ISOLDE triggered a new era in methods for investigating the optical and electronic properties of solids, especially in the field of semiconductor physics. Extremely sensitive spectroscopic techniques, such as deep-level transient spectroscopy, photoluminescence and the Hall effect, gain a new quality by using radioactive isotopes: owing to their decay, the chemical origin of the observed electronic and optical behaviour of a specific defect or dopant can be identified unambiguously. Solid-state physicists are now looking forward to the implantation of their probes deeper into the crystals with the help of REX-ISOLDE. This will open up new possibilities for studying certain materials in greater depth than before, and for adding new materials such as ferroelectrics to ISOLDE’s repertoire.
This workshop revealed a rich spectrum of science at CERN’s veteran nuclear physics facility. New ideas in combination with ongoing effort in target and ion-source development promise a rich future for ISOLDE.
The discovery of mysterious gamma-ray bursters in the late 1960s opened up a new chapter in astronomy. Now gamma-ray astronomy has taken another surprising turn with the revelation that many of the previously unidentified high-energy gamma-ray sources in our galaxy, the Milky Way, comprise a whole new class of mysterious objects that “shine” continuously instead of coming in bursts.
The known gamma-ray universe contains many yet-unidentified gamma-ray sources, listed in a 271-source catalogue compiled by the Energetic Gamma Ray Experiment Telescope (EGRET) aboard NASA’s Compton Gamma Ray Observatory spacecraft. Scientists have struggled to associate these unidentified sources with known objects emitting other types of light.
Of the unidentified gamma sources in our galaxy, about half lie in a narrow band along the galactic plane. These may be well known classes of objects that simply shine too faintly in other types of light to be identified – gamma rays pass through intervening material much more easily than other types of radiation.
The other half of the unidentified galactic gamma sources – the new class – are closer to Earth. These lie just off the Milky Way plane and seemingly follow the Gould Belt – a ribbon of nearby massive stars and gas clouds that winds through the Milky Way plane. However, the mechanisms powering these gamma-ray emitters, whether they shine in bursts or more continuously, remain a mystery.