While quantum chromodynamics (QCD) is considered to be the theory of strong interactions, it is very difficult to use it to make predictions of processes over distances of the order of the size of hadrons. The problem is that the coupling, which determines the strength of the interaction, becomes so large at such “large” distances that it is likely many gluons participate and the calculations diverge. So for the time being large-distance QCD remains one area where experiment can play a leading role and where new and unexpected phenomena may be discovered.
To consider some of the experimental options for the future in this area, Fermilab is hosting a workshop on “The Future of QCD at the Tevatron” on 20-22 May 2004. Its purpose is to evaluate the status of QCD in 2009, when the CDF and D0 experiments at the Tevatron are currently scheduled for completion. The Tevatron is at present the world’s highest energy hadron collider, where CDF and D0 are probing QCD at the smallest distances (about 1/10,000th the size of the proton). This frontier will be taken over in 2007 by the Large Hadron Collider (LHC), but many studies, particularly of large-distance QCD, will remain to be done.
The workshop will consider such questions as: what desirable studies are not being done but could be added to the present research programme? What studies will be complementary to the QCD physics that will be performed at the LHC and elsewhere? What can (and cannot) the future experiment BTeV, scheduled to start data taking at the Tevatron in 2009, do? The answers to these questions could provide the basis for developing a case for additional experimentation beyond 2009 at the Tevatron, perhaps using the CDF or D0 detectors with modest upgrades.
The Standard Model of particle physics is arguably one of the greatest achievements in physics in the 20th century. Within this framework the electroweak interactions, as introduced by Sheldon Glashow, Abdus Salam and Steven Weinberg, are formulated as an SU(2) x U(1) gauge field theory with the masses of the fundamental particles generated by the Higgs mechanism. Both of the first two crucial steps in establishing experimentally the electroweak part of the Standard Model occurred at CERN. These were the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and only a decade later the discovery by the UA1 and UA2 collaborations of the W and Z gauge bosons in proton-antiproton collisions at the converted Super Proton Synchrotron.
Establishing the theory at the quantum level was the next logical step, following the pioneering theoretical work of Gerard ‘t Hooft and Martinus Veltman. Such experimental proof is a necessary requirement for a theory describing phenomena in the microscopic world. At the same time, performing experimental analyses with high precision also opens windows to new physics phenomena at much higher energy scales, which can be accessed indirectly through virtual effects. These goals were achieved at the Large Electron Positron (LEP) collider.
LEP also provided indirect evidence for the fourth step in this process, establishing the Higgs mechanism for generating mass. However, the final word on this must await experimentation in the near future at the Large Hadron Collider.
The beginnings of LEP
Before LEP started operating in 1989, the state of the electroweak sector could be described by a small set of characteristic parameters. The masses of the W and Z bosons had been measured to an accuracy of a few hundred MeV, and the electroweak mixing parameter sin²θW had been determined at the percent level. This accuracy allowed the top-quark mass to be predicted at 130 ± 50 GeV, but no bound could be derived on the Higgs mass.
The idea of building such an e+e– collider in the energy region up to 200 GeV was put forward soon after the first highly successful operation of smaller machines in the early 1970s at energies of a few GeV. The physics potential of such a high-energy facility was outlined in a seminal CERN yellow report (figure 1).
LEP finally started operation in 1989, equipped with four universal detectors, ALEPH, DELPHI, L3 and OPAL. The machine operated in two phases. In the first phase, between 1989 and 1995, 18 million Z bosons were collected, while in the second phase, from 1996 to 2000, some 80,000 W bosons were generated at energies gradually climbing from the W-pair threshold to the maximum of 209 GeV. The machine performance was excellent at all the energy steps.
Phase I: Z physics
The Z boson in the Glashow-Salam-Weinberg model is a mixture of the neutral isospin SU(2) and the hypercharge U(1) gauge fields, with the mixing parameterized by sin²θW. The Z boson interacts with vector and axial-vector currents of matter. The Z-matter couplings, including the mixing angle, are affected by radiative corrections so that high-precision analyses allow both tests at the quantum level and extrapolations to new scales of virtual particles.
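At tree level the structure referred to here can be written compactly; for a fermion f with electric charge Q_f and third isospin component I_3^f, the vector and axial-vector couplings to the Z are (in a common normalization)
\[
v_f \;=\; I_3^{f} - 2\,Q_f \sin^2\!\theta_W, \qquad a_f \;=\; I_3^{f},
\]
and it is precisely because sin²θW enters the vector couplings that the asymmetries and the tau polarization measure the mixing angle so directly; the radiative corrections then shift these couplings to "effective" values.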
The properties of the Z boson and the underlying electroweak theory were studied at LEP by measuring the overall formation cross-section, the forward-backward asymmetries of the leptons and quarks, and the polarization of tau leptons. Outstandingly clear events were observed in each of the four detectors (see figure 2). As a result, the experimental analyses of the Z line-shape (see figure 3), of the decay branching ratios and of the asymmetries were performed with a precision unprecedented in high-energy experiments (see equation 1 for all Z data, including SLD).
Thus, the electroweak sector of the Standard Model successfully passed the examination at the per-mille level, as highlighted by global analysis of the electroweak mixing parameter sin²θW. This is truly in the realm where quantum theory is the proper framework for formulating the laws of nature. Figure 4 shows the observables that were precisely measured at LEP. The picture is uniform in all the observables, with deviations from the average line a little above and below 2σ only in the forward-backward asymmetry of the b-quark jets, and the left-right polarization asymmetry measured at the Stanford Linear Collider facility.
However, beyond this most stringent test of the electroweak theory itself, Z physics at LEP allowed important conclusions to be drawn on several other aspects of the Standard Model and potential physics beyond.
The first of these concerned the three families of leptons in the Standard Model. The number of light neutrinos could be determined by comparing the Z width as measured in the Breit-Wigner line-shape with the visible lepton and quark-decay channels. The ensuing difference determines the number of light neutrino species to be three: Nν = 2.985 ± 0.008. Thus, LEP put the lid on the Standard Model with three families of matter particles.
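Schematically, and assuming lepton universality for the sketch, the counting works as follows (Γ_Z is the total width from the line-shape, Γ_had and Γ_ll the measured hadronic and single-lepton widths, and Γ_νν̄ the Standard Model prediction for one neutrino species):
\[
N_\nu \;=\; \frac{\Gamma_{\rm inv}}{\Gamma_{\nu\bar\nu}^{\rm SM}}
\;=\; \frac{\Gamma_Z - \Gamma_{\rm had} - 3\,\Gamma_{\ell\ell}}{\Gamma_{\nu\bar\nu}^{\rm SM}},
\]
so the invisible width left over once the visible channels are subtracted is shared out in units of the predicted single-neutrino width, yielding the value just below three quoted above.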
The physics of the top quark was another real success story at LEP. Not only could the existence of this heaviest of all quarks be predicted from LEP data, but the mass could also be pre-determined with amazing accuracy from the analysis of quantum corrections – a textbook example of the fruitful co-operation of theory and experiment. By analysing rate and angular asymmetries in Z decays to b-quark jets at LEP and complementing this set with production rates at the lower energy collider PETRA, the isospin of the b-quark could be uniquely determined (figure 5). From the quantum number I_3^L = -1/2 of the left-handed b-quark, the existence of an isospin +1/2 partner to the bottom quark could be derived conclusively – in other words, the top quark.
Z physics at LEP has also contributed to our knowledge of quantum chromodynamics (QCD), the theory of strong interactions in the complete SU(3) x SU(2) x U(1) Standard Model. As was already apparent from the study of PETRA jets at DESY, the clean environment of electron-positron collisions enables these machines to be used as precision tools for studying QCD. At LEP several remarkable observations contributed to putting QCD on a firm experimental basis.
Firstly, with the measurement of the QCD coupling αs = 0.1183 ± 0.0027 at the scale MZ and the jet analysis of the running from low energies at PETRA to high energies at LEP, the validity of asymptotic freedom could be demonstrated in a wonderful way (see figure 6a). Secondly, the observation of the three-gluon self-coupling in four-jet final states of Z-boson decays enabled QCD to be established as a non-abelian gauge theory (see figure 6b). With the measured value CA = 3.02 ± 0.55, the strength of the three-gluon coupling agrees with the predicted value CA = 3 for non-abelian SU(3), and is far from the value of zero in any abelian “QED type” field theory without self-coupling of the gauge bosons. Thirdly, in the same way as couplings run, quark masses change when weighed at different energy scales, induced by the retarded motion of the surrounding gluon cloud. This effect was observed in a unique way by measuring the b-quark mass at the Z scale (see figure 6c).
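The running demonstrated in figure 6a follows the one-loop renormalization-group prediction; schematically, with n_f active quark flavours,
\[
\alpha_s(Q^2) \;=\; \frac{\alpha_s(M_Z^2)}{1 + \dfrac{33 - 2 n_f}{12\pi}\,\alpha_s(M_Z^2)\,\ln\!\dfrac{Q^2}{M_Z^2}},
\]
so the coupling falls logarithmically as the scale Q increases – asymptotic freedom – dropping from roughly 0.14 at PETRA energies of around 35 GeV to the value at MZ quoted above.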
There is one further triumph of the Z-physics programme. When extrapolating the three couplings associated with the gauge symmetries SU(3) x SU(2) x U(1) in the Standard Model to high energies, they approach each other but do not really meet at the same point. This is different if the particle spectrum of the Standard Model is extended by supersymmetric partners. Independently of the mass values, so long as they are in the TeV region, the new degrees of freedom provided by supersymmetry make the couplings converge to an accuracy close to 2% (see figure 7). This opens up the exciting vista that the electromagnetic, weak and strong forces of the Standard Model may be unified at an energy scale close to 10¹⁶ GeV, while at the same time giving support to supersymmetry, a symmetry that may be intimately related to gravity, the fourth of the forces we observe in nature.
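A minimal one-loop sketch shows why the particle content matters. Each inverse coupling runs linearly in the logarithm of the scale,
\[
\alpha_i^{-1}(Q) \;=\; \alpha_i^{-1}(M_Z) \;-\; \frac{b_i}{2\pi}\,\ln\frac{Q}{M_Z}, \qquad i = 1, 2, 3,
\]
where the slopes b_i are fixed by the spectrum of particles circulating in the loops (in the usual SU(5) normalization of the hypercharge coupling, the b_i are 41/10, -19/6 and -7 in the Standard Model, and 33/5, 1 and -3 with supersymmetric partners at the TeV scale). Only the supersymmetric slopes make the three straight lines of figure 7 intersect, to the accuracy quoted, near 10¹⁶ GeV.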
Gauge field theories appear to be the theoretical framework within which the three fundamental particle forces can be understood. The gauge symmetry theory was introduced by Hermann Weyl as the basic symmetry principle of quantum electrodynamics; the scheme was later generalized by C N Yang and R L Mills to non-abelian gauge symmetries, before being recognized as the basis of the (electro)weak and strong interactions.
One of the central tasks of the LEP experiments at energies beyond the W-pair threshold was the analysis of the electroweak three-gauge-boson couplings, predicted in form and magnitude by the gauge symmetry. A first glimpse was also caught of the corresponding four-boson couplings.
Charged W+W– pairs are produced in e+e– collisions by three different mechanisms: neutrino exchange, and photon- and Z-boson exchanges. From the steep increase of the excitation curve near the threshold and from the reconstruction of the W bosons in the leptonic and hadronic decay modes, the mass MW and the width ΓW can be reconstructed with high precision (see equation 2).
This value of the directly measured W mass is in excellent agreement with the value extracted indirectly from radiative corrections.
Any of the three production mechanisms for W+W– pairs, if evaluated separately, leads to a cross-section that rises indefinitely with energy. However, the amplitudes interfere destructively as a result of the gauge symmetry, and the final cross-section is damped for large energies. The prediction of gauge cancellations is clearly borne out by the LEP data (see figure 8), thus confirming the crucial impact of gauge symmetries on the dynamics of the electroweak Standard Model sector in a most impressive way.
The role of the gauge symmetries can be quantified by measuring the static electroweak parameters of the charged W bosons, i.e. the monopole charges (gW), the magnetic dipole moments (µW) and the electric quadrupole moments (qW) of the W bosons coupled to the photon and Z boson. For the photon coupling gW = e, µW = 2 x e/2MW = e/MW and qW = -e/MW², and analogously for the Z coupling. These predictions have been confirmed experimentally within a margin of a few percent.
Studying the quartic (four-boson) couplings requires three-boson final states. Some first analyses of W+W–γ final states bounded any anomalies to less than a few percent.
The fourth step in establishing the Standard Model experimentally – the search for the Higgs particle – could not be completed by LEP. Nevertheless, two important results could be reported by the experiments. The first of these was to estimate the mass of the Higgs when acting as a virtual particle. By emitting and reabsorbing a virtual Higgs boson, the masses of electroweak bosons are slightly shifted. In parallel to the top quark, this effect can be included in the ρ parameter. With Δρ ~ GF MW² log(MH²/MW²), the effect is however only logarithmic in the Higgs mass, so that the sensitivity is reduced considerably. Nevertheless, from the celebrated “blue-band plot”, a most probable value of about 100 GeV in the Standard Model, though with large error, is indicated by evaluating the entire set of established precision data (see figure 9). An upper bound close to 200 GeV has been found in the analysis shown in equation 3a.
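The contrast with the top quark explains why the Higgs mass is so much harder to pin down indirectly. In a rough sketch (leading terms only, with the usual numerical factor for the top contribution),
\[
\Delta\rho_{\rm top} \;\simeq\; \frac{3\,G_F\,m_t^{2}}{8\sqrt{2}\,\pi^{2}},
\qquad
\Delta\rho_{\rm Higgs} \;\propto\; -\,G_F\,M_W^{2}\,\ln\frac{M_H^{2}}{M_W^{2}},
\]
the top mass enters quadratically while the Higgs mass enters only through a logarithm, so even a factor-of-two change in MH moves the precision observables far less than a comparable change in mt would.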
Thus, in the framework of the Standard Model and a large class of possible extensions, LEP data point to a Higgs mass in the moderately small, intermediate mass range. This is corroborated by individual analyses of all the observables, except the forward-backward asymmetry of b-jets. (This indirect evidence for a light Higgs sector is complemented by indirect counter-evidence against a large class of models constructed for generating mechanisms of electroweak symmetry breaking by new strong interactions.)
The direct search for the real production of the Higgs particle at LEP through the “Higgs-strahlung” process, e+e–→ZH, set a stringent lower limit on the mass of the particle in the Standard Model (see equation 3b).
However, we have been left with a 1.7σ effect for Higgs masses in excess of 115 GeV, fuelled by the four-jet channel in one experiment. “This deviation, although of low significance, is compatible with a Standard Model Higgs boson in this mass range, while also being in agreement with the background hypothesis.” (LEP Higgs Working Group.)
LEP’s legacy
Based on the high-precision measurements by the four experiments, ALEPH, DELPHI, L3 and OPAL, and in coherent action with a complex corpus of theoretical analyses, LEP achieved an impressive set of fundamental results, the traces of which will be imprinted in the history of physics. LEP firmly established essential elements of the Standard Model at the quantum level. It provided indirect evidence for the existence of a light Higgs boson of the type required by the Standard Model. The extrapolations of the three gauge couplings measured at LEP point to the grand unification of the individual gauge interactions at a high-energy scale – compatible with the supersymmetric extension of the Standard Model in the TeV range.
In addition, the precision analyses performed at LEP probed many physics scenarios beyond the Standard Model, constraining their parameters in ranges from the upper LEP energy up to the TeV and multi-TeV scales. These studies have led to a large number of bounds on masses of supersymmetric particles, masses and mixings of novel heavy gauge bosons, scales of extra space-time dimensions, radii of leptons and quarks, and many other examples.
•The figures and the experimental numbers are from the four LEP experiments, the LEP Electroweak Working Group, the LEP Higgs Working Group, G Altarelli, S Bethke, D Haidt, W Porod, D Schaile and R Seuster.
This article is based on a talk given by Peter Zerwas at the symposium held at CERN in September 2003 entitled “1973: neutral currents, 1983: W± and Z0 bosons. The anniversary of CERN’s discoveries and a look into the future.” The full proceedings will be published as volume 34 issue 1 of The European Physical Journal C. Hardback ISBN: 3540207503.
Foremost among the many open questions in nuclear physics is the determination of the limits of the basic properties of the nucleus. For example, just what is the heaviest nuclear system possible and into what forms does it distort itself? When it comes to nuclear size, measurements have led to the astonishing discovery that the lightest nuclei have a curious tendency to puff themselves up in importance, imitating their bigger (and heavier) relatives. The most famous example is the nuclide 11Li, which has a radius almost equal to that of 208Pb and yet is 20 times lighter. 11Li has such a surplus of neutrons – eight, for only three protons – that the last two are pushed far from the core, forming a so-called halo. This nuclide, which still defies theoretical description, even by the most advanced nuclear models, has recently come under intense scrutiny at the ISOLDE facility at CERN.
The heaviest elements of the periodic table, now reaching at least to Z = 115, have been discovered recently at JINR in Dubna and at GSI in Darmstadt. They are located well beyond the heaviest known stable elements of lead (Pb, Z = 82) and bismuth (Bi, Z = 83), across the gulf of spontaneously fissioning actinides, which is dotted by a small archipelago consisting of long-lived thorium (Th, Z = 90) and uranium (U, Z = 92). The new region being explored in Darmstadt and Dubna is known, naturally, as the “island” of superheavy elements. [The naming of these elements is an issue in itself, which is subject to stringent verification by the International Union of Pure and Applied Chemistry (IUPAC). At a recent ceremony the discoverers of Z = 110 were granted their wish to name their foundling darmstadtium (Ds) after the city of their institute, GSI.]
Shape shifters
Another example of nuclear extremes concerns shape and is a consequence of nuclear deformation – the departure from a normal, spherical shape. Some nuclides, particularly those with unbalanced proton-to-neutron ratios, are more comfortable assuming some type of contortion, typically a cigar shape (prolate) or a disc shape (oblate). A recent trend in experiments in nuclear physics was to create nuclei with the highest possible angular momentum. The consequence of so much spin was a nucleus that was so distorted it was classed as superdeformed. Such nuclei were studied by a very special type of spectroscopic footprint: a comb-like energy spectrum produced by a series of cascading gamma-ray decays as the whirling nucleus shed its enormous excess of rotational energy (Walker and Dracoulis 1999).
CERN’s online isotope mass-separator facility, ISOLDE, does not have the beam energy necessary to produce either superdeformed or superheavy nuclides. However, ISOLDE does have the capacity to produce nuclides of another superlative character – the superlarge. To do this ISOLDE must be pushed to its limit: the neutron drip line (see figure 1). Imagine producing heavier isotopes of an element by adding neutrons to a given number of protons to the point where the neutron-saturated nucleus cannot hold any more. Like a water-soaked sponge, the nucleus drips neutrons.
Nuclides at the drip line exhibit behaviour that is unlike that of stable or even mildly radioactive species, and as such warrant special study. One interesting example is when loosely bound neutrons stray from a more equilibrated cohesive core to form what are called halo nuclei. These wayward neutrons considerably extend the nuclear radius, making it much larger than a nucleus in which the neutrons do not so greatly outnumber the protons – hence the term “superlarge”.
Superlarge nuclides are a conundrum to theorists as the halo radii are in fact larger than the range of the strong interaction that binds the protons and neutrons within the nucleus. The case of 11Li, where the halo is formed from two neutrons, is particularly curious (see figure 2). Not only is the two-neutron subsystem unbound, but so is the 10Li subsystem, which consists of a 9Li core plus a neutron – an example of a nucleus that drips one neutron. So if one of the halo neutrons is removed from 11Li then the other comes away too. These systems have been dubbed Borromean, after the heraldic symbol of the Borromeo family of Italian nobility: three rings connected in such a way that the removal of any one ring frees the other two (see figure 3).
In 2003 two experiments at ISOLDE were devoted to the most important fundamental properties of 11Li – its size and its weight. In fact these two quantities are related, as the binding energy (from the mass) determines the radial extent of the system. But given the particular behaviour of the outer, valence neutrons, the recipe is not so straightforward.
Mass measurements and laser spectroscopy
The MISTRAL experiment recently measured the mass of 11Li. MISTRAL is a precision mass spectrometer that is particularly adapted to measurements of very-short-lived nuclei, and with a half-life of only 8.6 ms 11Li was extremely well suited to MISTRAL’s rapid measurement technique. Though previously known, the mass value was considerably improved and in fact slightly modified by about 70 keV. While this is small compared with the total mass of the system of around 11 GeV, it is significant with respect to the two-neutron binding energy of only 300 keV (Bachelet 2004).
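The connection between mass and size can be made explicit through the two-neutron separation energy, which by definition follows from the measured masses (m_n is the neutron mass; the 300 keV figure is the value quoted above):
\[
S_{2n}(^{11}\mathrm{Li}) \;=\; M(^{9}\mathrm{Li}) + 2\,m_n - M(^{11}\mathrm{Li}) \;\approx\; 300~\mathrm{keV},
\]
so a 70 keV shift in the 11Li mass moves this binding energy by roughly a quarter of its value, and with it the predicted radial extent of the loosely bound halo.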
The radial extent – and shape – of superlarge nuclei can also be determined by laser spectroscopy. This is an elegant marriage of atomic and nuclear physics techniques that probes the subtle effect of the small but finite nuclear volume on the distant electron orbitals. The measured quantity in this case is the nuclear quadrupole moment which, like the mass, was previously known but with insufficient precision to constrain the ever-increasing sophistication of theoretical models (Borremans 2004).
In the past, the two neutrons forming the 11Li halo were considered with respect to a 9Li core, which was regarded as inert. The results of these new measurements, especially that of the quadrupole moment, will have an important bearing on the polarization of the core by the halo neutrons – an effect that is only now starting to be treated by theory (Jensen et al. 2004).
Given the importance of superlarge nuclides, ISOLDE, in collaboration with the Rutherford Appleton Laboratory in the UK, has invested considerable effort in the development of specialized targets for the production of such nuclei. A prime example is the target constructed by pressing together more than 100 tantalum foils only 2 µm thick. 11Li is produced in the target by fragmentation of the tantalum nuclei by the proton beam from the PS Booster at CERN. The thinner the foils constituting the target matrix, the faster the short-lived drip-line nuclides can diffuse out to be ionized and transported to the experiment (Bennett et al. 2002).
11Li is not the only superlarge nuclide that is produced at ISOLDE. Others include, for example, 14Be, which has a two-neutron halo, and 19C, which has a one-neutron halo. 17Ne has also been the subject of study by laser spectroscopy due to its interest as a nuclide with a one-proton halo, and it will come under scrutiny again in 2004 when its mass is measured by MISTRAL’s sister experiment ISOLTRAP.
The superlarge 11Li has, meanwhile, been the subject of a myriad of experimental and theoretical studies, with two recent reviews (Jonson 2004; Jensen et al. 2004) chronicling the superlarge saga. It will fall to nuclear theory to fit all the pieces into place, hopefully reconciling all the superlative behaviour of the nuclear system after the fashion of supersymmetry, supergravity and superstrings: a veritable supermodel.
Like players on a stage, most forces act in a fixed, pre-existing space, but in Einstein’s classical theory of general relativity gravity is the dynamic shape of space. When classical forces enter the quantum arena the stage plays a new and more visible role: in its usual formulation quantum theory demands a ground state. The ground state is a fixed vacuum, above which excitations – which we can calculate and often measure to astonishing accuracy – propagate and interact. Because the vacuum is fixed it is not a surprise that for decades there was no convincing way to apply quantum principles to gravity.
Attempts were made to use the techniques of quantum field theory, which successfully quantizes light, to quantize Einstein’s theory. In this approach, just as light is described by a particle, the photon, gravity is described by a particle, the graviton. This is already a compromise as the graviton is a quantum ripple in a pre-assumed space that is not quantized. More drastically, the quantum field theory for the graviton fails because what should be small quantum corrections overwhelm the classical approximation, giving uncontrollable infinite modifications of the theory.
String theory solved part of this problem. In string theory the graviton is a string vibrating in one of its possible patterns, or “modes”. As the string moves through space it splits and rejoins itself (figure 1). These bifurcations and recombinations are the “stringy” quantum corrections, which are milder than those in quantum field theory and give rise to a quantum theory of gravity that is well defined and finite as a systematic expansion in the number of splittings. But the strings still move in space rather than being a part of it; they are quantized, while space itself remains stubbornly classical. Furthermore, it is not known if this expansion around Einstein’s classical theory can be summed to give a completely defined quantum theory.
Even before string theory John Wheeler suggested that at the Planck scale, the distance where the quantum corrections to gravity become large, the topology and geometry of space-time are unavoidably subject to quantum fluctuations (Wheeler 1964). This idea of a space-time quantum “foam” was explored by Stephen Hawking but has remained mysterious (Hawking 1978). New developments from an unexpected direction, however, have now given hints of an underlying, fundamentally quantum theory of strings that realizes these ideas: a mapping has been found to the theory of how crystals melt. In this picture classical geometry corresponds to a macroscopic crystal and quantum geometry to its underlying microscopic, atomic description.
These ideas grew out of discussions between a string theorist, Cumrun Vafa of Harvard, and two mathematicians, Nikolai Reshetikhin of UC Berkeley and Andrei Okounkov of the Institute for Advanced Study. They were brought together at the first of a series of interdisciplinary workshops held at the C N Yang Institute for Theoretical Physics and supported by the Simons Foundation at Stony Brook University in the summer of 2003.
The full theory of strings is still not understood well enough to formulate the problem of strings moving in a quantum space in complete generality. Instead, Okounkov, Reshetikhin and Vafa studied a part (or “sector”) of string theory called “topological string theory”. For concreteness, they focused on topological strings moving in a special class of spaces known as Calabi-Yau (CY) spaces.
Topological string theory is a simplification of the full theory of strings in which the motion of strings does not depend on the details of the space through which they move. As such it is mathematically more tractable than the full theory. At the same time, CY spaces are very interesting in string theory. In part this is because they are candidates for the as yet unobserved six dimensions that complement our familiar three dimensions of space and one of time. A widely discussed string-theory scenario is that the familiar four-dimensional space-time and a six-dimensional CY space combine to make up the 10-dimensional space-time that is required for the self-consistency of string theory.
Many interesting properties of topological string theory on CY spaces have already become known through the work of Vafa and collaborators over the past few years. In particular they have shown that quantum corrections can be computed. These corrections are given by the relative likelihoods that strings split and join as they move in the space. The potential for one string to split or join is measured by a number called the “string coupling”. This is actually a measure of the force between strings – in string theory forces are generated between two strings when one splits and one of its parts joins with the other. The larger the string coupling the more likely it is that this will happen and the stronger the force. These calculations are fine as long as the string coupling is small, but they become unmanageable when the coupling gets too large.
The crystal connection
At last summer’s Simons Workshop, Okounkov, Reshetikhin and Vafa realized that a formula describing the topological string splitting on a CY space also has a completely different interpretation involving a crystal composed of a regular array of idealized atoms (Okounkov et al. 2003). When they identified the temperature of the crystal with the inverse of the string coupling, the likelihood of an atom leaving the lattice became the same as that of a string splitting. Once this connection is made, the same formula that describes the splitting of the strings describes the melting of the crystal (figure 2).
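In the simplest setting, where the CY space is replaced by flat three-complex-dimensional space (a schematic statement of the result, suppressing overall normalizations), the correspondence fits on one line. The crystal partition function sums over the configurations of removed atoms, which are three-dimensional Young diagrams π weighted by their volume |π|:
\[
Z \;=\; \sum_{\pi} q^{\,|\pi|} \;=\; \prod_{n=1}^{\infty} \frac{1}{\left(1-q^{n}\right)^{n}}, \qquad q = e^{-g_s} = e^{-1/T},
\]
which is MacMahon's classical generating function for plane partitions; the same expression reproduces the topological string partition function with string coupling g_s, making the identification of the crystal temperature T with 1/g_s precise.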
At high temperatures the idealized crystal melts into a smooth surface with a well-defined shape. This surface is a two-dimensional portrait of a CY space, called a “projection” of the space (figure 3). At these temperatures the string coupling is small and topological strings can be described in terms of the calculable quantum corrections. However, as the string coupling and hence the force between strings increases, the strings split so often it is unclear how to compute their behaviour in string theory. But increasing string coupling means decreasing temperature, and at low temperatures the crystal theory comes to the rescue. The crystal becomes simple at low temperatures, with most atoms fixed in their positions in the lattice. This means that the smooth surface of the melted crystal is replaced by the discrete structure of the lattice. The CY space naturally becomes discrete.
This led Okounkov, Reshetikhin and Vafa to conclude that topological string theory and crystal theory are “dual” descriptions of a single underlying system valid for the whole range of weak and strong string coupling, or equivalently, high and low temperatures, respectively. In particular when the string coupling is small, quantum fluctuations appear only at scales much smaller than the natural size of the strings themselves, and the picture of smooth strings remains self-consistent.
The new picture that emerges from this duality is that of a “quantum” CY geometry. To understand what this means it is worth recalling that in a classical space of any kind each point is specified by a set of numbers, or co-ordinates. Examples of co-ordinates are the longitude and latitude of the Earth’s surface. In the quantum CY space the co-ordinates are no longer simple numbers to be specified at will. Rather they obey the Heisenberg uncertainty principle, which relates the position and momentum of a quantum particle. For the quantum CY spaces of Okounkov, Reshetikhin and Vafa’s dual description of topological string theory, the long-standing dream of replacing a smooth classical space with a discrete quantum substructure is thus realized. In this system the emergence of a classical geometry out of a quantum system can be clearly controlled and understood. As is shown in further work by Vafa et al., this gives an explicit and controllable picture of the Wheeler-Hawking notion of topological fluctuations – or “foam” – in space-time (Iqbal et al. 2003). The fluctuations of topology and geometry actually become the deep origin of strings. They extend rather than reduce the predictive power of the quantum theory of gravity.
Of course many challenges remain before a full theory of this kind can be realized. Chief among these is the extension of the picture from topological strings to full string theory. A possible path has been identified, however, suggesting that in string theory, as in Einstein’s gravity, the distinction between forces and the space in which they act melts away.
For some years now, the Japanese-European ASACUSA collaboration at CERN has been tightening the limit on the antiproton charge (Q) and mass (M) relative to the values for the proton. Any difference, however small, would indicate that the CPT-theorem, which under certain axiomatic conditions guarantees identical properties and behaviour for matter and antimatter, is in some way deficient. Such an eventuality would be of earth-shattering importance for our understanding of the physical world.
The latest result from ASACUSA is that any proton-antiproton charge or mass difference must be smaller than one part in 10⁸ (Hori et al. 2003). As in the past, this limit was obtained by combining the ratio of the proton and antiproton Q/M values with their Q²M values. The former was obtained from previous measurements by the Harvard group of the proton and antiproton cyclotron frequencies in a Penning trap (Gabrielse et al. 1999), and the latter from the frequencies of laser-stimulated transitions in antiprotonic helium – that is, atomic helium in which an antiproton replaces one of the two orbiting electrons.
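The logic of the combination can be sketched with small fractional deviations δ_Q and δ_M of the antiproton's charge magnitude and mass relative to the proton's (a schematic outline; the published analysis propagates the experimental uncertainties in detail). The cyclotron frequency fixes Q/M, while the antiprotonic-helium transition frequencies scale approximately as Q²M, so
\[
\frac{(Q/M)_{\bar p}}{(Q/M)_{p}} - 1 \;\approx\; \delta_Q - \delta_M,
\qquad
\frac{(Q^{2}M)_{\bar p}}{(Q^{2}M)_{p}} - 1 \;\approx\; 2\,\delta_Q + \delta_M .
\]
If both measured ratios are consistent with unity to about one part in 10⁸, the two relations can only be satisfied with δ_Q and δ_M separately bounded at that level.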
The improved precision was made possible through the use of a radiofrequency quadrupole decelerator to reduce the kinetic energy of antiprotons from the Antiproton Decelerator (AD) from 5.3 MeV to 65 keV. The lower energy ensured a much smaller variation in the position at which the antiprotons came to rest in low pressure (1 mb), low temperature (10 K) helium gas. This in turn allowed an adequate number of antiprotonic helium atoms to be formed in a volume small enough to be irradiated by the laser beams. Much of the art of high-precision experimentation lies in accounting for minute systematic errors, and in such a low-density environment systematic shifts in the resonant laser frequencies associated with collisions between “antiprotonic” and “ordinary” helium atoms could be better estimated and corrected for.
Further experiments completed in 2003 are about to bring another exciting new prospect into sight. These experiments showed that it is possible to create antiprotonic helium ions – with a single antiproton rather than a single electron orbiting the helium nucleus – in a state suitable for the kind of laser spectroscopy described above. The important thing here is that this p̄He²⁺ ion is a two-body system, while the neutral atom p̄He²⁺e⁻ comprises three bodies. The properties of two-body systems are in principle exactly calculable mathematically, while those of three-body systems can only be solved approximately using extremely complex calculations with powerful computers. The results of these calculations are consequently subject to their own errors, which beyond a certain level of precision can exceed those of the experimentally determined resonant laser frequencies. This sets a practical limit to the precision that is available with the neutral antiprotonic helium atom – which may indeed be reached after ASACUSA returns to the fray in 2004 with a new, higher precision laser system.
By performing experiments on the two-body ion instead of the neutral atom, this calculational roadblock can be circumvented. The experiments will be difficult, not least because the frequencies involved lie in the vacuum ultraviolet spectral region. What makes the game worth playing is that the p̄He²⁺ ion is the nearest thing physicists have ever had to the standard Bohr atom used to introduce undergraduate students to the concepts of atomic physics. In many respects it is even more like hydrogen than the hydrogen atom itself. This is because the antiproton is non-relativistic and its de Broglie wavelength is some 40 times smaller than the Bohr radius, so that the semiclassical approximation, which is rather poor for normal, ground-state hydrogen atoms, turns out to be excellent for antiprotonic helium ions. All this means that the spin-independent contributions to the energy levels can literally be calculated on the back of an envelope to a few parts in 10⁹. For comparison, in the 2p-state of atomic hydrogen, relativistic corrections appear at the 10⁻⁵ level and quantum electrodynamic corrections at 10⁻⁶.
At the same time, even experiments with the “conventional” neutral antiprotonic atom may lead ASACUSA into a fascinating new regime in 2004. If we assume CPT-invariance, we can relate the mass of the antiproton to the electron mass instead of the proton mass. If ASACUSA is able to achieve a precision some 40 times better than the current 10 ppb, then the antiproton will become an even better known fundamental particle than the proton itself. This seemingly paradoxical situation comes about because no “protonic antihelium” counterpart to the antiprotonic helium atom is available with which the corresponding proton experiments could be made.
In the proton case, the limit on the charge neutrality of bulk matter has to be combined with the ratio of proton and electron cyclotron frequencies, ωP and ωe, measured in the same Penning trap. While the charge neutrality limit is phenomenally precise (parts in 10²⁰), small corrections have to be applied to the measured ratio ωP/ωe because the two frequencies differ in magnitude by the large factor of the proton/electron mass ratio. One important consequence of this is that relativistic corrections, negligible for the proton, must be applied to the electron value. Taking this into account severely limits the precision obtained for the proton mass to about 0.5 ppb.
At this point we are confronted by some rather deep questions concerning the meaning of experimental results. In measuring any given quantity, we are really making a comparison with some arbitrarily chosen prototype object. The significance of that measurement, however, depends on the question we are trying to get nature to answer. It should be no surprise that when asking questions about fundamental particles, neither a block of platinum-iridium in Paris (the standard kg) nor a current-carrying wire loop (defining the MeV/c²) is a particularly useful prototype object. Of much greater interest are the values of particle properties with respect to those of other particles, these being the prototypes chosen, in a sense, by nature herself. Thus, choosing the proton as the prototype for the antiproton is clearly meaningful when asking questions about CPT invariance. Choosing the basic leptonic constituent of stable matter, the electron, as a prototype for the mass of the proton – the basic hadronic constituent – evidently has some fundamental significance in the larger picture of particle physics. Plausible though this choice may be, no theoretical basis yet exists within the Standard Model to predict what the hadron/lepton mass ratio should be.
If ASACUSA achieves the expected new precision for this ratio with the antiproton, it will then still be necessary to wait until some theoretical prediction arrives to put the result in the wider general context. We might also suggest that experimentalists find some way of doing a better job on the proton mass!
A team from the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, and the Lawrence Livermore National Laboratory (LLNL) in the US, has published results on the synthesis of two new superheavy elements, element 113 and element 115. In experiments conducted at the JINR U400 cyclotron with the Dubna gas-filled separator, the team observed decay chains that confirm the existence of the two elements, with element 113 produced via the alpha decay of element 115 (Oganessian et al. 2004).
The experiments produced four atoms each of element 115 and element 113 through the fusion reaction of calcium-48 nuclei at an energy of 248 MeV with nuclei in a target of americium-243. The team observed three similar decay chains, each consisting of five consecutive alpha decays that together took less than 30 seconds and terminated in the spontaneous fission of dubnium-268, an isotope of element 105 (which was named after Dubna) with a half-life of 16 hours. An interesting fourth decay chain ending in dubnium-267 was also observed when the energy of the incident calcium ions was slightly increased.
The discovery was made possible through the use of the intense calcium-48 beam from JINR’s U400 cyclotron. “Twenty years ago no one would have ever thought that this was possible because the technology to produce such an element just wasn’t there,” explained Joshua Patin, LLNL’s primary data analyst on the team. “But with the efficiency of the Russian cyclotron and the ability to run the experiments for long periods of time, we were able to achieve this tremendous accomplishment.” The americium target material was supplied by the LLNL.
Given the apparent absence of antimatter at the cosmic scale, it might seem strange that a recent paper from the ASACUSA collaboration on quantum tunnelling effects in collisions between antiprotonic helium atoms and H2 and D2 molecules may be relevant to astrophysics. This is because antiprotonic helium consists of a one-electron “cloud” surrounding a composite, singly charged “nucleus” made up of an alpha particle (two positive charges) and an antiproton (one negative charge), so that it looks rather like a hydrogen atom.
No data exist on the reactions between H and D atoms and their molecules H2 and D2 at the low temperatures, around 30 K, that are characteristic of cold interstellar clouds and cold pre-stellar cores. These are exactly the astrophysical environments where more complex molecules may eventually be formed. For example, in certain regions the abundances of molecules such as H2O, H2S, CH3OH and C2H5OH are so enhanced that surface ice chemistry must be occurring, while the reactants remain in close proximity to one another for 10⁵ years or more!
However, such reactions may not take place at all at these temperatures without tunnelling effects, so anything that provides a greater understanding of quantum tunnelling at low temperatures is of importance in answering the outstanding questions about ice chemistry. This is where the data from the ASACUSA experiment, reported in the paper “Quantum tunnelling effects revealed in collisions of antiprotonic helium with hydrogenic molecules at low temperatures” (Juhasz et al. 2003), may play an important role. The ASACUSA results provide a promising benchmark for theoretical models of such collisions, which could be generalized to more complex systems and may lead to a better understanding of astrophysical ice chemistry.
As announced a year ago now, the Wilkinson Microwave Anisotropy Probe (WMAP) has measured anisotropies in the cosmic microwave background radiation to an unprecedented accuracy of 10⁻⁹ K. The vastly improved precision of these data, compared with the groundbreaking results of the earlier Cosmic Background Explorer (COBE) satellite, is clearly shown in figure 1. This is opening up a new era for astroparticle physics, as the accuracy of the WMAP data has allowed a determination of cosmological parameters that are of relevance to particle physicists. Specifically, data from WMAP have significantly constrained the dark-matter content of the universe. This in turn strongly implies model-dependent and stringent constraints on models in particle physics, especially in minimal supersymmetry. In addition, the current evidence for an accelerating universe has revealed a massive component of “dark energy” in the total energy of the universe. One can imagine a pie graph showing the breakdown of the energy budget of the cosmos: 4% ordinary matter, 23% dark matter and 73% dark energy.
Other examples of the interplay between accelerator physics and astroparticle physics are provided by the following areas: extra dimensions and mini-black-hole production; neutrino oscillations; electroweak baryogenesis; dark matter consisting of the lightest supersymmetric particle (LSP); magnetic monopole production; and ultra-high-energy cosmic rays (UHECR).
The new collider experiments, in particular at the Large Hadron Collider (LHC) at CERN, offer the unique possibility of exploiting the significant links between astrophysics and particle physics. Importantly, there are some astrophysical scenarios that can be tested decisively at high-energy colliders. In other cases input from collider experiments is required to sharpen predictions for future astroparticle physics experiments, for example: the LSP detection rate, the UHECR spectrum in “top-down” models, and the understanding of very-high-energy hadronic interactions. Alternatively, cosmic-ray astrophysics may point the way to new physics at accelerators.
In theories with large extra dimensions at sub-millimetre distances, for example, and/or high energies of the order of 1 TeV or more, gravity may become a strong force. Thus, hypothetically, the energy required to produce black holes is well within the range of the LHC, making it a “black-hole factory”. As Stephen Hawking has taught us, these mini black holes would be extremely hot little objects that would dissipate all their energy very rapidly by emitting radiation and particles before they wink out of existence. The properties of the Hawking radiation could tell us about the properties of the extra spatial dimensions, although there are still uncertainties in the theory at this stage. Nevertheless, astroparticle and collider experiments should provide useful input to the theoretical work in this area. Indeed, the signatures are expected to be spectacular, with very high multiplicity events and a large fraction of the beam energy converted into transverse energy, mostly in the form of quarks/gluons (jets) and leptons, with a production rate at the LHC rising as high as 1 Hz. An example of what a typical black-hole event would look like in the ATLAS detector is shown in figure 2.
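The estimate behind the “black-hole factory” label is geometrical (a rough sketch under the usual assumptions of these scenarios: a fundamental gravity scale M_D of order a TeV and n extra dimensions). A parton collision at centre-of-mass energy √ŝ above M_D is taken to form a black hole whenever the partons approach within the corresponding higher-dimensional Schwarzschild radius, so the parton-level cross-section is
\[
\sigma(\hat s) \;\approx\; \pi\, r_s^{2}, \qquad
r_s \;\sim\; \frac{1}{M_D}\left(\frac{\sqrt{\hat s}}{M_D}\right)^{\!\frac{1}{n+1}},
\]
up to factors of order one that depend on n; folding this geometric cross-section with the LHC parton luminosities is what leads to production rates as large as those quoted.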
If mini black holes can be produced in high-energy particle interactions, they may first be observed in high-energy cosmic-ray neutrino interactions in the atmosphere. Jonathan Feng of the University of California at Irvine and MIT, and Alfred Shapere of the University of Kentucky have calculated that the Auger cosmic-ray observatory, which will combine a 6000 km² extended air-shower array backed up by fluorescence detectors trained on the sky, could record tens to hundreds of showers from black holes before the LHC turns on in 2007.
Crossing the divide
Neutrino astrophysics has also provided us with exciting new results on neutrino masses and has opened up another area of synergy between particle physics, astrophysics and cosmology. The Sudbury Neutrino Observatory and Super-Kamiokande detectors have shown that neutrinos oscillate into other flavours. The result is final: the minimal Standard Model is dead, as it predicted vanishing neutrino masses and thus separately conserved lepton numbers. This is an existence proof that astroparticle-physics experiments can indeed produce results that have a fundamental impact on accelerator-based particle physics.
Another area with important cosmological implications is the violation of discrete symmetries C (charge), P (parity) and T (time reversal), and their combination CPT, which may be violated in some models of quantum gravity. Such issues are associated with explanations of the observed matter-antimatter asymmetry in the cosmos. Neutrino factories could provide answers to such fundamental questions. There is also the possibility for direct detection of massive isosinglet neutrinos at the LHC, the existence of which would have an important astrophysical impact. No doubt the synergy between neutrino astroparticle physics and accelerator-based neutrino physics will continue to yield possibilities for more vital insights.
The current generation of collider experiments and in particular the LHC project at CERN offer the unique possibility to perform precise measurements of the properties of the hadronic interaction. The motivation is that very-high-energy particles will have central importance in future studies of cosmic-ray physics. Measurements that are possible only at the LHC will have the potential to improve significantly the quality of measurements of cosmic-ray air showers both in the “knee” region and especially for the very highest energies at the “ankle” and beyond (see figure 3). The Tevatron collider at Fermilab provides hadron collisions at a centre-of-mass energy approaching 2 TeV, which is equivalent to a cosmic ray with an energy of about 2 PeV (2000 TeV) colliding with a stationary proton. Brookhaven’s Relativistic Heavy Ion Collider using nitrogen beams provides energies equivalent to that of a 5 x 10¹⁴ eV nitrogen nucleus incident on the atmosphere. The LHC will provide energies equivalent to roughly 10¹⁷ eV incident on a stationary proton. As can be seen from figure 3, these machines cover some of the important features of the cosmic-ray energy spectrum. It is worth noting that the energy flow in cosmic-ray air showers is within a few degrees of the incident particle – in effect the “beamline” – so it is vital that the LHC detectors have adequate forward detector systems.
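The equivalence quoted here is the standard fixed-target conversion: for a collider with centre-of-mass energy √s, the laboratory energy of a cosmic-ray proton striking a stationary proton that reproduces the same √s is
\[
E_{\rm lab} \;\simeq\; \frac{s}{2 m_p c^{2}}
\;=\; \frac{(1.96~{\rm TeV})^{2}}{2 \times 0.938~{\rm GeV}} \;\approx\; 2\times10^{6}~{\rm GeV} \;\approx\; 2~{\rm PeV}
\]
for the Tevatron, and about 10⁸ GeV (10¹⁷ eV) for the LHC at √s = 14 TeV, in line with the figures above.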
New physics
Over the years cosmic-ray experiments have reported a remarkable spectrum of anomalies, observed at regions of pseudo-rapidity outside the range of existing accelerator observations. The class of inclusive phenomena includes anomalous examples of mean free path or long-flying component, heavy flavour production, attenuation of secondary hadrons, and the energy fraction of air showers in emulsion-chamber families. There are also anomalous individual exotic events, which contain unexpected features: Centauro and anti-Centauro events; Chirons and halo events; and muon bundles. While these anomalies could be due to “unrecognized” Standard Model physics or an incorrect interpretation of the measurements, they could also be harbingers of new physics that would be manifest at the LHC and other future colliders. In 1971 K Niu and co-workers at Tokyo University, using balloon-borne emulsion chambers, reported evidence for decaying hadrons with unusual properties. After the discovery of charm in 1974, Tom Gaisser and Francis Halzen showed that the particles were in fact D-mesons; by then accelerator experiments had confirmed Niu’s measurements of mass, lifetimes and other properties.
Another recent example of the use of timely astroparticle experiments to guide our search for new physics at future colliders is provided by the development of new detectors such as the satellite-based Gaseous Antiparticle Spectrometer. Proposed to search for cosmic antimatter, this could also probe for supersymmetric dark matter up to a neutralino mass of approximately 400 GeV. This would extend the range of immediate future terrestrial direct dark-matter searches such as the GENIUS (germanium detectors in liquid nitrogen in an underground setup) experiment at the Gran Sasso Laboratory.
The LHC will make available large underground detectors such as ATLAS and CMS with an unprecedented area of fine-grained detectors and magnetic field volume. Following in the footsteps of the COSMOLEP experiment at CERN’s Large Electron Positron collider, these detectors could be used to determine precisely the direction and momentum of large numbers of penetrating cosmic-ray tracks within a very small area. One benchmark cosmic-ray phenomenon that can be studied is that of muon bundles; another class of phenomena that can be studied in this way is upward-going showers, presumably from high-energy neutrino interactions in the Earth. In principle, trigger rates from the cosmic-ray phenomena mentioned above are low enough that they can be run in conjunction with standard trigger menus. In this way collider-physics experiments can make a direct contribution to astroparticle-physics experimentation.
In the “no man’s land” just beyond the frontiers of our knowledge nothing is certain, and most of the recent discoveries, which must often be interpreted in a model-dependent way, are subject to interpretation and debate. For instance, the evidence for a dark-energy content of the universe and its origin and precise nature (is it a cosmological constant, a quintessence field or something else?), the nature of dark matter, the nature of the UHECR, the existence of supersymmetry or other new physics, and the possible existence of large extra dimensions are all issues that are still not resolved. The synergies between particle physics, astrophysics and cosmology in the next 10 years should amplify our ability to make faster and deeper inroads in all of these areas. There is no doubt that a new frontier for fruitful collaboration is now before us.
Can data on hadronic cross-sections from e+e– annihilations and τ decays be reconciled? This was one of the main topics discussed at the Workshop on Hadronic Cross Section at Low Energy – SIGHAD03 – which was held in Pisa, Italy, on 8-10 October 2003, and attended by about 60 participants from a variety of countries.
Despite its well known successes, the Standard Model still has a number of weaknesses, one of them being the prediction of the anomalous magnetic moment of the muon, aµ. Participants at SIGHAD03 heard the whole story from the main protagonists: from the pioneering experimental work carried out more than 40 years ago, which was nicely summarized by Francis Farley of Yale, to the impressive accuracy reached in recent years on both the experimental and theoretical sides.
The main issue, however, still concerns the evaluation of hadronic contributions to aµ below 2 GeV. These cannot be calculated by perturbative quantum chromodynamics, and so rely almost entirely on data. In the 1990s data from hadronic decays of the τ (from the ALEPH experiment) were used to add information to that obtained directly from electron-positron annihilations. This method – which was pioneered by Michel Davier of Orsay and presented by him at the workshop – allowed a substantial improvement in the theoretical evaluation of aµ. In the meantime, the CMD-2 experiment at the VEPP-2M collider complex in Novosibirsk was able to improve the measurement of the two-pion annihilation cross-section (leading to a 0.6% systematic error), and both the OPAL and CLEO experiments came up with independent measurements of the τ spectral function.
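The role of the data enters through a dispersion relation: the leading-order hadronic vacuum-polarization contribution is obtained by weighting the measured ratio R(s) of hadronic to muon-pair cross-sections with a known kernel (one standard way of writing it, with K̂(s) a smooth function of order unity and s_thr the hadronic threshold):
\[
a_\mu^{\rm had,LO} \;=\; \left(\frac{\alpha\, m_\mu}{3\pi}\right)^{2} \int_{s_{\rm thr}}^{\infty} \frac{{\rm d}s}{s^{2}}\, \hat K(s)\, R(s),
\]
and because of the 1/s² weighting the integral is dominated by the region below about 2 GeV, above all by the two-pion channel around the ρ resonance, which is why the comparison of e+e– and τ data matters so much.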
These improvements led to a comparison between the π+π– spectral functions from e+e– and τ data. However, after including isospin-violating effects, there is still a discrepancy of the order of 10 to 15% above the ρ resonance (see figure 1). The origin of this discrepancy, and more generally the results of the evaluation of aµ using different approaches, was the central theme of one of the sessions at the workshop. Whether it is due to a missing correction in theory (the difference in mass and width between the charged and neutral ρ mesons, as discussed by Davier and Fred Jegerlehner of DESY) or whether it lies in the data, is still controversial.
SIGHAD03 also heard about the status of and the prospects for existing and planned colliders. Within a few years, upgrades of electron-positron colliders in Beijing (BEPCII/BESIII), Novosibirsk (VEPP-2000), Frascati (DAFNE-2) and Cornell (CLEO-C) should become operational and therefore provide new data. Results on τ spectra from B-factories are also expected; indeed, the first results from the Belle collaboration at KEK-B in Japan were presented at the workshop.
An improvement in the current situation will soon come from existing meson factories, with the KLOE, BaBar and Belle experiments. Here the use of the initial state radiation process (ISR), as recently proposed by the group of Johann Kühn in Karlsruhe, allows the whole available energy range to be scanned while working at a fixed energy. In particular, the KLOE collaboration at the DAFNE φ-factory in Frascati presented results on the hadronic cross-section below the φ peak, which agree with the Novosibirsk e+e– data and thus confirm the 2σ discrepancy with the τ approach. Preliminary data were also presented by the BaBar collaboration (at SLAC), showing the feasibility of the ISR method at B-factories, where there is the advantage that a much wider energy range can be covered. Recent theoretical developments on ISR were also reviewed, and during the workshop a round table was organized in order to discuss the status of radiative corrections for luminosity measurements in an attempt to provide a unified picture of the current situation.
Precise measurements of R, the ratio of hadronic to muon-pair cross-sections in e+e– , at low energy have a strong influence not only on the anomalous magnetic moment of the muon but also on the running electromagnetic coupling constant, whose uncertainty limits the prediction of the Higgs mass. The measurements also provide a test of the perturbative behaviour of the strong interaction and of low-energy effective field theories. The running of the electromagnetic and strong coupling constants, and the determination of charm and bottom masses, are two examples that were reviewed at the workshop, and whose progress has benefited from the latest experimental results from the electron-positron colliders in Beijing (BES) and Novosibirsk.
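For reference (a standard dispersion-relation result, not something specific to the workshop), the same R(s) = σ(e+e– → hadrons)/σ(e+e– → µ+µ–) data determine the hadronic part of the running of the electromagnetic coupling through

\[ \Delta\alpha_{\mathrm{had}}(M_Z^2) = -\frac{\alpha M_Z^2}{3\pi}\,\mathrm{Re}\int_{s_0}^{\infty}\mathrm{d}s\,\frac{R(s)}{s\,(s - M_Z^2 - i\epsilon)}, \]

with s_0 the hadronic threshold; it is the uncertainty of this integral at low energies that limits the precision of the indirect Higgs-mass determination.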
In summary, this was a short but very intensive workshop. However, there were also two moments of relaxation, with a visit to the Piazza dei Miracoli, where the leaning tower stands, and a delicious dinner in the lovely ancient Villa Toscana. During the dinner, Simon Eidelman proposed organizing the next workshop in Novosibirsk in two years' time. By then, new theoretical and experimental results, expected in particular from the g-2 experiment at Brookhaven as anticipated by Lee Roberts, should clarify whether the discrepancy observed in aµ vanishes, or whether it remains and so requires new physics.
Held every 18 months, Quark Matter is the major conference covering relativistic heavy-ion collisions. The latest in the series – Quark Matter 2004 – took place on 10–17 January at the Oakland Convention Center in California and was co-chaired by Hans Georg Ritter and Xin-Nian Wang from Lawrence Berkeley National Laboratory (LBNL). The meeting featured a flood of new data from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, along with continuing analyses from experiments at CERN’s Super Proton Synchrotron (SPS) and reports from the HERA-B and HERMES experiments at DESY.
The conference – which was preceded by two well attended and well received workshops, one for graduate students and the other for local high-school teachers – was officially opened by Jerry Brown, Oakland’s mayor and ex-governor of California. Reinhard Stock of IFK Frankfurt opened the scientific programme with a historical overview of heavy-ion collisions, which was followed by theoretical and experimental introductions by Urs Wiedemann of CERN and Tom Hemmick of SUNY, Stony Brook, respectively. So began five days of parallel and plenary sessions, and while everything cannot be described here in detail, the following aims to provide a flavour.
The biggest question at recent Quark Matter conferences has been: have we found the quark-gluon plasma? Although the collaborations working at RHIC have made no definitive statements, the sense among the participants this year was that for the first time the answer is “yes”. A broad range of measurements painted a picture that most attendees found convincing: strong suppression of high transverse momentum (pT) particles, the absence of back-to-back jets, anisotropies consistent with strong, hydrodynamic flow, and many other observations. The data show that a dense medium is produced in the collisions and that particles interact strongly and lose energy as they traverse it. The observed energy loss requires a very dense medium, which seems incompatible with the presence of hadrons. The observed anisotropies (in flow) for different particle species indicate that the medium behaves like an almost perfect fluid, as expected from a quark-gluon plasma (QGP). A comparison of the anisotropies of different species suggests that equilibration is rapid and thus likely occurs during the partonic stage.
Many of these effects were already seen at the Quark Matter 2002 conference, held in Nantes, France. Since then RHIC has taken and analysed data on deuteron–gold (dAu) collisions, providing a key control. In dAu collisions the number of nucleon-nucleon collisions is small and the energy density is expected to be too low to form a QGP. The putative QGP signatures were absent in the dAu data, showing that the phenomena are final-state effects. These effects were also absent (in the case of high-pT particle suppression) or greatly reduced (for flow) in lower-energy collisions.
Hard probes of ion collisions
Perhaps the most striking result to come out of RHIC so far is the strong suppression of mesons with high pT observed in central nucleus–nucleus collisions (i.e. collisions where the nuclei fully overlap). The production of hadrons with pT above approximately 2-3 GeV/c in proton-proton collisions is well described by perturbative quantum chromodynamics (QCD). If heavy-ion collisions were mere superpositions of individual nucleon–nucleon collisions, the particle yield at high pT would scale with the number of binary collisions. David d’Enterria of Columbia reviewed RHIC data showing that particle production is suppressed by about a factor of five relative to a nucleon–nucleon superposition. This suppression is not seen in dAu or lower energy ion-ion collisions. Carsten Greiner of Frankfurt discussed hadronic final-state interactions and concluded that hadronic interactions alone could not explain the data.
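The degree of suppression is conventionally expressed through the nuclear modification factor (a standard definition rather than one specific to any talk):

\[ R_{AA}(p_T) = \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\langle N_{\mathrm{coll}}\rangle\,\mathrm{d}N_{pp}/\mathrm{d}p_T}, \]

so that simple binary-collision scaling corresponds to R_AA = 1, while the factor-of-five suppression quoted above corresponds to R_AA ≈ 0.2 in central gold–gold collisions.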
Julia Velkovska of Vanderbilt reviewed measurements of high-pT particle suppression for different particle species. Above 6 GeV/c all particle species show comparable suppression. However, at intermediate momenta (2 < pT < 5 GeV/c) only mesons are suppressed. In hydrodynamic models bulk radial expansion equalizes velocities, boosting the number of heavy particles at intermediate pT; in such a model the φ meson, which is almost as heavy as the proton, would show the same suppression as the proton. Rainer Fries of Minnesota presented an alternative scenario, in which baryons are produced by recombination of already existing quarks. This recombination may explain the intermediate-pT data.
With the dAu data ruling out initial-state effects, the study of high-pT particles in nucleus-nucleus collisions has largely evolved into “jet tomography”, where the jets probe the matter produced in the collisions. The experimental and theoretical aspects of this technique were reviewed by Mike Miller of Yale and Ivan Vitev of Iowa State, respectively. In proton-proton collisions jets are usually created back-to-back; this back-to-back topology is observable as correlations between high-pT particles. In gold–gold collisions the correlations disappear, perhaps because one of the produced jets is absorbed in the medium. Energy and momentum conservation require that the energy in the quenched jet is not lost but redistributed; this redistributed energy may have been seen in low-momentum particles.
Both the STAR and PHENIX collaborations at RHIC have studied the quenching of the away-side jet relative to the reaction plane – the plane spanned by the beam axis and the impact parameter (see figure 1). This study used data from glancing collisions, where some back-to-back correlations remained. As expected, if the suppression is caused by energy loss in the dense nuclear medium, the quenching is strongest when the jet is perpendicular to the reaction plane (long path through medium) and weaker when the jet is in the reaction plane (short path through medium). The results from STAR are shown in figure 2.
The dense matter produced in the collisions can also be studied through direct photons, which are emitted as the system cools. There is, however, a huge background from photons produced in hadronic decays, particularly from π0 and η mesons. This background must be carefully measured and subtracted. Justin Frantz of Columbia presented results from PHENIX on direct photons (see figure 3). For pT > 5 GeV/c the yield agrees with a calculation based on perturbative QCD, another strong indication that high-pT parton production is well understood, and that the suppression of high-pT hadrons is indeed due to interactions in the dense matter. Direct photons at intermediate pT may also be emitted from the QGP, as was discussed by Guy Moore of McGill, but current results are not precise enough to probe this region.
Suppression of J/ψ production was one of the first proposed signals for the QGP, and has long been studied at CERN’s SPS. Gonçalo Borges of LIP, Lisbon, presented the latest results from NA50. Again proton-ion or deuteron-ion data are important references to measure both the hadronic absorption in ordinary nuclear matter and the effects of modifications to the gluon distribution function in heavy nuclei (shadowing). The PHENIX collaboration presented data on J/ψ production in dAu collisions at RHIC, supporting moderate gluon shadowing and little hadronic absorption. An Tai of the University of California, Los Angeles, presented the first measurements of fully reconstructed open charm at RHIC made by the STAR collaboration, and Melynda Brooks of Los Alamos National Laboratory also reviewed charm measurements at RHIC and the SPS.
Soft particle production
Most of the particles observed in a detector are produced very late in the collision, at a time known as “freezeout”. Federico Antinori of INFN Padova showed that the particle abundances are generally well described by a thermal model, with no additional strangeness enhancement or suppression factor needed: the particle ratios depend only on the particle mass and the temperature. The enhanced strangeness production compared with proton–proton collisions could be due to the difference between global strangeness conservation in a large system (a grand canonical distribution) and local conservation in a small system (a canonical distribution). Global conservation allows strange quarks to be produced singly, since strangeness need only balance over the final state as a whole, whereas in a small system strange quarks must be produced in pairs. The properties of the final state were discussed by Gunther Roland of MIT and Andrzej Rybicki of Krakow.
The spectra of the short-lived resonances (K*, ρ, f0(975), Δ, Σ*(1385), and Λ(1520)) were presented by Patricia Fachini of Brookhaven. Some short-lived resonances may be less abundant than the thermal-model predictions. This could happen when daughter particles rescatter during the period between chemical freezeout, when particle production stops, and thermal freezeout, when elastic scattering stops. The PHENIX collaboration has compared the hadronic (K+K–) and leptonic (e+e–) decays of the φ-meson and sees no evidence for a shift in mass or branching ratio.
One striking phenomenon observed at RHIC is anisotropic flow. The overlap between two colliding ions forms an elliptical region (see figure 1). Tetsufumi Hirano of the RIKEN BNL Research Center explained how high pressure turns this spatial anisotropy into an anisotropy in momentum space. Some years ago it was expected that this flow would be negligible at high energies; in fact it is stronger at RHIC than at lower energies. Fabrice Retiere of LBNL compared data from all four RHIC experiments, showing that the observed flow of different particle species is consistent with thermodynamic models that treat the system as an almost perfect fluid. The Λ and kaon flow are particularly interesting: the Λ flow is the same as the kaon flow at two-thirds of the momentum, an observation that clearly points to interactions involving partons. Much time was spent discussing how to reconcile these macroscopic approaches with microscopic pictures based on hadronic or partonic shower simulations.
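The anisotropy is usually quantified by the second Fourier coefficient, v2, of the azimuthal particle distribution with respect to the reaction plane (the standard definition):

\[ \frac{\mathrm{d}N}{\mathrm{d}\phi} \propto 1 + 2v_2\cos\!\big[2(\phi - \Psi_{\mathrm{RP}})\big] + \ldots \]

In this language, the Λ–kaon observation quoted above is the statement that v2 approximately scales with the number of constituent quarks n: plotted as v2/n versus pT/n, baryons (n = 3) and mesons (n = 2) fall on a common curve, as expected if the flow is built up at the quark level.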
The size of the interacting system can be measured by studying correlations among identical particles. The enhanced production of bosons with similar momenta can be used to measure the source size via Hanbury Brown and Twiss (HBT) interferometry. Dan Magestro of Ohio State noted that HBT studies are perhaps the least-understood measurement made at RHIC. The experiments at RHIC have found that the system is small, with a Gaussian radius of around 6 fm and a lifetime of 8–10 fm/c. This small size and short lifetime are difficult to reconcile with the observed collective flow. Measurements by the STAR collaboration of HBT parameters with respect to the reaction plane show that the initial eccentricity of the reaction volume survives into the final-state HBT measurements.
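In its simplest one-dimensional form, the two-particle correlation function used in such analyses is parametrized as (a textbook form, quoted here only for orientation)

\[ C_2(q) = 1 + \lambda\,e^{-q^2 R^2}, \]

where q is the relative momentum of the pair, R is the Gaussian source radius (the roughly 6 fm quoted above) and λ measures the correlation strength; in practice three-dimensional fits with separate radii R_out, R_side and R_long are used, and the comparison of R_out and R_side constrains the emission duration.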
Jeff Mitchell of BNL reviewed studies of the event-to-event variations of a variety of observables. Most of the fluctuations can be understood as normal statistical variations, but non-statistical fluctuations have also been confirmed, e.g. in mean pT. These may be due to jet production.
Do gluons condense?
A secondary theme at the conference was the study of parton distributions at low x (the fractional parton momentum) and a postulated new state of matter, the “colour glass condensate” or CGC. Jamal Jalilian-Marian from the University of Washington explained that the condensate may form at very high gluon densities, i.e. at low x, especially in heavy nuclei. When gluons saturate the available transverse phase space, they recombine, which limits further growth in their density. These gluon fields might then be describable classically. Measurements at HERA of low-x gluon densities and the surprisingly low multiplicities seen at RHIC had previously been cited as evidence for the CGC. The BRAHMS collaboration presented two pieces of evidence regarding the CGC. Their charged-particle rapidity distribution for dAu collisions does not match the CGC predictions. Ramiro Debbe of Brookhaven showed that forward production of high-pT particles in dAu collisions is suppressed, as expected for a CGC. However, despite strong advocacy by proponents of the CGC, many at the conference felt that more quantitative studies are needed before alternative explanations can be ruled out.
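Parametrically (a rough, standard estimate rather than a conference result), saturation sets in below a transverse-momentum scale Q_s at which the gluon density per unit transverse area, weighted by the recombination probability, becomes of order one:

\[ Q_s^2 \sim \alpha_s(Q_s^2)\,\frac{x\,G_A(x, Q_s^2)}{\pi R_A^2}. \]

Since the nuclear gluon distribution xG_A grows roughly with the mass number A while the transverse area grows only as A^{2/3}, Q_s grows both towards small x and in heavy nuclei, which is why heavy ions and the forward (small-x) region probed in dAu collisions are the natural places to look for the CGC.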
And now for something completely different
Although the main goal of colliding heavy ions is to detect and study the QGP, many other aspects of high-energy physics can be studied in these collisions. One example concerns the observation of a new particle with a mass of about 1540 MeV and a narrow width, which has been reported by several experiments. Robert Jaffe of MIT explained that the results are consistent with the expectations for a pentaquark state (a bound state of five valence quarks), known as the Θ+.
Chris Pinkenburg of Brookhaven reported that the PHENIX collaboration has observed an enhancement consistent with an anti-Θ particle decaying into an anti-neutron and a K–. The anti-neutrons were detected with the PHENIX electromagnetic calorimeters and the flight time was used to determine their energy. If confirmed, this would be the first observation of an anti-pentaquark. The NA49 experiment at CERN’s SPS has found a signal for the doubly strange Ξ–– pentaquark in the decay Ξ–– → Ξ– + π–. However, the HERA-B experiment at DESY has searched for the Θ+ particle in proton-nucleus collisions and found no signal.
In addition to studying hadronic reactions, the STAR collaboration has also investigated the photoproduction of ρ0 mesons in gold–gold collisions at RHIC. In contrast to electron-nucleus reactions, photoproduction in a heavy-ion collision can occur at either of the two beam nuclei, and for very small transverse momenta the amplitudes from the two sources interfere destructively. Since the ρ0 lifetime is short compared with the typical ion-ion separation, the observation of this interference indicates that the final-state wave function retains the amplitudes from both production sites long after the decay into two pions has taken place.
Looking to the future
With fairly broad agreement that we have finally seen the QGP, future studies will focus on measuring its properties. Yves Schutz of the École des Mines de Nantes previewed the future in his talk on heavy ions at the Large Hadron Collider. Groups in the ALICE, CMS and ATLAS collaborations are all interested in heavy ions. With regard to RHIC, Axel Drees of SUNY, Stony Brook, showed plans for major detector upgrades and an electron cooling ring to increase the luminosity by a factor of 40. Around 2015–2020 RHIC could add an electron ring to study electron-ion collisions. In the nearer future, however, we can look forward to Quark Matter 2005, which will be held on 1–6 August 2005 in Budapest, Hungary.