BES collaboration observes new narrow state

The BES collaboration at the Beijing Electron Positron Collider (BEPC) has observed a clear signal for a narrow enhancement in the pp̄ mass distribution near the 2mₚ threshold in the process J/ψ → γpp̄. The peak, which has a statistical significance of more than 16σ, was found in the analysis of 58 million J/ψ events. No corresponding enhancement has been seen in J/ψ → pp̄π⁰ decays.

The peak can be fitted with either an S-wave (0⁻⁺) or a P-wave (0⁺⁺) Breit-Wigner resonance function. For the S-wave fit, the mass lies below 2mₚ, at 1859 +3/−10 (stat) +5/−25 (syst) MeV/c², and the width is less than 30 MeV/c² at 90% confidence level. For the P-wave fit, the mass is 1876.4 ± 0.9 MeV/c² with a width of 4.6 ± 1.8 (stat) MeV/c². The acceptance-corrected photon angular distribution is consistent with that expected for a resonance with J^PC = 0⁻⁺ or 0⁺⁺.

Although there were indications of pp̄ mass-threshold enhancements in the J/ψ → γpp̄ process from earlier experiments, including Mark III at SLAC and DM2 at Orsay, the limited statistics of those data samples meant that no firm conclusions could be drawn. The high statistics acquired with BES-II, however, have excluded the interpretation of the effect as being due to known particles, such as the η(1760). They also allow the mass and width of the state to be well determined: below 2mₚ (S-wave) or at 2mₚ (P-wave). The mass and unexpectedly narrow width of the new state suggest it could be interpreted as a “deuteron-like” spin-0 proton-antiproton bound state (baryonium), with zero baryon number.

Laser spectroscopy tests for inconstant constants

Recent astrophysical measurements of distant quasar spectra indicate that the fundamental constants may be changing with time. The dimensionless fine-structure constant α, which sets the strength of electromagnetic interactions, might have been smaller at early times in the universe: the fractional difference compared with today’s value is of the order of 10⁻⁵. Assuming that the drift is linear, this corresponds to a change of around 10⁻¹⁵ per year.

In general, such astrophysical measurements probe a drift of the constants over extremely long periods of time. Laboratory experiments, on the other hand, are limited to timescales of a few years, but this can be compensated for by higher accuracy. The recent dramatic evolution of techniques for measuring the frequency of light, now precise to a few parts in 10¹⁵, means that laser spectroscopy of atoms and ions has reached a level of accuracy where a search for a drift of the constants is feasible.

In particular, a drift of α is effectively magnified in the drift of an atomic transition frequency measured against the SI second as realized by a caesium clock, leading to an estimated frequency drift at the level of 10⁻¹⁴ per year. Remeasuring atomic transition frequencies determined some years ago is therefore of great interest, and earlier this year researchers at MPQ in Munich carried out such an experiment, measuring the 1S–2S transition in atomic hydrogen.

In 1999 the absolute frequency of this transition was phase-coherently compared with a caesium fountain clock, serving as a primary frequency standard, using a novel frequency-comb technique. The second harmonic of light from an ultrastable continuous-wave dye laser emitting near 486 nm was used to excite cold hydrogen atoms, Doppler-free, from the ground state to the metastable 2S state. Selecting slow atoms by time-resolved spectroscopy reduced systematic effects, mainly the second-order Doppler shift (A Huber et al. 1999). The measurement demonstrated a precision of 1.9 parts in 10¹⁴ (M Niering et al. 2000).

Since then the techniques used for the frequency measurement, as well as the hydrogen spectrometer, have been improved, and data gathered during 12 days of measurement in February this year are now being evaluated. So far, eight days of data have been analysed; the preliminary result for the difference between the two measurements is −48(60) Hz. Assuming that the 1999 and 2003 measurements are equivalent, this implies a possible fractional drift of the 1S–2S frequency of −5.4(6.8) × 10⁻¹⁵ per year.
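
The quoted rate follows from dividing the frequency difference by the elapsed time and by the 1S–2S frequency itself (about 2.466 × 10¹⁵ Hz). A minimal sketch of the arithmetic, in which the 3.7-year interval between the two campaigns is an assumption inferred from the quoted numbers:

```python
# Rough reconstruction of the quoted drift, not the collaboration's analysis.
F_1S2S = 2.466e15   # 1S-2S transition frequency in Hz (known value)
delta_f = -48.0     # measured 2003 - 1999 frequency difference, Hz
sigma_f = 60.0      # its uncertainty, Hz
years = 3.7         # assumed interval between the 1999 and 2003 campaigns

drift = delta_f / years / F_1S2S   # fractional drift per year
sigma = sigma_f / years / F_1S2S
print(f"fractional drift: {drift:.1e} +/- {sigma:.1e} per year")
# -> roughly -5e-15 +/- 7e-15 per year, consistent with the quoted value
```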

Gamma-ray burst supports hypernova hypothesis

“He who seeks finds.” After decades of speculation on the nature of gamma-ray bursts (GRBs), the very bright event known as GRB030329 is finally unveiling their origin. This burst is a “Rosetta stone” for scientists, revealing for the first time that a GRB and a supernova – the two most energetic explosions known in the universe – occur simultaneously. Thanks to this telling burst, a team of astronomers has pieced together the key elements of the so-called “collapsar” model of long-duration GRBs, from star death to the dramatic birth of a black hole.

NASA’s High-Energy Transient Explorer satellite (HETE-II) first detected the burst on 29 March 2003 (at 11:37:14 UTC) in the constellation Leo. For more than 30 seconds the burst outshone the entire universe in gamma rays. Its unusual brightness triggered an unprecedented hunt for optical observations around the world. Within 24 hours a first, very detailed spectrum of the burst’s optical afterglow was obtained with the Very Large Telescope (VLT) of the European Southern Observatory (ESO) at the Paranal Observatory in Chile. The spectrum shows a redshift of 0.1685, corresponding to a distance of 2650 million light-years. This is nearby for a GRB, and explains why this burst is among the brightest 0.2% of bursts ever recorded. It provides the long-awaited opportunity to test the many hypotheses and models proposed since the discovery of the first GRBs in the late 1960s.
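
At such a modest redshift the distance can be cross-checked with the simple Hubble-law estimate d ≈ cz/H₀; the Hubble constant below is an assumed value, not one from the article, and the quoted 2650 million light-years comes from a full cosmological calculation:

```python
# Naive Hubble-law distance for GRB030329; a sketch, not the published analysis.
C = 2.998e5         # speed of light, km/s
H0 = 70.0           # assumed Hubble constant, km/s/Mpc
MPC_TO_MLY = 3.262  # million light-years per megaparsec

z = 0.1685
d_mpc = C * z / H0  # distance in Mpc, valid only for z << 1
print(f"{d_mpc * MPC_TO_MLY:.0f} million light-years")
# -> ~2350 Mly; the quoted 2650 Mly follows from a full cosmological model
```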

The afterglow of GRB030329 lingered for weeks at lower-energy X-ray and visible wavelengths, allowing continued spectral observations with the VLT over a period of one month. The results were published in Nature (J Hjorth et al. 2003) by members of the Gamma-Ray Burst Afterglow Collaboration (GRACE) at ESO. According to the group of 27 researchers from 17 institutes, the spectral changes of the fading source give irrefutable evidence of a direct connection between the GRB and the “hypernova” explosion of a very massive, highly evolved star. This is based on the gradual emergence of a supernova-type spectrum, revealing the extremely violent explosion of a star. With velocities well in excess of 30,000 km/s (10% of the velocity of light), the ejected material is moving at record speed, testifying to the enormous power of the explosion.

Hypernovae are rare events, probably caused by the explosion of stars of the Wolf-Rayet type. These WR stars were originally formed with a mass greater than 25 solar masses, consisting mostly of hydrogen. During their WR phase, having stripped themselves of their outer layers, they consist almost purely of helium, oxygen and heavier elements, produced by intense nuclear burning during the preceding phase of their short life. Such a dense star of about 10 solar masses rapidly depletes its fuel, triggering a Type Ic supernova/GRB event. The core collapses, without the star’s outer part “knowing”. A black hole forms inside, surrounded by a disc of accreting matter, and within a few seconds a jet of matter is launched away from the black hole that ultimately makes the GRB. The jet passes through the outer shell of the star and, in conjunction with vigorous winds of newly forged radioactive nickel-56 blowing off the disc inside, shatters the star. The GRACE team say this “collapsar” model, introduced by Stan Woosley of the University of California in 1993, best explains the observations of GRB030329.

“We’ve been waiting for this for a long, long time,” said lead author Jens Hjorth. “This GRB gave us the missing information. From these detailed spectra we can now confirm that this burst, and probably other long GRBs, are created through the core collapse of massive stars. Most other leading theories are now unlikely.”

As Stan Woosley, one of the co-authors, points out, this does not mean the GRB mystery is completely solved: “we cannot reach any conclusion yet on what causes short GRBs.” Many astronomers think these bursts, lasting less than two seconds, might be caused by neutron-star mergers, but they are still waiting for their “Rosetta stone” burst.

Has the deconfinement phase transition been seen?

In the mid-1990s a study of results from experiments at CERN’s Super Proton Synchrotron (SPS), with collision energies in the centre of mass of the nucleon pair of √s_NN ≤ 20 GeV, and at the Alternating Gradient Synchrotron (AGS) at Brookhaven (√s_NN = 5.5 GeV) indicated some intriguing changes in the energy dependence of hadron production between top AGS and SPS energies (M Gazdzicki and D Röhrich 1996). Within a statistical model of the early stage of the collision process, these changes could be attributed to the onset of the deconfinement phase transition, in which quarks and gluons are no longer confined within hadrons (M Gazdzicki and M Gorenstein 1999). The model predicted a sharp maximum in the multiplicity ratio of strange hadrons (hadrons that contain strange or anti-strange quarks) to pions (the lightest hadrons) at the beginning of the transition region, at about √s_NN ≈ 7.5 GeV. This prediction triggered a new experimental programme at the SPS – the energy-scan programme – in which the NA49 experiment recorded head-on (central) collisions of two lead nuclei (Pb+Pb) at several energies: √s_NN = 6.3, 7.6, 8.7 and 12.3 GeV. Other heavy-ion experiments at the SPS (NA45, NA50, NA57 and NA60) participated in selected runs of the programme.

Recently published results from the energy scan, obtained mainly by the NA49 collaboration, have confirmed these expectations. They indicate that rapid changes in hadron-production properties occur within a narrow energy range, √s_NN = 7-12 GeV (V Friese et al. 2003). The “Collision energy dependence” figure shows these latest results, together with earlier data from the SPS and the AGS, and data from the Relativistic Heavy-Ion Collider (RHIC). Data from proton-proton collisions are also included for comparison. The top panel of the figure shows that the number of pions produced per nucleon participating in the collision increases with energy, as expected, in both proton-proton and nucleus-nucleus reactions. However, the rate of increase in nucleus-nucleus collisions becomes larger within the SPS energy range and then stays constant up to the RHIC domain.

The most dramatic effect, shown in the middle panel of the figure, is seen in the energy dependence of the ratio ⟨K⁺⟩/⟨π⁺⟩ of the mean multiplicities of K⁺ and π⁺ produced in central Pb+Pb collisions. Following a fast threshold rise, the ratio passes through a sharp maximum in the SPS range and then seems to settle to a lower plateau value at higher energies. Kaons are the lightest strange hadrons, and K⁺ mesons account for about half of all the anti-strange quarks produced in the collisions. Thus, the relative strangeness content of the produced matter passes through a sharp maximum at the SPS in nucleus-nucleus collisions. This feature is not observed in proton-proton reactions.

A third important result is the constant value of the apparent temperature of K⁺ mesons in central Pb+Pb collisions at SPS energies, as shown in the bottom panel of the figure. The plateau at SPS energies is preceded by a steep rise of the apparent temperature measured at the AGS, and is followed by a further increase indicated by the RHIC data. Very different behaviour is measured in proton-proton interactions.

So far, only the statistical model of the early stage reproduces the sharp maximum and the following plateau in the energy dependence of the ⟨K⁺⟩/⟨π⁺⟩ ratio. In this model, the spike reflects the decrease in the number ratio of strange to non-strange degrees of freedom and changes in their masses when deconfinement sets in. Moreover, the observed steepening of the increase in pion production is consistent with the expected excitation of the quark and gluon degrees of freedom.

Finally, in the fireball of particles created in the collision, the apparent temperature is related to the thermal motion of the particles and their collective expansion velocity. Collective expansion effects are expected to be important only in heavy-ion collisions, as they result from the pressure generated in the dense interacting matter. The stationary value of the apparent temperature of K⁺ mesons may thus indicate an approximate constancy of the early-stage temperature and pressure in the SPS energy range due to the coexistence of hadronic and deconfined phases.

These results suggest that the deconfinement phase transition (and thus the quark-gluon plasma) exists in nature, and that in Pb+Pb collisions it begins to occur in the SPS energy range. From the composition of hadrons resulting from the decay of the fireball, the temperature at which the transition takes place can be estimated to be T ≅ 2 × 10¹² K (170 MeV), coinciding with the limiting temperature of hadrons suggested at CERN many years ago by Rolf Hagedorn.
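
The conversion between the two temperature units is simply a division by the Boltzmann constant:

```python
# Convert the transition temperature from MeV to kelvin: T[K] = E / k_B.
K_B = 8.617e-11   # Boltzmann constant in MeV per kelvin
T_MEV = 170.0     # estimated transition temperature in MeV

print(f"T = {T_MEV / K_B:.1e} K")   # -> about 2.0e12 K, as quoted
```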

The observation of anomalies in the energy dependence of hadron production in Pb+Pb collisions in the SPS energy range requires further study. Analysis of data taken last year continues in search of further phenomena caused by the deconfinement phase transition, such as anomalies in the event-by-event fluctuations expected in the vicinity of the second-order critical end-point (M Stephanov, K Rajagopal and E Shuryak 1999). In future, it would be interesting to extend measurements of the energy dependence to central collisions of light nuclei as well as to proton-proton and proton-nucleus interactions. Such measurements should significantly constrain models of the collision process and, in particular, help us to understand the role played by the volume of the droplet of strongly interacting matter in determining the onset of the deconfinement phase transition.

Deuteron-gold collisions clarify ‘jet quenching’ results

Since it began operation three years ago, the US Department of Energy’s Relativistic Heavy-Ion Collider (RHIC) has produced an array of data that are rapidly shedding new light on unexplored territory in high-energy nuclear collisions. Results from the first gold-gold collisions at the new collider, recorded during the summer of 2000, immediately showed that the essential trend seen in fixed-target experiments at the Brookhaven AGS and CERN SPS continues as the collision energy is increased by an order of magnitude. Specifically, the concentration of energy deposited in the volume of space occupied by the colliding nuclei (the energy density) steadily increases with increasing collision energy. As a result, the multiplicity of particles produced in the most violent of the RHIC collisions is larger than any previously seen in subatomic interactions.

The early results have given clear indications that the origin of these particles involves extremes of density and temperature that are well into the range where the relevant degrees of freedom for nuclear interactions are expected to be those of quarks and gluons, not nucleons and mesons. Now it appears that measurements of high-energy phenomena due to the scattering of quarks and gluons in collisions of heavy nuclei have provided an important new means for probing the realm of the predicted quark-gluon plasma.

The RHIC collision energy is high enough to produce direct scattering of quarks and gluons from the incoming nuclei. In this “hard scattering” – in the parlance of quantum chromodynamics (QCD) – a single pair of partons (quark, antiquark, gluon) from the incoming nuclei strike each other directly with such force that they scatter with high momentum away from the initial beam direction. These interactions, which are relatively rare even in the highest-energy collisions, give rise to localized sprays of energetic particles called “jets”. These jets of hadrons are highly collimated along the axis of the initially scattered parton, and characteristically carry large components of momentum transverse to the axis of the colliding nuclei. Thus, while the average transverse momentum (p_T) of hadrons produced in nuclear collisions is a few hundred MeV/c, hard-scattering processes in very high-energy collisions give rise to a small tail in the p_T distribution that can extend out to tens of GeV/c.

Hard-scattering processes are well known in high-energy collisions of elementary particles, such as proton-proton collisions. Their observation was one of the early, compelling arguments for the existence of quark sub-structure in hadrons. By measuring the properties and momenta of the particles in a jet, one can reconstruct the kinematic and quantum properties of the initially scattered parton, and the measurements can be compared with readily calculable predictions of QCD.

These processes can now be seen at RHIC for the first time in nuclear collisions. They provide a direct signal of high-energy quarks or gluons emerging from the initial collision stage. Significantly, the early RHIC data from gold-gold collisions showed a deficit of high transverse-momentum particles from jets in collisions where the highest total number of particles is produced – that is, in the most violent collisions, where the evidence indicates that hot matter is formed. This effect, dubbed “jet quenching”, is one of the most striking indicators of possible new physics in these collisions.

It may be that the observed deficit of high-energy jets in these collisions is the result of a slowing down, or quenching, of the most energetic quarks as they propagate through a newly formed medium consisting of a dense quark-gluon plasma. If this is the case, then these measurements can provide a quantitative means of determining the properties of the primordial matter, in effect providing a direct probe of the plasma with beams of energetic partons.

First, however, it is important to verify this energy-loss interpretation of the observed jet quenching in gold-gold collisions. Recent theoretical work has conjectured that in very high-energy nuclear interactions the initial-state density of partons (mostly gluons) becomes so high that the effective number of interacting particles in the collision saturates, limiting the number of hard-scattering events. Thus, another possible interpretation of the paucity of jets might simply be that the wavefunction of a nucleus during a high-energy collision is significantly different from that of a superposition of nucleons.

Whether the observed jet quenching is the result of initial-state saturation effects or of energy loss in a dense final-state medium can be checked experimentally by colliding a nucleon with a nucleus and looking for a difference relative to nucleon-nucleon collisions. Effects due to initial-state saturation, which are intrinsic to the properties of the nucleus, will appear in these collisions of a small probe with a heavy nucleus, whereas those due to energy loss in a dense medium, which should only be produced after the collision of two heavy nuclei, will not. To provide this comparison, RHIC carried out a two-month programme of deuteron-gold collisions, beginning in March 2003, with each beam accelerated to 100 GeV/nucleon (as in the gold-gold collisions).

In the first results from this run, all four of the RHIC experiments (BRAHMS, PHENIX, PHOBOS and STAR) produced data showing no indication of suppression at large transverse momenta for deuteron-gold collisions, clearly indicating that the initial-state effects are small, and the suppression effect observed at large transverse momentum in gold-gold collisions is indeed due to jet energy loss. This result is strikingly illustrated by the back-to-back correlation data from STAR (see figure). A recoil jet peak is present in deuteron-gold collisions, as it is in proton-proton collisions, but is suppressed in the gold-gold data.

The data analysed so far at RHIC give convincing evidence that high-energy collisions of heavy nuclei do indeed trigger the production of a hot, dense medium of final-state particles, characterized by strong collective interactions at very high energy densities. More needs to be done to determine the essential properties of this matter, but these latest results provide a major step toward unveiling the long-sought quark-gluon plasma.

On the trail of dark energy

Cosmology has recently achieved its version of a standard model, called the “cosmic concordance”. This gives a broad picture of the components of the universe within the strongly tested framework of the hot Big Bang model. Of these components, only about 4% corresponds to the familiar baryons of the Standard Model of particle physics, and even some of these are “dark”, not evident directly from the light of distant objects. Another 20-25% is nonbaryonic dark matter, presumably either weakly interacting massive particles or axions, theorized elements of high-energy physics. But the majority of the energy density, some 70-75%, is detected only through its effect of accelerating the global expansion of the universe. This background energy, which is smooth on scales larger than those of any matter structures such as clusters of galaxies, is named “dark energy”.

Dark energy was first discovered in 1998 by two groups using supernovae as markers of cosmological distance as a function of time: the Supernova Cosmology Project, led by Saul Perlmutter at Lawrence Berkeley National Laboratory, and the High-z Supernova Search Team, led by Brian Schmidt at the Australian National University. Measurements indicated that distant supernovae were dimmer than expected from the cosmological inverse-square law in a universe dominated by matter (S Perlmutter et al. 1999, A Riess et al. 1998). That is, they appeared to be further away than expected from the expansion rate of the universe if gravitation due to the matter content were the dominant influence. Some form of dark energy was required at the 99% confidence level, and in amounts sufficient to counteract, on cosmic scales, the gravitational attraction of the clustered matter.

Since then, deeper and more precise supernova measurements and further lines of evidence have confirmed this conclusion (J Tonry et al. 2003, R Knop et al. 2003, D Spergel et al. 2003). Detailed measurements of the cosmic microwave background power spectrum, by the Wilkinson Microwave Anisotropy Probe satellite and by ground-based experiments, also imply the presence of dark energy. They show, too, that the spatial geometry of the universe is consistent with the flatness prediction of inflation. But observations of galaxy clusters tell us that the matter contribution to the total energy density can amount to only 20-30% of the needed critical density. Any two of these three lines of evidence imply that dark energy makes up roughly three-quarters of the energy density of the universe, while the third provides a crosscheck. Such an amount of dark energy acts to accelerate the cosmic expansion.

The nature of dark energy

While gravitation due to matter or radiation is attractive, a sufficiently negative pressure p would offset a positive energy density ρ to give repulsive gravity under Einstein’s equations (the gravitating density depends on ρ+3p), pulling on space to accelerate the expansion of the universe. Researchers often discuss this in terms of the equation of state ratio of the pressure to energy density: w = p/ρ.
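
In these terms, the gravitating density ρ + 3p = ρ(1 + 3w) turns negative, and the expansion accelerates, for any w < −1/3. A minimal illustration:

```python
# Sign of the gravitating density rho + 3p = rho * (1 + 3w) for unit energy density.
# Negative values mean repulsive gravity, hence accelerated expansion.
for name, w in [("matter", 0.0), ("radiation", 1/3),
                ("quintessence", -0.8), ("cosmological constant", -1.0)]:
    gravitating = 1.0 + 3.0 * w   # rho + 3p in units of rho
    verdict = "accelerates" if gravitating < 0 else "decelerates"
    print(f"{name:22s} w = {w:+.2f}  rho+3p = {gravitating:+.2f}  -> {verdict}")
```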

Negative pressures are not a wholly exotic phenomenon. After all, one of the equations of the expansion of the universe, following from the Friedmann equations, looks remarkably similar to the first law of thermodynamics: d(ρV) = −p dV, where V is the volume considered. A negative pressure flips the overall sign, turning this equation into something that looks like the tension in a spring or rubber band. Such a “springiness” of space was postulated soon after Einstein developed the general theory of relativity, in his cosmological constant term, and Hermann Weyl attempted to link such a background energy to the quantum vacuum. If the vacuum is a true ground state then all observers must agree on its form, and the only Lorentz-invariant energy-momentum tensor is proportional to the diagonal Minkowski tensor, with negative pressure equal and opposite to its energy density; that is, the cosmological constant has equation-of-state ratio w = −1. This would cause an accelerating universe.

So why are cosmologists not satisfied with identifying the cosmological constant with dark energy? In The Hunting of the Snark – the poem by Lewis Carroll, who was in fact Charles Dodgson, a mathematician at Oxford – when the explorers set sail to find the mysterious snark, the captain “had bought a large map representing the sea, without the least vestige of land: and the crew were much pleased when they found it to be a map they could all understand”. The cosmological constant term is such a featureless sea, but there are two problems with using it to describe our universe. The expected sea level for the quantum vacuum is much higher than we observe: naively one should indeed have a featureless universe, with matter drowned 120 orders of magnitude below the energy density of the cosmological constant. Yet the cosmic concordance measures only a factor of a few difference. Furthermore, the matter and radiation we see in the universe evolve with the expansion of the universe, while the cosmological constant does not. Even an order-of-magnitude equality between them occurs in only one characteristic timescale (e-folding) out of the 23 in the expansion of the universe since the well-understood epoch of nuclei formation in the early universe. (See S Weinberg 1989 and S Carroll 2001 for more on these fine-tuning and coincidence puzzles.)

Hunting the dark energy

Researchers are thus driven to consider other explanations for the dark energy. Models with dynamical high-energy physics fields, often called “quintessence” when involving a simple scalar field, go some way toward alleviating the timing or coincidence puzzle, though there is still no clear underlying theory explaining the current effective energy density. Such a field would need an effective mass of 10⁻³³ eV, that is, a Compton wavelength of the order of the radius of the universe. Nevertheless, there is a rich phenomenology stretching back two decades (longer if scalar-tensor theories of gravitation are included). An early high-energy physics model was proposed by Andrei Linde in 1986, demonstrating how a linear potential could give rise to accelerating expansion. On the cosmology side, Robert Wagoner in 1986 examined how a general equation-of-state component would not only affect the expansion, but could be observationally probed with cosmological distance and age measurements.

Both the modelling of dark energy and the investigation of its observational consequences are now active industries within cosmology, covering a wide variety of the physics of the early and late universe. In general, the dark-energy equation of state will vary with time, and so needs to be probed with observations over a range of epochs, or astronomical redshifts z (the fractional difference in the scale of the universe today relative to an earlier time). The major challenge in cosmology over the next few years will be to characterize the equation-of-state function w(z). On the phenomenology front, one might hope for a natural, robust model to emerge, but the theorists’ prolificacy seems too great for the question to settle that way. Indeed, models beyond scalar fields, involving modifications of general relativity, extra dimensions or quantum-phase transitions, have also been proposed. Fortunately these can be written in terms of an effective w(z) (E Linder and A Jenkins 2003) and subjected to cosmological measurements.

Three main routes to probing dark energy exist in cosmology. The first, and currently most favoured, involves mapping the expansion history of the universe. The second seeks to measure the growth rate of the formation of large-scale structures such as clusters of galaxies. The third involves the cosmic microwave background radiation – looking not for the time variation of the dark energy (since the cosmic microwave background photons effectively all come from the same redshift), but for the subtle spatial fluctuations in the dark-energy distribution on cosmic scales. Observations of Type Ia supernovae, through which dark energy was first discovered in 1998, fall into the first category and seem the most promising. The second and third approaches are likely to run into limits imposed, respectively, by uncertainties involving entangled astrophysics and by cosmic variance (the intrinsic uncertainty due to observing only one universe). However, new methods and cross-correlations between probes may eventually make them practical.

In mapping the expansion history, cosmologists probe the deceleration due to the gravitation of matter and the acceleration due to dark energy at various epochs. Variations in the growth of distances reveal a picture of the cosmic environment, and hence the dynamic influence of dark energy, in the way that the width of tree rings indicates the Earth’s climatic environment over time. Type Ia supernovae can be seen to great distances and calibrated in luminosity (made “standard candles” through detailed observations). Thus the measured flux directly indicates a supernova’s distance, and hence the time in the past it exploded, while the redshift of its photons gives the ratio of the size of the universe now relative to then, 1 + z. Together these trace out the expansion history.
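
Concretely, for a flat universe the luminosity distance follows from the expansion history as d_L = (1 + z) c ∫₀ᶻ dz′/H(z′), with H(z)² = H₀²[Ω_m(1 + z)³ + Ω_DE(1 + z)^(3(1+w))] for a constant equation of state. A numerical sketch (the parameter values are illustrative assumptions, not fitted results):

```python
import numpy as np

def luminosity_distance(z, w=-1.0, om=0.3, h0=70.0):
    """Luminosity distance in Mpc for a flat universe with constant-w dark energy."""
    c = 2.998e5                                                # speed of light, km/s
    zs = np.linspace(0.0, z, 2001)
    hz = h0 * np.sqrt(om * (1 + zs) ** 3                       # matter term
                      + (1 - om) * (1 + zs) ** (3 * (1 + w)))  # dark-energy term
    comoving = np.sum(1.0 / hz) * (zs[1] - zs[0])              # integral of dz / H
    return (1 + z) * c * comoving

# Dimming at z = 0.5: a dark-energy universe puts supernovae farther away
# (hence fainter) than a matter-only one - the signature seen in 1998.
for label, w, om in [("w = -1, Om = 0.3", -1.0, 0.3), ("matter only", 0.0, 1.0)]:
    print(f"{label:18s} d_L = {luminosity_distance(0.5, w, om):.0f} Mpc")
```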

Future endeavours

The best current supernova data extend out only to redshift z ≅ 1 (when the scale of the universe was 1/(1+z) = 1/2 its current size) with any reasonable statistics, but they already constrain the averaged equation-of-state ratio to w = −1.05 +0.15/−0.20 (R Knop et al. 2003) or w = −1.0 +0.14/−0.24 (J Tonry et al. 2003). Clues to the underlying physical theory, however, reside in the dynamics, the time-varying function w(z). A dedicated dark-energy mission, the Supernova/Acceleration Probe (SNAP) satellite, is being designed to determine the present value w₀ to 7% and the derivative w′ = dw/dz to ±0.15 (1σ, including both statistical and systematic uncertainties). Led by Michael Levi and Saul Perlmutter of the Lawrence Berkeley National Laboratory, the project involves over 100 scientists and engineers from more than 15 institutions, including institutions in France and Sweden. Launch is proposed for 2010.

Meanwhile, an intense research effort continues. One example is the European Dark Energy Network (EDEN), a proposed European Union research training network of 13 nodes (including CERN, led by Gabriele Veneziano), coordinated by Pedro Ferreira of Oxford. Models attempt to link dark energy to dark matter, extra dimensions, modifications of gravity and a zoo of simple and non-minimally coupled scalar fields. These predict a range of values for the equation-of-state ratio w₀, within the current constraints, and a wholly open variety of w′, both positive and negative. Some even lead to an eventual reversal of the acceleration and a collapse of the universe. It is amusing that the first dark-energy model, the linear potential, possesses this quality. Future data will constrain the allowed parameters of classes of high-energy physics models and the fate of the universe, including how long we have left until a cosmic doomsday! (See R Kallosh et al. 2003 for the linear potential case leading to a Big Crunch and R Caldwell et al. 2003 for a Big Rip.)

Can signs of the nature of dark energy be uncovered at particle accelerators? It is difficult to see how. The energy scale of the physics is presumably of the order of 10¹⁶ GeV, and by its “dark” nature the coupling to matter is vanishingly small. On scales smaller than the universe, the dynamical effect of dark energy is negligible. The entire dark-energy content within the solar system equals that of three hours of solar luminosity. Perhaps if the physics involves modifications of gravity or extra dimensions, a precise laboratory test could see a signature (see E Adelberger et al. 2003 for a current experiment). But the true hunting grounds for the nature of dark energy and the physics causing the acceleration of the universe lie in cosmology. Just as advances have been made in the past two decades in theory and observations beyond the simplistic view of early-universe inflation as a pure de Sitter phase – “sea without the least vestige of land” – so too will dark-energy studies delve deeper into fundamental physics. Instruments now being designed could tell us within the next decade whether we must come to grips with a minuscule but finite cosmological constant or some exciting new dynamical physics.

mSUGRA celebrates its 20th year

The invention of minimal supergravity grand unification – mSUGRA – had a profound influence on the phenomenology of supersymmetry, and mSUGRA is now a leading candidate for yielding new physics beyond the Standard Model. A current assessment of mSUGRA in the search for unification and supersymmetry was the focus of the SUGRA20 conference, held on 17-20 March at Northeastern University in Boston, where mSUGRA first evolved 20 years ago.

In supersymmetry, each particle has a superpartner – a sparticle – with a spin that differs by half a unit. The particles and sparticles should have the same mass, for example the mass of a quark should be equal to that of its superpartner, the squark, but this is contrary to observation. A mechanism for breaking supersymmetry is therefore crucial if theories that include supersymmetry are to confront experiment.

Models based on so-called global or rigid supersymmetry lead to a pattern of sparticle masses that is also in contradiction with experiment – for example, a squark mass may lie below the quark mass. They also yield a cosmological constant in gross violation of observation. However, both these obstacles are removed in supergravity grand unification and its minimal version, mSUGRA, which was first formulated by Ali Chamseddine, Richard Arnowitt and Pran Nath at Northeastern University in 1982 (Chamseddine et al. 1982).

The framework of supergravity grand unification is so-called applied supergravity, where matter (quarks, leptons and Higgs particles) is coupled with supergravity and the potential of the theory is not positive definite. The breaking of supersymmetry in mSUGRA takes place through a “super Higgs” effect, in which the massless gravitino, the spin-3/2 partner of the graviton, becomes massive by “eating” the spin-1/2 component of a chiral super Higgs multiplet. This is a phenomenon akin to the Higgs-Kibble mechanism, through which the W boson gains mass by absorbing the charged component of a Higgs doublet in the Glashow-Salam-Weinberg model.

mSUGRA has an ingenious mechanism to protect the electroweak scale from “pollution” by the high-energy scales of the Planck mass M_Planck (2.4 × 10¹⁸ GeV) and the grand unification mass M_GUT (2 × 10¹⁶ GeV). In mSUGRA, supersymmetry breaking occurs in a hidden sector and is communicated by gravitational interactions to the physical sector, where physical fields such as leptons, quarks, Higgs bosons and their superpartners reside (see figure 1). Since the vacuum energy of the theory is not positive definite, it is possible to fine-tune the vacuum energy to zero (or nearly zero) after the spontaneous breaking of supersymmetry, and so avoid any contradiction with experiment. Further, as a consequence of the communication between the hidden and physical sectors, soft breaking terms arise in the physical sector. These give masses to sparticles and generate non-vanishing trilinear couplings among scalar fields. Thus, for example, the squarks and selectrons gain masses of the order of the electroweak scale and fall within reach of colliders such as the Tevatron at Fermilab and the Large Hadron Collider (LHC) at CERN.

A remarkable aspect of the hidden-sector/physical-sector mechanism is that the mass generation in the physical sector does not involve terms of the size of M_Planck – which is fortunate given the large size of M_Planck. A similar result was found by Riccardo Barbieri of Pisa, Sergio Ferrara of CERN and Carlos Savoy of Saclay, who also achieved soft breaking through the hidden-sector mechanism (Barbieri et al. 1982). Equally remarkable is the result found by Chamseddine, Arnowitt and Nath that the grand unification scale M_GUT cancels in the computation of the soft parameters (Chamseddine et al. 1982, Nath et al. 1983). The soft parameters are thus shielded effectively from the high-energy scales of M_Planck and M_GUT. There are many later analyses in which grand unification within supergravity has been discussed in further detail (Hall et al. 1983, Nilles 1984). In mSUGRA, universality of the soft parameters leads to a suppression of flavour-changing neutral currents that is compatible with experiment. Furthermore, the mSUGRA model can easily be generalized to include non-universalities in certain sectors of the theory while maintaining consistency with experiment.

mSUGRA provides a dynamical explanation of the electroweak symmetry breaking that splits the weak nuclear force from electromagnetism and gives mass to the W and Z bosons. In the Standard Model this is done by giving a negative squared mass to the Higgs field, which can be considered contrived. In mSUGRA the breaking of supersymmetry naturally triggers the breaking of electroweak symmetry and leads to predictions of sparticle masses lying in the 100 GeV-1 TeV energy range.

The SUGRA20 conference opened with talks that looked at the current and future prospects for experimental tests of mSUGRA. Xerxes Tata of Hawaii discussed the constraints on the sparticle masses from various experiments, including the recent Brookhaven measurement of the muon g−2. Speakers in several other talks pointed out that the most direct tests of mSUGRA and other competing models will come in accelerator experiments at Run II of the Tevatron, at the LHC and at the Next Linear Collider (NLC). Such tests for the Tevatron were outlined by Michael Schmitt of Northwestern, while Frank Paige from Brookhaven National Laboratory and Stephno Villa of the University of California, Irvine, discussed the possibilities for the ATLAS and CMS detectors at the LHC. Richard Arnowitt from Texas A&M discussed similar tests for the NLC.

mSUGRA also possesses the remarkable feature that it provides a natural candidate – the so-called neutralino – for cold dark matter in the universe. The talks by Howard Baer of Florida and Keith Olive of Minnesota revealed that the predictions of cold dark matter in mSUGRA and its extensions are consistent with the most recent data from the satellite experiment, the Wilkinson Microwave Anisotropy Probe. David Cline from UCLA later outlined future dark-matter experiments (GENIUS, ZEPLIN) to test mSUGRA and other competing models.

There were also talks in several areas complementary to the main theme of the conference. Mary K Gaillard of Berkeley discussed the connection of SUGRA models to strings, while the idea of conformal quiver gauge theories with a novel type of grand unification at about 4 TeV was explained by Paul Frampton of North Carolina. Other more theoretical ideas included talks on strong gravity by Ali Chamseddine from Beirut, on M theory by Michael Duff of Michigan, and on non-commutative geometry by Bruno Zumino from Berkeley.

Northeastern University, as a key player in the birth of mSUGRA 20 years ago, provided an ideal location for SUGRA20. While mSUGRA remains only a model, more than 100 participants at the conference expressed optimism that future experimental data may convert it from a theoretical model to an established theory.

The tale of the Hagedorn temperature

Collisions of particles at very high energies generally result in the production of many secondary particles. When first observed in cosmic-ray interactions this effect was unexpected, but it led to the idea of applying the wide body of knowledge of statistical thermodynamics to multiparticle production processes. Prominent physicists such as Enrico Fermi, Lev Landau and Isaak Pomeranchuk made pioneering contributions to this approach, but difficulties soon arose and it did not initially become the mainstream approach to the study of particle production. It was natural, however, for Rolf Hagedorn to turn to the problem.

Hagedorn, who died earlier this year, had an unusually varied educational and research background, which included thermal, solid-state, particle and nuclear physics. His initial work on statistical particle production led to his prediction, in the 1960s, of particle yields at the highest accelerator energies of the time, at CERN’s Proton Synchrotron. Though there were few clues on how to proceed, he began by making the most of the “fireball” concept, which was then supported by cosmic-ray studies. In this approach all the energy of the collision was regarded as being contained within a small space-time volume from which particles were radiated, as from a burning fireball.

Several key observations from early experiments helped him to refine the approach. The most notable was the limited transverse momentum of the overwhelming majority of the secondary particles. Also, the elastic-scattering cross-section at large angles was found to drop exponentially as a function of incident energy. Such behaviour strongly suggested an inherently thermal momentum distribution.

However, many objections were raised in these pioneering days of the early 1960s. What might actually be “thermalized” in a high-energy collision? Applying straightforward statistical mechanics gave too small a yield of pions. Moreover, even if there was a thermalized system in the first place, why was the apparent temperature constant? Should it not rise with incident beam energy?

It is to Hagedorn’s great credit that he stayed with his thermal interpretation, solving the problems one after the other. His particle-production models turned out to be remarkably accurate at predicting yields for the many different types of secondaries that originate in high-energy collisions. He understood that the temperature governing particle spectra does not increase, because as more and more energy is poured into the system, new particles are produced; it is the entropy that increases with the collision energy. If the number of hadron states of a given mass (the mass spectrum) increases exponentially with mass, the temperature becomes stuck at a limiting value. This is the Hagedorn temperature T_H. It is nearly 160 MeV, about 15% above the mass of the lightest hadron, the pion.
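
The origin of the limiting temperature can be sketched in a few lines: with an exponential mass spectrum ρ(m) ∝ exp(m/T_H), the Boltzmann factor exp(−m/T) can no longer suppress heavy states once T exceeds T_H, and the partition function diverges:

```python
# Growth rate of the partition-function integrand exp(m/T_H - m/T) with mass m.
T_H = 0.16                     # Hagedorn temperature, GeV
for T in (0.14, 0.16, 0.18):   # below, at, and above T_H
    rate = 1.0 / T_H - 1.0 / T
    if rate < 0:
        verdict = "integrand dies off: ordinary thermodynamics"
    elif rate == 0:
        verdict = "marginal: the limiting temperature itself"
    else:
        verdict = "integrand blows up: no equilibrium above T_H"
    print(f"T = {T:.2f} GeV -> {verdict}")
# Energy pumped in above T_H makes new heavy states instead of raising T.
```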

Since the more massive resonances eventually fragment into less massive ones to yield the observed secondary particles as the “bottom line”, this solved the problem of the pion yield. The factor 1/n!, which originated in the quantum indistinguishability of identical particles, had plagued the statistical calculations that focussed on pions only. Now it had become unimportant as each one of the many states was unlikely to have a population, n, exceeding 1. At long last agreement between experiment and statistical calculations prevailed.

The statistical bootstrap model

Once the physical facts had been assembled, Hagedorn turned his attention to improving their theoretical interpretation, and in considering the experimental finding that the formation of resonances dominates the scattering cross-section, he proposed the statistical bootstrap model (SBM). In a nutshell, in the SBM each of the many resonant states into which hadrons can be excited through a collision is itself a constituent of a still heavier resonance, whilst also being composed of lighter ones. In this way, when compressed to its natural volume, a matter cluster consisting of hadron resonances becomes a more massive resonance with lighter resonances as constituents, as shown in figure 1.

One day in 1964, one of us (TE) ran into Hagedorn, who was bubbling over to a degree we had not seen before. His eyes were lit up as he described all these fireballs: fireballs going into fireballs living on fireballs forever and all in a logically very consistent way. This must have been soon after he had invented the statistical bootstrap. He gave the impression of a man who had just found the famous philosophers’ stone, and that must have been exactly how he felt about it. Clearly Hagedorn immediately recognized the importance of the novel idea he had introduced. It was very interesting to observe how deeply he felt about it from the very beginning.

Using the SBM approach for a strongly interacting system, Hagedorn obtained an exponentially rising mass spectrum of resonant states. Today, experimental results on hadronic level counting reveal almost 5000 catalogued resonances. They agree beautifully with the theoretical expectations of the SBM, and as our knowledge has increased, the observed mass spectrum has become a better exponential, as illustrated in figure 2. The solid blue line in figure 2 is the exponential fit to the present-day smoothed hadron mass spectrum, which is represented by the short-dashed red line. Note that Hagedorn’s long-dashed green line of 1967 was already a remarkably good exponential. One can imagine that the remaining deviation at high mass, in the top right corner of the figure, originates in the experimental difficulty of discovering all these states.

The important physics message of figure 2 is that the rising slope in the mass spectrum is the same as the falling slope of the particle momentum spectra. The momentum spectra originate in the thermalization process and thus in reaction dynamics; the mass spectrum is an elementary property of the strong interactions. The SBM provides an explanation of the relationship between these slopes, and explains why the temperature is bounded from above. Moreover, since the smallest building block of all hadronic resonances is the pion, within the SBM one can also understand why the limiting temperature is of the same magnitude as the smallest hadron mass, T_H ≈ m_π.

Today the Hagedorn temperature T_H is like a brand name, and the concept of an exponentially rising mass spectrum is part of our understanding of hadron phenomena, which can also be approached in ways different from the SBM, such as that offered by dual models. However, when first proposed the SBM was looked upon with considerable scepticism, even within the CERN Theory Division where Hagedorn worked. As time has gone by, the understanding of the particle-production process that Hagedorn brought about has grown in significance – the sign of truly original work, of something that really influenced our thinking. Hagedorn’s article (Hagedorn 1965), which introduced the statistical bootstrap model of particle production and placed the maximum temperature in the vocabulary of particle physics, has found a place among the most-cited physics papers.

The accurate description of particle production, through the conversion of energy into matter, has numerous practical implications. Even in the very early days, Hagedorn’s insight into the yields and spectra of the produced secondaries showed that neutrino beams would have sufficient flux to allow a fruitful experimental programme, and this gave a theoretical basis for the planning of the first neutrino beams constructed at CERN.

Quark-gluon plasma

At the same time as the SBM was being developed, the newly discovered quarks were gaining acceptance as the building blocks of hadrons. While Hagedorn saw a compressed gas of hadrons as another hadron, in the quark picture it became a drop of quark matter. In quark matter at high temperature gluons should also be present, and as the temperature is increased, asymptotic freedom ensures that all the constituents interact relatively weakly. There seems to be nothing to stop a dense assembly of hadrons from deconfining into a plasma of quarks and gluons. It also seems that this new state of matter could be heated to a very high temperature, with no limit in sight. So what is the meaning of the Hagedorn temperature in this context?

In the SBM as conceived before quarks, hadrons were point particles. A subtle modification is required when quarks are considered as the building blocks: hadrons made of quarks need a finite volume that grows with the hadron mass. One of us (JR) worked on this extension of the SBM with Hagedorn at the end of the 1970s and in the early 1980s. We discovered that at the Hagedorn temperature, finite-size hadrons dissolve into a quark-gluon liquid. Both a phase transition and a smoother transformation are possible, depending on the precise nature of the mass spectrum. The most physically attractive alternative was a first-order phase transition, in which case the latent heat is delivered to the hadron phase at the constant Hagedorn temperature T_H. A new phase is then reached in which the hadron constituents – the quarks and the gluons – are no longer confined, and the system temperature can rise again.

Within the study of hot hadronic matter today, the Hagedorn temperature is understood as the phase boundary temperature between the hadron gas phase and the deconfined state of mobile quarks and gluons (see figure 3). Several experiments involving high-energy nuclear collisions at CERN’s Super Proton Synchrotron (SPS) and at RHIC at the Brookhaven National Laboratory are testing these new concepts. Nuclei, rather than protons, are used in these experiments in order to maximize the volume of quark deconfinement. This allows a clearer study of the signature of the formation of a new phase of matter, the quark-gluon plasma (QGP).

The current experimental objective is the discovery of the deconfined QGP state in which the hadron constituents are dissolved. This requires the use of novel probes that respond to a change in the nature of the state of matter within the short time available. More precisely, the heating of hadronic matter beyond the Hagedorn temperature is accompanied by a large collisional compression pressure, similar in magnitude to the pressure in the very early universe. In the subsequent expansion, collective flow velocities as large as 60% of the velocity of light are reached at RHIC. The expansion occurs on a timescale similar to that needed for light to traverse the interacting nuclei.
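
That light-crossing timescale is easy to estimate; the nuclear radius below is a standard textbook value for a heavy nucleus, assumed for illustration:

```python
# Time for light to traverse a heavy nucleus: t = 2R / c.
C = 2.998e8   # speed of light, m/s
R = 7.0e-15   # assumed radius of a heavy (Au/Pb) nucleus, metres

t = 2 * R / C
print(f"crossing time ~ {t:.1e} s")   # -> about 5e-23 s
```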

In the expansion and cooling of the QGP formed in nuclear collisions, the Hagedorn temperature is reached again after a time that corresponds to the lifespan of a short-lived hadron. Break-up – that is, hadronization – then occurs and final-state hadrons emerge. Hagedorn was particularly interested in understanding the hadronic probes of the QGP produced in hadronization, and he participated in the initial exploration of strangeness flavour as a signal of QGP formation.

In February 2000 the totality of intriguing experimental results obtained at the SPS over several years was folded into a public announcement stating that the formation of a new phase of matter was their best explanation. The key experimental results, including, in particular, strangeness and strange antibaryon enhancement, agreed with the theoretical expectations that were arrived at when one assumes that the QGP state was formed.

In mid-June 2003 researchers at RHIC announced results showing that this new phase of matter is highly non-transparent to fast quarks, once more along the lines of what is expected for QGP. Many researchers believe that the deconfined phase has therefore been formed both at the SPS and at RHIC. The thrust of current research is to identify the conditions necessary for the onset of QGP formation, and to understand the initial reaction conditions in dense matter. In 2007, when a new domain of collision energy becomes accessible at CERN’s Large Hadron Collider, hot QGP in conditions similar to those present in the early universe will be studied.

In the next few years, the study of hadronic matter near the Hagedorn temperature will also dominate experimental efforts in the field of nuclear collisions, in particular at the new international experimental facility to be built at the GSI laboratory in Darmstadt, Germany. The richness of the physics at hand over the coming years is illustrated in the phase diagram in figure 3, which was obtained from the study of the SBM. Here, the domain is spanned by the temperature, T, and the baryochemical potential, µ, which regulates the baryon density.

In almost 50 years the understanding of the physics related to the Hagedorn temperature has changed. In the beginning it was merely the maximum temperature seen in proton-proton collisions; it then became the inverse slope of the SBM mass spectrum; today, it denotes the phase boundary between hadron and quark matter. Moreover, as recent work in string theory has shown, Rolf Hagedorn will not only be remembered for the physics of hot hadronic matter: his name is already attached to a more general family of elementary phenomena that originate in the methods he developed in the study of strong-interaction physics.

CLEO discovers second DsJ particle

The CLEO collaboration has discovered a new particle, tentatively named the DsJ(2463), which decays to D*sπ⁰ with a decay width of less than 7 MeV. The search that led to this discovery was motivated by BaBar’s discovery of an unexpected new narrow state, the D*sJ(2317), which decays to Dsπ⁰. CLEO has also confirmed the existence of the D*sJ(2317).

In the simplest interpretation of these results, both particles are excited bound states of a charm quark, c, and a strange antiquark, s̄. It was thought that such states would be massive enough to decay to a D or D* meson and a K meson. The biggest surprise of the D*sJ is that it is too light to decay via any of these modes. The decay of the D*sJ to Dsπ⁰ is suppressed because it violates isospin symmetry, leading to the small decay width observed. Such suppressed isospin-violating decays are not unknown in the cs̄ system – in 1995 CLEO found that about 6% of the decays of the D*s are to Dsπ⁰ instead of the dominant Dsγ mode. On the other hand, the DsJ is above the thresholds for decay to DK and Dsπ⁰, but apparently does not decay through either mode. If the spin-parity of the DsJ is 1⁺, these modes would be forbidden, and D*sπ⁰ would be allowed but also suppressed by isospin, again leading to a small decay width.

The analysis that determined the existence of the second state was complicated by an unusual kinematic property of the two states: the two new particles and the D*s are narrow, and the mass difference between the DsJ and the D*s is essentially identical to the mass difference between the D*sJ and the Ds (see figure). Since the dominant decay mode of the D*s is Dsγ, a real D*s can appear as a Ds if the photon from its decay is lost, and a real Ds plus a random photon can appear as a D*s. The two states can therefore “feed into each other” as photons are missed or randomly acquired. The observed Dsπ⁰ signal thus has a background from real D*sπ⁰ events, and vice versa, so separating the two sources requires careful and subtle analysis. CLEO found that multiple analysis techniques applied to the Dsπ⁰ and D*sπ⁰ signals led to the conclusion that both states exist, and resulted in consistent measurements of their masses and widths.
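
The closeness of the two mass gaps can be checked by plugging in the measured masses; the Ds and D*s masses below are standard values assumed for illustration rather than numbers quoted by CLEO:

```python
# Mass differences (MeV) that make the two states feed into each other.
M_DSJ_2463 = 2463.0   # new CLEO state, decays to D*s pi0
M_DSJ_2317 = 2317.0   # BaBar state, decays to Ds pi0
M_DS_STAR = 2112.0    # assumed D*s mass (PDG-style value)
M_DS = 1968.0         # assumed Ds mass (PDG-style value)

print(f"DsJ(2463) - D*s = {M_DSJ_2463 - M_DS_STAR:.0f} MeV")   # ~351 MeV
print(f"D*sJ(2317) - Ds = {M_DSJ_2317 - M_DS:.0f} MeV")        # ~349 MeV
# Nearly identical gaps: losing or picking up a photon swaps the two signals.
```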

The preferred spin-parity of the D*sJ is 0⁺ because it decays into Dsπ⁰ and not D*sπ⁰. Predating the discovery of the D*sJ and DsJ, there were at least two theoretical models that coupled heavy-quark effective theory with chiral symmetry and predicted light cs̄ states. In these models the mass difference between a 1⁺ DsJ and the 1⁻ D*s would be equal to the difference between a 0⁺ D*sJ and the 0⁻ Ds, in accord with CLEO’s observation.

Testing times for strings

Look in front of you. Now to your side. Next, up above. These are the known spatial dimensions of the universe: there are just three. Have you ever wondered about the origin of this number? Have you ever thought there might be new dimensions that escape our observation? In all physical theories, the number of dimensions is a free parameter that is fixed to three by observation – with one exception. This exception is string theory, which predicts the existence of six new spatial dimensions. At present, it is the only known theory that unifies the two great discoveries of the 20th century: quantum mechanics, which describes the behaviour of elementary particles, and Einstein’s general relativity, which describes gravitational phenomena in our universe (M B Green et al. 1987).

String theory replaces all the elementary point particles that form matter and mediate interactions with a single extended object of vanishing width: a tiny “string”. Thus, every known elementary particle, such as the electron, quark, photon or neutrino, corresponds to a particular vibration mode of the string, as shown in figure 1. The diversity of these particles is due to the different properties of the corresponding string vibrations. Until now there has been no experimental confirmation of string theory: no-one has ever observed strings, even indirectly, nor the space of extra dimensions where they live. The main arguments in favour of the idea are theoretical, since it provides a coherent framework for the unification of all fundamental interactions.

For a long time string physicists thought that strings were extremely thin, with the smallest possible size in physics: the Planck length of 10^-35 m. Recently, however, the situation has changed dramatically. It seems that the “hidden” dimensions of string theory may be much larger than previously thought, and they may come within experimental reach in the near future, together with the strings themselves (I Antoniadis 1990, J D Lykken 1996, N Arkani-Hamed et al. 1998, I Antoniadis et al. 1998). These ideas are leading towards experimental tests of string theory, which can be performed at Fermilab’s Tevatron and the future Large Hadron Collider (LHC) at CERN.

The main motivation for these ideas came from consideration of the so-called mass hierarchy problem: why does the gravitational force remain much weaker than the other fundamental forces (electromagnetic, strong and weak), at least up to the energies currently reached in high-energy physics? In quantum theory the masses of elementary particles receive important quantum corrections that are of the order of the highest energy scale present in the theory. So in the presence of gravity the Planck mass (10^19 GeV) pulls all the familiar particles of the Standard Model to be 10^16 times heavier than we observe. To avoid this catastrophe the parameters of the theory must be adjusted to 32 decimal places, resulting in a very ugly “fine tuning”.
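
Schematically (a textbook estimate rather than a formula from this article), the physical mass-squared of a Standard Model scalar is the sum of a bare parameter and a quantum correction of the order of the highest scale in the theory:

$$ m_{\rm phys}^2 = m_0^2 + c\,M_{\rm Planck}^2, \qquad \frac{m_{\rm phys}^2}{M_{\rm Planck}^2} \sim \left(\frac{1\ {\rm TeV}}{10^{19}\ {\rm GeV}}\right)^{2} \sim 10^{-32}, $$

so keeping the physical mass near the electroweak scale requires the bare parameter to cancel the correction to roughly 32 decimal places – the fine tuning referred to above.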

A possible solution is provided by the introduction of a new fundamental symmetry of matter, called supersymmetry, at energies where the electromagnetic and weak effects unite into electroweak interactions. One of the main predictions of supersymmetry is that every known elementary particle has a partner, called a superparticle. However, as none of these superparticles has ever been produced at an accelerator, they must be heavier than the observed particles and supersymmetry must therefore be broken. On the other hand, protection of the mass hierarchy requires that its breaking scale – that is, the mass splitting between ordinary particles and their partners – cannot be larger than a few TeV. Such particles could therefore be produced at the LHC, which will test the idea of supersymmetry.

Alternatively, an idea proposed in the past few years solves the problem if the fundamental string length is fixed at 10^-18 to 10^-19 m (I Antoniadis et al. 1998). In this case, the quantum corrections are controlled by the string scale, which is in the TeV region, so they do not destabilize the masses of elementary particles. Moreover, the new idea offers the remarkable possibility that string physics may soon be testable at particle colliders.

The universe as a braneworld

How is it possible to lower the string scale from the Planck scale of traditional quantum gravity to the TeV region without contradicting observations? In particular, why does gravity interact much more weakly with our world, what happens to the extra dimensions of string theory and why have they not been observed?

String theory has a long history. It was introduced about 40 years ago in order to describe strong interactions, and it took a decade to understand that it was a natural candidate for quantum gravity. Ten years later, it was realized that it can unify all fundamental forces, while in the past decade there has been a real breakthrough in understanding several aspects of its non-perturbative dynamics.

This breakthrough was not realized earlier because prior to 1994 most research was done in the context of the so-called heterotic string theory, which initially looked more promising for physics and more attractive theoretically. In this theory the string scale is fixed by the Planck mass and cannot be lowered. However, there were five consistent string theories in total, which created a problem, since string theory was supposed to provide a unified framework for all physical theories. We now believe that every known string theory describes a particular limit of an underlying, more general fundamental theory defined in 11 spacetime dimensions, called M-theory, as illustrated in figure 2 (E Witten 1995).

A crucial role in these developments was played by the discovery of “p-branes”, higher-dimensional objects extended in p spatial dimensions, generalizing the notion of a point particle (p = 0) or a string (p = 1). One of the main consequences of this discovery is that the string scale is, in general, a free parameter that can be dissociated from the Planck mass if the universe is localized on a p-brane and does not feel all the extra dimensions of string theory. The braneworld description of our universe separates the dimensions of space into two groups: those that extend along our p-braneworld, called parallel dimensions, and those transverse to it. Obviously the parallel ones must contain at least the three known dimensions of space, but they may contain more. If our universe has additional dimensions, we should observe new phenomena related to their existence. So why has nobody detected them until now?

A possible answer was given at the beginning of the 20th century by Theodor Kaluza and Oskar Klein (T Kaluza 1921, O Klein 1926), who proposed that we cannot detect the new dimensions because their size is very small, in contrast to the infinitely large size of the three that we know. An infinite and narrow cylinder, for example, is a two-dimensional space with one dimension forming a very small circle: while you can move infinitely far away along the axis, you return to the same point when moving along the orthogonal direction (see figure 3).

If one of the three known dimensions of space were small, say of millimetre size, we would be flat, and while we could move freely to the left or right, forward or backward, it would be impossible to move more than a few millimetres up or down, where space would end. Extra dimensions along our universe thus escape observation if their size is less than 10^-18 m, as probing them requires energies higher than those currently at our disposal (I Antoniadis and K Benakli 1994, I Antoniadis et al. 1994, 1999).

The next question is how we could detect these extra dimensions if we did have sufficient energy to probe their size. (The minimum energy required is given by their inverse size and is called the “compactification scale”.) The answer was again given by Kaluza and Klein: the motion of a particle in extra dimensions of finite size manifests itself to us as a tower of massive particles, called “Kaluza-Klein excitations”. If, for instance, the photon propagates along an extra compact dimension, we would observe a tower of massive particles with the same properties as the photon but with masses that become larger as the size of the extra dimension decreases. It follows that for a size of the order of 10^-18 m, an energy of the order of a few TeV would be sufficient to produce them.
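
In standard Kaluza-Klein kinematics (a textbook result, stated here for concreteness), a particle of mass m0 moving in one compact dimension of radius R appears in four dimensions as the tower

$$ m_n = \sqrt{m_0^2 + \frac{n^2}{R^2}}\,, \qquad n = 0, 1, 2, \ldots $$

in natural units. For R ≈ 10^-18 m the level spacing is ħc/R ≈ 197 MeV fm / 10^-3 fm ≈ 200 GeV, so the first few excitations indeed lie in the TeV range quoted above.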

The above analysis and bound on sizes do not apply, however, to dimensions transverse to our universe, as we cannot send light or any other form of observable matter to probe their existence. The only way to communicate in this case is through gravity, which couples to any kind of energy density. However, our knowledge of gravity at short distances is much poorer than for the other interactions, allowing the sizes of such “hidden” dimensions to be as large as a millimetre, which is roughly the shortest distance down to which Newton’s law has been tested in the laboratory.

The string scale at TeV energies

An attractive and calculable braneworld framework that allows the dissociation of the string and Planck scales without contradicting observations is provided by the so-called type I string theory. In this theory, gravity is described by closed strings, which propagate in all nine dimensions of space, while matter and all other Standard Model interactions are described by open strings that end on a particular type of p-brane, called a D-brane (where D stands for Dirichlet), as shown in figure 4 (J Polchinski 1995).

In the framework of type I string theory, the string scale can be lowered to the TeV region at the expense of introducing large transverse dimensions that are much bigger than the string length. In fact, the string scale fixes the energy at which gravity becomes coupled with a strength comparable to the other three interactions, thus realizing the unification of all fundamental forces at energies lower, by a factor of 10^16, than previously thought. Gravity nevertheless appears very weak at macroscopic distances because its intensity is spread over the large extra dimensions known as the “bulk” (N Arkani-Hamed et al. 1998).

The basic relation between the fundamental (string) scale and the observed gravitational strength is: total force = observed force x transverse volume, which expresses Gauss’s law for higher-dimensional gravity. In order to increase the gravitational force to the desired magnitude without contradicting current observations, one has to introduce at least two extra dimensions, of a size that can be as large as a fraction of a millimetre. In the case of one transverse dimension the required size is of astronomical distances, which is obviously excluded, while for more than two dimensions it should be smaller, down to the fermi scale (10^-14 m) in the case of six dimensions. At distances smaller than the size of the extra dimensions, gravity should start to deviate from Newton’s law, which it may be possible to explore in laboratory table-top experiments (see figure 5).
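
As an illustration, this Gauss’s-law relation can be written M_Planck^2 ~ M_s^(2+n) R^n, where M_s is the fundamental (string) scale and R the common size of the n transverse dimensions. The short sketch below reproduces the numbers quoted above; it is a rough back-of-the-envelope check, with the 1 TeV string scale taken as an assumption from the text rather than a measured value:

```python
# Rough estimate of the size R of n transverse dimensions required to
# dilute TeV-scale gravity to the observed Planck mass, using
#   M_Planck^2 ~ M_s^(2+n) * R^n  =>  R ~ (M_Planck/M_s)^(2/n) / M_s
HBARC_GEV_M = 1.973e-16   # hbar*c in GeV*m (converts 1/GeV to metres)
M_PLANCK = 1.22e19        # Planck mass in GeV
M_STRING = 1.0e3          # assumed string scale of 1 TeV, in GeV

for n in range(1, 7):
    R = (M_PLANCK / M_STRING) ** (2.0 / n) * HBARC_GEV_M / M_STRING
    print(f"n = {n}: R ~ {R:.1e} m")
```

Running it gives R ~ 10^13 m for n = 1 (astronomical, hence excluded), of order a millimetre for n = 2, and about 5 x 10^-14 m for n = 6, in line with the range quoted above.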

Type I string theory provides a realization of this idea in a coherent theoretical framework, where the string scale is fixed in the TeV region, as required for the stability of the mass hierarchy, corresponding to a string length of around 10^-18 m. For the theory to be calculable, parallel dimensions should not be much bigger than the string length, while the size of the transverse dimensions is fixed by the observed value of Newton’s constant. This size should therefore vary from the fermi scale to a fraction of a millimetre, depending on the number of transverse dimensions (from six down to two, respectively). It is remarkable that this possibility is not only consistent with present observations but also presents a viable and theoretically well motivated alternative to low-energy supersymmetry. It simultaneously offers a plethora of spectacular new phenomena, which can be tested in laboratory experiments and may provide surprises at the LHC and other particle accelerators.

String theory under experimental test

There are several tests of these new ideas, either in laboratory experiments that look for deviations from Newton’s law at submillimetre distances, or at particle colliders. In microgravity experiments it is only possible to explore the case of two extra dimensions, because only in this case are deviations expected to appear at distances close to present limits. In fact, the inverse-square law of gravitational attraction, 1/r^2, between two masses at a distance r should change to 1/r^(2+n) if there are n large extra dimensions. At distances of the order of the size of the extra dimensions, however, only the first Kaluza-Klein excitations of the graviton are probed, generating an extra Yukawa force of strength comparable to ordinary gravity and of a range equal to the size of the dimensions. The present experimental bounds on such forces are displayed in figure 6 as a function of their range λ (horizontal axis) and their strength relative to gravity α (vertical axis) (C D Hoyle et al. 2001, J C Long et al. 2002).
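
Such searches are conventionally parametrized (a standard convention, not specific to this article) by adding a Yukawa term to the Newtonian potential between two masses:

$$ V(r) = -\frac{G\,m_1 m_2}{r}\left(1 + \alpha\,e^{-r/\lambda}\right), $$

where λ is the range of the new force and α its strength relative to gravity – precisely the two axes of figure 6. The first graviton Kaluza-Klein excitation corresponds to λ of the order of the size of the largest extra dimension.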

Besides the violation of Newton’s law due to the presence of extra dimensions, there may be additional sources of new forces in a large class of models with a supersymmetric bulk. In these models, motivated mainly by vacuum stability and model building, supersymmetry is not realized in our world because our brane universe is not supersymmetric, but it is present a millimetre away in the transverse dimensions of the closed-string bulk. These models predict new forces at short distances mediated by superlight fields in the bulk, such as scalar or vector fields. The fields are massless in the absence of branes and acquire tiny masses due to non-supersymmetric radiative corrections from the branes, of the order of TeV^2/M_Planck ~ 10^-4 eV, corresponding to wavelengths in the submillimetre range. Such forces can be observable in microgravity experiments for any number of extra dimensions, in contrast to the deviation from Newton’s law, which is testable only in the two-dimensional case. As an example, figure 6 shows the prediction for a hypothetical scalar universal force mediated by a particle known as the radion.
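
As a quick consistency check on these orders of magnitude (approximate values, not taken from the article):

$$ m \sim \frac{(1\ {\rm TeV})^2}{M_{\rm Planck}} \approx \frac{10^{6}\ {\rm GeV}^2}{1.2\times10^{19}\ {\rm GeV}} \approx 10^{-4}\ {\rm eV}, \qquad \lambda = \frac{\hbar c}{mc^2} \approx \frac{2\times10^{-7}\ {\rm eV\,m}}{10^{-4}\ {\rm eV}} \approx 2\ {\rm mm}, $$

so such radion-like fields would indeed mediate forces with ranges around the millimetre scale probed in figure 6.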

At particle colliders there are, generically, three types of new phenomena associated with the existence of transverse and parallel dimensions, as well as with the string substructure of matter. Transverse dimensions are responsible for making gravity strong at TeV energies, and their main manifestation is gravitational radiation into the bulk: gravitons emitted in any physical process escape detection, leading to events with missing energy (I Antoniadis et al. 1998, G F Giudice et al. 1999, E A Mirabelli et al. 1999). In contrast to microgravity experiments, high-energy particle accelerators such as the LHC are expected to produce a quasi-continuum of Kaluza-Klein excitations describing the propagation of the graviton into the extra dimensions. Figure 7 shows the expected cross-section (number of events) at the LHC for the production of a single hadronic jet accompanied by missing energy due to graviton emission. Analysis of the angular distribution allows the spin of the unobserved graviton to be deduced, and these events to be differentiated from other possible sources of missing energy, such as the production of the lightest superparticle in supersymmetry.

Parallel dimensions of much smaller size, comparable to the string length, are manifest through the production of heavy Kaluza-Klein excitations of the photon and of the mediators of the other Standard Model interactions. The LHC cannot miss these if their mass is below around 6 TeV, as figure 8 indicates (I Antoniadis and K Benakli 1994, I Antoniadis et al. 1994, 1999).

Finally, the string substructure of matter leads to spectacular new phenomena if the LHC centre-of-mass energy happens to be above the string scale. Some examples are the production of higher string excitations or even of micro-black holes weighing a few TeV. It is certain that in this case particle accelerators will become the best tools for studying quantum gravity in the laboratory.

Clearly, these theories exist only in our imagination at present. However, we look forward to the next generation of high-energy experiments, and in particular to the most powerful machine, the LHC. I am convinced, as are the majority of my colleagues, that the LHC will play a very important role in the future of the high-energy physics of fundamental interactions. The LHC is designed to explore the origin of the mass of elementary particles and, in particular, to test the idea of supersymmetry by looking for the production of superparticles. We now hope that this accelerator may also discover more spectacular and “exotic” phenomena, such as the existence of large extra dimensions of space and of fundamental strings.
