
High-energy conference highlights precision results

More than 900 physicists participated in the 31st International Conference on High-Energy Physics (ICHEP2002), held in Amsterdam on 24-31 July. The Dutch National Institute for Nuclear Physics and High Energy Physics (NIKHEF) organized the conference, and almost all of its graduate students took part as assistants to the chairpersons, convenors or speakers. The participants stayed all over the city, and converged on the RAI conference centre each morning – easily distinguished by their bright orange conference bags. In the tradition of the Rochester conferences, the subjects were wide-ranging, with a mixture of new results and review talks providing a comprehensive overview of high-energy physics today and directions for the future. Social events were organized in several beautiful old and new buildings, including the conference dinner in the Amsterdam opera house, and a public lecture in the auditorium of the University of Amsterdam, delivered by the winner of the 1999 Nobel Prize for Physics, Professor Gerard ‘t Hooft of Utrecht University.

Collider physics

The conference began with three days of parallel sessions. After a day off on Sunday, another three days of plenary sessions followed. Highlights were the results on CP violation in systems composed of bottom quarks, neutrino oscillations, and the anomalous magnetic moment of the muon.

The plenary sessions started off with reports from the first year of Run IIa operation at Fermilab’s Tevatron. The CDF and D0 detectors have undergone major renovations to take full advantage of the new data set, and the Tevatron is playing its part, with luminosity steadily climbing towards its design value. One exciting development for CDF has been the successful commissioning of a new impact parameter trigger, which uses the newly installed silicon vertex tracker to tag hadronic B-decays, with impressive online reconstruction. Franco Bedeschi of Pisa, reporting in the plenary session, demonstrated the effectiveness of this trigger, showing a signal peak of 33 B-candidates decaying to two charged hadrons. Boston University’s Meenakshi Narain presented the upgraded D0 detector, which has also replaced major system components and is operating with completely new trigger configurations. Both detectors were able to showcase first results at the new centre-of-mass energy of 1.96 TeV. The results on W- and Z-physics, B-physics, charm and jet physics show that everything is in working order, and we can look forward to exciting new data in the next few years. This includes looking for possible experimental signatures of physics beyond the Standard Model, as discussed by Robert McPherson of the University of Victoria, Canada. His talk drew together data from CERN’s Large Electron Positron (LEP) collider, the Tevatron and DESY’s HERA collider to outline a roadmap towards possible signs of new physics. One new and unusual possibility comes in the form of so-called “little Higgs” theories, as discussed by Martin Schmaltz of Boston. These theories aim to solve the famous hierarchy problem (whereby the forces of nature appear to operate at vastly different and seemingly arbitrary energy scales) and produce a consistent extension of the Standard Model valid up to the 10 TeV range.
This new approach is gaining fans in the theoretical community, and is an exciting development to watch out for.

For the field of heavy-ion collisions, John Harris of Yale presented an overview of the various experimental signals for the quark-gluon plasma, while Glasgow’s Mark Alford discussed quark matter at high density and temperature, emphasizing the possible phases of the theory of strong interactions, quantum chromodynamics (QCD). He also reviewed the status of theoretical approaches such as lattice gauge calculations, and technical issues such as the necessary resummation of so-called hard thermal loops in finite temperature field theory. Several other technical issues, results of experiments at CERN, and results and plans for the RHIC accelerator at Brookhaven, as well as the outlook for experiments at CERN’s Large Hadron Collider (LHC) were discussed in parallel sessions convened by Paolo Giubellino of Turin and Raimond Snellings of NIKHEF.

Particle symmetries

The parallel session on CP violation kicked off with the final analysis of data from the NA48 experiment at CERN and recent results from Fermilab’s KTeV, before moving on to the latest results from the B-factory experiments BaBar at SLAC in the US and BELLE at KEK in Japan. This reflects the culmination of great achievements in the kaon sector of CP violation – marked by increasingly precise experimental results yet with interpretations blurred by strong interaction effects in the K-meson system – and the opening of the exciting beauty era, where many effects are large and have a cleaner interpretation.

The electron-positron colliders at SLAC and KEK have delivered between them around 180 million b quark-antiquark pairs. To put this in perspective, this is already over a factor of 40 more than was seen in total at LEP – and the data are being put to good use. In the plenary session, Masanori Yamauchi of KEK presented the new results from BELLE, and Jean Karyotakis of LAPP in Annecy, France, showcased the work from BaBar. Both experiments have updated measurements of sin2β (obtained from the asymmetry between neutral B and anti-B meson decay rates), and in addition are starting to explore more challenging channels, such as B-decays into charged pion pairs, where the two experiments currently show intriguingly different asymmetries and new data are eagerly awaited.
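The sin2β extraction mentioned above rests on a time-dependent rate asymmetry of the textbook form A(t) = sin2β · sin(Δmd t). As a rough sketch only (the parameter values below are assumptions for illustration, not the BaBar or BELLE measurements):

```python
import math

# Illustrative time-dependent CP asymmetry for neutral B decays to a CP
# eigenstate, A(t) = sin(2 beta) * sin(Dm_d * t).  The values of sin2beta
# and Dm_d here are assumed for illustration only.
def cp_asymmetry(t_ps, sin2beta=0.7, dmd_inv_ps=0.5):
    """Asymmetry between B0 and anti-B0 decay rates at proper time t (ps)."""
    return sin2beta * math.sin(dmd_inv_ps * t_ps)

for t in (0.0, 1.0, 2.0, 3.0):
    print(f"t = {t:.0f} ps: A = {cp_asymmetry(t):+.2f}")
```

The asymmetry vanishes at t = 0 and oscillates with the B-mixing frequency, which is why the B-factories need to resolve the decay time of each B meson.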

Data on rare decay modes were also presented. As Yossi Nir of the Weizmann Institute in Rehovot, Israel, pointed out in his plenary summary, the study of CP violation is now firmly experiment driven. He placed the results in theoretical context, illustrating that the constraints on the CKM quark-mixing matrix from CP-violating experiments are more powerful than those from CP-conserving measurements. However, the Standard Model description of CP violation in the B and K sectors still looks healthy. Nir emphasized the importance of future measurements in processes where the contribution from new physics mechanisms is expected to be enhanced, such as the forthcoming electric dipole moment experiments.

In the parallel session on heavy quarks, convened by Sinéad Ryan of Dublin’s Trinity College and Elisabetta Barberio of Southern Methodist University, the need for precision lattice calculations in heavy-quark physics was mentioned in many experimentalists’ talks. This is being addressed by the lattice community with calculations of a range of important quantities such as quark masses, unitarity triangle parameters, decays and meson-mixing parameters. New lattice calculations were discussed, and a new method for determining heavy-quark masses was reported. In spectroscopy, and for many other quantities, the largest uncertainty is now quenching (in the quenched approximation valence quarks are treated exactly, and the sea quarks are treated as a mean field). The first steps at removing this approximation have already been taken, and the next few years will see remarkable progress in lattice calculations and a new era of precision B-physics from the lattice.

In the plenary sessions, the lattice calculations were summarized by Laurent Lellouch of Marseilles, who discussed the various approximations, the treatment of heavy quarks and the use of chiral perturbation theory for extrapolations in the light-quark sector. Achille Stocchi of Orsay summarized experimental results on heavy-hadron physics, including spectroscopy, lifetimes and decay modes. He stressed the richness of charmed baryon spectroscopy, where Cornell’s CLEO experiment dominates and 22 charmed baryons have already been found. A further result from CLEO was the observation of the Υ(1D) state, the first new narrow bb̄ state to be observed in 19 years. He also emphasized how the BaBar and BELLE results in the B-sector have led to a situation where one now has to look for precision effects.
Neutrinos

In the parallel session on neutrinos, as well as in the plenary talk by Sussex University’s David Wark, the Sudbury Neutrino Observatory (SNO) results were highlighted. The data show convincing evidence, without reference to other experiments, for solar neutrino oscillations, in particular of electron neutrinos into other types. The data can be combined with new Superkamiokande results on spectral distortions and day-night asymmetries of elastic neutrino interactions to constrain the oscillation parameters and the mass-squared difference between the neutrino types. The strongly favoured region lies close to maximal mixing, with maximal mixing itself disfavoured. The possibility of oscillation to a fourth (purely sterile) neutrino is excluded at the 5 sigma level. SNO has entered a new phase of data-taking in which two tonnes of salt have been added to the heavy water to increase the neutron capture efficiency. The precious heavy water will be purified in 2003 via a process of reverse osmosis, and SNO will enter a third phase of data-taking with discrete neutron counters. Many forthcoming experiments that will further probe this region were presented, including KamLAND, designed to detect neutrinos originating from commercial nuclear reactors, and the forthcoming BOREXINO experiment.

Another oscillation regime is that of atmospheric muon neutrinos, probably oscillating into tau neutrinos. Results were presented from the KEK to Kamioka (K2K) long baseline experiment, which together with most other experiments currently points to a global picture with maximal mixing in the muon-to-tau neutrino oscillation sector. The outstanding issue of the discrepant results from the LSND experiment at Los Alamos will be investigated by MiniBooNE (MiniBooNE goes live at Fermilab), which showed pictures from first data-taking. Speakers looked forward to future results from experiments such as MINOS, OPERA and Hyper-K in discussions of this very active field. The current status was reviewed in the talk by Concha Gonzalez-Garcia of Stony Brook, Valencia and CERN, who reminded us how much has changed since the assumptions of 10 years ago, when the solar neutrino solution was believed naturally to be the small mixing angle one, and the atmospheric neutrino anomaly was seen as a possible experimental problem. She proceeded to summarize the beautiful advances since then in both experiment and theory, and also discussed the implications – the most direct and yet most striking one being the existence of physics beyond the Standard Model. Among possible scenarios is the idea that leptogenesis, coming from CP violation in heavy-lepton decays in the early universe, may be transformed into a baryon-antibaryon asymmetry via sphalerons (“lumps” in the field energy where matter and antimatter can be created) at the electroweak energy scale.

Strong interactions

Naomi Makins of Illinois summarized QCD at low momentum transfer (Q²), discussing current electron-nucleon deep-inelastic scattering experiments at DESY, including the spin programme, and future programmes at CERN (with the COMPASS experiment) and Brookhaven. In very lively discussions in the soft QCD parallel sessions, many (mainly experimental) results were presented on a variety of topics such as Bose-Einstein correlations, colour flow, deep-inelastic spin physics and diffraction in high-energy processes. These represent activities that investigate QCD dynamics at the confinement scale in many different ways.

Theoretical and experimental views of QCD at high energy were discussed by Stefano Frixione of CERN and Ken Long of Imperial College, London. On the theoretical side, considerable progress has been made in implementing higher-order computations in the analysis of jets and heavy-flavour production. The QCD analyses of the experiments are consistent, leading to an accurate determination of the strong coupling constant, αs.

Special parallel and plenary sessions were devoted to computational methods in quantum field theory. While Zvi Bern of UCLA emphasized new methods and developments for computational efforts, Matthias Kasemann of Fermilab discussed computing and data analysis for future high-energy experiments. Starting with the present situation at BaBar, BELLE, CDF and D0, he moved on to technological developments such as the Grid, which must form the basis of the LHC computing.

Martin Grünewald of University College, Dublin, presented the major electroweak developments such as the anomalous magnetic moment of the muon, the weak mixing angle from NuTeV, asymmetries at the Z-peak, triple gauge couplings and W-boson parameters. He also discussed the search for Higgs bosons, and the opportunity available for the Tevatron in this search. In the electroweak parallel session, convened by John Hobbs of Stony Brook and Dmitri Bardin of the Joint Institute for Nuclear Research, many precision results were presented by LEP collaborations, who are still carefully analysing data two years after the collider was shut down.

Paris Sphicas of CERN looked at the physics potential of the LHC, covering Higgs searches, supersymmetry, other extensions of the Standard Model, extra dimensions and TeV-scale gravity effects such as black hole production.

In the crowded astrophysics and cosmology parallel sessions, an important topic was ultra-high-energy cosmic rays, in particular those with energies above the Greisen-Zatsepin-Kuzmin (GZK) cut-off due to collisions of protons with cosmic microwave background photons. Another topic was weakly interacting massive particles (WIMPs), for which new results from the Edelweiss experiment narrow down the windows in the cross-section versus mass plane. In his plenary talk, Thomas Gaisser of the Bartol Research Institute also discussed what he called multi-messenger astronomy, provided by galactic protons, photons, neutrinos and gravitons. Marc Kamionkowski of Caltech discussed how astrophysical experiments now indicate that we have a flat universe with an energy density of which 70% is in the form of a negative-pressure component (cosmological constant), 25% is in the form of dark, as yet unknown, matter, and only 5% is the familiar luminous matter.

A highlight of the plenary session was a presentation from Yannis Semertzidis of Brookhaven of the new result for the muon’s anomalous magnetic moment (g = 2.0023318406 ± 0.0000000016). This incredibly precise number comes from a measurement of the decays of 4 billion positive muons delivered from Brookhaven’s Alternating Gradient Synchrotron. It represents a challenge to theory, which must calculate the expected value taking into account tiny electroweak corrections. The new measurement implies a 2-3 standard deviation discrepancy with theory and may open a window to new physics interpretations.
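As a back-of-the-envelope check (not part of the talk itself), the muon anomaly usually quoted by theorists, aμ = (g − 2)/2, follows directly from the g value above. A minimal Python sketch:

```python
# Extract the muon anomaly a_mu = (g - 2) / 2 from the quoted g value,
# and propagate its (symmetric) uncertainty by the same factor of 1/2.
g = 2.0023318406
sigma_g = 0.0000000016

a_mu = (g - 2.0) / 2.0   # anomalous magnetic moment a_mu
sigma_a = sigma_g / 2.0  # uncertainty on a_mu

print(f"a_mu = {a_mu:.10f} +/- {sigma_a:.10f}")
```

It is tiny deviations of aμ from the quantum-electrodynamics prediction, at the level of the last few digits, that carry the sensitivity to new physics.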

Among other notable talks was that of Jan de Boer of Amsterdam University, who summarized developments in string theory and mathematical quantum field theory, with many attempts to get realistic models from string theory. Greg Loew of SLAC discussed future accelerators, emphasizing the recent R&D for linear collider technology, and the future milestones of the International Linear Collider Technical Review Committee. Paula Collins of CERN reviewed the developments in detector technology that will provide the foundation for experiments at future facilities, in particular the increasingly precise and robust silicon-based devices.

Frank Wilczek of MIT gave the conference summary. He emphasized the triumph of quantum field theory both in QCD and electroweak physics, the importance of precision measurements, and the possibilities of extending our knowledge beyond the Standard Model by using and completing the CP violation experiments, the neutrino oscillation experiments and astrophysical experiments.

Immediately after the close, conference chair Ger van Middelkoop was presented with the Dutch royal decoration of Officier in de orde van Oranje-Nassau for his numerous contributions to nuclear and particle physics, marking a fitting conclusion to both a stimulating conference and a long and distinguished scientific career.
All presentations are available at http://www.ichep02.nl/.

Proceedings to be published by Elsevier Science BV.

Neutrino discoveries lead to precision measurements

“We return from this conference in the firm belief that this novel phenomenon indeed does occur,” Friedrich Dydak of CERN told a journalist following the XXth International Conference on Neutrino Physics and Astrophysics (Neutrino 2002), held at Munich Technical University. He was referring to neutrino oscillations, the periodic transformation between neutrinos of different flavour.

The Sudbury Neutrino Observatory’s (SNO) confirmation of oscillations was a highlight in an unprecedented wealth of fundamentally important new results. These results, along with their far-reaching implications, attracted some 450 scientists from all over the world to attend Neutrino 2002. The conference, held on 24-30 May, was jointly organized by Munich Technical University and the Max Planck Institute of Physics (MPI). Presentations and discussions underlined neutrino physics as currently being among the most exciting and rewarding fields of research.

With the discoveries of neutrino masses and lepton-flavour mixing having been established in recent years in studies of solar and atmospheric neutrinos, the field is currently in the transition to a new era. There is a general consensus that the next natural step is precision studies of the neutrinos’ intrinsic parameters. This will put additional emphasis on terrestrial experiments. However, neutrino astronomy is also rapidly evolving – neutrinos are being used as unique tools for astrophysical observations, promising insights into long-standing mysteries such as the origin and acceleration mechanisms of ultra-high-energy cosmic rays.

The conference started with SNO’s confirmation of neutrino oscillations in their measurement of solar boron-8 neutrinos, detailed by Aksel Hallin of Queen’s University, Canada. Neutrino interactions in SNO’s 1000 tonne heavy-water target allow separate determination of the flux of solar electron neutrinos, Φ(νe), and of all active flavours together, Φ(νe) + Φ(νμ) + Φ(ντ). While the latter shows perfect agreement with predictions from solar model computations, a substantial νe deficit is observed. This constitutes the first direct observation of neutrino flavour transition, thus confirming the indirect evidence gathered by earlier solar neutrino experiments. Among the processes that could account for the observed phenomena, neutrino oscillation is the favoured explanation. Taking the SNO result together with those of the GALLEX/GNO experiment at Italy’s Gran Sasso laboratory; the Russian-American SAGE experiment; Japan’s Superkamiokande; and the pioneer of them all, Ray Davis’ experiment in the Homestake mine (all of which presented their latest results), possible values for the neutrino mass-squared difference Δm² and mixing angle tan²θ can be restricted to a few well defined areas.

Global analyses tend to favour the so-called large mixing angle (LMA) region, with parameters in the range 3 × 10⁻⁵ eV² ≤ Δm² ≤ 3 × 10⁻⁴ eV² and 0.25 ≤ tan²θ ≤ 0.8. This region, first identified as a possible solution to the solar neutrino puzzle in 1992 with the first results from GALLEX, offers an attractive feature – it can be tested and further scrutinized by entirely terrestrial experiments. A key role will be played by the Japanese experiment KamLAND, which aims to detect electron antineutrinos emitted by nuclear power reactors situated within distances of a few hundred kilometres.
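To see how the quoted Δm² and tan²θ enter an oscillation analysis, here is a sketch of the standard two-flavour vacuum survival probability. This ignores the matter effects that are important for neutrinos propagating through the Sun, and the parameter values are assumptions picked from inside the quoted LMA range, so it is an illustration only:

```python
import math

# Two-flavour vacuum-oscillation survival probability:
#   P(nu_e -> nu_e) = 1 - sin^2(2 theta) * sin^2(1.27 * Dm^2 * L / E)
# with Dm^2 in eV^2, L in km and E in GeV (the usual engineering units).
def survival_probability(dm2_ev2, tan2_theta, L_km, E_GeV):
    # Convert tan^2(theta) to sin^2(2 theta).
    sin2_2theta = 4.0 * tan2_theta / (1.0 + tan2_theta) ** 2
    phase = 1.27 * dm2_ev2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Assumed illustrative values: Dm^2 = 1e-4 eV^2, tan^2(theta) = 0.4,
# a 180 km reactor baseline and a 4 MeV antineutrino.
p = survival_probability(dm2_ev2=1e-4, tan2_theta=0.4,
                         L_km=180.0, E_GeV=0.004)
print(f"survival probability ~ {p:.2f}")
```

At hundred-kilometre baselines the phase varies rapidly with energy, which is exactly the energy-dependent suppression that a reactor experiment such as KamLAND looks for.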

Low-energy neutrinos

For parameters in the LMA region, neutrino oscillations will lead to a substantial, energy-dependent suppression of the recorded electron antineutrino flux. First data from KamLAND are expected this autumn. However, KamLAND is not particularly sensitive to the high-Δm² part of the LMA region. To probe this, a substantially shorter baseline of around 20 km would be necessary. Stefan Schönert of MPI Heidelberg presented such a concept in his talk on future projects in the field of low-energy oscillation physics.
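The quoted ~20 km figure can be recovered from the baseline of the first oscillation maximum, where the phase 1.27 Δm² L/E reaches π/2. A rough estimate, with an assumed typical reactor-antineutrino energy of about 4 MeV:

```python
import math

# Baseline (km) of the first oscillation maximum for a given Dm^2 (eV^2)
# and neutrino energy (GeV), i.e. where 1.27 * Dm^2 * L / E = pi / 2.
def first_maximum_baseline_km(dm2_ev2, E_GeV):
    return (math.pi / 2.0) * E_GeV / (1.27 * dm2_ev2)

# Assumed values: E ~ 4 MeV reactor antineutrinos, Dm^2 at the high end
# of the LMA range (3e-4 eV^2).
L = first_maximum_baseline_km(dm2_ev2=3e-4, E_GeV=0.004)
print(f"first oscillation maximum at ~ {L:.0f} km")
```

The result comes out at roughly 15-20 km, consistent with the short-baseline concept described in the talk.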

Schönert also gave a summary of the Third International Workshop on Low Energy Solar Neutrinos (LowNu 2002), which had taken place at Heidelberg as a topical satellite meeting prior to the conference. With an attendance of about 80, and some 30 presentations comprehensively covering ongoing activities, the workshop focused on experimental projects, common techniques and challenges, as well as future physics impact. It identified as the main goal for the forthcoming years the exploration of the primary pp, beryllium-7, pep and, if possible, CNO solar neutrino fluxes, in real time and with energy information, via both elastic νe scattering (es) and charged-current (cc) interactions. With its measurements of high-energy boron-8 neutrinos, SNO has already pioneered such techniques and impressively illustrated their potential for determining intrinsic neutrino parameters and testing our understanding of stellar energy-generation mechanisms. Extending the es versus cc measurement to the medium- and low-energy branches of the solar neutrino spectrum will yield increased precision on both these issues. A first step will be the es measurement of solar beryllium-7 neutrinos in the BOREXINO experiment, currently being prepared at Gran Sasso.

Atmospheric neutrinos

At the 1998 neutrino conference, the Superkamiokande collaboration reported evidence for oscillations of neutrinos produced by the interaction of cosmic rays with the Earth’s atmosphere. This year, Masato Shiozawa of the University of Tokyo presented a new analysis comprising about five years of data. The collaboration has identified tau-like events at multi-GeV energies, supporting the channel νμ → ντ as the predominant oscillation mode of atmospheric muon neutrinos. Moreover, oscillations into a hypothetical sterile neutrino are strongly disfavoured by this analysis. Superkamiokande’s preferred parameter range is 1.6 × 10⁻³ eV² ≤ Δm² ≤ 3.9 × 10⁻³ eV² at essentially maximal mixing.

Though the analysis of atmospheric neutrino data generally relies on determining the ratio of electron to muon neutrinos, and is therefore rather insensitive to uncertainties in the absolute fluxes, a reliable understanding of neutrino generation in the atmosphere is nonetheless of interest. Thomas Gaisser of the University of Delaware reported the substantial progress achieved over recent years in modelling the processes involved, and the first reliable 3-D computations have now become available. Among other refinements still to be incorporated into the analysis of the experimental data, predictions for the high-energy range above around 100 GeV will prove useful for calibrating future neutrino telescopes.

As for solar neutrinos, the parameters giving rise to oscillations of atmospheric neutrinos are accessible for test in terrestrial experiments due to the LMA. In the KEK to Kamioka (K2K) project, a neutrino beam produced at Japan’s KEK proton synchrotron is directed over a distance of 250 km towards the Superkamiokande detector. For the currently available data set (recorded before the detector’s devastating accident of November last year) some 80 events were expected in the absence of oscillations, whereas only 56 events were observed. This number is in good agreement with the neutrino parameter set favoured by Superkamiokande’s atmospheric neutrino measurements.
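The size of the K2K deficit can be gauged with a naive Poisson estimate. This is an illustration only, not the collaboration's own statistical treatment, which must also fold in the uncertainty on the expected event count:

```python
import math

# Naive significance of observing 56 events where ~80 were expected,
# treating the expectation as exact and the observed count as Poisson
# (so its standard deviation is sqrt(expected)).
expected = 80.0
observed = 56
deficit_sigma = (expected - observed) / math.sqrt(expected)
print(f"deficit of roughly {deficit_sigma:.1f} sigma (naive Poisson)")
```

Even this crude estimate shows the deficit is unlikely to be a statistical fluctuation, which is why the agreement with Superkamiokande's preferred oscillation parameters is so striking.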

Future long-baseline experiments

To further substantiate the evidence for oscillations gathered with atmospheric neutrinos, future experiments are aimed at directly mapping the oscillatory signature and at performing tau appearance studies. Two new projects are currently being prepared in the US and Europe to further investigate the parameter region favoured by current data. Both are situated at a baseline of 732 km. The American MINOS detector in Minnesota’s Soudan mine will receive a muon neutrino beam from Fermilab, whereas the European CERN neutrinos to Gran Sasso (CNGS) project will send a beam from CERN to the ICARUS and OPERA detectors, to be installed at Gran Sasso.

In contrast to K2K, which operates at an energy below the tau production threshold, these new projects will use beams of much higher energy. MINOS is a massive 5.4 kt steel/scintillator calorimeter, whereas the ICARUS collaboration chose a novel approach with their liquid argon Time Projection Chamber. OPERA will identify taus in a large-scale sandwiched lead/emulsion detector. MINOS is scheduled to start data-taking by 2005. OPERA and ICARUS will be operational from 2006.

However, these projects are only the next step in accelerator-based long-baseline studies. Plans for much more ambitious next-to-next-generation beams and detectors in Japan, Europe and the US were presented and extensively discussed. Such superbeams, and ultimately a neutrino factory, will open the door to precision physics with accelerator-produced neutrinos. If LMA is indeed the parameter region realized in nature, a determination of Dm2 and of elements of the lepton-mixing matrix to an accuracy in the few percent range seems feasible. Even matter effects and leptonic charge-parity (CP) violation could be experimentally accessible. However, the challenges for producing neutrino beams with the required intensity and quality are enormous, and in view of the technical complexity it is probably not unrealistic to estimate the need for at least another decade of R&D before a neutrino factory can be built.

Current accelerator experiments

The conclusions to be drawn from the current short-baseline accelerator-based experiments are still unclear. On the one hand, the positive effect detected by the Los Alamos LSND experiment in its search for neutrino oscillations persists; on the other, the Karlsruhe-Rutherford Laboratory experiment KARMEN sees no indication of oscillations. However, KARMEN cannot exclude the entire parameter space favoured by LSND, so the final answer will have to wait for Fermilab’s MiniBooNE experiment (MiniBooNE goes live at Fermilab). The result from MiniBooNE is eagerly awaited, particularly because it could be decisive for the fundamental question of whether a fourth neutrino flavour exists. If LSND is correct, the existence of such a fourth generation, which has to be sterile with respect to the weak interaction, is an inevitable consequence.

Intrinsic neutrino properties

Neutrino oscillations imply that neutrinos are massive. However, oscillation experiments only give information on mass differences, not on the mass eigenvalues themselves. Knowledge of the absolute masses is crucial for judging the role neutrinos play in astrophysical processes and cosmology. A degenerate neutrino mass scale in the few-eV range, for example, would imply that neutrinos contribute substantially to the mass of the universe. Absolute neutrino masses can be tested by direct kinematical methods. With the present Mainz and Troitsk experiments (which investigate the spectral shape of tritium decay close to the endpoint) having reached their sensitivity limit of 2.2 eV for the mass of the electron neutrino, the large-scale Karlsruhe tritium neutrino experiment (KATRIN) was put forward as a follow-up. KATRIN will start data-taking in 2007, and intends to achieve sub-eV sensitivity.

Another long-standing fundamental question is whether neutrinos are of Majorana type (their own antiparticles). A recent claim of possible evidence for neutrinoless double-beta decay, a phenomenon that can occur only for Majorana-type neutrinos, has attracted considerable attention. Following a presentation of Oliviero Cremonesi from Milan University, who reviewed the various experiments, there was a vigorous discussion on the statistical significance and trustworthiness of the analysis leading to the claimed evidence. No consensus has yet been reached, and it remains for future experiments and improved statistics to settle this question.

Neutrinos in astrophysics and cosmology

Massive neutrinos play an important role during core collapse supernova explosions, as John Beacom of Fermilab pointed out. However, both the energy released in the form of neutrinos of the second and third flavour and their mean temperature are still uncertain. They have not yet been measured, and the models depend on many assumptions. The new KamLAND and BOREXINO scintillator detectors, however, can determine these parameters to an accuracy of 10%, thereby offering the potential to substantially improve our knowledge of supernova physics.

Precision measurements of the large-scale structure of our universe and the cosmic microwave background radiation can be used to probe intrinsic neutrino properties, most notably absolute neutrino masses and the hypothesis of sterile neutrinos. Combining cosmic microwave background data from NASA’s MAP and ESA’s Planck missions, and the Sloan Digital Sky Survey, should allow a limit of under 0.3 eV to be established – a sensitivity comparable to that of the terrestrial KATRIN experiment.
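The link between a cosmological mass limit and the quoted 0.3 eV sensitivity runs through the standard relic-density relation Ων h² = Σmν / 93.14 eV, where h is the dimensionless Hubble parameter. A sketch with assumed values for illustration:

```python
# Standard relation between the summed neutrino mass (eV) and the
# neutrino contribution to the cosmic energy density:
#   Omega_nu * h^2 = sum(m_nu) / 93.14 eV
# The value h = 0.7 below is an assumed Hubble parameter.
def omega_nu(sum_masses_eV, h=0.7):
    return sum_masses_eV / (93.14 * h * h)

# A summed neutrino mass at the quoted 0.3 eV sensitivity would contribute:
print(f"Omega_nu ~ {omega_nu(0.3):.4f}")
```

Even a summed mass at this sensitivity limit contributes well under a percent of the critical density, showing how precisely the large-scale-structure and microwave-background data must be measured to reach it.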

Cosmology sometimes teaches us lessons about neutrino properties, but sometimes the inverse is also true. Tsutomu Yanagida of Tokyo showed how massive neutrinos could lead to baryon asymmetry by means of leptogenesis. Inevitable consequences of the proposed mechanism, however, are neutrinoless double-beta decay and leptonic CP violation. Steve King of Southampton pointed out that these two phenomena also turn out to be decisive among the many different models proposed to explain neutrino masses (and their obvious smallness) and mixing angles.
Many of the theories beyond the Standard Model not only explain massive neutrinos, but also predict additional, exotic particles that are appealing candidates for the universe’s dark matter. Prime candidates among them are non-relativistic weakly interacting massive particles (WIMPs), such as the neutralino. Oxford’s Yorck Ramachers presented the large number of WIMP searches that are operational or in preparation, along with the different experimental techniques they employ.

For several years the Italian DAMA experiment, a 100 kg sodium iodide detector, has been observing evidence for a WIMP signal, a claim that could have enormous consequences for particle physics, astrophysics and cosmology. Clearly, such a claim needs confirmation by independent experiments. The French EDELWEISS group, which operated a 320 g cryogenic detector, reported that they have now reached a sensitivity comparable to that of DAMA, an achievement made possible by the active background rejection their detector is capable of. However, the EDELWEISS data exclude most of the parameter region favoured by DAMA. With upgrades of the German-British-Italian CRESST experiment, the US CDMS search and EDELWEISS currently being installed, the sensitivity will be extended by at least two orders of magnitude towards lower coupling strengths within a few years. This will not only give the final answer as to whether the DAMA evidence is real, but also probe a substantial part of the parameter region predicted by the most economical supersymmetric extension of the Standard Model (the minimal supersymmetric Standard Model).

Neutrino telescopes

Eli Waxman of Israel’s Weizmann Institute explained that many different objects and processes in the universe are expected to emit high-energy neutrinos, among them gamma-ray bursts, active galactic nuclei and supernova remnants. So far, however, the first-generation neutrino telescope at Lake Baikal and the AMANDA detector at the south pole, limited in sensitivity by their size, have not identified any point sources, but have only set flux limits. To explore the region out to a distance of 10¹⁰ light-years and to investigate the expected diffuse flux of high-energy neutrinos, detector masses in the gigatonne range, corresponding to instrumented volumes of about a cubic kilometre, are required. Experience with Baikal and AMANDA has demonstrated that such huge detectors are feasible. In fact, with the IceCube project in Antarctica and the prototype installations of NESTOR and ANTARES in the Mediterranean, construction of these next-generation detectors has already begun. Undoubtedly, they will open up new avenues in neutrino astronomy.
Outlook

“Where do we stand and where are we going?” This was the question addressed by the two concluding talks. Michel Spiro of Saclay and Boris Kayser of Fermilab summarized in turn the most striking neutrino topics from an experimental and a theoretical perspective. Though the establishment of massive neutrinos is a fundamental breakthrough, many open questions and problems for further experimental and theoretical investigations remain. Mass eigenvalues and mixing angles are still to be determined. The questions of leptonic CP violation, dipole moments, mass hierarchy schemes and decay modes remain open. And the fundamental issue of whether the neutrino is of Dirac or Majorana nature still needs to be resolved. Moreover, there are many new applications of neutrinos as tools and messengers in astronomy, astrophysics and cosmology on the horizon to keep neutrino physics and neutrino astrophysics at the forefront of modern research. We may look forward to new fundamental results and discoveries to be discussed at the next neutrino conference, to be held in Paris in 2004.

Quark matter conference highlights RHIC results

cernquark1_10-02

At a seminar in February 2000, spokespersons from CERN’s heavy-ion experiments presented compelling evidence for the existence of a new state of matter in which quarks, instead of being bound up into more complex particles such as protons and neutrons, are liberated to roam freely. The seminar marked a turning point in heavy-ion research, with the spotlight turning to the other side of the Atlantic, where the Brookhaven Laboratory’s Relativistic Heavy-Ion Collider (RHIC) was about to switch on. Would the RHIC data corroborate or contradict the findings of CERN’s lower-energy programme? Most of the hadronic signals measured at RHIC so far have confirmed the CERN data and their interpretation, but new findings on high transverse momentum (pT) physics at RHIC have opened up a new avenue of enquiry. The impressively rich harvest of data from the first running periods of RHIC made QM 2002 a major milestone in the quark matter (QM) conference series.

The French town of Nantes was chosen to host QM 2002 last July in recognition of its importance as a focus for French heavy-ion research. As the home of the SUBATECH laboratory, founded in 1995, Nantes leads French participation in the ALICE heavy-ion experiment in preparation for the Large Hadron Collider (LHC) at CERN. The six-day conference programme saw 46 plenary talks and debates, 100 parallel session talks and more than 150 posters. Many of the speakers were appearing for the first time at a QM conference, the vanguard of a new generation to carry forward the physics of this field. The 700 participants came from all over the world.

Claude Détraz of CERN formally opened the conference, in the presence of the president of the Pays de la Loire regional council, with a look back at more than 20 years of heavy-ion history at CERN. It all began with the proposal of a US-German collaboration to bring a heavy-ion injector to CERN to do physics at the laboratory’s proton synchrotron. This idea evolved into a physics programme at the super proton synchrotron (SPS), leading to a lead-ion beam being built by a consortium from France, Germany, Holland, India, Italy and Sweden, together with CERN. The result was a fully-fledged research activity at CERN that will continue with the LHC.

Results from CERN

The search for a new state of strongly interacting matter at CERN started with beams of oxygen in 1986, and has continued with lead beams since 1994. The SPS fixed-target programme still has several experiments taking data, and many of those that have completed data-taking are still delivering physics results. Measurements of hadronic observables (strange and non-strange particles, flow and interferometry) are now becoming complete thanks to the measurements of the NA49, CERES and NA57 experiments, which were discussed by Christoph Blume of the German GSI laboratory, Marco van Leeuwen of Holland’s NIKHEF, Johannes Wessels of GSI, and Vito Manzari of the Italian INFN laboratory in Bari. Many of these studies have revealed interesting phenomena, which are also observed at RHIC.

New measurements at lower SPS energies allow excitation functions to be studied. These reveal interesting features, such as an indication of a maximum strangeness enhancement in an energy range between that achieved by Brookhaven’s Alternating Gradient Synchrotron and the top SPS energies. Statistical effects dominate fluctuations to a large extent, but detailed studies have revealed small non-statistical fluctuations. This was discussed by Bedanga Mohanty of Calcutta. Stefan Bathe of Münster presented results on high pT studies from the WA98 experiment. These do not exclude small jet-quenching effects (suppression of jets through medium-induced energy loss of hard-scattered partons) for very central lead-lead collisions compared with peripheral collisions. However, the clear suppression seen at RHIC is not observed at the SPS. Jet-quenching appears to be a signal that will be clearly accessible only at the colliders.

The most striking observation relevant to quark-gluon plasma formation is the suppression of J/ψ particles as measured by the NA38 and NA50 experiments at CERN. Luciano Ramello of the University of Piemonte Orientale presented improved data on J/ψ from NA50, with much better quality for peripheral reactions. Moreover, detailed momentum distributions of J/ψ are now available to constrain theoretical calculations. Philippe Crochet of Clermont Ferrand pointed out that data on J/ψ deserve a fresh look to understand all the details. Final results from the improved CERES experiment and the measurements of open charm, which will be performed by NA60, are eagerly awaited.

Results from RHIC

While the SPS heavy-ion programme has reaped most of its harvest, results from the RHIC experiments only started to appear at last year’s QM conference at Stony Brook, US, and became plentiful at this conference. Results on hadronic observables were presented by all four RHIC experiments in talks by Ian Bearden of the Niels Bohr Institute (NBI) for BRAHMS, Tatsuya Chujo of Tsukuba for PHENIX, Mark Baker of Brookhaven for PHOBOS, and by Lenny Ray of the University of Texas and Gene van Buren of Brookhaven for STAR. Their contributions took in global observables; spectra and yields of identified hadrons; interferometry; momentum, isospin and charge-fluctuations; and elliptic flow.

Full justice can only be done to all the new results from RHIC in the conference proceedings, but it is worthwhile drawing attention to some highlights. The ratios of produced particles can be described very accurately by statistical models of hadronization as discussed by van Buren, Chujo and Bearden for the experiments, and from the theory side by Andrzej Bialas of Krakow’s Jagellonian University, Johann Rafelski of Arizona, and Volker Koch of Berkeley. This was already seen at the SPS, and can also be successfully applied to lower-energy heavy-ion reactions and to proton-proton and electron-positron reactions. Whether these statistical distributions are of thermal origin, or whether they are merely related to phase space dominance is not at all clear. Thermal concepts may be applicable to high-energy heavy-ion reactions, but this appears much more unlikely for electron-positron reactions. Interestingly, the associated “chemical temperature” for high-energy nuclear reactions is very close to the postulated phase-transition temperature.

Interferometry of charged pions leaves us with a few unresolved puzzles, as discussed by Ray and Chujo. The duration of emission of the most abundant particles appears to be very short. The apparent size of connected regions of the expanding fireball has similar dimensions in the radial and the tangential directions, unlike the predictions of most models for the expansion. The phase space density of pions is apparently very high, which can be interpreted in terms of very low entropy per pion.

The asymmetry of particle emission in the transverse plane (known as elliptic flow), which showed up in non-central heavy-ion reactions at lower energies, has not only survived at RHIC, but is even stronger, as demonstrated in the talks by Ray, Chujo and Baker. Since hydrodynamical models are so far the most successful in describing the features of this asymmetry, this is one of the strongest hints of an early local equilibration of the system. The strength of the asymmetry is still finite for very large transverse momenta, where one would expect production from hard scattering to dominate over any equilibrated contribution. This may be seen as a hint of asymmetric jet-quenching in the reaction zone; the quantitative interpretation is, however, not yet settled.

Fluctuations and multiparticle correlations may help in finding hints of new physics. Results on various different approaches were presented, with Ray giving one example from STAR. He proposed an interpretation in terms of very late hadronization in central collisions.

Jet-quenching

cernquark2_10-02

At QM 2002, it became even clearer than at last year’s QM conference that RHIC has truly opened up the domain of hard scattering. This is particularly interesting, as it should allow the predicted jet-quenching to be used as a signature of the hot, dense medium. Rudolf Baier of Bielefeld discussed jet-quenching in detail. Hadron spectra have been measured by all experiments and were presented in the talks by Saskia Mioduszewski of Brookhaven for PHENIX, Gerd Kunde of Yale for STAR, Christof Roland of MIT for PHOBOS, and Claus Jørgensen of NBI for BRAHMS. Only PHENIX and STAR have measured out to truly high momenta. Their spectra are compared to the expectation from proton-proton reactions. This is commonly done by calculating a so-called nuclear modification factor (RAA): the ratio of spectra in nucleus-nucleus collisions to those in proton-proton collisions, scaled by the number of possible binary nucleon-nucleon collisions within a heavy-ion reaction. Of particular importance was the measurement of neutral pions out to pT = 10 GeV/c by PHENIX, both because the neutral pion can be cleanly identified out to such high pT and because PHENIX has also measured the proton-proton spectra within the same apparatus. The nuclear modification factor, which would equal 1 in the absence of nuclear effects, is significantly below 1 up to the highest momenta analysed (figure 1). Also shown in figure 1 are results of theoretical calculations with and without energy loss (jet-quenching). Calculations without energy loss miss the data completely; calculations including energy loss give the right order of magnitude for RAA, but still fail to describe the data exactly.
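In symbols, writing ⟨N_coll⟩ for the average number of binary nucleon-nucleon collisions estimated for a given centrality class, the nuclear modification factor described above is:

```latex
R_{AA}(p_T) \;=\;
\frac{\mathrm{d}^2 N^{AA}/\mathrm{d}p_T\,\mathrm{d}\eta}
     {\langle N_{\mathrm{coll}}\rangle \;
      \mathrm{d}^2 N^{pp}/\mathrm{d}p_T\,\mathrm{d}\eta}
```

If a nucleus-nucleus collision were simply an incoherent superposition of nucleon-nucleon collisions, RAA would equal 1 at high pT; values below 1 therefore signal a nuclear effect such as jet-quenching.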

While the suppression in inclusive spectra shown in figure 1 is quantitatively the best-established indication of jet-quenching, it is much more fascinating to see hints of jet structures themselves. Indications of such angular correlations relative to a high-momentum trigger particle, both at small angles and back to back, were discussed by David Hardtke of Berkeley for STAR and by Mioduszewski for PHENIX. STAR presented the most advanced analysis of these correlations, and was able to demonstrate modifications of the jet structures for central gold-gold collisions (figure 2). The trigger jet appears to be almost unaltered. The counter jet, however, is completely suppressed in central collisions. The quenching of jets relative to the expectation from the proton-proton case appears to be established. Nevertheless, the interpretation of this phenomenon is not completely clear. One of the major open questions remaining is that of the composition of the high pT distributions of different particle species. Results from PHENIX seem to indicate that pions make up only around half of all charged hadrons even at very high pT, in contradiction to expectations from perturbative quantum chromodynamics (QCD).

The study of rare probes at RHIC, such as J/ψ and direct photon production, is still under way. PHENIX collaboration members James Nagle of Columbia and Klaus Reygers of Münster presented the status of these analyses. Such results are likely to contribute to the experimental highlights of the next QM conference.

Looking ahead

cernquark3_10-02

The experiments have provided theorists with a huge amount of data to contemplate. Currently there is no comprehensive theory in sight that could describe all the data, or even a large fraction of them, as Brookhaven’s Robert Pisarski pointed out. Theories or models can describe limited sets of data, but for some observables even such attempts fail. For example, the precise shape of the nuclear modification factor observed at RHIC cannot be described. A lot of effort is needed in the coming years to close the gap between theory and experiment. Progress has so far been made in areas of limited scope. Pasi Huovinen of Minnesota pointed out how hydrodynamical models are successful in describing certain bulk properties of the reactions. Direct photon calculations have been stimulated by the available measurement of WA98, which apparently requires a hot initial state to be explained. François Gélis of Orsay discussed new ideas, including the precise treatment of the Landau-Pomeranchuk-Migdal effect, which predicts a suppression of low-energy photon production in a dense medium. Frithjof Karsch of Bielefeld said that the possible use of the maximum entropy method to calculate production rates is very promising. The idea of saturation has been developed into very fruitful theoretical recipes, as presented by Edmond Iancu of Saclay. Lattice calculations approach realistic parameter settings, but as Tsukuba’s Kazuyuki Kanaya pointed out, the question of the nature of the phase transition remains open. Zoltan Fodor of Eötvös University discussed the possibility of calculating properties of QCD at finite baryochemical potential on the lattice.

Future directions were the subject of the final sessions. The major avenue of quark matter research leads to the LHC heavy-ion programme, presented by Helsinki’s Keijo Kajantie and Paolo Giubellino of INFN Turin, where still higher energy densities and temperatures at negligible net baryon density are expected. An important follow-up on the lower-energy front was presented by Jochen Wambach of Illinois, who discussed a new facility at GSI that aims to study the highest net baryon densities.

Workshops focus on photon-hadron collisions

cernwork1_10-02

In 1924 Enrico Fermi travelled to Leiden, the Netherlands, to visit Paul Ehrenfest on a three-month fellowship from the International Education Board, which had been founded the year before by John D Rockefeller Jr for the “promotion and advancement of education throughout the world”. Fermi was just 23 years old. During his stay in Leiden, he developed a method to calculate the interactions of charged particles with matter, which he called “äquivalente Strahlung”. This was extended to the relativistic case in the 1930s by Carl Friedrich von Weizsäcker and E J Williams, who realized that charged particles in cosmic rays could deliver the highest-energy (virtual) photon beams to study pair production of the positron. Today, the technique is known as the equivalent photon approximation, and it provides a powerful tool for fundamental physics.

In an echo of the insight of von Weizsäcker and Williams, a growing community of physicists is looking to CERN’s forthcoming Large Hadron Collider (LHC) as a new source of very-high-energy photon-hadron interactions. When the LHC stores colliding lead beams at a centre of mass energy of 5.4 TeV per nucleon, for example, lead ions in grazing collisions will be bombarded with photons of energy extending to hundreds of tera-electron-volts in their rest frame. Similarly, photon-photon invariant masses in the range of 100 GeV will be created in this way.
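The quoted photon energies follow from the equivalent photon picture: in the collider frame, the photon spectrum of a lead ion cuts off near E ≈ γħc/R, where γ is the beam Lorentz factor and R the nuclear radius, and viewing that spectrum from the rest frame of the oncoming ion boosts it by a further factor of roughly 2γ. A rough numerical sketch (the lead radius and the rounding of constants are illustrative assumptions, not values from the text):

```python
# Order-of-magnitude equivalent-photon estimate for Pb-Pb at the LHC,
# using sqrt(s_NN) = 5.4 TeV as quoted in the text.
HBARC_MEV_FM = 197.3        # hbar*c in MeV*fm
M_NUCLEON_GEV = 0.938       # nucleon mass in GeV
R_PB_FM = 7.1               # approximate lead nuclear radius in fm (assumed)

e_per_nucleon_gev = 5400.0 / 2                 # each beam carries half of sqrt(s_NN)
gamma = e_per_nucleon_gev / M_NUCLEON_GEV      # Lorentz factor of each beam

# Photon spectrum cut-off in the collider frame: E_max ~ gamma * hbar*c / R
e_max_lab_gev = gamma * (HBARC_MEV_FM / R_PB_FM) / 1000.0

# Boost into the rest frame of the oncoming ion: extra factor ~ 2*gamma
e_max_rest_tev = 2 * gamma * e_max_lab_gev / 1000.0

print(f"gamma per beam         ~ {gamma:.0f}")
print(f"E_max (collider frame) ~ {e_max_lab_gev:.0f} GeV")
print(f"E_max (ion rest frame) ~ {e_max_rest_tev:.0f} TeV")
```

With these inputs the cut-off comes out near 80 GeV in the collider frame and several hundred TeV in the ion rest frame, consistent with the scales quoted above.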

Such collisions are known as ultraperipheral, and have the general feature of an absence of particles produced along at least one beam direction (Bjorken’s “rapidity gap”), because the photon is neutral. In many cases the produced system can be measured in a rather clean way in any of the LHC’s detectors. The strong field of these “quasi-real photons” has found many interesting applications in atomic, nuclear and particle physics.

In October of Fermi’s centenary year, 2001, a workshop in Erice, Sicily, focused on ultraperipheral collisions. It was followed by a second meeting at CERN last March. The Erice workshop – Electromagnetic Probes of Fundamental Physics – was organized by Bill Marciano and Sebastian White from the US Brookhaven National Laboratory, and featured sessions on the muon g-2, ultraperipheral collisions at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) and the LHC, as well as topics in strong field research.

Francis Farley of Yale University, US, gave a review of 44 years of g-2, while David Hertzog of Illinois, US, presented the latest results from Brookhaven. Andreas Höcker of Orsay, France, Simon Eidelman of Novosibirsk, Russia, and Marciano discussed the theoretical side, with an emphasis on hadronic corrections. Since the meeting, higher-precision results on g-2 have been announced. Teams from Brookhaven, Novosibirsk and the ALEPH experiment at CERN’s Large Electron Positron collider presented results on the related analyses of electron-positron annihilation to hadrons and semi-leptonic tau lepton decays. An intriguing discrepancy between the electron-positron and tau analyses is now the focus of the interpretation of g-2 as a potential indicator of physics beyond the Standard Model. New data from the Belle, BaBar and KLOE experiments at Japan’s KEK laboratory, SLAC in the US and Frascati in Italy are eagerly awaited.

Ilya Ginzburg and Valery Serbo of Novosibirsk presented the related topic of future photon-photon colliders. The spontaneous breakdown of the vacuum by intense lasers was discussed by Adrian Melissinos of Rochester University, US, while Andreas Ringwald of Germany’s DESY laboratory talked about the science that will be accessible with the X-ray free-electron lasers being developed by the DESY-led TESLA project. The sources and interactions of ultra-high-energy cosmic rays were the subject of talks from Alex Kusenko of UCLA and Steve Reucroft of Boston’s Northeastern University, US.

Carlos Bertulani of Michigan State University, US, and Spencer Klein of Berkeley, US, along with Kai Hencken of Basel University, Switzerland, and Gerhard Baur of Jülich, Germany, reviewed the theoretical background of ultraperipheral collisions, while also pointing out the opportunities in both photon-photon and photon-hadron physics.

After presentations on recent experimental results from RHIC by Jim Thomas and White of Brookhaven, and Pablo Yepes of Rice University, US, the discussion turned to a prioritized list of the opportunities at RHIC and the LHC. These are summarized in an Erice white paper on hot topics in ultraperipheral collisions, which is included in the workshop proceedings (see Further reading).

The white paper was adopted as the point of departure for an exploratory meeting on ultraperipheral relativistic heavy-ion collisions that was held at CERN in March. This meeting, organized by Hencken and Yepes, was well attended not only by theoreticians, but also by experimentalists from the LHC collaborations interested in ultraperipheral collisions.

cernwork2_10-02

SLAC’s Stan Brodsky gave the introductory talk, followed by Baur. Both presented a range of interesting applications from a theoretical point of view for photon-photon and photon-hadron collisions at RHIC and the LHC. Leonid Frankfurt of Tel Aviv University, Israel, discussed photoproduction of vector mesons on protons and ions. The first results from the STAR experiment at RHIC for coherent rho-meson production were presented by Falk Meissner of Berkeley. Joakim Nystrand of Lund University, Sweden, showed that interesting interference phenomena, due to the fact that each ion can either be the source of the photon or the target, can be studied in these collisions. Highlights from the rich photon-hadron programme at DESY’s HERA electron-proton collider were presented by Giuseppe Iacobucci of the INFN in Bologna, Italy.

The strong electromagnetic excitation of heavy ions in ultraperipheral collisions is largely due to the strong source of low-energy photons. As they subsequently decay, heavy-ion fragments can be used for luminosity monitoring by detecting the neutrons in a zero-degree calorimeter, as discussed by White, Igor Pshenichnov of Moscow’s Institute for Nuclear Research and Vladimir Korotkikh from Moscow State University, Russia.

Physics opportunities

A range of physics opportunities was discussed at the CERN workshop. Berkeley’s Ramona Vogt pointed out the possibility of deducing the gluon structure function within the nuclear medium using heavy-quark production in photon-gluon fusion. Krzysztof Piotrzkowski from Louvain-la-Neuve, Belgium, described how the possibility of tagging protons in a forward detector after photon emission permits the study of photon-photon collisions at invariant masses beyond the Z mass. This would allow researchers to look at the electromagnetic coupling of W bosons and also to search for new physics. Brookhaven’s Anthony Baltz and Hencken, and Frank Krauss of Cambridge University, UK, discussed aspects of lepton pair production. These include bound-free pair production and the production of muonium.

Studies on detecting ultraperipheral collisions within the LHC detectors were presented by Serguei Sadovsky of Protvino, Russia, for ALICE and Yepes for CMS, both of whom emphasized the importance of having a trigger for these events. Daniel Brandt of CERN discussed the possibilities of different ion beams at the LHC, where electromagnetic processes can limit the maximum beam luminosity.

One tangible outcome of the workshop is the formation of a working group on ultraperipheral collisions, paying particular attention to the potential for this physics at the LHC. Another meeting is scheduled to take place at CERN on 11-12 October. Its goal will be to start work on a report giving full details of the physics of ultraperipheral heavy-ion collisions accessible within the LHC detectors as currently conceived. With such an intense source of photons, the conclusion of the workshops was that the future is bright for ultraperipheral collisions at the LHC.

PASI studies new states of matter

cernmatter1_10-02

The search for a phase transformation between hadronic matter and a state of deconfined colour charge, generically known as quark-gluon plasma (QGP), began in the mid-1980s with experiments at CERN’s Super Proton Synchrotron (SPS) in Europe, and Brookhaven’s Alternating Gradient Synchrotron in the US. In 2000, the search moved onto the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. A deconfined quark phase is predicted by numerical solutions of quantum chromodynamics (QCD) at finite temperature, and so its identification would be a dramatic manifestation of the theory that governs strong interactions.

The idea of organizing a Pan American advanced study institute (PASI) under the auspices of the US National Science Foundation’s Americas programme was first advanced in early 2000. A number of heavy-ion researchers recognized that such an institute could help emerging research groups in Latin America, and would serve as a focus for potential new participants from widely scattered and distant locations in the region. Planned as an advanced school rather than a conference or workshop, the objective of the institute was primarily pedagogical, although the presentation of the most recent results in both theory and experiment was also on the agenda. Of equal importance, however, was PASI’s role as a meeting place where participants could forge new working relationships, and the selection of the small Brazilian mountain resort of Campos do Jordão as the venue was made with this in mind.

The participation reflected a good regional balance, with 20 US and European physicists, and 28 Latin Americans. There were 5 US/European and 16 Latin American postdocs, and 23 US/European and 36 Latin American students. The primary lecturers delivered hot-off-the-press scientific material on all aspects relevant to heavy-ion research, including the theory of strong interactions and experimental heavy-ion collision results. Lectures ranged from elementary tutorial-style introductions to technical seminars at the forefront of the subject. The study of the properties of QGP and the confining vacuum structure was at the heart of the meeting. Daily discussion sessions allowed the principal lecturers and the participants to interact. One afternoon was devoted to a survey of research activities in Latin America. The school was supported by the US National Science Foundation and Department of Energy, Brazilian federal and state agencies (CNPq, FAPERJ, FAPESP, the Federal University of Rio de Janeiro and the University of Sao Paulo), and by Germany’s GSI laboratory.

Broad programme

cernmatter2_10-02

Wit Busza of MIT presented a general introductory course on the physics of heavy-ion collisions. His lectures were complemented by several brilliant tutorials on the foundations of this interdisciplinary field. These were given by Guido Altarelli of CERN on QCD, Takeshi Kodama of Rio de Janeiro on relativistic gases, Berndt Muller of Duke University on the properties of quark matter, Johann Rafelski of Arizona on hadrochemistry, and Columbia’s Bill Zajc on correlation observables such as Hanbury-Brown and Twiss (HBT) intensity interferometry.

The experimental programmes were thoroughly covered in several courses. John Harris of Yale and Bill Zajc surveyed the RHIC physics programme and the latest results from the Brookhaven collider’s STAR and PHENIX experiments. Christof Roland of MIT presented the latest work of another RHIC experiment, PHOBOS. CERN’s SPS experimental programme was reported by Federico Antinori of Padova, while Karel Safarik of CERN took the students into the instrumental realm of particle tracking in a detailed and fascinating lecture. In his presentation, Seattle’s Tom Trainor clearly showed the complexity of some experimental results.

The study of vacuum structure, confinement, lattice gauge theories and quantum transport was addressed by Dima Kharzeev of Brookhaven, Adriano Di Giacomo of Pisa, Frithjof Karsch of Bielefeld, and Hans-Thomas Elze of Rio de Janeiro. Bira van Kolck of Arizona and Brookhaven introduced the power of effective theories in the study of strong interactions, while Jörg Aichelin of Nantes showed how this method can be used in the study of phase transition dynamics.

These hard-core theoretical topics were accompanied by gentler phenomenological courses by Bob Thews of Arizona on charm, Klaus Werner of Nantes on event generators, and Laszlo Csernai of Bergen on hydrodynamics and flow. Frankfurt’s Horst Stöcker’s survey of QGP signals highlighted exotica such as stable black hole formation in heavy-ion collisions, quite in contrast to the more traditional discussion of strangeness and entropy evidence for the formation of deconfined states, offered by Arizona’s Johann Rafelski.

Among more than 15 seminars and lectures reporting the latest progress were many notable theoretical and experimental contributions. Frederique Grassi of Sao Paulo introduced open issues in particle emission, US PASI student Paul Sorensen from UCLA surveyed azimuthal anisotropy in strange particle production, and Latin American PASI student Javier Castillo discussed multistrange particle production at RHIC.

Constantino Tsallis of Rio de Janeiro gave the institute’s final lecture on non-extensive statistics and its applications, leading to a discussion which carried on well into dinner. Rainy weather hampered the programme of excursions to the local countryside, but ensured that the lecture rooms were always full of enthusiastic students, which was both stimulating for the lecturers and very promising for the future of the field.

Muon magnetism study reveals new twist

cernnews6_9-02

In the latest twist to the US Brookhaven laboratory’s high-precision muon magnetism experiment, researchers announced in July a result slightly at odds with the Standard Model of particle physics.

The Brookhaven experiment measures the quantity g-2 for the muon, where g is the particle’s so-called gyromagnetic ratio – the ratio of its magnetic moment to its spin angular momentum. According to Dirac’s relativistic quantum theory, g should be exactly two. However, additional small quantum effects shift this value, leading to a non-zero g-2.
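In symbols, the relation being probed is the standard one connecting the muon’s magnetic moment to its spin, together with the anomaly that the experiment actually quotes:

```latex
\vec{\mu} \;=\; g\,\frac{e}{2m_\mu}\,\vec{S},
\qquad
a_\mu \;\equiv\; \frac{g-2}{2}
```

Dirac theory gives g = 2 exactly, so a_μ isolates the quantum corrections – which is why it is such a sensitive probe of physics beyond the Standard Model.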

The quantity g-2 is a very sensitive probe for testing the Standard Model, so when in 2001 the Brookhaven team announced a result at odds with Standard Model calculations, the particle physics world took notice. However, when the theoretical calculations were redone, a small error was found and agreement between theory and experiment was restored.

The original Brookhaven measurement was based on an analysis of 10⁹ muon decays accumulated in 1999. The result announced in July includes a second data sample, accumulated in 2000, which contains four times as much data as the first. The new measured value of g-2 reinforces the earlier result, with a total error of 0.7 ppm compared with the 1.3 ppm of the 1999 measurement. Along with further refinement of the theoretical calculations, it shows a slight discrepancy with the current Standard Model value, differing by between 1.6 and 2.6 times the estimated error of the measurement. This is too small a discrepancy to claim new physics. Given the precision achieved by the Brookhaven experiment, however, it is bound to attract speculation.

Supergravity celebrates quarter of a century

cernsuper1_9-02

The development of supergravity is a landmark in the intertwined histories of gauge field theory and quantum gravity. Culminating the drive towards higher symmetry within quantum field theory, it opens a door to the unification of all forces, and the possibility for extra dimensions. As a candidate theory of everything, supergravity was eclipsed by strings, yet in recent years it has re-emerged as a key component of “modern” string theory, playing an essential role in connecting field theories with string theories, and string theories with each other. Supergravity is as much on theorists’ minds now as it was in the mid-1970s. On 3-4 December 2001, an international meeting, “Supergravity at 25”, was hosted by the C N Yang Institute for Theoretical Physics at the Stony Brook campus of the State University of New York, to commemorate the anniversary of this protean theory, and to assess its ongoing role today.

The modern era of field theory began with the discovery of nonabelian gauge invariance by Chen Ning Yang and Robert L Mills in 1954. A decade and a half later, Gerard ‘t Hooft and Martinus Veltman famously proved that a large class of nonabelian gauge theories can be quantized and renormalized consistently.

Before long, the elements of the Standard Model had fallen into place, and theorists hurried forward toward a “grand” unification of the strong and electroweak interactions.

Charmed circle

Gravity, however, remained outside this charmed quantum circle. The force in quantum gravity is carried by the spin-2 graviton, and could not immediately be unified with Standard Model forces, carried by the spin-1 photon, weak bosons and gluons. The same spin-2 graviton leads to particularly aggressive high-energy behaviour. This results in a hopelessly ambiguous theory, through the proliferation of new infinities at each level of calculation. In technical terms, Einstein’s gravity is not renormalizable. The first inspirations for supergravity were to address these two problems. The novel element was another form of unification, supersymmetry, which relates bosons and fermions. According to this theory, there exists for every boson in nature a fermionic partner, and vice versa. The fermionic partner of the graviton is the gravitino. CERN will search for superpartners of Standard Model particles at the Large Hadron Collider. Within supersymmetry, it was hoped that gravity would be united with other forces, and, through a special combination of particles and interactions, would even turn out to be finite.

cernsuper2_9-02

In this context, the progression of field theory from quantum electrodynamics (QED) to supergravity is illustrated in figure 1. The basic interaction of QED is the emission of a photon (γ) from a charged particle, like a quark (q), with an amplitude proportional to the quark’s electric charge (Qq in figure 1a). We say that the photon is the gauge particle of the electric current. The photon itself, however, is electrically neutral. At the next level of complexity, nonabelian gauge theories like quantum chromodynamics (QCD) introduce an array of “colour” charges, each of which is conserved. In this case, the gluon (G) is the gauge particle of the colour currents, and carries them as well. Thus, when a gluon is emitted (figure 1b), it connects quarks of different colour, with an amplitude proportional to the strong coupling (gs). All of the colour charges are conserved in the full system of quarks and gluons, and the gluon is represented as a double line to emphasize its colour structure.

In supergravity, currents that describe the flow of energy, momentum and spin combine into a set, called the supercurrent multiplet, analogous to the currents of electric and colour charges in QED and QCD. Part of this multiplet is the energy-momentum tensor, but part is the “supercurrent” itself, a hybrid field with a vector and a spinor index, related to the energy-momentum tensor by a supersymmetry transformation. The spin-2 graviton is the gauge particle of the energy-momentum tensor, while its “superpartner” the spin-3/2 gravitino (ψμ) is the gauge particle of the supercurrent. The gravitino is emitted with an amplitude proportional to the energy carried by this current, multiplied by the square root of Newton’s constant (figure 1c). The gravitino (with an extra arrow in the figure to emphasize its spin structure) carries no electric and colour charges, but connects particles with different spin, in this case a quark and scalar supersymmetric partner “squark” (q̃). It thus reveals the underlying unity of the quark and squark within their own supermultiplet, in much the same way that the gluon connects quarks that are unified within a multiplet of three colours.

The development of supergravity 25 years ago may be thought of as the exercise of identifying a minimal set of interactions between gravitons and gravitinos that respects general co-ordinate invariance and makes supersymmetry a gauge symmetry. Today this is as routine as writing down the Lagrangian for a Yang-Mills theory. In 1976, however, it was not even clear that it was possible. The task of formulating the minimal supergravity theory was accomplished by Sergio Ferrara, then at the Ecole Normale Supérieure, and Daniel Freedman and Peter van Nieuwenhuizen of Stony Brook (see further reading list). At the December meeting, they recalled days of alternating hope and despair, which reached a climax one evening in the spring of 1976, when a computer calculation showed that the 2000 terms generated by an infinitesimal supersymmetry transformation miraculously cancelled. With this result, supergravity moved from conjecture to consistency. Their approach, which they called the “Noether method”, was based on building the correct transformation laws by retracing the reasoning of Emmy Noether’s famous theorem connecting symmetries and conservation laws. Shortly afterwards, Stanley Deser of Brandeis University and Bruno Zumino of CERN gave a useful reformulation (see further reading list), and in the early months and years of supergravity, other approaches gave further insights and dramatic simplifications. A systematization of what quickly became a veritable zoo of supergravity theories was provided by the superspace approach of Julius Wess and Zumino, developed by S James Gates, Jr and Warren Siegel, and the tensor calculus developed by Ferrara and van Nieuwenhuizen, and independently by Kellogg S Stelle of Imperial College, London, and Peter West of King’s College, London.

Supergravity today

Participants at the symposium included the original authors, along with others, such as Marc Grisaru of Brandeis University, who recalled classic investigations into the new theory. Many presentations, however, addressed the role of supergravity today. It fell to string theory to provide a finite quantum theory of gravity, matter and other forces. Our grasp of string theory is incomplete, however, because we lack an understanding of its ground (vacuum) state, and in the larger sense, its non-perturbative spectrum of states. This is one area where supergravity is central to the study of string theory. At the meeting, aspects of the dualities between different string theories were discussed by Bernard Julia of the Ecole Normale Supérieure, Igor Klebanov of Princeton, and West. These dualities have led to a compelling conjecture that all the consistent (10-dimensional) string theories are actually different vacua of a single underlying theory, M-theory, whose low-energy limit is 11-dimensional supergravity. Properties of 11-dimensional supergravity do indeed shed light on string theories in 10 dimensions. In addition, many special solutions to the supergravity equations of motion can be identified with objects in string theory called D-branes, a theme discussed at the conference by Gary Gibbons of Cambridge University, Pietro Fré of Turin, and Kostas Skenderis of Princeton. D-branes are central to a program described by Ashoke Sen of India’s Harish-Chandra Research Institute, for a closed-string theory based on open strings, and aspects of D-brane dynamics were discussed by Michigan’s Michael Duff, Stockholm’s Ulf Lindström, and John Schwarz of Caltech. Bernard de Wit of Utrecht and Ferrara discussed recent developments in supergravities with more than one supersymmetry.

Supergravity is also central to a remarkable discovery called the AdS/CFT correspondence, which relates supergravity in higher-dimensional anti-de Sitter (AdS) space-time (a space-time with constantly negative curvature) to strongly coupled gauge field theories (CFT). This correspondence, which relates quantum correlations in the field theory to classical solutions of supergravity, was discussed by Freedman, Klebanov, Emery Sokatchev from CERN, Ergin Sezgin from Texas A&M, Arkady Tseytlin from Ohio State and Nicholas Warner from the University of Southern California (USC). Supergravity’s possible role in cosmology was discussed by Renata Kallosh of Stanford.

In part, the meeting celebrated the influence of supergravity (thousands of papers have the word in their title, and thousands more list it as a keyword). Even more impressively, it demonstrated its vitality. Though supergravity is 25 years old, the conference had the excitement and energy characteristic of a recent discovery. Only half in jest, some participants looked forward to new and unexpected developments to be celebrated on supergravity’s 50th birthday.

Testing models for quantum gravity

The theory of general relativity provides an appealing way of understanding gravitational dynamics, by perceiving the nature of the gravitational force as being due to a non-trivial geometry of space-time.

The predictions of this revolutionary theory were verified experimentally soon after it was proposed, making Einstein instantly famous. However, general relativity is a purely classical theory, and quantizing it has so far proved to be a formidable task that is far from complete. Because the theory’s form is so unconventional compared with the other fundamental interactions in nature, a mathematically consistent and complete theory of quantum gravity remains elusive.

Theoretical approaches

The discovery of string theory opened up novel and unconventional ways of attacking the problem. By viewing gravitons, the hypothesized carriers of the gravitational interaction, as one of the excitations of the (closed) superstring ground state, the first mathematically consistent framework for the unification of gravity with the rest of the fundamental interactions (strong and electroweak) could be achieved. However, this approach is also incomplete. The last decade has revealed a much richer structure of the theory, consisting not only of objects with one spatial dimension (strings), but also of higher-dimensional solitonic membrane structures (such as D-branes), whose dynamics are still not well understood. String theory itself, however, has not yet given a complete answer to the question of what happens when quantum matter interacts with singular space-time backgrounds such as black holes.

cernquant1_9-02

In the framework of local field theory, John Wheeler and Stephen Hawking have suggested that microscopic quantum space-time fluctuations of black hole type, with size of the order of the Planck length (10⁻³⁵ m), characterize the quantum-gravity vacuum, giving it a “space-time foamy” nature (figure 1). Interaction of matter with such backgrounds may result in the loss of quantum coherence. This in turn could lead to significant deviations from the standard quantum-mechanical behaviour of matter particles, even at scales much lower than the characteristic quantum-gravity (Planck) energy scale of 10¹⁹ GeV. This is because gravity is a non-renormalizable interaction, with a dimensionful coupling constant, and so it may manifest itself at much lower scales. An example of such a phenomenon is provided by the so-called charge-parity (CP) discrete symmetry, whose violation manifests itself at scales much lower than the characteristic scale of the underlying weak interactions responsible for the effect.

However, the existence of space-time foam effects has been questioned, in particular by string theorists. In superstrings, for certain specific classes of black holes, it is possible to count precisely the microstates that constitute the internal black hole structure by virtue of string dualities (certain discrete gauge symmetries of strings). This has prompted the conjecture that in string theory there is no loss of information during the interaction of string matter with singular space-time backgrounds, and therefore no loss of quantum coherence. If this is true for every singular space-time background, it would be a manifestation of the so-called holographic conjecture of ‘t Hooft and Susskind, according to which any information that enters the horizon of a black hole (a sort of space-time boundary) is encoded on the boundary and no information is lost.

Unfortunately, the situation may not be that simple. Physicists have so far been unable to demonstrate the holographic principle rigorously outside certain restricted classes of stringy black hole backgrounds. Moreover, within the modern context of brane theory there have been theoretical arguments demonstrating that it is impossible to describe the formation of a black hole by collapsing string matter, or the final stage of its evaporation by purely unitary quantum methods.

Liouville strings

It is at this point that alternative paths within the string framework have been proposed. One is the so-called Liouville (non-critical) string, a sort of non-equilibrium string theory allowing for a stochastic description of space-time foam backgrounds. In such models, gravitational degrees of freedom, unobservable by low-energy observers, constitute an “environment” that results in the non-equilibrium nature of the underlying string theory. Technically speaking, the foam background is not conformal (its world-sheet dynamics are not invariant under special local-scale transformations). Nevertheless, with certain restrictions such theories are still mathematically consistent.

Time plays a particularly special role in Liouville strings, where it is identified with a world-sheet renormalization-group scale, the Liouville mode itself. In such a picture, the irreversibility of the temporal flow is guaranteed by powerful renormalization-group-flow theorems of the 2D world-sheet geometry. Such an irreversible flow has immediate consequences for entropy production, associated with the non-equilibrium nature of the Liouville string.

On the other hand, it has been argued that a Liouville-string irreversible time flow may play an important role in cosmological scenarios, in particular in relaxation models of the universe, allowing for the possibility of an exit from an accelerating universe (de Sitter) phase, as well as a relaxing-to-zero vacuum energy. Such models provide a natural explanation for the smallness of the present-era value of the cosmological constant, consistent with recent astrophysical claims. It is therefore interesting to note that the same mathematical framework of the Liouville string, which provides a picture for the space-time foam, can also describe a cosmological model of the universe with correct phenomenology.

The fundamental issue of the nature of time probably holds the key to unravelling a consistent theory of quantum gravity. The relevant phenomenology, therefore, which has already enjoyed almost two decades of intense research, may also contribute significantly to an understanding of such fundamental questions.

For the time being, however, string theories seem to suffer from an important drawback, which probably prevents a complete understanding of quantum gravity issues within this framework. This is their background dependence – the fact that the whole formalism of strings, at least so far, is based on specific space-time backgrounds. A satisfactory theory of quantum gravity is expected to be capable of describing the generation of the space-time structure in a dynamical way from more fundamental units. This may be possible in string field theory, but the subject is at present in its infancy.

This property of background independence seems to characterize another major approach towards the quantization of gravity, the so-called loop quantum gravity. This is a way of canonically quantizing gravity starting from a theory of abstract “spin-foam networks”. These are the fundamental units from which the fabric of space-time is generated dynamically in a mathematically self-consistent and elegant formalism, and hence the approach is background independent by design. At present it is not known whether the theory is related to (Liouville) string theory. Moreover, no significant progress has been made towards an understanding of issues relating to information loss in quantum black hole backgrounds or the unification of gravity with the rest of the interactions. There is intense ongoing research in this field, and such questions may soon be tackled. For our purposes it is sufficient to mention that loop quantum gravity, like Liouville strings, appears to provide a theoretical framework for a discussion of stochastic space-time foam dynamics.

Experimentally falsifiable predictions

It is of primary importance to attempt to make physical, experimentally falsifiable predictions from these different frameworks of quantum gravity. At first sight this may seem wishful thinking for any theory of quantum gravity, whose fundamental length scale is far too small to be directly visible. However, since gravity is a non-renormalizable interaction, effects associated with it might be visible at energy scales much lower than the Planck scale.

cernquant2_9-02

One possible imprint of quantum gravity at lower scales might be the loss of quantum coherence that characterizes certain models of quantum gravity. The imprint would be a violation of CPT symmetry (T = time reversal), whose conservation is a theorem of any local unitary field theory without gravity, but whose failure is almost guaranteed in a theory with highly curved space-time structure such as a space-time foam. The violation of such a symmetry occurs through a modification of the standard quantum-mechanical evolution of matter, somewhat similar in nature to the behaviour of open systems in stochastic media. Among the most sensitive probes of such effects are neutral kaons and other mesons. The recently completed CPLEAR experiment at CERN reached a sensitivity not far from the theoretically estimated magnitude of such CPT-violating effects in models where the Planck energy scale provides only a minimal suppression. Such models could therefore be falsified at new experimental meson facilities, for example at Frascati in Italy and at Fermilab in the US.

A second effect, which may also be falsified in the immediate future, is a sort of mean-field effect of quantum-gravity foamy situations, according to which the propagation of ordinary particles in this medium results in a modification of the particle’s dispersion relation. This effect was first predicted in the framework of Liouville strings. A similar effect has since been shown to characterize the loop-gravity approach. Modified dispersion relations have also been postulated independently at a phenomenological level by researchers proposing a thermal-like nature of the quantum gravity environment or inspired by condensed matter situations. A modification of the dispersion relation for photons or other particles with no mass implies a non-trivial refractive index in vacuo, in other words a frequency-dependent velocity of light. The origin of this effect can be traced back to the spontaneous violation of Lorentz invariance that may characterize the ground state of such theories.

Such effects are also known to describe local field theory situations in non-trivial vacua, such as thermal vacua, or quantum field theories in the vicinity of black hole backgrounds, even allowing for superluminal signals without violations of causality. However, the effect predicted by the stringy (Liouville) approach to quantum gravity has two distinctive features: the effect always leads to subluminal signals, and it is enhanced with increasing energy of the particle probe, in contradistinction to the field-theoretical cases where the effect decreases with energy. In contrast to the stringy case, the loop gravity approach predicts superluminal signals as well.

The Liouville approach to quantum gravity also predicts stochastic light-cone fluctuations, which have also been conjectured to characterize some non-trivial theories of local quantum gravity involving coherent gravitons. Light-cone fluctuations imply stochastic fluctuations of the speed of light in vacuo, which in turn imply stochastic fluctuations in the arrival times of photons of the same frequency, in contrast to the refractive-index effect, which implies arrival-time differences for photons of different frequencies. The stochastic effect in string theory is suppressed (as compared with the refractive-index effect) by powers of the (weak) string coupling, which expresses the strength of the interactions of string matter.

Astrophysical probes

cernquant3_9-02

One of the first probes suggested for the experimental falsification of the refractive index and the stochastic light-cone fluctuation effects was gamma-ray bursts (GRBs). Experimental tests of such effects would look for microstructure in the arrival time of light beams from a GRB (figure 2) and a correlation with distance (redshift). If the emission of photons at various energies were simultaneous within the standard quantum mechanical limits, then a refractive index effect would imply differences in the arrival times of photons at different energies. In Liouville string models the subluminal nature of the effect implies that the higher energies would be delayed more. In contrast, loop-gravity-inspired models are characterized by both superluminal as well as subluminal propagation, and as a result there would be birefringence effects, which are absent in the stringy models. On the other hand, as far as stochastic light-cone fluctuations are concerned, there would be fluctuations in the arrival times of photons at the same energy. A systematic theoretical error analysis is needed, however, given that it is possible that photons at different energies are not emitted simultaneously.

Some of the GRBs observed so far are characterized by microstructures in their light curves of less than a millisecond, and are likely to emit gamma rays in the GeV or even TeV energy regions. Since many of these GRBs are known to be at cosmological distances (of the order of 3000 Mpc or larger), the sensitivity of these high-energy channels to the quantum-gravity refractive index and stochastic effects is such that they can be falsified with the next generation of satellite GRB-dedicated facilities, such as NASA’s Gamma-ray Large Area Space Telescope (GLAST) and the AMS experiment. Similar sensitivities might be achieved by terrestrial and extraterrestrial interferometric devices, which are also capable of detecting these stochastic quantum-gravity effects as part of the noise in the resulting interference patterns.
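The scale of the refractive-index effect described above can be estimated with a one-line calculation. The sketch below assumes the simplest linearly suppressed dispersion relation, with the suppression scale set to the Planck energy; it is an order-of-magnitude estimate only, ignoring redshift effects and any model-specific factors.

```python
# Back-of-envelope arrival-time delay from a linearly (Planck-)suppressed
# refractive index, dt ~ (E / E_QG) * D / c, for a photon of energy E
# travelling a distance D.  Not a prediction of any specific model.
MPC_M = 3.086e22      # metres per megaparsec
C = 2.998e8           # speed of light in m/s
E_QG_GEV = 1.22e19    # assumed suppression scale: the Planck energy, in GeV

def delay_seconds(energy_gev, distance_mpc):
    """Extra arrival-time delay relative to a zero-energy photon."""
    return (energy_gev / E_QG_GEV) * (distance_mpc * MPC_M / C)

# A 1 GeV photon from a GRB at 3000 Mpc: a delay of tens of milliseconds,
# comparable to the sub-millisecond microstructure in GRB light curves.
print(f"{delay_seconds(1.0, 3000):.3f} s")
```

The delay grows linearly with photon energy, which is why GeV-TeV observations by GLAST-class instruments give the best sensitivity.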

cernquant4_9-02

Other astrophysical probes of the stochastic quantum-gravity effects may be provided by ultra-high-energy cosmic rays (UHECR) with energies above 10¹⁹ eV, as well as by TeV photons. The presence of such events seems puzzling from the point of view of Lorentz invariance: standard kinematics implies the existence of an energy threshold, the Greisen-Zatsepin-Kuzmin (GZK) cut-off, above which certain reactions would prevent such energetic particles from reaching the observation point, assuming an extra-galactic origin. Some exotic suggestions have been made to relate the Lorentz invariance violation associated with the quantum-gravity-induced modification of particle dispersion relations to the existence of UHECR or TeV photons, in the form of an abolition of the GZK cut-off in such models.
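The origin of the GZK cut-off can be sketched with elementary kinematics: a sufficiently energetic proton can produce a Δ(1232) resonance on a cosmic microwave background photon, losing energy through pion emission. The rough estimate below assumes a head-on collision with a photon of typical CMB energy; the quoted cut-off (around 5 × 10¹⁹ eV) follows from averaging over the full blackbody spectrum and collision angles.

```python
# Head-on threshold for photopion production p + gamma_CMB -> Delta(1232),
# the reaction behind the GZK cut-off:
#     E_p = (m_Delta^2 - m_p^2) / (4 * e_gamma)
# All energies and masses in eV; this is an order-of-magnitude sketch.
M_DELTA = 1.232e9    # Delta(1232) mass, eV
M_P = 0.9383e9       # proton mass, eV
E_GAMMA = 6.3e-4     # assumed typical CMB photon energy at ~2.7 K, eV

e_threshold = (M_DELTA**2 - M_P**2) / (4 * E_GAMMA)
print(f"{e_threshold:.1e} eV")  # of order 10^20 eV
```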

High-energy cosmic neutrinos are also sensitive probes of quantum-gravity effects. For instance, if a minimally suppressed refractive index applies to neutrinos, then ultra-high-energy neutrinos of 10¹⁹ eV, which may be emitted from GRBs, would be completely dispersed and thus unobservable. The observation of such energetic particles would therefore immediately exclude models with minimal Planck-scale suppression. Minimal-suppression models of quantum-gravity foam with baryon-number-violating interactions are in fact already excluded on the basis of solar and extra-galactic neutrino observations, since such effects would lead to neutrino oscillations in direct conflict with experiment.

Last, but not least, bounds on stochastic effects significantly more stringent than those placed by astrophysical observations may be obtained by means of atomic physics experiments. Such experiments are mainly concerned with measurements of quantum-gravity induced corrections to the gyromagnetic ratio of charged particles, which can be measured with high accuracy.

In summary, a plethora of relatively low-energy measurements can be made, some of which are already on the verge of excluding Planckian physics models. Any future experimental attempt to bound models of quantum gravity should therefore be encouraged, since such measurements offer the only way to arrive at a true theory of quantum gravity, which will probably lead to a better understanding of the structure of space-time itself. The concept of a phenomenology of quantum gravity may soon no longer be considered oxymoronic.

Cosmic-ray apparatus images Moon’s shadow

cernnews2_7-02

A dedicated cosmic-ray experiment making use of the muon identification system of CERN’s L3 experiment collected around 12 × 10⁹ cosmic-ray muons from April 1999 until L3 finished taking data in 2000. The goal of the so-called L3 plus cosmics (L3+C) experiment is to measure the cosmic muon spectrum precisely over the range 20-2000 GeV/c, allowing for normalization of the calculated muon-neutrino spectrum. Other topics under analysis by the experiment include the search for point-source burst signals, exotic events and studies of the composition of very-high-energy primary cosmic rays around 10¹⁵ eV, made possible by coupling the L3+C apparatus to a surface scintillator array.

Also interesting is the ability of the apparatus to image the shadow of the Moon. High-energy muons indicate the direction of the primary cosmic-ray particle. Since the Moon absorbs primaries – usually protons – on their way to Earth, the L3+C apparatus saw fewer muons originating from that direction and so observed a shadow. Moreover, since the Earth’s magnetic field deflects charged primaries, positive particles are shifted eastwards and the corresponding shadow is shifted to the west. The extent of the shift and the shape of the shadow depend on the momentum of the primary and on the size and direction of the field along its path.
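The size of the westward shift can be estimated from the magnetic rigidity of the primary. The sketch below is a rough order-of-magnitude calculation: the path-integrated transverse field of about 100 T·m is an assumed figure, chosen to reproduce the commonly quoted rule of thumb of roughly 1.6° per (E / 1 TeV) for protons, not a measured L3+C value.

```python
import math

# Rough geomagnetic deflection of a charged primary, which shifts the
# Moon's cosmic-ray shadow westwards for positive particles.
B_INT = 100.0  # T*m, ASSUMED integral of the transverse field along the path

def deflection_deg(momentum_gev, charge=1):
    """Bending angle in degrees: theta [rad] = 0.3 * Z * (B dl) / p[GeV]."""
    return math.degrees(0.3 * charge * B_INT / momentum_gev)

# A 1 TeV proton is deflected by a degree or two; at 10 TeV the shadow
# shift shrinks to a few tenths of a degree.
print(f"{deflection_deg(1000):.1f} deg")
```

The inverse dependence on momentum is what lets the shadow displacement act as a crude momentum spectrometer for the primaries, and why an antiproton component would show up as a mirror-image “anti-shadow” on the opposite side.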

Antiprotons, if any, would be deviated in the opposite direction, giving rise to an “anti-shadow”. No such feature is seen. Work is continuing to set a limit to the antiproton to proton ratio in an energy range around 1 TeV, well above the present range of direct measurements, which extend up to 50 GeV.

Fermilab experiment hints at double charm

cernnews4_7-02

The SELEX experiment at Fermilab has announced three candidates for doubly charmed baryons. In a presentation at the US laboratory at the end of May, collaboration co-spokesman Jim Russ of Carnegie Mellon University described the painstaking analysis of data recorded in 600 GeV proton, pion and sigma hyperon collisions with a diamond target in 1996. The observation comes as a surprise to the collaboration, which did not expect to see doubly charmed baryons at all, and there remain uncertainties over the conclusion to be drawn. The SELEX candidates have the right characteristics to be up-charm-charm and down-charm-charm combinations, but the mass difference between the two states is larger than expected. Like the proton and neutron, the candidate particle pair is related by the replacement of an up by a down quark, so the mass difference was naively expected to be similar. SELEX, however, sees a mass difference 60 times larger.

The collaboration chose to make its announcement in the hope that doing so would spur the broader community to take up the search. In particular, data from the BaBar and Belle collaborations at the SLAC and KEK B-factories in the US and Japan would be good places to look for doubly charmed baryons.

Multiply charmed baryons are a natural consequence of the quark model of hadrons, and it would be surprising if they did not exist. Whether or not the high-mass states that SELEX reports turn out to be the first observation of doubly charmed baryons, studying their properties is important for a full understanding of the strong interaction between quarks.
