Germany seeks to fulfil astroparticle aspirations

The fourth biennial workshop on astroparticle physics in Germany took place at DESY, Zeuthen, in October 2005. It provided scientists in all branches of astroparticle physics, high-energy physics and astronomy with the opportunity to meet representatives of the German Ministry of Education and Research (BMBF) and discuss the status and future strategies of astroparticle physics.

Astroparticle physics in Germany is mainly pursued by the Helmholtz Association – at the Forschungszentrum Karlsruhe (FZK) and DESY – the Max Planck Society and several universities. It has rapidly achieved many successes in various topics, mainly through international collaborations. However, the next generation of experiments will surpass the funding available in Germany and it may be necessary to set priorities even though the advances in many areas are promising. As Johannes Blümer, chair of the Committee for Astroparticle Physics in Germany (KAT) – which is elected by the German astroparticle physics community – noted at the workshop: “Everything is possible, but not all at the same time”.

Cosmic ray success stories


The main success story in astroparticle physics concerns the observations of the highest energy photons by imaging air Cherenkov telescopes (IACTs). These telescopes register the Cherenkov light emitted by extensive air showers that are initiated by photons in the atmosphere. After a long and sometimes painful period of poking around in the dark, in 1989 the Crab nebula was discovered to be a source of photons of multi-tera-electron-volt energies – the first such astrophysical source. With the latest generation of IACTs – the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope on the island of La Palma and most notably the High Energy Stereoscopic System (HESS) telescope array in Namibia – a wealth of galactic and extragalactic sources has since shown up (figure 1). The German community has a major involvement in both HESS and MAGIC.

This research has opened up a new branch of observational astrophysics, as IACTs have detected mysterious galactic tera-electron-volt photon sources that have no counterpart at any other wavelength. Surely more surprises are to be expected.

In charged cosmic rays, the energy spectrum exhibits what is called a “knee” around 1-10 PeV, and for years an understanding of the origin of cosmic rays in this energy region seemed to be just out of reach. A detailed multi-parameter analysis of the energy spectrum of groups of chemical elements by the Karlsruhe Shower Core and Array Detector (KASCADE) collaboration at FZK has resolved the knee into successively heavier elements. However, it has hit a wall owing to the limited theoretical understanding of high-energy interactions in the atmosphere. Data from the Large Hadron Collider (LHC) at CERN may help to improve simulations of these interactions.


Registering cosmic rays at the highest energies, around 10²⁰ eV, the Pierre Auger Observatory (in which German universities and the FZK are major participants) is now moving quickly forward. “Hybrid” events, which were presented at the Zeuthen workshop (figure 2), made a strong impression. Although Auger is not yet fully operational, timescales are such that next-generation experiments need to be discussed now. These include, for example, the Extreme Universe Space Observatory, a space-borne experiment to observe fluorescent and Cherenkov light from huge air showers. It would provide a sensitive area about one order of magnitude larger than Auger and would be perfectly suited to events above the famous Greisen-Zatsepin-Kuzmin cut-off (if indeed there are any), as well as to neutrino astronomy beyond 5 × 10¹⁹ eV.

The measurement of the radio emission from air showers has witnessed its own breakthrough. At the KASCADE ground array, the LOPES collaboration records geosynchrotron radio emission from air showers (figure 3). A realistic modelling of the radio emission has been achieved, which in turn enables the derivation of air-shower parameters from the radio data. The radio technique will allow for potentially very large and cost-effective installations. In the near future, more details will be investigated in conjunction with the Pierre Auger Observatory. This workshop series played its part in this success: the possibility of new radio measurements was presented at the first meeting in 1999, financial support was quickly provided by the BMBF, and LOPES came into being.



Following the proof of principle of cosmic-neutrino detection by the Antarctic Muon and Neutrino Detector Array at the South Pole and the array in Lake Baikal, the installation of the 1 km³ IceCube neutrino telescope is now under way at the South Pole, with major participation from DESY and several German universities. IceCube should reach a sensitivity sufficient to identify sources of cosmic neutrinos. Technology tests by the ANTARES collaboration for a similar neutrino telescope in the Mediterranean have been concluded successfully.

A direct measurement of neutrino mass is badly needed to set the absolute scale for the mass differences derived from neutrino flavour oscillations. The Karlsruhe Tritium Neutrino Experiment at FZK is likely to be the ultimate detector for a direct measurement of the neutrino mass from tritium decays. It aims for a mass sensitivity of 0.2 eV. The Germanium Detector Array, which is being pushed by German astroparticle physicists, is used to search for neutrinoless double beta decays and could reach a sensitivity of 0.1 eV for Majorana neutrinos.

Low-energy solar-neutrino spectroscopy is still required for a detailed understanding of the Sun and may prove the existence of matter effects in neutrino oscillations. The BOREXINO experiment, which has German participation, should start taking data in summer 2006. However, technology now allows us to aim for much larger future installations, such as the Low Energy Neutrino Astronomy project (with 50 kt of scintillator), up to 1 Mt water Cherenkov detectors or 100 kt liquid argon “bubble chambers”, as are being developed by the ICARUS collaboration. These new experiments would enable time-resolved neutrino spectroscopy in correlation with helioseismology. In addition, they could give access to relic and galactic supernova neutrinos and geoneutrinos, and could provide sensitive results on proton decay.

There are many convincing indirect arguments for the existence of dark matter in the universe. Theories predict weakly interacting massive particles (WIMPs) with masses above a few tens of giga-electron-volts as constituents of dark matter. Workshop participants heard of interesting evidence for galactic dark matter from archival data of the EGRET satellite. Meanwhile, three strategies are being followed in the hunt for WIMPs: they could be created at particle colliders (one of the prime targets for the LHC), while “natural” WIMPs are being searched for through their annihilation products, by satellites, IACTs and neutrino telescopes, or by elastic scattering processes in specialized detectors. A fourth line of attack is to try to detect axions, which may be created in non-thermal processes in the universe and hence provide cold dark-matter particles with masses in the region of 100 μeV.

There is already a good deal of indirect evidence for the existence of gravitational waves through the observation of energy losses in binary pulsar systems. However, the experimental systems of the Laser Interferometer Gravitational Wave Observatory (LIGO) in the US, the Virgo detector at Pisa in Italy and GEO600 near Hannover in Germany could directly detect gravitational waves, for example from the collision of two neutron stars. Unfortunately, the predicted event rates vary between one in a few years to one in a few thousand years. The advanced LIGO instrument, to which the German Max Planck Society will contribute, will achieve a much higher sensitivity. It is scheduled to start data taking in 2013. The space mission LISA, a joint effort between ESA and NASA, will extend the observational window to much lower frequencies and may even make primordial gravitational waves accessible. The decisions in Europe and the US on LIGO and LISA show the high priority that is being given to this research field, although in Germany only the Max Planck Society is significantly involved so far.

A stimulating one-day multimessenger workshop after the meeting discussed methods to combine observations of photons from radio to tera-electron-volt energies with those of neutrinos, gravitational waves and charged cosmic rays. The aim would be to extend the classical multiwavelength approach of astronomy towards a true multimessenger strategy.

Organizational issues

The total funding of astroparticle physics in Germany for 2005 amounted to approximately €35 million, while the corresponding amount for Europe is about €130 million a year. This may be compared with the investments of roughly €30 million that will be necessary for a next-generation experiment for the direct identification of dark-matter constituents, of about €8 million for each telescope in a next-generation IACT system, or of €500 million for a 1 Mt water Cherenkov detector. It is evident that such installations will be realized only within international collaborations.

An even greater challenge is to create the roadmaps necessary to guide the way into the future of astroparticle physics. Consequently, the scientific community is being asked to produce roadmaps on national, European and international levels. The most important European organizations in this respect are the Astroparticle Physics European Coordination (APPEC) in which funding agencies of 10 European countries are members at present, and the European Strategy Forum on Research Infrastructures (ESFRI). The German KAT takes a central role in linking the roadmaps for the future of astroparticle physics in Europe and Germany.

Astroparticle physics has applied for EU support (€28 million in total) for four projects with important German participation:

  • the Integrated Large Infrastructures for Astroparticle Science to concentrate on dark-matter searches, double beta decay and gravitational waves (granted);
  • a High Energy Astroparticle Physics Network (not granted);
  • KM3NeT to develop a deep-sea facility in the Mediterranean for neutrino astronomy (granted);
  • the Astroparticle ERAnet to improve further the coordination within Europe giving a solid operational basis to APPEC (granted).

In Germany the Helmholtz Association has taken the initiative to improve networking between universities and research institutes in the country by funding three “virtual institutes”: VIKHOS (high-energy radiation from the cosmos), VIDMAN (dark matter and neutrino physics) and VIPAC (particle cosmology).

In conclusion, astroparticle physics is making good progress in Germany and elsewhere. While tera-electron-volt astronomy is firmly established, several other fields are now just reaching the sensitivities for making breakthroughs. Further progress is imaginable only in the context of international collaborations following well-accepted roadmaps. The required organizational structures are being put into place and the next decade will decide on the future of many areas of astroparticle physics.

Luckily, imagination, fascination and fantasy remain unbroken. Astroparticle physicists continue to dream of utilizing the Moon as an ultimate target for observing the interactions of the highest energy cosmic rays or for listening to the feeble sound emitted in neutrino interactions in water and ice.

INTEGRAL reveals Milky Way's supernova rate

One supernova explosion every 50 years in our galaxy: that is the rate that a European team has determined from the observations of ESA’s INTEGRAL gamma-ray satellite. This figure is based on the amount of gamma-ray radiation emitted by radioactive aluminium produced in core-collapse supernovae.


With a lifetime of about a million years, radioactive aluminium (26Al) is an ideal tracer of ongoing nucleosynthesis in the galaxy. The decay of 26Al emits a gamma-ray line at an energy of 1.809 MeV. NASA’s Compton Gamma-Ray Observatory found in the 1990s that this characteristic emission is distributed along the plane of the Milky Way, as expected if 26Al is mainly produced by massive stars throughout the galaxy. It remained unclear, however, whether the dominant emission towards the centre of the galaxy was due to relatively nearby star-forming regions on the line-of-sight or to the central region itself.

It is this uncertainty that has now been lifted thanks to INTEGRAL’s very high spectral resolution. The peak of the emission from 26Al was found to be shifted towards higher energies east of the galactic centre and towards lower energies on the west side. These observations are consistent with the expected Doppler shift due to the rotation of the galaxy. They show that the 26Al emission does follow the global galactic rotation and hence comes from the inner part of the galaxy rather than from foreground regions.
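The size of the effect follows from the first-order Doppler relation ΔE/E₀ = v/c. A minimal sketch (the ~200 km/s line-of-sight velocity is an illustrative round number for galactic rotation, not a figure from the measurement):

```python
# Back-of-envelope Doppler shift of the 1.809 MeV 26Al line
# for material co-rotating with the inner galaxy.
C = 299_792_458.0   # speed of light, m/s
E0_EV = 1.809e6     # rest energy of the 26Al gamma-ray line, eV

def doppler_shift_ev(v_los_m_per_s: float) -> float:
    """First-order Doppler shift dE = E0 * v/c (positive = blueshift)."""
    return E0_EV * v_los_m_per_s / C

# Illustrative line-of-sight velocity of ~200 km/s from galactic rotation:
shift = doppler_shift_ev(200e3)
print(f"expected line shift: ~{shift / 1e3:.1f} keV")
```

A shift of only about a keV on a 1.8 MeV line is why INTEGRAL's very high spectral resolution was decisive here.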

The team, led by Roland Diehl of the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, used these line-shift measurements to constrain the best model for the spatial distribution of 26Al in the galaxy. This distribution was then used to convert the observed gamma-ray flux into the total mass of 26Al in the Milky Way, which was found to be about three times the mass of the sun. Finally, using results from theoretical nucleosynthesis models for a typical stellar population, the team could estimate the current star-formation rate in the galaxy to be about 7.5 stars a year, corresponding to a rate of core-collapse supernovae of 1.9 (±1.1) events a century.
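The chain from observed flux to supernova rate rests on steady-state bookkeeping: the flux fixes the total mass of 26Al, and because the galaxy's 26Al content is in equilibrium, the decay rate equals the production rate, which nucleosynthesis models then tie to the star-formation and supernova rates. A hedged sketch of the middle step (the ~3 solar masses and ~1 Myr lifetime are the round numbers quoted above; the final conversion to a supernova rate requires the full models and is not attempted here):

```python
# Steady-state bookkeeping: total galactic 26Al mass -> decay rate.
M_SUN_KG = 1.989e30   # solar mass, kg
M_U_KG = 1.6605e-27   # atomic mass unit, kg
YEAR_S = 3.156e7      # seconds per year

def al26_decay_rate(mass_solar: float, lifetime_yr: float) -> float:
    """Decays per second from a given mass of 26Al with mean lifetime tau."""
    n_nuclei = mass_solar * M_SUN_KG / (26.0 * M_U_KG)
    return n_nuclei / (lifetime_yr * YEAR_S)

# ~3 solar masses of 26Al with a ~1 Myr mean lifetime:
rate = al26_decay_rate(3.0, 1.0e6)
# In steady state this decay rate equals the production rate, which
# nucleosynthesis models convert into star-formation and supernova rates.
```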

The rate of two supernovae a century in the Milky Way is consistent with the rate derived from supernovae detected in other galaxies, but exceeds several times the rate deduced from historic observations of supernova explosions during the past 2000 years. Only eight such events have been recorded – in 185, 386, 393, 1006, 1054, 1181, 1572 and 1604 – mainly by Chinese astronomers. The last two were also observed by the famous European astronomers Tycho Brahe (1546-1601) and Johannes Kepler (1571-1630). Excluding the 1987 supernova in the Large Magellanic Cloud, no supernova has been seen in the galaxy during the past 400 years. Some events may have gone unnoticed because they were distant and heavily obscured by interstellar dust, but the next supernova in the galaxy will certainly not remain hidden now that we can observe the sky as never before throughout the whole electromagnetic spectrum, and even with neutrinos.

Further reading

R Diehl et al. 2006 Nature 439 45.

Shifts in the gamma-ray line from 26Al caused by the Doppler effect along the plane of the galaxy, owing to galactic rotation. The broad distribution is from a three-dimensional model of the spatial distribution of 26Al – based on free electrons in the interstellar medium – that matches the line shifts measured by INTEGRAL (error bars). (Courtesy MPE.)

Quarks matter in Budapest

The Quark Matter conferences have historically been the most important venues for showing new results in high-energy heavy-ion collisions. The 18th in the series, Quark Matter 2005, held in Budapest in August 2005, attracted more than 600 participants from 31 countries on five continents; more than a third were junior participants, reflecting the momentum of the field. The major focus of the conference was the presentation of the new data from the Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) together with the synthesis of an understanding of heavy-ion data from experiments at CERN’s Super Proton Synchrotron (SPS), including new data from the NA60 experiment. The meeting also covered a broad range of theoretical highlights in heavy-ion phenomenology, field theory at finite temperature and/or density, and related areas of astrophysics and plasma physics.

After an opening talk by Norbert Kroó, vice-president of the Hungarian Academy of Science, the scientific programme of the conference began with a talk by Roy Glauber, who was soon to share the 2005 Nobel prize in physics. Glauber’s calculations in the 1960s laid the foundation for the determination of centrality in high-energy heavy-ion collisions – a measure of how close to head-on they are – which is now one of the most elementary and widely used tools of heavy-ion physics. In his talk “Diffraction theory, quantum optics and heavy ions”, he discussed the concept of coherence in quantum optics and heavy-ion collisions and presented a new generalization of the Glauber-Gribov model. Further talks in the introductory session were given by Luciano Maiani, former director-general of CERN, who reassessed the main conclusions of the SPS fixed-target programme, and by József Zimányi, of the KFKI Research Institute for Particle and Nuclear Physics in Budapest, who gave an account of the evolution of the concept of quark matter.

It has become a tradition of the Quark Matter conferences to follow the introductory session with summary talks of all experiments. Thus, the first day sets the scene for the discussions of the rest of the week. This short report cannot summarize all the interesting novel experimental and theoretical developments, but it aims at illustrating the richness of these discussions with a few of the many highlights.

One of the main discoveries of the fixed-target heavy-ion programme at the SPS five years ago was the strong suppression of the J/ψ yield with increasing centrality of the collision, which probed the deconfinement phase transition. Another discovery concerned the significant enhancement of low-mass dileptons, which indicates modification in the medium of vector mesons and possibly provides information about the restoration of chiral symmetry. These major discoveries by the NA50 and CERES experiments at the SPS also raised a significant set of more detailed questions, which were recognized as central to understanding the dynamical origins of the observed effects.


In particular, the dimuon invariant-mass spectrum of NA50 showed an enhancement below the J/ψ peak, which different theoretical groups ascribed either to a dramatic enhancement of the charm cross-section in the medium, or to significant thermal radiation. Having implemented a telescope of silicon pixel detectors with improved pointing resolution, NA60 was able to report in Budapest that data taken in the 2003 indium-indium run conclusively rule out an increased charm cross-section as the source of the dimuon excess. The data are, however, consistent with the exciting possibility of a significant thermal contribution. In addition, for more than a decade, there has been a theoretical debate on whether the embedding of ρ mesons in dense quantum chromodynamic (QCD) matter leads to a shift in the ρ mass, or to a density-dependent broadening, both scenarios being consistent with the original CERES dielectron data. NA60 now concludes, from data taken in the indium-indium run, that the shifting-mass scenario is not consistent with their data, which instead support a broadening induced in the medium (see figure 1). NA60 also presented their first indium-indium measurements of J/ψ suppression as a function of centrality. These confirm the strong anomalous suppression seen by NA50 in central lead-lead collisions at the SPS.

The SPS experiments NA49, CERES, NA50 and NA57 also showed new results from their continuing data analysis. In addition to earlier high transverse-momentum (pT) measurements from CERES and WA98, this year NA49 and NA57 showed new results that were extensively compared with the results of the experiments at RHIC.

The central topic of this Quark Matter conference was without doubt the full harvest of the high-luminosity gold-gold run at RHIC in 2004, from which data analyses were shown for the first time. Equally important were results from the successful copper-copper run in the first half of 2005, which had been analysed in time for the conference in a global effort by the participating institutions of the four RHIC experiments. With an integrated luminosity for 200 GeV gold-gold collisions of almost 4 nb⁻¹, this run increased statistics by more than a factor of 10, and made much-wanted information accessible for the first time. One of the most important early discoveries of the heavy-ion experiments at RHIC was the strong suppression of hadronic spectra by up to a factor of five in the most central collisions. This so-called “jet-quenching effect” supports the picture that the matter created in heavy-ion collisions is of extreme density and thus very opaque to hard partons.
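The "factor of five" suppression is conventionally quantified by the nuclear modification factor R_AA: the yield in nucleus-nucleus collisions divided by the proton-proton yield scaled by the number of binary nucleon-nucleon collisions, so that R_AA = 1 means no medium effect. A minimal sketch (the numbers are illustrative, not RHIC data):

```python
def r_aa(yield_aa: float, n_coll: float, yield_pp: float) -> float:
    """Nuclear modification factor: AA yield over the Ncoll-scaled pp yield.

    R_AA = 1 means the nucleus-nucleus collision behaves like an
    incoherent superposition of nucleon-nucleon collisions; R_AA < 1
    signals medium-induced suppression (jet quenching).
    """
    return yield_aa / (n_coll * yield_pp)

# Illustrative central gold-gold values: if the measured yield is a fifth
# of the binary-scaled pp expectation, R_AA = 0.2 (factor-five quenching).
suppression = r_aa(yield_aa=2.0e-7, n_coll=1000.0, yield_pp=1.0e-9)
```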


Results from the PHENIX experiment at RHIC now indicate that even neutral pions of pT = 20 GeV show this dramatic energy degradation (figure 2). Moreover, the increased luminosity allowed the STAR experiment to study the recoil of hadron trigger particles up to 15 GeV, and for sufficiently high transverse momenta, this recoil is for the first time observed to punch through the soft background. However, compared with reference data from proton-proton collisions, the particle yield of the recoil is strongly reduced, consistent again with the picture of a medium that is dense and very opaque to partonic projectiles. In further support, PHENIX also reported that high-pT photons are not suppressed (figure 2), and that photons at intermediate transverse momenta show an excess, which may be attributed to thermal radiation from the hot and dense matter.

Another important piece in the puzzle of reconstructing the properties of the produced matter came from the first measurements of high-pT single-electron spectra. These spectra are thought to be dominated by the semi-leptonic decays of D- and B-mesons, thus giving for the first time experimental access to the propagation of heavy quarks in dense QCD matter. Data from STAR and PHENIX reveal a medium-induced suppression of electrons, which is of similar size to that of light-flavoured hadrons. There were many parallel talks, by both experimentalists and theorists, which contrasted these data with the theoretical expectation that massive quarks should lose less energy in the medium than massless quarks or gluons due to the so-called “dead-cone effect” in QCD. While a final assessment is still awaited, there was widespread agreement that these data will help significantly in refining our understanding of the interaction between hard probes and the medium, which is much needed for a better characterization of the dense QCD matter produced in nucleus-nucleus collisions.

Another much awaited result that gave rise to a great deal of discussion was the first statistically significant J/ψ measurement at RHIC. This was presented by the PHENIX collaboration and showed a similar pattern and strength to that observed in lead-lead and indium-indium collisions at the SPS. This result was of particular interest also to lattice QCD theorists, who now find that the dissociation of the directly produced J/ψ in a deconfined medium sets in at much higher energy densities than previously expected.


The bulk properties of dense QCD matter reveal themselves not only in the modification of hard processes by the medium, but also in the collective motion of soft particle production and its hadro-chemical composition. One of the main discoveries of the first years of running RHIC was the unprecedented large size of the collective flow signals, measured in the asymmetries of particle production with respect to the reaction plane. Remarkably, the measured mass-dependence of the transverse radial and elliptic flow supports the assumption that different particle species emerge from a common flow field. Flow measurements at intermediate transverse momenta follow constituent-quark counting rules and are consistent with quark coalescence as a medium-dependent hadronization scenario (figure 3). Moreover, to the surprise of many, the hydrodynamic description of the collision in terms of an adiabatically expanding, perfect fluid of vanishing viscosity and heat conductivity appears, at RHIC energies, to be satisfactory for the first time.

Much of the discussion at QM ’05 focused on the emerging picture of the matter produced in heavy-ion collisions at RHIC, which, far from being a weakly interacting gas of quarks and gluons, shows features of a strongly coupled partonic system indicative of a perfect liquid. This liquid includes not only the light and strange quarks; the first preliminary data on the elliptic flow of charmed hadrons from the PHENIX collaboration indicates that even charmed quarks participate in the collective expansion of this new form of matter.

The conference saw a lively theoretical discussion about the dynamic mechanisms underlying a possible rapid thermalization. Emphasis was given in particular to the relationship to thermalization processes in Abelian plasmas, to formal analogies with the thermal properties of black holes, and to the possibility that plasma instabilities accelerate equilibration. The intellectual richness of the field was further illustrated by exciting reports from string theory, where theorists have succeeded for the first time in calculating the viscosity to entropy density ratio in the physically relevant, strong-coupling limit of a certain class of thermal non-Abelian gauge theories. The fact that this ratio is found to be very small indicates a non-dissipative behaviour. It raises the exciting possibility that the non-dissipative character of an almost perfect liquid, which may be created in gold-gold collisions at RHIC, could be understood from first-principles calculations in QCD.

From the point of view of heavy-ion phenomenology, the central question of whether more direct signals of negligible viscosity can be established led to another highlight of the conference. The widely discussed idea was that if dissipation is negligible, then energy, deposited by a jet in dense QCD matter, must propagate in a characteristic Mach cone, determined by the velocity of sound in the quark-gluon plasma. Reports about back-to-back particle correlations from PHENIX, which may show such a Mach-cone-like structure, were hotly debated amongst theorists and experimentalists alike (see figure 4). Most importantly, these discussions showed that heavy-ion physics at collider energies has a large set of novel tools available for the controlled experimentation with hot and dense QCD matter, and that the field is moving towards characterizing specific properties of this matter, including its speed of sound, equation of state, and its transport coefficients such as heat conductivity and viscosity.

Past, present and future


The Quark Matter conferences not only highlight the experimental harvest of the recent past and the latest news from theory, they are also the arena for assessing perspectives for the future. The first heavy-ion beam at the Large Hadron Collider (LHC) at CERN is expected in 2008, and heavy-ion researchers are now well prepared for the jump in centre-of-mass energy by a factor of 30 above RHIC. Most importantly, the fact that dramatic medium-sensitive effects persist undiminished at RHIC up to the highest measured transverse momentum strongly supports the expectation that the new kinematic regime accessible at the LHC will provide many qualitatively novel tools for the study of ultra-dense QCD matter.

The LHC will not be the only big player in the field of heavy-ion physics in the next decade. At Brookhaven, the STAR and PHENIX collaborations are lining up for several important detector upgrades, which will significantly enhance their abilities to characterize specific properties of the matter created in heavy-ion collisions. Moreover, Brookhaven envisages a luminosity upgrade of RHIC, which will open yet another class of novel opportunities. Finally, the newly approved Facility for Antiproton and Ion Research at the GSI Darmstadt is preparing for the start of a versatile heavy-ion programme in the next decade. Plenary talks provided overviews of the status and possibilities of these three programmes. The field is now eagerly awaiting its future, the next slice of which will be served at the 19th Quark Matter conference in Shanghai in November 2006.

Particles in Portugal: new high-energy physics results

The 2005 European Physical Society (EPS) Conference on High Energy Physics (HEP) took place in Lisbon on 21-27 July at the Cultural Centre of Belém, beautifully situated on the right bank of the Tagus river, 10 km west of downtown Lisbon. Held in alternate years, the EPS HEP conference starts with three days of parallel talks, followed by a day off, and then three days of plenary sessions. The format thus differs from that of the Lepton-Photon conferences, which are organized in the same year, and allows the participation of more “grass-roots” and young speakers.


In 2005 a total of 17 sessions yielded a wealth of detailed results from both experiment and theory, including new results from astroparticle physics. One of the highlights was provided by Barry Barish, newly appointed director of the Global Design Effort for the International Linear Collider (ILC). The EPS and the European Committee for Future Accelerators organized a particularly popular “Lab directors’ session”, which presented status and future plans.


The opening ceremony was honoured by the presence of Mariano Gago, Portuguese Minister for Science, Technology and Universities, who as an experimental high-energy physicist, was also a member of the local organizing committee. As usual, the plenary sessions started with the prize awards. The EPS 2005 High Energy Particle Physics Prize was presented jointly to Heinrich Wahl of CERN and to the NA31 collaboration, with other prizes awarded to Mathieu de Naurois, Matias Zaldarriaga, Dave Barney and Peter Kalmus (CERN Courier, September 2005, p43). The next highlight was the invited talk by David Gross of Santa Barbara/KITP, Nobel Laureate in 2004 and EPS Prize winner in 2003. He checked off the list of predictions he had made in the summary talk of the 1993 Cornell Lepton-Photon conference, the majority of which had been confirmed.


Sijbrand de Jong of Nijmegen/NIKHEF and Tim Greenshaw of Liverpool started the main business of the plenary session with talks on tests of the electroweak and quantum chromodynamic sectors of the Standard Model, respectively. The new (lower) mass for the top quark from Fermilab, of 172.7±2.9 GeV, as presented by Koji Sata of Tsukuba in the parallel sessions, gives an upper limit on the Higgs mass of 219 GeV at 95% confidence level. Greenshaw discussed how HERA continues to play a major role in precision studies in quantum chromodynamics (QCD) of the proton, now mapped down to 10⁻¹⁸ m, or a thousandth of its radius. Such results will be very valuable for the analysis of data from the Large Hadron Collider (LHC). New results on the spin structure of the proton were also reported.

Riccardo Rattazzi of CERN and Pisa then talked on physics beyond the Standard Model and was followed by Fermilab’s Chris Quigg, who reviewed hadronic physics and exotics. Rattazzi presented an interesting “LEP paradox”: the hierarchy problem, with a presumed light Higgs particle, requires new physics at a low scale, whereas there are no signs of it in the data from CERN’s Large Electron-Positron collider. He also reviewed the anthropic approach to the hierarchy problem: we inhabit one of very many possible universes. This many-vacua hypothesis is also referred to as “the landscape”, and might have implications for supersymmetry. Quigg reviewed several new states discovered by the CLEO collaboration at Cornell and at the B-factories, and reminded us that the pentaquark states are still controversial.

Near- and more-distant-future possibilities were reviewed by Günther Dissertori of ETH Zurich in his talk on “LHC Expectations (Machine, Detectors and Physics)” and by Klaus Desch of Freiburg in “Physics and Experiments – Linear Collider”. Dissertori gave an overview of all the complex instrumentation in the process of being completed for both the LHC and its four major detectors. The first beams are planned for the summer of 2007, with a pilot proton run scheduled for November 2007. All detectors are expected to be ready to exploit LHC collisions starting on “Day 1”. Desch presented the ILC project and highlights of the precision measurements it will provide in electroweak physics, in particular, in the Higgs sector.

More theoretical considerations were offered by CERN’s Gabriele Veneziano and Yaron Oz of Tel Aviv, who spoke on cosmology (including neutrino mass limits) and string theory, respectively. Veneziano reviewed current understanding, according to which the total energy content of the universe is split into 5% baryons, 25% dark matter and 70% dark energy. The question of what dark energy is was compared with the problem that faced Max Planck when he realized that the total power emitted by a classical black body is infinite. Interesting speculations on alternative interpretations of cosmic acceleration were also discussed. Precision measurements in cosmology have an impact on high-energy physics: they provide an upper bound on neutrino masses, indicate preferred regions in the parameter space of minimal supergravity grand unification, and suggest self-interacting dark matter. Oz reviewed the beauties of strings and their two major challenges: to explain the big bang singularity, and the structure and parameters of the Standard Model. So far, neither is explained, but the consistencies are impressive.

The recently discovered connection between string theory and QCD was described by SLAC’s Lance Dixon. An important problem being solved is how to optimize the calculation of multiparticle processes (which might be backgrounds to new physics processes). By ingeniously exploiting the symmetries of the theory, one is able to go beyond the method of Feynman diagrams in terms of efficiency. Roughly speaking, this amounts to first representing four-vectors by spinors, and then Fourier-transforming the left-handed but not the right-handed spinors.

Getting results

Christine Davies of Glasgow presented new results on non-perturbative field theory, in particular in lattice QCD (LQCD). She reported on the very impressive recent advances in LQCD, where high-precision unquenched results are now available to confront the physics of the Cabibbo-Kobayashi-Maskawa (CKM) matrix with only 10% errors on the decay matrix elements. This has been made possible by breakthroughs in the theoretical understanding of the approximations, together with faster computers.

Josh Klein of Austin and Federico Sanchez of Barcelona reviewed neutrino physics results and prospects, respectively. Neutrino physics has become precision physics, and now oscillations, rather than just flux reductions, are beginning to emerge in data from the KamLAND and Super-Kamiokande II experiments in Japan. Sanchez discussed rich plans for the future, with two main questions to tackle. Is the neutrino mass of Majorana or Dirac origin? How can the small angle θ13 and the CP-violating phase δ be constrained, or preferably, measured? The plans include the Karlsruhe Tritium Neutrino experiment to study tritium decay, and the GERDA experiment in the Gran Sasso National Laboratory (LNGS), the Neutrino Mediterranean Observatory and the Enriched Xenon Observatory, all of which will look for neutrinoless double beta decay. The Main Injector Neutrino Oscillation Search, the Oscillation Project with Emulsion Tracking Apparatus (OPERA) in the LNGS, and the Tokai to Kamioka (T2K) long-baseline neutrino experiments will all study the phenomena of “atmospheric” neutrino oscillations under controlled conditions, and the Double CHOOZ experiment will further bound the small values of θ13. A new idea is to exploit beams of unstable nuclei, which would provide monochromatic neutrinos. Meanwhile, the CERN Neutrinos to Gran Sasso project will start taking data in 2006, with a neutrino beam from CERN to the OPERA detector.

Flavour physics was the topic for both Gustavo Branco of Centro de Física Teórica das Partículas in Lisbon, in “Flavour Physics – Theory (Leptons and Quarks)”, and Marie-Hélène Schune of LAL/Orsay, who talked about CP violation and heavy flavours. At the B-factories, the Belle detector is accumulating luminosity at a high rate, and after a long shutdown, BaBar is back in operation. Many detailed results on CP violation in B-decays were presented at the meeting. The BaBar and Belle results on β or φ1 are now in agreement, and the CKM mechanism works very well, leaving little room for new physics, even as the precision steadily improves.

Looking to the skies

Astrophysics was covered by three speakers, with Thomas Lohse of Berlin talking about cosmic rays (gammas, hadrons, neutrinos), Alessandro Bettini of Padova presenting dark matter searches, and Yanbei Chen from the Max-Planck Institute for Gravitational Physics reviewing work on gravitational waves. What and where are the sources of high-energy cosmic rays? How do they work? Are the particles accelerated astrophysically, or are they the decay products of new physics at large mass scales? The Pierre Auger Observatory is beginning to collect data in the region of the Greisen-Zatsepin-Kuzmin cut-off, while neutrino detectors search for “coincidences” (repeated events from the same direction).

The HESS telescopes and other detectors have discovered tera-electron-volt gamma rays from the sky! The origin is unknown, but they are correlated with X-ray intensities. The galactic centre is one such tera-electron-volt gamma-ray point source. It has also been discovered that supernova shells accelerate particles (electrons or hadrons?) up to at least 100 TeV. The searches for weakly interacting massive particles, on the other hand, remain inconclusive. Other experiments are still unable to confirm or refute the observation of an annual modulation seen by the DAMA project at the LNGS.

A major instrument in the search for gravitational waves is the Laser Interferometer Gravitational-Wave Observatory, a ground-based laser interferometer that is sensitive in the region from 10 Hz to 10 kHz. The sources include pulsars, and one hopes to detect a signal after the planned upgrade. The Laser Interferometer Space Antenna will be launched in 2015, and will be sensitive to lower frequencies, in the range 0.01 mHz to 0.1 Hz, as might come from super-massive black-hole binaries.

Paula Bordalo of the Laboratório de Instrumentação e Física Experimental de Partículas in Lisbon presented an experimental overview of ultra-relativistic heavy-ion physics. Photon probes are important for the study of the new state of matter observed, as they do not interact strongly and carry information about the early stage of the collision. There is also a related virtual photon or dilepton signal that shows some interesting features. The new state being explored is possibly a colour glass condensate, which behaves more like a low-viscosity liquid than a gas.

Alexander Skrinsky of the Budker Institute of Nuclear Physics reviewed the status and prospects of accelerators for high-energy physics, covering machines in operation as well as new facilities under construction or planned. Superconductivity is widely used and is being further developed for accelerating structures and for magnets. One important line of development is oriented towards higher luminosity and higher quality beams, including longitudinal polarization and monochromization techniques. There are studies aiming at shorter and more intense bunches, suppression of instabilities involving fast digital bunch-to-bunch feedbacks and minimization of electron-cloud effects. Rapid progress is being made on energy-recovery linacs, recyclers and free-electron lasers, which are being studied for future synchrotron light sources. Higher power proton beams and megawatt targets are being developed and several promising options for neutrino factories are under study. Plasma wake-field acceleration appears to be still in an early stage of development, although it has the potential to achieve very high acceleration gradients.

Grid references

Turning to computing, DESY’s Mathias Kasemann described the status of the Grid projects in high-energy physics. The big experiments running today – CDF, D0, BaBar and ZEUS – are already using distributed computing resources and are migrating their software and production systems to the existing Grid tools. The LHC experiments are building a vast hierarchical computing system with well defined computing models. The LHC Computing Grid (LCG) collaboration has been set up to provide the resources for this huge and complex project. The LCG system is being developed with connections to the Enabling Grids for E-science (EGEE) project and the Nordic Data Grid Facility in Europe and to the Open Science Grid in the US. Basic Grid services have been defined and first implementations are already available and tested. Kasemann’s personal prediction was that the data analysis of the LHC experiments will not be late because of problems in Grid computing.

On the detector front, CERN’s Fabio Sauli reported on new developments presented at the conference. Interesting progress has been achieved in fabricating the radiation-hard solid-state detectors needed for the LHC and other high-radiation-level applications. One way is through material engineering: choosing materials that are radiation resistant, such as oxygenated silicon, silicon processed with the Czochralski method, or using thin epitaxial detectors. Other solutions have been developed by device engineering, and these include pixel detectors, monolithic active pixels or three-dimensional silicon structures. For high-rate tracking and triggering, gas micropattern detectors, such as gas-electron multipliers, have provided versatile solutions for several experiments. For calorimetry, new materials like lead tungstate crystals have been adopted in LHC experiments. New scintillation materials with large light yield, fast decay time and high density have also been tested.

Boris Kayser from Fermilab closed the conference with an eloquent summary. On the day off, various excursions to charming medieval villages and ancient monasteries all converged on the city of Mafra, where the conference participants met Portuguese students and teachers in a baroque palace dating from 1717. There was also a visit to a precious library created by Franciscan friars, with 36,000 prized volumes (the “arXiv” of its time!) and where bats control the insect numbers (visitors were told). Gaspar Barreira and his colleagues handled the local organization masterfully, and the many excellent fish restaurants nearby provided a relaxed setting for informal discussions.

• The next EPS-HEP conference, in July 2007, will take place in Manchester, UK.

Tokyo meeting focuses on nucleon-spin problem

Last summer, physicists from 10 countries came to the campus of the Tokyo Institute of Technology for the 5th Circum-Pan-Pacific Symposium on High Energy Spin Physics, held on 5-8 July 2005. The aim of this symposium is to enhance communication among physicists in the circum-pan-Pacific region as well as with guests from other regions, including Europe. Another feature is the active participation of young physicists.

CCEspi1_01-06

In 1988, the European Muon Collaboration experiment at CERN reported that quark spin makes only a small contribution to the spin of the proton, contrary to what had been believed for many years. This gives rise to the well-known “nucleon spin problem”, now being studied by physicists all over the world: what in fact does give the proton and neutron their spin?

At the symposium, the COMPASS collaboration, EMC’s successor at CERN, reported on their measurements of the gluon spin’s contribution to the nucleon spin, for which they use a high-energy muon beam together with the detection of “open charm” and hadron pairs. In addition to a longitudinally polarized deuteron target, COMPASS occasionally uses a transversely polarized deuteron target, and so can measure the Collins effect and hence the transversity distribution δq(x), the distribution of transversely polarized quarks in a transversely polarized target. This is the third distribution function at twist two, along with the momentum distribution q(x) and the helicity distribution Δq(x).

At DESY, the HERMES experiment has performed quark flavour separation of helicity distributions Δq(x) using a ring-imaging Cherenkov detector for hadron identification. Further progress in this experiment has come with a transversely polarized hydrogen target, which has enabled the Collins and Sivers effects to be separately identified for the first time. Identifying each of these effects with both HERMES and COMPASS will also help in understanding the mechanism in hadron reactions, such as the sizeable single-spin asymmetries observed, for example, by the E704 experiment at Fermilab and STAR at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. A relationship between the Sivers effect and the orbital angular momenta of quarks has been suggested theoretically, and the quark orbital angular momentum could contribute to the nucleon spin.

RHIC uses polarized proton-proton collisions to study the nucleon-spin problem, and both the luminosity and beam polarization are becoming higher every year. Here each proton beam can be regarded as a bundle of high-energy partons, where gluon-gluon collisions and gluon-quark collisions tell us about the role of the gluon’s spin in the proton. The double-spin asymmetry ALL in π0 meson production has been measured with longitudinally polarized proton beams as a function of transverse momentum, pt, and transversely polarized proton beams are also used. At the symposium the PHENIX and STAR collaborations at RHIC presented recent data along with interesting plans for the future.
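
In practice, A_LL is extracted from the event yields recorded with the two beams in same- and opposite-helicity configurations, corrected for the beam polarizations and the relative luminosity of the two samples. A minimal sketch of this standard formula follows; the yields and polarizations are invented for illustration, not RHIC measurements.

```python
def double_spin_asymmetry(n_same, n_opp, pol_blue, pol_yellow, rel_lumi=1.0):
    """A_LL from same- and opposite-helicity event yields, corrected for the
    two beam polarizations and the relative luminosity R of the samples."""
    raw = (n_same - rel_lumi * n_opp) / (n_same + rel_lumi * n_opp)
    return raw / (pol_blue * pol_yellow)

# Illustrative: a 1% raw asymmetry with two 50%-polarized beams -> A_LL ~ 4%
print(double_spin_asymmetry(101000, 99000, 0.5, 0.5))
```

The polarization correction matters: with 50% polarization per beam, the raw counting asymmetry must be scaled up by a factor 1/(0.5 × 0.5) = 4.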

Jefferson Lab also has a variety of experiments to study the nucleon-spin problem. The symposium heard about experiments on the quark helicity distribution at large x, deeply virtual Compton scattering, single-spin asymmetries from semi-inclusive hadron detection, investigations in the nucleon resonance region and quark-hadron duality, and so on, as well as the plan for a future beam-energy upgrade. High luminosity is one of the merits of Jefferson Lab.

The Belle collaboration at the KEK B-factory reported their analysis of the Collins fragmentation function in the hadronization process in positron-electron collisions, where a non-zero value for the function was observed. This fragmentation function is needed to extract the transversity distribution δq(x) from the Collins effect observed in lepton-nucleon scattering.

The symposium also presented plans for a neutrino-scattering experiment to study the spin of strange quarks in the nucleon. In addition, there were theory talks on nucleon-spin structure based on lattice quantum chromodynamic calculations, the chiral quark soliton model, di-quark model and resummation method etc. Here developments in generalized parton distributions were a main topic.

The symposium consisted entirely of plenary sessions, so that all the participants could share the same discussions. On the afternoon of the second day, a boat trip was organized on Yokohama Bay, followed by a visit to a Japanese-style garden and a conference dinner in an old traditional Japanese house in the garden.

Do gamma rays reveal our galaxy’s dark matter?

It is well known that visible matter in the form of stars and galaxies makes up only a small fraction of the total energy in our universe. The latest evidence is that 5% is made from particles we know about, while 95% is in a form we know nothing about. The large non-visible, “dark” fraction is known to exist from its gravitational effects and comes in two forms: dark matter, constituting 23% of the total energy, provides the familiar gravitational pull, thus slowing down the expansion of the universe; the remainder, the dominant 72% of the total energy, causes antigravity, i.e. it accelerates the expansion of the universe.

Dark matter was so named by the Swiss scientist Fritz Zwicky. In studying the movements of the galaxies in the Coma cluster in the 1930s, he discovered that there must be much more matter than is visible. Later, the rotation speeds of gases and stars in spiral galaxies revealed that practically every galaxy has a halo of dark matter surrounding it. This dark matter must be much more widely distributed than the visible matter, since the rotation speeds do not fall off like 1⁄√r, as expected from the visible matter in the centre, but stay more or less constant.
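
The argument can be made concrete with a toy rotation-curve calculation. The sketch below uses invented masses and an invented halo normalisation, not measured Milky Way values: a central point mass alone gives v ∝ 1⁄√r, while adding an isothermal halo with density ρ ∝ 1/r² (so the enclosed mass grows linearly with radius) keeps the curve roughly flat.

```python
import math

G = 4.30e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def v_circ(m_enclosed, r_kpc):
    """Circular speed (km/s) for mass m_enclosed (M_sun) within radius r_kpc."""
    return math.sqrt(G * m_enclosed / r_kpc)

M_visible = 5.0e10   # illustrative central (visible) mass, solar masses
rho0_r0sq = 1.1e9    # illustrative halo normalisation rho0*r0^2, M_sun/kpc

for r in [2, 5, 10, 20, 40]:
    v_point = v_circ(M_visible, r)           # all mass central: v ~ 1/sqrt(r)
    M_halo = 4 * math.pi * rho0_r0sq * r     # isothermal halo: M(<r) ~ r
    v_halo = v_circ(M_visible + M_halo, r)   # visible + halo: flattens
    print(f"r = {r:2d} kpc: point-mass v = {v_point:5.1f} km/s, "
          f"with halo v = {v_halo:5.1f} km/s")
```

The point-mass curve falls steadily with radius, while the halo curve levels off at large r, which is the observed behaviour that the visible matter alone cannot reproduce.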

The fact that the dark matter is distributed over large distances implies that it undergoes little energy loss, so any interactions it has must be weak. Dark-matter particles are therefore generically called WIMPs, for weakly interacting massive particles. These WIMPs must, however, be able to annihilate if they were produced in thermal equilibrium with all other particles in the early universe. At that time the number densities of the different particle species were all of the same order of magnitude. Just as the baryon/photon ratio was reduced by 10 orders of magnitude through baryon annihilation, so the WIMP number density, initially of the same order of magnitude as the baryon number density, can only have been reduced by annihilation, assuming the WIMPs are stable. (If they are not stable they must have a lifetime of the order of the lifetime of the universe, otherwise they would no longer exist.)
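
This freeze-out argument can be put in numbers with the standard back-of-envelope relic-density relation, a textbook approximation rather than anything derived in this article: the relic abundance scales inversely with the thermally averaged annihilation cross-section, and a weak-scale cross-section lands close to the observed dark-matter density.

```python
# Back-of-envelope WIMP relic abundance (standard freeze-out approximation):
#   Omega * h^2  ~=  3e-27 cm^3/s / <sigma v>
# A weak-scale annihilation cross-section naturally gives Omega*h^2 ~ 0.1,
# close to the observed dark-matter density -- the "WIMP miracle".

def relic_density(sigma_v_cm3_per_s):
    """Approximate relic density Omega*h^2 for a given <sigma v>."""
    return 3e-27 / sigma_v_cm3_per_s

weak_scale = 3e-26  # cm^3/s, typical weak-interaction annihilation cross-section
print(f"Omega h^2 ~ {relic_density(weak_scale):.2f}")  # → Omega h^2 ~ 0.10
```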

If WIMPs in our galaxy collide and annihilate into quark pairs, these in turn will produce stable particles including gamma rays. The gamma rays play a very special role as they point straight back to the source, in contrast to charged particles, which change their direction in galactic magnetic fields; moreover, as they hardly interact they can be easily observed from across the galaxy. Gamma rays therefore offer a perfect means for reconstructing the distribution or halo profile of dark matter through observations in different sky directions.

Of course this assumes that gamma rays from dark-matter annihilation can be differentiated from the background, but this is indeed possible, since the spectral shapes are very different, as can be understood as follows. WIMPs have almost no kinetic energy, so after their annihilation into quark pairs the WIMP mass is converted into the energy of the quarks. The gamma rays produced in the fragmentation of such mono-energetic quarks have been well studied at CERN’s Large Electron Positron collider; they originate mainly from the decay of the copiously produced π0 mesons. The background, on the other hand, originates predominantly from the decay of π0 mesons produced by cosmic rays (mainly protons) scattering inelastically on the gas of the galactic disc, and so corresponds to the spectrum of gamma rays produced in fixed-target experiments with proton-proton collisions. In this case the gamma-ray spectrum can be calculated from the known cosmic-ray spectrum.

Clearly the steep power-law spectrum of cosmic rays will yield a spectrum of gamma rays that differs from that of the mono-energetic quarks produced in dark-matter annihilation. These different shapes can therefore be fitted to the data with free normalization factors, which then determine the relative contributions from dark-matter annihilation and background. Fitting the shapes has the advantage that the amount of background is determined from the data itself in each sky direction, so there is no need to rely on complicated galactic propagation models to obtain absolute background fluxes.
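
A minimal sketch of such a two-template fit is given below. The spectral shapes and "data" are invented purely to illustrate the method (the real analysis uses measured EGRET spectra): the only free parameters are the two normalisation factors, obtained here by linear least squares.

```python
import numpy as np

# Toy two-template fit: in each sky direction the measured gamma-ray spectrum
# is modelled as  flux = a * signal_shape + b * bkg_shape, with only the
# normalisations a and b left free.

energies = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])  # GeV (illustrative)
bkg_shape = energies ** -2.7                           # steep cosmic-ray-like power law
signal_shape = np.exp(-0.5 * np.log(energies / 3.0) ** 2)  # bump from mono-energetic quarks

rng = np.random.default_rng(1)
true_a, true_b = 2.0, 5.0
data = true_a * signal_shape + true_b * bkg_shape
data = data * (1 + 0.02 * rng.standard_normal(data.size))  # 2% measurement scatter

# Linear least squares for the two free normalisation factors
templates = np.column_stack([signal_shape, bkg_shape])
(a_fit, b_fit), *_ = np.linalg.lstsq(templates, data, rcond=None)
print(f"fitted signal norm a = {a_fit:.2f}, background norm b = {b_fit:.2f}")
```

Because the two shapes differ strongly (the power law dominates at low energies, the annihilation bump at high energies), the fit separates them cleanly, and the background level comes out of the data itself, direction by direction.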

So what can be seen in the gamma-ray sky? A very detailed gamma-ray distribution over the whole sky was obtained by the Energetic Gamma Ray Experiment Telescope (EGRET) on NASA’s Compton Gamma Ray Observatory, which collected data from 1991 to 2000. The EGRET telescope was carefully calibrated at SLAC in a quasi-monochromatic photon beam in the energy range 0.02 to 10 GeV. In 1997 the EGRET collaboration published their findings on a diffuse component of the gamma rays that cannot be described by the background: they observed an excess as large as a factor of two above the background for gamma-ray energies above 1 GeV. Recently, at the University of Karlsruhe, we have shown that this apparent excess traces the distribution of dark matter, since knowing the distribution of both the visible and dark matter allows us to reconstruct the rotation curve of our galaxy, especially its peculiar non-flat shape, which can be explained by the EGRET excess.

Mapping the flux

Figure 1 shows the excess for the flux from the galactic centre. The curve through the data points corresponds to the two-parameter fit, where the parameters are the normalization factors for the two known spectral shapes of signal and background, as discussed above; the red and yellow areas indicate the contributions from the dark-matter annihilation signal and the background, respectively. The WIMP mass was taken to be 60 GeV, which gives an excellent fit, although WIMP masses between 50 and 100 GeV are possible if extreme background shapes are allowed. The fit was repeated for 180 independent sky directions. In every direction the excess was observed and in every direction an excellent fit could be obtained for a WIMP mass of 60 GeV, if the contribution from the extragalactic background was also taken into account towards the galactic poles.

CCEgam1_12-05

Such a detailed mapping of the flux of dark-matter annihilation in the sky allows a reconstruction of the distribution of dark matter in our galaxy. The result is surprising: it yields a pseudo-isothermal profile, as observed from the rotation curves in many galaxies, but with a substructure in the galactic plane in the form of doughnut-shaped rings at radii of 4 and 14 kpc. Our solar system, at a distance of 8 kpc from the centre, lies between the inner and outer rings. The enhanced gamma radiation at 14 kpc was also discussed in the original paper by Hunter et al. in 1997 and called the “cosmic enhancement factor”.

The ring structures in the dark-matter halo are expected to have a significant influence. A star inside the outer ring will feel an inward gravitational force from the galactic centre and an outward force from the outer ring, so the total gravitational force is reduced. This means that fast stars will go out of orbit inside the outer ring, thus causing a minimum in the rotation curve for radii within the outer ring. Outside the ring the gravitational forces from the centre and the ring add together, thus providing a maximum in the rotation curve. These effects are indeed observed, as shown in figure 2, indicating that the EGRET excess really does trace the dark matter in our galaxy.
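
The sign of the ring's effect, an outward pull inside it and an inward pull outside it, can be checked numerically by summing the attraction of a thin uniform ring over discrete segments. The units, masses and radii below are arbitrary illustrative values (with G = 1), not the authors' calculation.

```python
import math

def ring_radial_accel(r, ring_radius=14.0, ring_mass=1.0, n=5000):
    """Radial acceleration on a point at radius r in the plane of a thin
    uniform ring (G = 1), by direct summation over ring segments.
    Positive = outward, away from the galactic centre."""
    dm = ring_mass / n
    a_r = 0.0
    for k in range(n):
        phi = 2 * math.pi * k / n
        dx = ring_radius * math.cos(phi) - r   # vector from star to segment
        dy = ring_radius * math.sin(phi)
        d3 = (dx * dx + dy * dy) ** 1.5
        a_r += dm * dx / d3                    # radial component of the pull
    return a_r

inside, outside = ring_radial_accel(8.0), ring_radial_accel(20.0)
print(f"at  8 kpc (inside ring):  a_r = {inside:+.5f} (outward, opposes the centre)")
print(f"at 20 kpc (outside ring): a_r = {outside:+.5f} (inward, adds to the centre)")
```

Unlike a spherical shell, a ring does attract interior coplanar points: the net pull is toward the ring, i.e. outward from the galactic centre, which is what reduces the circular speed inside the outer ring and raises it just outside.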

The origin of these substructures in the dark-matter distribution is thought to be the hierarchical clustering of dark matter into galaxies: small clumps of dark matter grow from the quantum fluctuations appearing after inflation in the early universe and these clumps combine to form galaxies. That the outer ring originates from the infall of a dwarf galaxy is supported by the fact that hundreds of millions of old, mostly burned-out stars have recently been discovered in this region (Newberg et al. 2002, Ibata et al. 2003 and Crane et al. 2003). The small velocity dispersion and large scale height perpendicular to the galactic disc of these stars prove that they cannot be part of the disc.

CCEgam2_12-05

The position and shape of the inner ring coincide with a ring of molecular hydrogen. Molecules form from atomic hydrogen in the presence of dust or heavy nuclei, so a ring of molecular hydrogen suggests an attractive gravitational potential well in which the dust can settle. The significant contribution of the inner ring to the rotation curve is also indicated in figure 2.

The perfect WIMP

The conclusion that the EGRET excess traces dark matter makes no assumption about the nature of the dark matter, except that its annihilation produces hard gamma rays consistent with the fragmentation of mono-energetic quarks between 50 and 100 GeV. Supersymmetry, which presupposes a symmetry between particles with integer spin (bosons) and half-integer spin (fermions), provides a good WIMP candidate. This symmetry requires a doubling of the particle species in the Standard Model: each boson obtains a fermion as superpartner and vice versa. These superpartners are still to be found, but the lightest is expected to be stable, neutral, massive and barely interacting with normal matter, i.e. it is the perfect WIMP.

Although the present data cannot prove the supersymmetric nature of dark matter, it is intriguing that the WIMP mass and WIMP annihilation cross-section (which can be calculated from the present WIMP density) are perfectly compatible with supersymmetry, including all constraints from electroweak precision experiments and limits from direct searches for Higgs bosons and supersymmetric particles, at least if the spin-0 superpartners are in the tera-electron-volt range. Figure 3 shows the allowed range of masses for spin-0 and spin-½ superpartners, assuming mass unification at the grand unification scale, i.e. common masses m0 (m½) for the spin-0 (½) supersymmetric particles.

CCEgam3_12-05

The allowed region in figure 3 is within reach of the Large Hadron Collider, so finding the predicted spectrum of light spin-½ and heavy spin-0 superpartners would prove the supersymmetric nature of the WIMP, especially if the lightest superpartner is stable and has the same mass as the WIMP mass deduced from the EGRET data. The lightest superpartner has properties akin to a spin-½ photon for the allowed region of figure 3, in which case the dark matter could be considered the supersymmetric partner of the cosmic microwave background, if supersymmetry is discovered. It is interesting to note that this region of parameter space yields perfect unification of the gauge couplings without any free parameters. In our first analysis in 1991, the scale of the supersymmetric masses had to be treated as a free parameter (Amaldi et al. 1991).

The statistical significance of the EGRET excess is at least 10σ and alternative models without dark matter do not yield good fits if all sky directions are considered. Furthermore, alternative models do not explain the peculiar shape of the rotation curve, or the occurrence of the hydrogen rings at 4 and 14 kpc and the high density of old stars at 14 kpc. Therefore, we conclude that the EGRET excess provides an intriguing hint that dark matter is not so dark, but is visible by flashes of typically 30-40 gamma rays for each annihilation.

Gamma-ray bursts: a look behind the headlines

Gamma-ray bursts (GRBs) – intense but brief flashes of gamma rays – were first discovered accidentally by US military satellites in 1967, and have since become a major puzzle for astrophysics. By 1992, observations, mainly with the Burst and Transient Source Experiment (BATSE) on board NASA’s Compton Gamma Ray Observatory, had provided compelling evidence that GRBs originate mostly at large cosmological distances and, moreover, divide into two distinct classes: short hard-spectrum bursts (SHBs), with a typical duration of less than one second, and long soft bursts, which typically last longer than two seconds. However, the nature of the GRBs remained a mystery.

CCEray1_12-05

A significant breakthrough came when the Italian and Dutch space agencies put BeppoSAX into orbit in 1996. This X-ray satellite localized GRBs in its field of view with arcminute precision and led to the discovery of X-ray, optical and radio afterglows for long-duration GRBs. These afterglows faded relatively slowly and enabled subarcsecond localization of the long GRBs, as well as measurement of their cosmological redshifts and absolute brightness, identification of their star-forming galaxies and finally their progenitors – ultrarelativistic jets ejected from supernova explosions due to the core collapse of massive stars (see CERN Courier June 2003 p5 and p12). Yet despite this impressive progress, many important questions regarding long GRBs remained unanswered. What type of core-collapse supernova produces them? What sort of remnant is left over? What is the true production mechanism? Moreover, despite extensive searches no afterglow was detected for the SHBs, and their redshifts, intrinsic brightness, host galaxies and progenitors remained unknown.

CCEray2_12-05

This situation has changed dramatically in the past few months after the successful launch in November 2004 of Swift, NASA’s multi-wavelength observatory dedicated to the study of GRBs. Its main missions are to detect GRBs, measure their properties, localize their sky positions with sufficient precision shortly after detection, and communicate these positions automatically to other space- and ground-based telescopes in order to discover and follow up the afterglows in a broad range of wavelengths, soon after the beginning of the bursts. By the end of September 2005, Swift had detected and localized 70 GRBs. Three of these – 050509B, 050724 and 050813 – were SHBs and follow-up observations have discovered elliptical host galaxies at redshifts 0.225, 0.258 and 0.722, respectively. Shortly after Swift’s detection and localization of SHB 050509B, NASA’s High Energy Transient Explorer satellite, HETE-2, which had been launched in 2000, detected and localized another SHB, 050709, on 9 July 2005 (Gehrels et al. 2005 and Villasenor et al. 2005). Follow-up measurements have found and measured its X-ray and optical afterglows, which led in turn to the discovery of its host – a star-forming young galaxy at redshift 0.16 (Hjorth et al. 2005 and Fox et al. 2005).

CCEray3_12-05

The observed brightness and energy fluence, and the measured redshifts of the SHBs imply that their intrinsic brightness is smaller than that of typical long GRBs by two to three orders of magnitude. Moreover, their inferred total emitted radiation, assuming isotropic emission, is smaller by four to five orders of magnitude. So it is quite possible that SHBs are seen at relatively small redshifts because they are intrinsically faint and cannot be seen from large cosmological distances. However, it is not clear why around 20% of the bursts observed by BATSE are SHBs, but only 5% of those seen by Swift.
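
The orders-of-magnitude comparison follows from the standard fluence-to-energy conversion E_iso = 4πd_L²S/(1+z), with the luminosity distance d_L taken from a flat ΛCDM cosmology. The sketch below uses an illustrative fluence and round cosmological parameters, not the measured SHB values, to show how strongly the inferred energy depends on redshift.

```python
import math

# E_iso = 4*pi*d_L^2 * S / (1+z), with d_L from flat LambdaCDM
# (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7 -- round values).

C_KM_S, H0 = 299792.458, 70.0
OMEGA_M, OMEGA_L = 0.3, 0.7
MPC_CM = 3.0857e24  # cm per Mpc

def lum_dist_cm(z, steps=10000):
    """Luminosity distance in cm for flat LambdaCDM (trapezoid rule)."""
    integral, dz = 0.0, z / steps
    for i in range(steps + 1):
        zi = i * dz
        w = 0.5 if i in (0, steps) else 1.0
        integral += w * dz / math.sqrt(OMEGA_M * (1 + zi) ** 3 + OMEGA_L)
    return (1 + z) * (C_KM_S / H0) * integral * MPC_CM

def e_iso_erg(fluence_erg_cm2, z):
    """Isotropic-equivalent emitted energy in erg."""
    dl = lum_dist_cm(z)
    return 4 * math.pi * dl * dl * fluence_erg_cm2 / (1 + z)

# The same fluence of 1e-7 erg/cm^2 seen at z = 0.2 and at z = 2:
print(f"E_iso(z=0.2) ~ {e_iso_erg(1e-7, 0.2):.2e} erg")
print(f"E_iso(z=2.0) ~ {e_iso_erg(1e-7, 2.0):.2e} erg")
```

For these parameters the same observed fluence corresponds to roughly two orders of magnitude more emitted energy at z = 2 than at z = 0.2, which is why the low redshifts of SHBs translate into much smaller intrinsic brightness.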

CCEray4_12-05

Has the mystery been solved?

These observations have led to recent press releases by NASA and some prestigious universities, and the publication of articles in astrophysical journals and in Nature, Science and Scientific American, which claim that “the 35 year old mystery of GRBs” has finally been solved and that SHBs have been proven to be produced by the merger of neutron stars or of a neutron star and a stellar black hole in close binary systems. But is this so?

The relatively small redshifts of SHBs and their association with both star-forming spiral galaxies and elliptical galaxies containing mainly old stars appears consistent with their origin in the merger of neutron stars in binary systems, as the merger usually takes place a long time after the formation of the neutron stars. The idea is that a large number of the neutrinos and antineutrinos that are emitted in the merger collide with each other outside the merging stars and annihilate into electron-positron pairs, which form a fast expanding fireball that produces the GRB (Goodman et al. 1987). Later, it was suggested that instead of spherical fireballs, mergers produce highly relativistic jets along the rotation axis, which can produce shorter and brighter GRBs through, for example, inverse Compton scattering of ambient light around the merging stars.

At first sight the merger scenario seems consistent with the observations, but a more careful examination raises serious doubts. The cosmic rate of such mergers as a function of redshift can be calculated from general relativity using the observed properties of galactic neutron-star binaries and their production rate, which must be proportional to the measured star formation rate. Despite the small statistics, the redshift distribution of the SHBs detected by Swift and HETE-2 appears inconsistent with the theoretical expectations from the merger model.

A second problem concerns an X-ray flare observed by the Chandra X-ray Observatory in the afterglow of SHB 050709 on day 16 after the burst. In the fireball models of GRBs, X-ray flares in the afterglow are interpreted as due to “re-energization” of the afterglow by the central engine (Zhang and Meszaros 2004). The final merger in a neutron-star binary due to gravitational wave emission, however, takes place in less than a millisecond and produces a black hole. It is hard to imagine that the remnant can “re-energize” the X-ray afterglow after 16 days, a time scale one billion times larger than a millisecond. On the other hand, in the alternative “cannonball” model of GRBs, X-ray flares are produced when the highly relativistic jets from the central engine (in this case mass accretion on a compact object) encounter density changes in the interstellar medium (Dado et al. 2002). Indeed, SHB 050709 took place not far from the centre of a galaxy where star formation produces strong winds and density irregularities.

Other scenarios for SHB production have been dismissed as unfavoured by the observations, but this may have been premature. Accretion-induced collapse of neutron stars in compact neutron-star/white-dwarf binaries is consistent with all the observations. Origin in a supernova collapse was ruled out for 050509B and 050709 by follow-up measurements with powerful optical telescopes, but only for these SHBs; much larger statistics are needed to conclude that SHB production in a type Ia supernova is unlikely. Origin in soft gamma-ray repeaters (SGRs), which are anomalous pulsars that occasionally produce GRBs, was ruled out by the claim that they are too faint to be observed at the measured redshifts of SHBs. Consider, however, the burst emitted on 27 December 2004 by the galactic SGR 1806-20. It was the brightest GRB ever recorded from any astronomical object, beginning with a short 0.2 second spike that was followed by a much longer and dimmer tail modulated by the 7.55 second period of the pulsar. Had it taken place in a distant galaxy, the spike, if detected, would have been classified as an SHB.

The maximum distance from which such a spike could be detected depends on the uncertain distance of SGR 1806-20 and, if it was relativistically beamed into a small solid angle like ordinary long GRBs, on its viewing angle. A viewing angle three to four times smaller than that for the spike presumably beamed from the superburst of SGR 1806-20 would be sufficient to make it look like a normal SHB at a redshift of z ∼ 0.25 (Dar 2005). Moreover, SGRs may be born not only in core collapse supernova explosions but, for example, in the accretion-induced collapse of white dwarfs, as suggested by the fact that, despite their young age, only one of the four known SGRs is located inside a supernova remnant. This may explain why SHBs are produced both in elliptical galaxies with old stellar populations and in star-forming spiral galaxies with young stellar populations.

Unsurprising behaviour

Because of its higher sensitivity, Swift can see deeper into space than any previous gamma-ray satellite. Indeed, the 14 long GRBs localized by Swift for which a redshift has been reported have a mean redshift of ⟨z⟩ = 2.8. This is twice the mean of ⟨z⟩ = 1.4 for the 43 GRBs with a known redshift that have been localized by BeppoSAX, HETE-2 and the interplanetary network over the past seven years.

Swift record redshift so far is z = 6.29, for GRB 050904, which looks like an ordinary burst with an ordinary afterglow (Haislip et al. 2005). This redshift is comparable to that of the most distant quasar measured to date, and in the standard cosmological model it corresponds to a look-back time of nearly 14 billion years, to when the universe was only one billion years old. Thus this single GRB already indicates that star formation and core-collapse supernova explosions took place at this early cosmic time, and together with previous measurements shows that the rate of star formation has not declined between z = 1.4 and z = 6.29. It also demonstrates that long GRBs and their optical afterglows, which are more luminous than any known astronomical object by many orders of magnitude, can be used as excellent tools for studying the history of star formation, galaxies, and intergalactic space since the time of the early universe.

The X-ray telescope on-board Swift also recorded, in fine detail, the evolution of X-ray afterglows of a dozen or so GRBs, from right after the burst until they became undetectable. Most of these X-ray afterglows demonstrate a universal behaviour: an initial fast fall-off within the first few minutes followed by a shallow decline over the next few hours, which afterwards steepens gradually into a much faster power-law decline (see figure 1). In some cases Swift has also detected X-ray flares superimposed on this universal behaviour. These observations were presented in Nature and Science as complete surprises that cannot easily be explained by current theoretical models of GRBs. This may be true for the popular fireball models of GRBs, but it is not true for the cannonball model, which correctly predicted these effects long ago.

The origin of SHBs is still an unsolved mystery.

The fast initial fall-off and the gradual roll over of the shallow decline to a later power-law decline were in fact already indicated by observations in 1998. Figure 2 shows the comparison with these observations of the universal behaviour predicted from the cannonball model in 2001. Moreover, an X-ray flare had also already been seen in 1997 by BeppoSAX in the afterglow of GRB 970508 (Pian et al. 2001) and can also be explained in the cannonball model.

In the cannonball model, the early X-ray afterglow originates in thin bremsstrahlung from a rapidly expanding plasmoid – the cannonball – which stops expanding within a few observer minutes after ejection. Synchrotron emission from the ionized interstellar electrons, which are swept into the decelerating cannonball, then takes over. The shallow decline followed by the roll-over into a power-law decline is a simple effect of off-axis viewing of decelerating jets in the interstellar medium, which has been observed in many optical afterglows but misinterpreted by fireball models. The flares are caused by collisions of the jet with density jumps in the interstellar medium produced by stellar winds and supernova explosions.

In conclusion, it seems that the localization of SHBs by Swift and HETE-2, which led to the discovery of their afterglows, the identification of their host galaxies and the measurements of their redshifts, have been over-interpreted. While these are undoubtedly observational breakthroughs, the origin of SHBs is still an unsolved mystery. Nevertheless, the small redshifts of SHBs are good news for gravitational wave detectors such as LIGO and LISA, in particular, if SHBs are produced mainly by mergers of neutron stars or a neutron star and a black hole in binaries, as first suggested in 1987. Moreover, the observed behaviour of the early X-ray afterglows of long GRBs and the X-ray flares – both claimed to be a complete surprise and unexpected in the fireball models – were predicted correctly long ago, like many other features of long gamma-ray bursts, by the cannonball model.

ILC comes to Snowmass

In August 2004 the Executive Committee of the American Linear Collider Physics Group (ALCPG), galvanized by the technology choice for the future International Linear Collider (ILC), decided to host an extended international summer workshop to further the detector designs and advance the physics arguments. Subsequently, the International Linear Collider Steering Committee (ILCSC) elected to hold their Second ILC Accelerator Workshop in conjunction with the Physics and Detector Workshop. Ed Berger of Argonne and Uriel Nauenberg of Colorado were selected to co-chair the organizing committee for this joint workshop, which was held at Snowmass, Colorado, US, for two weeks in August. ALCPG co-chairs Jim Brau of Oregon and Mark Oreglia of Chicago, along with accelerator community representatives Shekhar Mishra from Fermilab and Nan Phinney from SLAC, rounded out the committee. While hosted by the North American community, the workshops were planned with worldwide participation in all the advisory committees and in the scientific programme committees for the accelerator, detector, physics and outreach activities.

CCEilc1_12-05

As Berger described in the opening address, the primary accelerator goals at Snowmass were to define an ILC Baseline Configuration Document – to be completed by the end of 2005 – and to identify critical R&D topics and timelines. On the detector front, the goal was to develop detector design studies with a firm understanding of the technical details and physics performance of the three major detector concepts, the required future R&D, test-beam plans, machine-detector interface issues, beamline instrumentation and cost estimates. The physics goals were to advance and sharpen ILC physics studies, including precise higher-order calculations, synergy with the physics programme of CERN’s Large Hadron Collider (LHC), connections to cosmology, and, very importantly, relationships to the detector designs. A crucial fourth goal was to facilitate and strengthen the broad participation of the scientific and engineering communities in ILC physics, detectors and accelerators, and to engage the greater public in this exciting work.

A rich new world

Over the past few years, prestigious panels in Europe (the European Committee for Future Accelerators – ECFA), Asia (the Asian Committee for Future Accelerators – ACFA) and the US (the High Energy Physics Advisory Panel – HEPAP) have reached an unprecedented consensus that the next major accelerator for world particle physics should be a 500 GeV electron-positron linear collider with the capability of extension to higher energies. This machine would be ideal for exploiting the anticipated discoveries at the LHC and would also have its own unique discovery capabilities. The ability to control the collision energy, polarize one or both beams, and measure cleanly the particles produced will allow the linear collider to zero in on the crucial features of a rich new world that Peter Zerwas of DESY described on the first day of the workshop, which might include Higgs bosons, supersymmetric particles and evidence of extra spatial dimensions.

This physics programme dictates specific requirements for the detectors and for the accelerator design. As the ILC community turns increasingly to design and engineering, there was considerable activity in the physics groups to formulate these requirements concretely. Early in the workshop, an international panel set up this spring presented a proposed list of benchmark processes to be used in optimizing the ILC detector designs. This brought a new flavour to the physics discussions – one that will continue in future work on physics at the ILC.

CCEilc2_12-05

This influence was felt most strongly in the working groups on Higgs physics and supersymmetry. Precision electroweak data predict that the neutral Higgs boson will be observed within the initial energy reach of the ILC, which will provide a microscope to study the whole range of possible Higgs boson decays and measure coupling strengths to the percent level. To accomplish this goal, the ILC detectors must have significantly better performance in several respects than those at CERN’s Large Electron-Positron collider (LEP). In contrast with the quite specific implications of Higgs boson physics, the idea of supersymmetry encompasses various models with diverse implications. Some of the signatures of supersymmetry will be studied at the LHC, but the problem of understanding the exact nature of any new physics will be a difficult one. Through the study of a diverse set of specific parameter sets for supersymmetry, work done at Snowmass showed that the ILC experiments could address this problem robustly, and the necessary detector performances were specified.

CCEilc3_12-05

Precision is crucial

The precision of the ILC experiments should be supported by equally precise theoretical calculations. Among those discussed at the workshop were Standard Model analyses, including higher-order contributions in quantum chromodynamics, calculations of radiative corrections to the key Higgs boson production processes, and precision calculations within models of new physics. The Supersymmetry Parameter Analysis project, presented at Snowmass, proposes a convention for the parameters of supersymmetry models from which observables can be computed to the part-per-mille level for unambiguous comparison of theory and experiment. The fourth in the series of LoopFest conferences on higher-order calculations took place during the Snowmass workshop, the highlight this year being a presentation of new twistor-space methods for computing amplitudes for the emission of very large numbers of gluons and other massless particles. New calculations of the process e+e− → tt̄h showed that higher-order corrections enhance this process by a factor of two near threshold, making it possible for the 500 GeV ILC to obtain a precise measurement of the top quark Yukawa coupling.

CCEilc4_12-05

The capabilities of the ILC will make it possible to explore new models, which include Higgs sectors with CP violation (for which the ILC offers specific probes of quantum numbers), and models with a “warped” extra dimension, which predict anomalies in the top quark couplings that can be seen in tt̄ production just above threshold.

Many of the discussions of new physics highlighted the connections to current problems of cosmology. Supersymmetry and many other models of new physics contain particles that could make up (at least part of) the cosmic dark matter. If these models are correct, dark-matter candidates will be produced in the laboratory at the LHC. Studies at Snowmass showed how precise measurements at the ILC could be used to verify whether these particles have the properties required to account for the densities and cross-sections of astrophysical dark matter. Here all the strands of ILC physics – exotic models, precision calculations and incisive experimental capabilities – could combine to provide physical insight that can be obtained in no other way.

The accelerator design effort

In August 2004 the International Technology Recommendation Panel concluded that the ILC should be based on superconducting radio-frequency accelerating structures. This recommendation has been universally adopted as the basis for the ILC project, now being coordinated via the Global Design Effort (GDE), led by Barry Barish from Caltech. At Snowmass, the accelerator experts carried the baton from the successful launch of the ILC design effort at the first ILC workshop at KEK in Japan in November 2004. Snowmass also provided the forum for the first official meeting of the GDE. The working groups established for the first ILC workshop at KEK formed the basis of the organizing units through Snowmass. In addition, six global groups were formed to work towards a realistic reference design: Parameters, Controls & Instrumentation, Operations & Availability, Civil & Siting, Cost & Engineering, and Options.

Sources of electrons and positrons are the starting points of the accelerator chain. The successful production of intense beams of polarized electrons at the SLAC Linear Collider (SLC) between 1992 and 1998 demonstrated the best mechanism for producing electrons. When polarized laser light is fired at special cathode materials, electrons are produced with their spin vectors aligned, with polarization of up to 90% achieved in the laboratory. The ability to select the “handedness” of the beam is an incisive capability that will allow probes of the left- or right-handed nature of the couplings of new particles, such as those in supersymmetric models.

As well as the positron production systems used previously, other approaches are being studied to achieve polarized beams. One involves passing the high-energy electron beam through the periodic magnetic field provided by an “undulator”, similar to those used at synchrotron light sources. The intense photon beams radiated by the undulating electrons can be converted in a thin target into electron-positron pairs. A second method involves boosting the energies of photons produced in laser beams by Compton back-scattering them from electrons, and then similarly converting the boosted photons to yield positrons. If the intermediate photons are polarized, both of these methods allow polarized positron production.

The electron and positron beams produced must be “cooled” in so-called damping rings, in which their transverse size is reduced via synchrotron radiation during several hundred circuits. A few different designs are being studied for these rings. Challenges include precise component alignment and the high degree of stability required for low emittance, while minimizing collective effects that can blow up the beams.

Most of the length of the linear collider, some 20 km or so, will be devoted to accelerating the electron and positron beams in two opposing linacs. The debate at Snowmass centred on critical issues, such as the operating choice for the accelerating voltage gradient in the superconducting niobium cavities and the choice of advanced technologies that must be used to power the cavities. The details of the shape and surface preparation of the cavities are among the issues that affect the gradient that can be supported. Larger radii of curvature of the cavity lobes are desirable to reduce peak surface electric fields that can induce breakdown. Also, advanced surface preparation techniques such as electropolishing are being refined, and cavities are being produced and tested by strong international teams at regional test facilities. Based on experience to date, a draft recommendation was reached for a mean initial operating gradient of around 31 MV per metre. Each linac would then need to be just over 10 km long to reach the initial target centre-of-mass energy of 500 GeV.
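The quoted length follows from simple arithmetic: the active cavity length is the per-beam energy divided by the gradient, inflated by a fill factor that accounts for cryomodule interconnects, quadrupoles and diagnostics. The fill factor below is an illustrative assumption, not a design value:

```python
# Back-of-envelope check on the linac length.
beam_energy_gev = 250.0    # per beam, for 500 GeV centre-of-mass
gradient_mv_per_m = 31.0   # mean operating gradient from the draft recommendation
fill_factor = 0.75         # assumed fraction of linac length that is active cavity

active_length_km = beam_energy_gev * 1000.0 / gradient_mv_per_m / 1000.0
total_length_km = active_length_km / fill_factor

print(f"active cavity length per linac: {active_length_km:.1f} km")  # ~8.1 km
print(f"total length per linac:         {total_length_km:.1f} km")   # ~10.8 km
```

With a fill factor in this range, each linac indeed comes out just over 10 km long.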

Similar expert attention was devoted to the modulators, klystrons and distribution systems that convert “wall-plug” power into the high-power (10 MW) millisecond-long pulses applied to the cavities. Industrial companies in Europe, Asia and the US have developed prototype klystrons for this purpose. These are in use at the TESLA Test Facility at DESY, which provides a working prototype linac system. Several innovative ideas for solid-state modulators or more compact klystrons are also being explored with industry.

Once at their final energy, the beams must be carefully focused and steered into collision. The collision point lies at the ends of the two linacs and encompasses the interaction region, including the detector(s). The working recommendation, defined at the workshop at KEK, is to consider two interaction regions, each with one detector. Many important ramifications were discussed at Snowmass. For example, the current plan calls for the beams to be brought into the interaction region with a small horizontal crossing angle of either 2 or 20 mrad. In either case the final-focus magnets must be carefully designed to be compact and stable with respect to vibrations that could be transferred to beam motion. A detailed engineering design is being prepared, which will also include beam-steering feedback systems to maintain the beams in collision and optimize the luminosity. Intermediate values for the crossing angle, such as 14 mrad, are also under study.

Of no less importance is the need to remove the spent beams safely from the interaction region and transport them to the beam dumps. As each beam carries several megawatts of power, the design must allow the necessary clearances, and the beams must be capable of being aborted safely in the event of equipment failure. The “machine protection” system and beam dumps remain subjects for active R&D. Many crucial diagnostic systems for measuring the beam energy, polarization and luminosity will be based in the extraction lines, and excellent progress was made in defining the locations and configurations of the necessary instrumentation.

The GDE will build on the consensus reached at Snowmass and produce an accelerator Baseline Configuration Document (BCD) by the end of 2005. As Nick Walker from DESY summarized at the end of the workshop, the BCD will define the most important layout and technology choices for the accelerator. For each subsystem a baseline technology will be specified, along with possible alternatives which, with further R&D, will offer the promise to reduce the cost, minimize the risk or further optimize the performance of the ILC. The engineering details of the baseline design will then be refined and costed. A Reference Design Report will follow at the end of 2006. This will represent a first “blueprint” for the ILC, paving the way for a subsequent effort to achieve a fully engineered technical design.

Detector concepts

The Snowmass workshop was an important opportunity for proponents of the three major detector-concept studies to work together on their detector designs. They are planning to draft detector outline documents before the next Linear Collider Workshop (LCWS06) in Bangalore in March 2006. Detector capabilities are challenged by the precision physics planned at the ILC. The environment is relatively clean, but the detector performance must be two to ten times better than at LEP and the SLAC Linear Collider. Details of tracking, vertexing, calorimetry, software algorithms and other aspects of the detectors were discussed vigorously.

The three major international detector concepts rely on a “particle flow” approach in which the energy of jets is measured by reconstructing individual particles. This technique can be much more precise than the purely calorimetric approach employed at hadron colliders like the LHC. In a typical jet, 70% of the energy consists of hadrons, which are measured with only moderate resolution in the hadron calorimeter, while 30% consists of photons, which are measured with much better precision in the electromagnetic calorimeter. Typically 60% of the jet energy is carried by charged hadrons, which can be measured precisely with the tracking system. The hadron calorimeter is thus relied on only for the remaining 10% carried by neutral hadrons. For the particle-flow approach, the charged and neutral particles must be separated in the calorimeters, where their showers often overlap or lie very close to each other. Separation of the showers is accomplished differently in each of the detector concepts, trading off detector radius, magnetic-field strength and granularity of the calorimeter.

Physical processes to be studied at the ILC require tagging of bottom and charm quarks with unprecedented efficiency and purity, as well as of tau leptons.

A specialized group worked on the development of the particle-flow algorithms. A conventional shower-reconstruction algorithm tends to combine the showers of different hadrons, but more sophisticated software should be able to separate them based on the substructure of the showers. At present the energy resolution of jets is still limited by confusion in the reconstruction, but significant progress was achieved at Snowmass, with optimism that a resolution of 30%/√E can be reached.
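A toy calculation shows why removing the charged hadrons from the calorimetric measurement pays off so handsomely. The stochastic resolution terms below are illustrative assumptions, not ILC design values; the jet composition is the typical one quoted in the text:

```python
import math

# Illustrative stochastic terms (assumptions, not ILC design values)
ECAL_STOCH = 0.15   # photons: ~15%/sqrt(E)
HCAL_STOCH = 0.55   # hadrons in the hadron calorimeter: ~55%/sqrt(E)

# Typical jet composition: charged hadrons, photons, neutral hadrons
F_CHARGED, F_PHOTON, F_NEUTRAL_H = 0.60, 0.30, 0.10

E_JET = 100.0  # GeV

# Purely calorimetric: all hadrons (charged + neutral) measured in the HCAL
sigma_calo = math.sqrt(HCAL_STOCH**2 * (F_CHARGED + F_NEUTRAL_H) * E_JET
                       + ECAL_STOCH**2 * F_PHOTON * E_JET)

# Ideal particle flow: charged hadrons taken from the tracker (negligible
# error); only photons and neutral hadrons rely on the calorimeters
sigma_pflow = math.sqrt(ECAL_STOCH**2 * F_PHOTON * E_JET
                        + HCAL_STOCH**2 * F_NEUTRAL_H * E_JET)

print(f"calorimetric:  {100 * sigma_calo / E_JET:.1f}% at 100 GeV")   # ~4.7%
print(f"particle flow: {100 * sigma_pflow / E_JET:.1f}% at 100 GeV")  # ~1.9%
```

In practice, confusion in assigning calorimeter hits to tracks degrades this ideal figure, which is why the achievable goal is quoted as about 30%/√E rather than the naive limit.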

In the Silicon Detector Concept (SiD) the goal is a calorimeter with the best possible granularity, consisting of a tungsten absorber and silicon detectors. To make this detector affordable, a relatively small inner calorimeter radius of 1.3 m is chosen. Shower separation and good momentum resolution are achieved with a 5 T magnetic field and very precise silicon detectors for charged particle tracking. The fast timing of the silicon tracker makes SiD a robust detector with respect to backgrounds.

The Large Detector Concept (LDC), derived from the detector described in the technical design report for TESLA, uses a somewhat larger radius of 1.7 m. It also plans a silicon-tungsten calorimeter, possibly with a somewhat coarser granularity. For charged particle tracking, a large time-projection chamber (TPC) is planned to allow efficient and redundant particle reconstruction. The larger radius is needed to achieve the required momentum resolution.

The GLD concept chooses a larger radius of 2.1 m to take advantage of a separation of showers just by distance. It uses a calorimeter with even coarser segmentation and gaseous tracking similar to the LDC. Progress at Snowmass on the GLD, LDC and SiD concepts was summarized at the end of the workshop by Yasuhiro Sugimoto of KEK, Henri Videau of Ecole Polytechnique and Harry Weerts of Argonne, respectively. A fourth concept was introduced at Snowmass, one not relying on the particle-flow approach.

A common challenge for all detector concepts is the microvertex detector. Physical processes to be studied at the ILC require tagging of bottom and charm quarks with unprecedented efficiency and purity, as well as of tau leptons. This task is complicated by backgrounds from the interacting beams and the long bunch trains during which readout is difficult. The detectors must be extremely precise, and also extremely thin, to avoid deflection of low-momentum particles and deterioration of interesting information. Several technologies are under discussion, all employing a “pixel” structure based on the excellent experience of the SLD vertex detector at SLAC, ranging from charge-coupled devices (CCDs) and complementary metal-oxide semiconductor (CMOS) sensors used in digital cameras, to technologies that improve on the ones already used for the LHC.

In the gaseous tracking groups much discussion centred on methods to increase the number of points in the TPC, compared with LEP experiments. A possibility is to use gas electron-multiplier foils, a technology that was developed at CERN for the LHC detectors. Micromesh gaseous structure detectors, or “micromegas”, are another option, pursued mainly in France. The availability of test beams is crucial for advancing detector designs. TPC tests have been performed at KEK, and a prototype of the electromagnetic calorimeter has been tested at DESY. Further tests are also planned at the test-beam facilities at CERN and Fermilab.

As befits a workshop with both detector and accelerator experts present in force, discussion of the machine-detector interface issues played a big role. The layout of the accelerator influences detectors in many ways – for example, beam parameters determine the backgrounds, the possible crossing angle of the beams affects the layout of the forward detectors, and the position of the final focus magnets dictates the position of important detector elements. All of these parameters have to be optimized by accelerator and detector experts working in concert. At a well attended plenary “town meeting” one afternoon, several speakers debated “The Case for Two Detectors”. Issues included complementary physics capabilities, cross-checking of results, total project cost and two interaction regions versus one.

Outreach, communication and education

A special evening forum on 23 August addressed “Challenges for Realizing the ILC: Funding, Regionalism and International Collaboration”. Eight distinguished speakers, representing committees and funding agencies with direct responsibility for the ILC, shared their wisdom and perspectives: Jonathan Dorfan (chairman of the International Committee for Future Accelerators, ICFA), Fred Gilman (HEPAP), Pat Looney (formerly of the US Office of Science and Technology Policy), Robin Staffin (US Department of Energy), Michael Turner (US National Science Foundation), Shin-ichi Kurokawa (ACFA chair and incoming ILCSC chair), Roberto Petronzio (Funding Agencies for the Linear Collider) and Albrecht Wagner (incoming ICFA chair). The brief presentations were followed by animated questions and comments from many in the audience.

Educational activities played a prominent role in the Snowmass workshop. Reaching out to particle experimenters and theorists, the accelerator community ran a series of eight lunchtime accelerator tutorials. Of broader interest to the general public were a dark-matter café and quantum-universe exhibit in the Snowmass Mall, a Workshop on Dark Matter and Cosmic-Ray Showers for high-school teachers, and a cosmic-ray-shower study in the Aspen Mall. Two evening public lectures attracted many residents and tourists, with Young-Kee Kim talking on “E = mc²”, and Hitoshi Murayama on “Seeing the Invisibles”. A physics fiesta took place one Sunday in a secondary school in Carbondale, where physicists and teachers from the workshop engaged children in hands-on activities.

Communication was also on the Snowmass agenda, and the communications working group defined a strategic communication plan. During the workshop a new website, www.linearcollider.org, was launched together with ILC NewsLine, a new weekly online newsletter open to all subscribers.

• Proceedings of the Snowmass Workshop will appear on the SLAC Electronic Conference Proceedings Archive, eConf.

REX-ISOLDE accelerates the first isomeric beams

CCErex2_12-05

Since 2001, the combination of the Isotope-Separator On-Line (ISOLDE) and the Radioactive Beam Experiment (REX) has provided accelerated beams of radioactive ions. Now, with the aid of a laser-separation technique, specific metastable excited states – isomers – can be selected and “post-accelerated” in REX. This allows not only nuclear-decay experiments, but also the production of short-lived excited states.

Purity is an important parameter of any radioactive beam. At ISOLDE, the Resonant Ionisation Laser Ion Source (RILIS) allows the selection of a single chemical element. Combined with the mass selection of the ISOLDE separators, this results in a high-purity beam composed of essentially a single isotope. In a further step, narrow-bandwidth lasers can select different long-lived isomers from the same isotope (Köster et al. 2000a and 2000b). This has already allowed the separation of two different beta-decaying states in 68Cu as well as the unambiguous identification of three isomeric states in 70Cu (Van Roosbroeck et al. 2004). (These are both neutron-rich radioactive isotopes of copper, which occurs naturally as the stable isotopes 63Cu and 65Cu.)

Now, in experiment IS435, isomeric beams of 68Cu and 70Cu have been post-accelerated in REX-ISOLDE to 2.8 MeV per nucleon. The beams were then directed onto a target in the centre of the Miniball set-up, which was used to detect emitted gamma rays, and hence the existence of excited states in the different nuclei. The experiments are showing that the technique can produce isomeric beams of sufficient purity to study individual excited states in a radioactive nucleus, as the preliminary results for 68Cu indicate.

The radioactive nucleus 68Cu has two beta-decaying states, the ground state with spin 1 and positive parity (Iπ = 1+) and a metastable (isomeric) one with Iπ = 6−. Both states are well known and decay to the stable 68Zn nucleus. In nuclei, protons and neutrons tend to fill energy levels in pairs, with the angular momentum of the pairs coupling to zero. So in 68Cu, with an odd number of protons (29) and an odd number of neutrons (39), the multiplet structure of the low-lying energy states is largely determined by the coupling of the two odd nucleons, which occupy different orbitals outside the full core of pairs. The structures containing the ground state and the beta-decaying isomeric state are expected to be significantly different, as in these states, although the odd proton is in the same orbital (2p3/2), the odd neutron is in very different orbitals (2p1/2 and 1g9/2). Previous investigations using transfer reactions (Sherman et al. 1977) and beta-decay and lifetime measurements (Hou et al. 2003) have indicated the existence of different multiplet structures (figure 1), but not much is known about the composition of the states.


The aim of experiment IS435 was to study these two multiplet structures. In one case, an almost pure (∼90%) beam of the 68Cu ground state (1+) was accelerated and underwent Coulomb excitation in order to investigate the coupling between the proton p3/2 and neutron p1/2 orbitals. Figure 2 shows the gamma-ray energy spectrum from the Miniball detector. It clearly reveals a gamma transition at 84 keV, indicating that the Iπ = 2+ state of the ground-state multiplet is excited with quite a high probability. This hints at a significant electric quadrupole (E2) component in the transition connecting these two states, contradicting the conclusions of some previous studies (Hou et al. 2003).

Figure 3 shows the gamma-ray energy spectrum from the excitation of the isomeric 6− state of 68Cu. Here a Doppler-broadened line is clearly visible at 178 keV, together with lines at 84 keV and 693 keV that are not Doppler-broadened. Applying the Doppler correction to the spectrum (the red line in the figure) narrows the 178 keV line, while the 84 keV line is broadened and the 693 keV line disappears completely. This indicates that the gamma rays from these two transitions were not emitted in flight but from nuclei at rest – in other words, from states with half-lives long enough, compared with the other excited states, for the nuclei to come to rest before decaying. Taking the beam energy into account leads to half-life estimates for these two transitions of the order of a few nanoseconds. Comparing the results from the transfer reaction and beta decay indicates positions for the two states as given by the dashed lines in figure 1.
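The size of the Doppler effect at these beam energies can be estimated with a back-of-the-envelope sketch. The 2.8 MeV per nucleon beam energy and the 178 keV line are from the text; the non-relativistic velocity estimate and the choice of forward/backward emission angles are illustrative assumptions:

```python
import math

# Velocity of a 68Cu ion at 2.8 MeV per nucleon (non-relativistic estimate,
# nucleon rest mass taken as ~931.5 MeV):
beta = math.sqrt(2 * 2.8 / 931.5)          # v/c, roughly 0.08

def e_lab(e0_kev, beta, theta):
    """Lab-frame energy of a gamma ray emitted at angle theta (radians)
    to the beam axis by a nucleus moving with velocity beta."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return e0_kev / (gamma * (1.0 - beta * math.cos(theta)))

# Full forward-to-backward spread of the 178 keV line, about 2*beta*E0:
spread_kev = e_lab(178.0, beta, 0.0) - e_lab(178.0, beta, math.pi)
```

A spread of tens of keV for in-flight emission explains why the 178 keV line is visibly broadened, while lines emitted from stopped nuclei stay narrow and are instead smeared out by the Doppler correction.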


The distinctively different patterns of the spectra in figures 2 and 3 prove that two different isomeric beams with sufficient purity have been post-accelerated at REX-ISOLDE and that completely different structures in the 68Cu nucleus have been populated and studied. This is the first instance of such studies being carried out with the help of post-accelerated isomeric beams.


A number of techniques for nuclear studies, including Coulomb excitation and nuclear transfer reactions, will clearly benefit from the use of isomeric beams. The very high selectivity of RILIS combined with the very good beam spot and precise energy definition after the REX linac make REX-ISOLDE a unique place for this type of measurement during the coming years.


• IS435 was performed by CERN; IKS KU Leuven; INRNE, Bulgarian Academy of Sciences, University of Sofia; Universita di Camerino; LMU, Munich; MPI, Heidelberg; University of Köln; TU Darmstadt; TU Munich; Warsaw University; IPN Orsay; Lund University; INP, NCSR “Demokritos”; University of Gent; Miniball and the REX collaboration.

Further reading

L Hou et al. 2003 Phys. Rev. C68 54306.

U Köster et al. 2000a Nucl. Instr. and Meth. B160 528.

U Köster et al. 2000b Hyperfine Interactions 127 417.

J D Sherman et al. 1977 Phys. Lett. B67 275.

J Van Roosbroeck et al. 2004 Phys. Rev. Lett. 92 112501.

Experiments finally unveil a precise portrait of the Z

From 1989 to 1995, the Large Electron-Positron collider (LEP) at CERN provided collisions at centre-of-mass energies of 88–94 GeV. This range spans the mass of the Z boson, which is thus produced resonantly – the Z pole (figure 1). In this first phase of LEP running (LEP-1), the four large state-of-the-art detectors ALEPH, DELPHI, L3 and OPAL recorded 17 million Z decays. Over a similar period, from 1992 to 1998, the SLD experiment at SLAC in the US collected 600,000 Z events at the world's first high-energy linear collider, the SLAC Linear Collider (SLC), with the added advantage of a longitudinally polarized electron beam.

Now, the five big experimental collaborations have submitted a joint paper for publication in Physics Reports. Signed by 2500 authors, "Precision electroweak measurements on the Z resonance" summarizes and combines thousands of cross-section and asymmetry measurements. The data sample consists of the world set of electron-positron interactions at the Z pole. The Z boson decays to all kinematically accessible fermion-antifermion pairs, i.e. all leptons and quarks except the top quark. Hence the collected data allow very detailed investigations of the properties of the Z boson and its couplings to fermions.

Combining the wealth of measurements has been a long and painstaking task. The large data sample has demanded advanced analysis techniques to reduce systematic measurement uncertainties in the sophisticated detectors to below the statistical precision. This is one of the main reasons for the long delay between the end of data-taking at Z-pole energies and the publication of this report. Any measurement used in the combined review had to have been published in a journal beforehand. Furthermore, to exploit the power of the combined data sets of the experiments, it was necessary to investigate how each measurement could be meaningfully and properly combined with other measurements, while accounting for correlated systematic effects.

Early in the LEP programme, the high-precision measurements resulting from complex analyses made it clear that a dedicated effort by experts was required to tackle such inter-collaborational aspects of the scientific work. This led to the formation of the LEP Electroweak Working Group (LEP-EWWG), the first of several LEP-wide working groups. The LEP-EWWG consists of members from the experimental collaborations and is responsible for properly combining both published and preliminary results of the LEP experiments. It makes use of the expertise of its members in scrutinizing the measurements for combination purposes, in particular in evaluating correlations between measurements. The group also maintains close contact with many theorists, who are advancing calculations of the many observables and their radiative corrections, thus reducing the theoretical uncertainties to the level required by the precision of the data. The great success of the LEP-EWWG has spawned similar efforts at other accelerators, for example, between experiments at B-factories and between experiments at Fermilab’s Tevatron.


One of the first and foremost combined measurements at the Z resonance concerns the mass and total decay width of the Z boson and the number of light neutrino species (figure 1). The determination of these quantities is based on total cross-sections measured accurately at precisely known centre-of-mass energies; here the LEP beam-energy calibration is crucial. In 1986, during the preparation of the LEP physics programme, it was estimated that the Z-boson mass and width could possibly be measured to an accuracy of about 50 MeV. Today, the Z-pole report shows that an accuracy nearly 25 times better has finally been achieved. The mass of the Z is now known with a relative precision of 2.3 × 10−5, MZ = 91187.5±2.1 MeV – approaching that of the Fermi constant – and the Z width is known to better than 1‰, ΓZ = 2495.2±2.3 MeV. Precision luminosity measurements for normalizing the total cross-sections were indispensable in determining the number of light neutrino species – and thus the number of fermion families – to better than 3‰ accuracy: Nν = 2.9840±0.0082, consistent with the three known families.
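The quoted relative precisions follow directly from the numbers above; a quick arithmetic check:

```python
# Combined LEP-1 results quoted in the text, in MeV.
mz, dmz = 91187.5, 2.1      # Z mass and its uncertainty
gz, dgz = 2495.2, 2.3       # Z total width and its uncertainty

rel_mz = dmz / mz           # relative precision of the mass, ~2.3e-5
rel_gz = dgz / gz           # relative precision of the width, below 1 per mille
```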

In measurements of Z decays to heavy quarks, beauty and charm, the SLD experiment, despite smaller Z statistics, has made competitive measurements by virtue of the small beam spot and beam-pipe size of the SLC. This allowed the vertex detector to be positioned very close to the interaction point, in turn leading to precision tagging of b and c quarks produced in Z decays. The LEP-EWWG is therefore collaborating intensively and successfully with colleagues from SLD in the area of heavy-quark production at the Z pole.


By measuring production cross-sections and forward-backward asymmetries, both for the inclusive hadronic final state and for identified charged-lepton and quark flavours, the experiments scrutinized the couplings between fermions and the Z boson in great detail. While the LEP experiments provided high-statistics measurements, SLD with its polarized beam made a unique contribution by measuring both left-right and left-right forward-backward asymmetries. With both sets of measurements, the effective vector and axial-vector coupling constants for leptons and quarks have now been determined with a precision several orders of magnitude better than before (figure 2). The comparison in terms of the effective electroweak mixing angle is shown in figure 3. The two most precise determinations of this quantity, based on the left-right asymmetry measured by SLD and the b-quark forward-backward asymmetry measured at LEP, differ by 3.2σ. Both measurements are still statistics-dominated, but is this the first hint of new physics or just a fluctuation?
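The size of this discrepancy is straightforward to reproduce. The central values and errors below are taken from the published Z-pole report rather than from the text above, and the two measurements are assumed uncorrelated:

```python
import math

# The two most precise determinations of the effective electroweak mixing angle
# (values as published in the Z-pole report; assumed uncorrelated here):
sin2_alr,  err_alr  = 0.23098, 0.00026   # SLD left-right asymmetry
sin2_afbb, err_afbb = 0.23221, 0.00029   # LEP b-quark forward-backward asymmetry

diff = sin2_afbb - sin2_alr
n_sigma = diff / math.hypot(err_alr, err_afbb)   # ~3.2 standard deviations
```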


The precision of the results is such that small deviations from the Born-term expectation are measured quantitatively. These electroweak radiative corrections are sensitive to all kinds of virtual particles, notably the top quark and the Higgs boson, neither of which is directly produced at Z-pole energies. Analysing the precision measurements within the framework of the Standard Model allowed, already in the early years of LEP running, good predictions of the mass of the top quark – a few years before the quark itself was discovered and its mass measured by the Tevatron experiments CDF and D0 in 1995 (figure 4). The close agreement between prediction and direct measurement is one of the greatest triumphs of particle physics. Similar agreement is found for the W-boson mass.


Based on this success in predicting the masses of heavy particles, the precision electroweak measurements are now also used to predict the mass of the as yet unobserved Higgs boson, in the framework of the Standard Model, in conjunction with measurements of the mass and width of the W boson at LEP-2 and the Tevatron and the mass of the top quark measured at the Tevatron. These analyses predict the Higgs boson to weigh at most a few hundred giga-electron-volts (figure 5), but we must wait for the Large Hadron Collider to show if this prediction is correct.


The Z-pole report has been in the works for the past six years, pushed forward by a team of editors: Richard Kellogg, Klaus Moenig, Günter Quast, Mike Roney, Peter Rowson, Pippa Wells and Martin Grünewald (chair of the LEP-EWWG and lead editor), and in the early stages Robert Clare and Roger Jones. Meetings on reviewing the status, discussing the draft, and planning the next steps were held every few months at CERN, with participants attending in person, by videoconference or by telephone. In fact, some of the editors have yet to meet each other in person – an event is foreseen later this year. This work proceeded in parallel with the regular LEP-EWWG work, involving many more physicists, which provides updated combinations of both published and preliminary results twice a year, for winter and summer conferences.

The effort of the LEP-EWWG will now focus on electron-positron collisions at centre-of-mass energies above the Z-pole – the LEP-2 running. These measurements test fermion-antifermion and boson-pair production at the highest possible energies, thereby investigating the properties of the charged W bosons – the mass, width and decay properties, as well as gauge couplings between the electroweak gauge bosons – in similar detail to that achieved for the Z boson. With the analyses using the available Z-pole data now concluded, the combined Z-pole results will stand for a long time, to be improved only if a future linear collider takes physics data at the Z resonance.
