Hard probes conference finds success in Portugal

The town of Ericeira on the Atlantic coast faces Cabo da Roca, the western limit of the European continent. It proved an inspiring setting for Hard Probes 2004, the first International Conference on Hard and Electromagnetic Probes of High Energy Nuclear Collisions.

The conference grew out of a series of Hard Probe Café meetings, the first of which was held in 1994 at CERN. The idea then was to form a collaboration of theorists and experimentalists interested in the interface between hard perturbative quantum chromodynamics (QCD) and relativistic heavy-ion physics. CERN’s Super Proton Synchrotron (SPS), with a beam energy of up to 200 GeV/nucleon, was the highest-energy heavy-ion facility at the time and hard processes were rare. But it was becoming clear that the use of penetrating hard probes – for example, high-mass lepton pairs and high-momentum photons – held promise for understanding the strongly interacting hot medium formed in heavy-ion collisions.

Subsequent experimental results from the SPS, and the commissioning of the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory, put hard processes in the focus of physicists’ attention. After meetings in Europe and the US, where the first published proceedings helped in planning experiments at RHIC and the Large Hadron Collider (LHC) at CERN, the Hard Probe Café could no longer accommodate all the enthusiasts. So Hard Probes 2004 was born, organized by Carlos Lourenço, Helmut Satz, João Seixas and Jorge Dias de Deus, and held on 3-10 November 2004 in the beautiful resort of Ericeira. The 120 or so participants did not have much free time to enjoy the sea breeze; the programme was intense as well as interesting, and local maritime advice underlined the importance of keeping the aim in mind (figure 1).

After a first day of lectures that were more pedagogically oriented, Krishna Rajagopal of MIT opened the conference by surveying what is known about the QCD phase diagram and its new states of matter, from quark-gluon plasma (QGP) to colour superconductors. Jochen Bartels of DESY recalled the parton formulation of high-energy interactions, addressing parton evolution and saturation. These aspects have led to major progress in the understanding of the initial conditions in heavy-ion collisions, forming a new approach to the physics of high-energy hadron and nuclear collisions: the colour glass condensate, reviewed by Edmond Iancu of Saclay and Raju Venugopalan of Brookhaven. Related percolation studies were presented by Carlos Pajares of Santiago de Compostela. It is becoming evident that QCD at high parton density can provide a common framework for describing different high-energy interactions, from deep inelastic scattering to relativistic nuclear collisions.

Probes with charm

One of the main topics discussed at the Hard Probe Café was the fate of heavy quarkonia – bound states of heavy quarks and anti-quarks – in hot quark-gluon matter. Around 20 years ago, Tetsuo Matsui and Helmut Satz predicted that at sufficiently high temperatures Debye screening in the quark-gluon plasma would lead to the dissociation of quarkonia. At the conference, Frithjof Karsch of Bielefeld surveyed the status of theoretical quarkonium studies; our understanding of the topic has progressed significantly following recent lattice QCD calculations, which were discussed by Tetsuo Hatsuda of Tsukuba, Peter Petreczky of Brookhaven and others.

The different binding energies and bound-state radii of the various quarkonia lead to different dissociation temperatures; while the higher excited charmonium states melt near the deconfinement point, the J/ψ (the cc̄ ground state) can survive up to higher temperatures. Such behaviour had been previously obtained from potential model studies, and had shown that the in-medium dissociation pattern of quarkonia constitutes a very effective tool for the study of quark-gluon plasma. It can now provide a direct way to relate QCD calculations to data collected from heavy-ion collisions.

The use of heavy quarks for the diagnostics of QCD matter depends of course on reliable computations of their yields in perturbative QCD; the status of these calculations was reviewed by Stefano Frixione of Genova and Ramona Vogt of Lawrence Berkeley National Laboratory (LBNL). The increase of heavy-quark production at high energies could in fact even lead to enhanced quarkonium yields, as Ralf Rapp of Texas A&M and Bob Thews of Arizona showed for different recombination and coalescence models. A further issue to be resolved is the possibility of initial state quarkonium dissociation by parton percolation, which was reviewed by Marzia Nardi of Torino.

The suppression of charmonium production in nuclear collisions was indeed observed at the SPS (figure 2). Louis Kluberg of CERN and Laboratoire Leprince-Ringuet reviewed the 20-year evolution and the final results of the pioneering NA38 and NA50 experiments. Further studies are being pursued at CERN by NA60, with improved detector capabilities, and at RHIC by PHENIX, where the much lower integrated luminosities, so far, limit the usefulness of the higher collider energies. The HERA-B collaboration presented recent results on χc production in proton-nucleus collisions at HERA, while Mike Leitch of Los Alamos reviewed several issues in quarkonium production. It is particularly puzzling that the ground state resonances J/ψ and Υ show complete absence of polarization, contrary to the expectations of non-relativistic QCD, while the excited states Υ(2S) and Υ(3S) show maximum transverse polarization.

The meeting also discussed measurements of heavy-flavour production. The STAR collaboration reported on open-charm measurements made by reconstructing the D0 → K−π+ hadronic decay in d-Au collisions at RHIC. The reconstruction of such hadronic decay modes is difficult to perform in heavy-ion collisions, owing to the high particle multiplicities. The single-electron transverse momentum spectrum provides an alternative, albeit indirect, measurement of charm production at RHIC energies. The charm production cross-sections currently derived from the PHENIX and STAR data differ by a factor of two. Effects that might cause this discrepancy are being investigated and improved results should be available soon.

Another promising direction is the use of electromagnetic probes – leptons and photons; their production has for a long time been considered one of the basic pieces of evidence for the formation of a quark-gluon plasma. There is great interest in these probes because they escape from the medium almost without any interactions, and thus carry valuable information about the early stages in the evolution of dense matter. Moreover, their emission rates can be calculated in lattice QCD as well as in perturbation theory, as discussed by Jean-Paul Blaizot of Saclay, Charles Gale of McGill and others. Rolf Baier of Bielefeld showed that parton-saturation effects also play a crucial role here.

New experimental information on dilepton production was presented by the NA60 experiment at CERN, which took proton-nucleus and In-In data in 2002 and 2003, respectively, with better statistics and mass resolution than previous measurements. Such “second-generation” data should answer some of the questions raised by results previously obtained by CERES at the SPS and lower-energy experiments (such as DLS at LBNL’s BEVALAC and KEK’s E235), reviewed by Itzhak Tserruya of the Weizmann Institute. Currently, PHENIX at RHIC cannot explore the physics of the low-mass dilepton continuum, given the overwhelming combinatorial background levels. This should be solved by a “hadron blind detector”, based on a proximity-focus Cherenkov detector, soon to be added to PHENIX.

Jets are another classic hard probe. Colliding beams of protons or heavy nuclei produce jets when partons from the incoming projectiles undergo hard scattering off each other and emerge from the reaction at large angles. In the early 1980s, James Bjorken proposed that jets would interact with the material generated by high-energy nuclear collisions in a way analogous to the more familiar interaction of charged particles in detector material. He suggested that this interaction would lead to energy loss in a quark-gluon plasma (jet quenching). Further theoretical analysis showed that gluon bremsstrahlung is an efficient way of dissipating jet energy to the medium, generating large and potentially observable differences between hot and cold strongly interacting matter.

Jets and RHIC

Jets are the hard probe par excellence at RHIC, where the collision energy is high enough to produce them in vast numbers. The first runs with gold beams at RHIC did indeed reveal strong modifications to jet structure, agreeing with the predictions of jet quenching in matter many times denser than cold nuclear matter. Heavy-ion physicists are now looking more deeply into jet-related measurements and interesting nuclear effects continue to emerge. The diversity and quality of the high-momentum-transfer data from the four RHIC experiments justified eight detailed talks. Data were presented from pp, d-Au and Au-Au collisions at the top RHIC centre-of-mass energy of 200 GeV, together with Au-Au measurements at 62.4 GeV, chosen to match the energy of CERN’s Intersecting Storage Rings (for which extensive pp collision data are available for comparison).

One of the key pieces of evidence for jet quenching is the strong suppression of high-momentum inclusive pion and charged particle production in the most central nuclear collisions, seen by all RHIC experiments and now provided by PHENIX for transverse momenta up to 14 GeV/c. It is crucial to crosscheck such measurements and theoretical calculations in simpler systems. Inclusive particle spectra at high transverse momentum in pp collisions are described well by perturbative QCD calculations, so that the reference spectra for measuring nuclear effects are well understood. Jets and hard photons at high momentum are generated by similar mechanisms, but direct photons should not lose energy in the nuclear medium, since they have no colour charge. Klaus Reygers of Münster showed that, at RHIC, direct photons are indeed produced at the rate expected from QCD calculations, while high-momentum pions are suppressed by a factor of five (figure 3).
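
The comparison underlying figure 3 is conventionally quantified by the nuclear modification factor, which divides the yield measured in nucleus-nucleus collisions by the pp yield scaled up by the mean number of binary nucleon-nucleon collisions:

```latex
% Nuclear modification factor: AA yield over the binary-scaled pp yield
R_{AA}(p_T) \;=\;
\frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}
     {\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}N_{pp}/\mathrm{d}p_T}
```

With this definition, R_AA = 1 means hard production scales as in a superposition of independent nucleon-nucleon collisions; the factor-of-five pion suppression corresponds to R_AA ≈ 0.2, while the direct photons are consistent with R_AA ≈ 1.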

On the theory side, Xin-Nian Wang of LBNL and Urs Wiedemann of CERN discussed partonic energy loss in matter, and showed that perturbative QCD calculations incorporating medium-induced bremsstrahlung can describe the main jet-related measurements. A key test is the variation of energy loss with collision energies. Jet quenching generates strong effects at the top RHIC energy of 200 GeV, but does it diminish at lower collision energy? Recently analysed Au-Au data from RHIC at 62.4 GeV found a hadron suppression similar to that at 200 GeV. Model calculations of jet quenching had predicted this, as the result of smaller overall energy loss convoluted with a softer underlying initial partonic spectrum.

It is therefore natural to look at the extensive data amassed by the fixed-target experiments at the SPS, with centre-of-mass energies of 17-20 GeV. Though jet production at SPS energies is rare, high-statistics data sets can probe the lower reaches of the hard-scattering regime. Until recently it was thought that in Pb-Pb collisions at the SPS, production of hadrons with high transverse momentum was enhanced, not suppressed. David D’Enterria of Columbia has re-examined the pp reference data used to measure the hadron production at the SPS, and concludes that its uncertainties were previously underestimated and that signs of jet quenching may indeed also be present at the SPS. This has spurred the SPS heavy-ion collaborations to re-analyse their old data, with more news to be expected by the summer of 2005.

Future prospects

Most of the Au-Au data from RHIC presented at the conference are from the 2002 run, with an integrated luminosity of 250 μb-1. The RHIC collaborations are still analysing the 2004 data set, with a much higher integrated luminosity (3.7 nb-1), and new results on jet physics and other rare probes are expected within a few months.

John Harris of Yale discussed the long-term future of RHIC, including upgrades to the major detectors and the addition of electron cooling to the accelerator, which will increase its luminosity for Au-Au by a factor of 10 (RHIC II). The theoretical interest in low-x forward physics was emphasized by Al Mueller of Columbia; this topic should also be high on RHIC’s agenda, as noted by Les Bland of Brookhaven. In 2008, the high-energy frontier in heavy-ion collisions will move to the LHC; Bolek Wyslouch of MIT, Andreas Morsch of CERN and Philippe Crochet of Clermont-Ferrand previewed the possibilities this will open up.

Further heavy-ion runs at the SPS could occur in parallel with operation of the LHC, as advocated by Hans Specht of Heidelberg, to profit from what seems to be an ideal collision energy for studying the transition to the QGP phase, combined with the high luminosities offered by fixed-target running. High-precision data from such runs could be available several years before the start of GSI’s Facility for Antiproton and Ion Research (FAIR), where heavy-ion collisions will be studied at up to 35 GeV/nucleon (data sets have been taken at the SPS at 20-200 GeV/nucleon).

The wealth of information presented at the meeting was summarized in three talks: on quarkonia and heavy flavours, by Enrico Scomparin of Torino; on jets and high-transverse-momentum physics, by Peter Jacobs of LBNL; and on electromagnetic probes, by Axel Drees of Stony Brook. Dmitri Kharzeev of Brookhaven summarized the theory presentations at the meeting, inspired by the venue’s history. When the Pope divided the unknown world between Portugal and Spain in the 1494 Treaty of Tordesillas, he drew a line in what he thought was an empty ocean; 10 years later, South America had been discovered and was being explored. Similarly, the boundaries of 10 years ago, between the old hadronic and the new partonic worlds in the phase diagram of strongly interacting matter, are now more complex and less sharp, thanks to impressive recent progress.

The focused programme, good attendance, spectacular location and extracurricular activities (including a concert of 18th-century popular music in the majestic Convento de Mafra) made this a memorable and successful meeting – one in a new conference series. The second will be held in the spring of 2006 in the San Francisco Bay Area, convened by physicists from Berkeley and Brookhaven. A third is already on the horizon, as Santiago de Compostela in northern Spain would like to welcome a pilgrimage of hard probe physicists.

Model suggests dark energy is an illusion

Arguably the most fascinating question in modern cosmology is why the universe is expanding at an accelerating rate. An original solution to this puzzle has been put forward by four theoretical physicists: Edward Kolb of Fermilab, Sabino Matarrese of the University of Padova, Alessio Notari of the University of Montreal, and Antonio Riotto of the Italian National Institute for Research in Nuclear and Subnuclear Physics (INFN)/Padova. Their study has been submitted to the journal Physical Review Letters.

In 1998, observations of distant supernovae provided detailed information about the expansion rate of the universe, demonstrating that it is accelerating. This can be interpreted as evidence of “dark energy”, a new component of the universe, representing some 70% of its total mass. (Of the rest, about 25% appears to be another mysterious component, dark matter, while only about 5% consists of the ordinary “baryonic” matter.) Other explanations include a modification of gravity at large distances and more exotic ideas, such as the presence of a dynamic scalar field referred to as “quintessence”.

Although the hypothesis of dark energy is fascinating and more appealing than the other explanations, it faces a serious problem. Attempts to calculate the amount of dark energy give answers much larger than its measured magnitude: more than 100 orders of magnitude larger, in fact.

Kolb and colleagues offer an alternative explanation, which they say is rather conservative. They propose no new ingredient for the universe; instead, their explanation is firmly rooted in inflation, an essential concept of modern cosmology, according to which the universe experienced an incredibly rapid expansion at a very early stage.

The new explanation, which the researchers refer to as the Super-Hubble Cold Dark Matter (SHCDM) model, considers what would happen if there were cosmological perturbations with very long wavelengths (“super-Hubble”) larger than the size of the observable universe. They show that a local observer would infer an expansion history of the universe that would depend on the time evolution of the perturbations, which in certain cases would lead to the observation of accelerated expansion. The origin of the long-wavelength perturbations is inflation, as, effectively, the visible universe is only a tiny part of the pre-inflation-era universe. The accelerating universe is therefore simply an impression due to our inability to see the full picture.

Of course, observation is the ultimate arbiter between theories. The SHCDM model predicts a different relationship between luminosity-distance and redshift than the dark-energy models do. While the two models are indistinguishable within current experimental precision, more precise cosmological observations in the future should be able to distinguish between them.

Belle discovers yet more new particles

The record performance of the KEK B-factory is currently supplying Belle with about 1 million BB̄ meson pairs per day. While data analyses on charge-parity (CP) violation and searches for new physics beyond the Standard Model continue, the vast amounts of accumulated data have helped another important aspect of Belle’s physics programme: the discovery of new particle states in the charm sector.

Recent additions to Belle’s new particle list are the Y(3940) and the charmed baryon Σc(2800), to be added to the ηc(2S), the D0*(2308), the D1(2427) and the X(3872) already discovered. This brings Belle’s total of new particles discovered to six.

Now it seems, however, that Belle’s new-particle tally may be seven. Last summer the collaboration reported strong evidence for a mass peak in the spectrum of particles recoiling against a J/ψ in electron-positron collisions with a similar mass to the Y(3940). Although the mass of this new peak is the same as that of the Y(3940) within errors (measurement errors are about 10 MeV for both observations), the Belle team is not yet convinced that these two states are the same and, for the time being at least, are referring to the new object as the X(3940).

The X(3940) mainly decays into D plus anti-D* mesons, as expected for charmonium states with this mass. The Y(3940), on the other hand, does not seem to follow this pattern and its preference to decay into an ω and a J/ψ is difficult to understand in the context of heavy quark potential models, which have explained the charmonium spectrum up to now. The Y(3940) may not therefore be an ordinary quark-antiquark meson, but rather a “hybrid state” – a meson comprising a charmed quark, an anti-charmed quark and a gluon.

Belle’s particle hunters have their work cut out as they try to pin down the identity of the new particles they have already observed, while more data – and opportunities for more discoveries – pour in faster and faster.

TWIST tests the Standard Model

Normal muon decay (μ+ → e+ νe ν̄μ) is an ideal process to investigate the electroweak interaction in the Standard Model. The reaction involves only leptons, obviating the need for uncertain strong-interaction corrections, thus making it a clean probe of the theory’s purely left-handed (V-A) structure. A high-precision determination of the parameters describing the muon-decay spectral shape explores physics possibilities beyond the Standard Model, for example involving right-handed interactions. The world’s most precise determination of these parameters has been the goal of the TRIUMF Weak Interaction Symmetry Test (TWIST) experiment. The collaboration has recently completed its first phase by directly measuring the muon-decay parameters ρ and δ, improving the Particle Data Group (PDG) values by factors of 2.5 and 2.9 respectively.

The distribution in energy and angle of positrons from polarized muon decay is described by the four “Michel parameters”. The spectrum’s isotropic part has a momentum dependence determined by ρ, plus an additional small term that is proportional to a second parameter, η. The asymmetry is proportional to a third parameter, ξ, multiplied by the muon polarization, Pμ, while a fourth parameter, δ, determines its momentum dependence. Within the Standard Model, these parameters are predicted to be ρ = 3/4, δ = 3/4, ξ = 1 and η = 0.
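
For orientation, the spectrum these parameters describe can be written in the standard form (to leading order, neglecting radiative corrections; the sign convention of the Pμ term depends on the muon charge), with x = Ee/Emax and x0 = me/Emax:

```latex
% Muon-decay positron spectrum in terms of the Michel parameters
\frac{\mathrm{d}^2\Gamma}{\mathrm{d}x\,\mathrm{d}\cos\theta}
\;\propto\;
x^2 \left[ 3(1-x) + \tfrac{2}{3}\rho\,(4x-3)
+ 3\eta\, x_0 \,\frac{1-x}{x}
+ P_\mu\,\xi \cos\theta \left( 1-x + \tfrac{2}{3}\delta\,(4x-3) \right) \right]
```

Inserting the Standard Model values ρ = δ = 3/4, ξ = 1 and η = 0 recovers the pure V-A prediction against which TWIST tests for deviations.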

TWIST uses beams of positive muons as they can be produced with high polarization. The high-intensity TRIUMF proton beam produces π+, some of which decay at rest at the surface of a production target to create a highly polarized “surface” muon beam with momentum 29.6 MeV/c, which is subsequently transported into a 2 T superconducting solenoid.

Most of the muon beam stops in a thin target, located at the centre of a symmetric array of 56 low-mass, high-precision planar drift-chambers. Limitations on final errors are dominated by systematic effects since the statistical precision is very high. The measured momentum and angular distribution of the decay positrons are shown in the figure. The drop in acceptance near |cos θ| = 0 is because of the poor reconstruction efficiency in that region. To extract the muon-decay parameters, a 2D fit is made to a fiducial region where the detector acceptance is uniform, using a blind-analysis technique. The results are based on 6 × 10⁹ muon decays, spread over 16 data sets. Four sets were analysed for both ρ and δ. A fifth set of low-polarized muons from pion decays in flight was also analysed for ρ. The remaining data sets, combined with further Monte Carlo simulations, were used to estimate the sensitivities to various systematic effects.

TWIST’s new measurement of ρ = 0.75080 ± 0.00032 (stat.) ± 0.00097 (syst.) ± 0.00023 (the last uncertainty due to the current PDG error in η) sets an upper limit on the mixing angle of a possible heavier right-handed partner to the W boson, |ζ| < 0.03 at 90% confidence level (c.l.). Combining ρ with the new measurement of δ = 0.74964 ± 0.00066 (stat.) ± 0.00112 (syst.), and the PDG value of Pμξδ/ρ, an indirect limit is set on Pμξ: 0.996 < Pμξ < 1.004 at 90% c.l. The lower limit slightly improves the limit on the mass of the possible right-handed boson, WR ≥ 420 GeV/c². Finally, an upper limit is found for the muon right-handed coupling probability, QμR < 0.00184 at 90% c.l.

Muon decay, combined with measurements from experiments at higher energies and in nuclear beta decay, helps our understanding of the asymmetry in the weak interaction’s handedness. In the future phases of the experiment, TWIST aims to produce a direct measurement of Pμξ with a precision of a few parts in 10⁴ and to increase its sensitivity to ρ and δ by approximately another factor of five.

Electrons reveal secrets of neutrinos

The US Department of Energy’s Thomas Jefferson National Accelerator Facility (JLab) is well known for its Continuous Electron Beam Accelerator Facility (CEBAF), where experiments with a 6 GeV electron beam probe nuclear structure. Now it turns out the same beam may also be helpful for neutrino research. Physicists from several neutrino projects around the world recently visited JLab to take electron-scattering data on carbon, hydrogen, deuterium and iron targets.

Precise knowledge of neutrino beams and neutrino interactions with atomic nuclei helps neutrino researchers analyse the results of their experiments. They gather this by participating in nuclear and high-energy physics experiments, a practice known as “neutrino engineering”. Examples are the HARP experiment at CERN, which measures pion-production cross-sections of protons on nuclear targets, and experiment E04-001 at JLab, which measures electron-nucleus cross-sections.

Electrons at CEBAF energies interact with nuclei predominantly via the electromagnetic force, while neutrinos interact via the weak force. However, precise information about the electron interaction provides information about the neutrino interaction, since the two forces are actually different aspects of the electroweak force. Electrons probe the vector structure of the nucleon, whereas neutrinos probe both the vector and axial-vector structure. So both probes are needed to understand the full electroweak structure of the nucleon and the nucleus.

The nuclear targets studied in the JLab experiment are the same as, or closely resemble, the production targets and detectors commonly used in neutrino experiments. Thus electron-scattering studies with nucleons and nuclei at low momentum-transfer-squared, Q², such as the data taken at JLab in the Q² range of 0.01-2 (GeV/c)², can provide information about how neutrinos interact in neutrino experiments. For example, since experiments such as K2K (KEK to Kamioka) in Japan and MiniBooNE at Fermilab use 1 GeV neutrino beams to study neutrino oscillations, electroweak analysis of 1 GeV electron-scattering data from E04-001 can be used as a first step to provide constraints on neutrino cross-sections needed in the study of neutrino oscillations.

In the long term, these neutrino physicists and JLab’s own researchers are collaborating in a future experiment, MINERvA (Main Injector Experiment ν-A), which is dedicated to measuring neutrino cross-sections in Fermilab’s NuMI (Neutrinos at the Main Injector) beam line. Combining the high-precision electron cross-section data from E04-001 with precise data on neutrino cross-sections from MINERvA should allow the axial structure of the nucleon to be extracted.

HESS detects mysterious high-energy sources in the Milky Way

The first detailed image of the central part of our galaxy at very-high-energy gamma rays shows several sources. Surprisingly, some of them do not have a known counterpart at radio, optical or X-ray wavelengths, so their nature is a complete mystery.

Gamma rays at tera-electron-volt energies are detected using the Earth’s atmosphere as a detector. The passage of such a photon through the upper atmosphere triggers a shower of relativistic electrons and positrons moving faster than the speed of light in the air, thus emitting Cherenkov radiation. This faint bluish light-flash can be detected at night by dedicated ground-based telescopes.
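
The condition behind this technique is simple: a charged particle radiates Cherenkov light only when its speed βc exceeds the phase velocity of light in the medium, and the light is emitted at a characteristic angle:

```latex
% Cherenkov threshold and emission angle in a medium of refractive index n
\beta > \frac{1}{n}, \qquad \cos\theta_c = \frac{1}{n\,\beta}
```

For air (n ≈ 1.0003 near sea level, smaller at altitude) and β ≈ 1, the angle θc is of order 1°, so the light from a shower arrives in a compact pool on the ground that the telescopes can image.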

Currently, the most sensitive Cherenkov telescope array is the European-African High Energy Stereoscopic System (HESS) located in the Namibian desert. It consists of four mirror telescopes 13 m in diameter placed at the corners of a square of side 120 m. Its image resolution of a few arc-minutes has enabled for the first time a map to be made at tera-electron-volt energies of the central part of our galaxy, the Milky Way.

The image published in the journal Science by Felix Aharonian and an international team of scientists reveals eight new sources of very-high-energy gamma rays in the central 60° of the disc of our galaxy. This essentially doubles the number of sources known at these energies. Three of the newly discovered sources could be associated with supernova remnants, two with giga-electron-volt gamma-ray sources discovered by the Energetic Gamma-Ray Experiment Telescope (EGRET) aboard the Compton Gamma-Ray Observatory, and in three cases an association with pulsar-powered nebulae such as the Crab Nebula is not excluded.

However, at least two of the sources discovered by HESS are not at a position where there is a possible counterpart. These could be members of a new class of “dark” particle accelerators.

Cosmic particle accelerators are believed to accelerate charged particles in strong shockwaves such as those produced when the gas expelled from a supernova hits the ambient interstellar medium. High-energy gamma rays are secondary products, which have probably been boosted to tera-electron-volt energies by ultra-relativistic electrons through the inverse Compton process. Gamma rays are easier to detect because they travel in straight lines from their source – unlike charged particles, which are deflected by magnetic fields in the galaxy. The discovery of new sources in the HESS survey of the galaxy therefore helps to solve the long-standing question of the origin of cosmic rays.

Further reading

F Aharonian et al. 2005 Science 307 1938.

The ice cube at the end of the world

The IceCube observatory is being built to detect extraterrestrial neutrinos with energies above 100 GeV. Neutrinos are attractive for high-energy astronomy because, unlike other probe particles, they are not absorbed in dense sources, and they travel in straight lines from their source. Charged cosmic rays (protons or nuclei) are bent by interstellar magnetic fields, and while photons fly straight, at energies above 10 TeV their interactions (by e+e− pair creation) with interstellar background photons limit their range.

The interaction cross-sections of neutrinos are tiny, so a huge detector is required. Calculations of neutrino production in many different types of sources show that a 1 km³ (1 Gt) detector is required to observe astrophysical signals. IceCube will observe neutrinos that interact in Antarctic ice at the South Pole, producing muons, electrons or tau particles. These leptons interact with the ice (and the tau also decays), producing additional charged particles. High-energy (peta-electron-volt) muons travel many kilometres in the ice, and IceCube will observe muons that traverse the detector. The charged particles emit optical Cherenkov radiation, which can travel hundreds of metres before being detected by IceCube’s digital optical modules (DOMs). The type of neutrino, its direction and its energy can then be reconstructed by measuring the intensity and arrival time of the light at many DOMs.

Work at the South Pole

The South Pole might seem like an unusual place to build a huge detector, but the Antarctic ice is very clear and very stable. Deep below the surface, the light-absorption length can exceed 250 m. Compared with seawater, the other medium used for such detectors, the ice has much lower levels of background radiation and a longer attenuation length, but more light is scattered.

Using the US South Pole station as a base for operations, deployment of the IceCube detector in the ice began on 15 January, when the first hole was started. A jet of water heated to 90 °C was used to melt the hole. The drill pumped 750 l/min of water from a 5 MW heater to reach a drilling speed of slightly over 1 m/min. Drilling this first hole, 2450 m deep and 60 cm in diameter, took about 52 h. Once the drilling was complete, a string of 60 DOMs was lowered into the hole, which took another 20 h. The DOMs are attached to the string every 17 m, between depths of 1450 and 2450 m. The water that remained in the hole took about two weeks to freeze.

The South Pole is very different from an accelerator laboratory, and logistics is a key issue for IceCube. Environmental conditions are rough, and manpower and working time are limited, so everything must be carefully engineered and tested before being shipped to the Pole. Everything must be flown in from the Antarctic coast on LC-130 turboprop aeroplanes equipped with skis. The drilling rig alone filled 30 flights, about an eighth of the annual capacity; fuel for the drill required another 25 flights. Many of the components were transported in pieces to the Pole. The reassembly time limited this inaugural drilling campaign to about 10 days.

The image below shows an early result of this hard work, a cosmic-ray muon in IceCube, in coincidence with an air shower observed by eight surface tanks that form part of the IceTop array above IceCube. These data were taken less than two weeks after deployment, showing that everything works “right out of the box”. At the time, many of the DOMs were not yet turned on. More recent tests have verified that all 60 DOMs are working.

CCEice3_05-05

This success owed much to the Antarctic Muon and Neutrino Detector Array (AMANDA), which preceded IceCube and had more than 650 modules. The AMANDA optical modules contained only a photomultiplier tube (PMT) and analogue signals were transmitted to the surface on the power cable; later versions used fibre-optic cables to transmit analogue signals. These schemes worked in AMANDA, which observed several thousand atmospheric neutrinos, but the approach required manpower-intensive calibrations and could not be scaled up for the much larger IceCube. The solution was “String 18”, a string of DOMs that, in addition to the AMANDA fibre readout, included electronics for locally digitizing the signals, and sending digital signals to the surface. The digital readout worked, and the DOM approach was adopted by IceCube. This advance was a key to reaching the 1 km³ scale.

Each DOM functions independently. Data collection starts when a photon is detected. The PMT output is collected with a custom waveform-digitizer chip, which samples the signal 128 times at 300 megasamples per second. Three parallel 10 bit digitizers combine to provide a dynamic range of 16 bits. Late-arriving light is recorded with a 40 MHz, 10 bit analogue-to-digital converter, which stores 256 samples (6.4 μs). These waveforms enable IceCube to reconstruct the arrival time of most detected photons. A large field-programmable gate array with an embedded processor controls the system, compresses the data and forms it into packets.
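The capture windows implied by those sample counts and rates follow from simple arithmetic; this sketch just recomputes the figures quoted above (the variable names are illustrative, not taken from the DOM firmware):

```python
# Capture windows of the two DOM digitizers, using the figures quoted above.

FAST_SAMPLES = 128     # custom waveform-digitizer chip
FAST_RATE_HZ = 300e6   # 300 megasamples per second
SLOW_SAMPLES = 256     # follow-up ADC for late-arriving light
SLOW_RATE_HZ = 40e6    # 40 MHz

fast_window_ns = FAST_SAMPLES / FAST_RATE_HZ * 1e9   # fast window, in nanoseconds
slow_window_us = SLOW_SAMPLES / SLOW_RATE_HZ * 1e6   # slow window, in microseconds

print(f"fast window: {fast_window_ns:.0f} ns")   # ~427 ns for prompt light
print(f"slow window: {slow_window_us:.1f} us")   # 6.4 us, as quoted in the text
```

The two windows are complementary: the fast digitizer resolves the structure of the prompt Cherenkov light, while the slower ADC extends coverage to photons that arrive microseconds later after scattering in the ice.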

The entire DOM uses only 5 W of power. Adjacent DOMs communicate via local-coincidence cables, allowing for possible coincidence triggers. Data are transmitted to the surface over the DOM power cables, and at the surface string-level and array-wide trigger conditions are applied. The data are stored locally and selected samples are transmitted to the Northern Hemisphere via satellite link.

CCEice4_05-05

A surface cosmic-ray air-shower detector array, IceTop, forms part of IceCube. IceTop will eventually consist of 160 ice-filled tanks 2 m in diameter, distributed over 1 km². The tanks are similar to the water tanks used in other air-shower arrays – such as at Haverah Park in the UK, the Milagro Gamma Ray Observatory in the US and the Pierre Auger Observatory in Argentina. Each tank contains two DOMs frozen in the ice. DOMs in each pair of tanks are connected via local coincidence signals, providing a simple local trigger. Part of IceTop was the first piece of IceCube to be deployed, with eight tanks installed in December 2004.

IceTop will serve several functions: tagging IceCube events that are accompanied by cosmic-ray air showers; studying the cosmic-ray composition up to around 10¹⁸ eV (correlating IceTop showers with IceCube muons); and serving as a calibration source that directionally tags the cosmic-ray muons that reach IceCube.

One big problem in large arrays is measuring the relative timing between separated detector elements. IceCube solved this with “RapCal”, a timing calibration whereby signals are sent down the cables from the surface and then retransmitted back to the surface. Accuracy is maintained by using identical electronics at the two ends of the cables. In IceCube, laboratory measurements and early data show that the local DOM clocks are kept calibrated to about 2 ns.
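The round-trip scheme resembles the symmetric-delay trick used in network time synchronization. A minimal sketch of the idea, under the stated assumption that the up-going and down-going cable delays are equal (the timestamps and function are illustrative, not the actual RapCal implementation):

```python
def clock_offset(t1_surface_tx, t2_dom_rx, t3_dom_tx, t4_surface_rx):
    """Estimate a DOM clock's offset from the surface clock.

    t1 and t4 are timestamps in surface-clock time; t2 and t3 are in
    DOM-clock time. With identical electronics at both ends of the
    cable, the one-way delay can be assumed the same in each direction.
    """
    # Round-trip time minus the DOM's turnaround time, halved.
    delay = ((t4_surface_rx - t1_surface_tx) - (t3_dom_tx - t2_dom_rx)) / 2.0
    # The DOM saw the pulse at t2; the surface clock read t1 + delay then.
    offset = t2_dom_rx - (t1_surface_tx + delay)
    return offset, delay

# Illustrative numbers in microseconds: a 10 us one-way cable delay
# and a DOM clock running 3 us ahead of the surface clock.
offset, delay = clock_offset(t1_surface_tx=0.0, t2_dom_rx=13.0,
                             t3_dom_tx=15.0, t4_surface_rx=22.0)
```

With these numbers the function recovers the 10 μs delay and the 3 μs offset; any asymmetry between the two cable directions would appear directly as an offset error, which is why identical electronics at both ends matter.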

CCEice5_05-05

The environment at the South Pole motivated extensive reliability engineering and pre-deployment testing. The extended temperature range – down to -55 °C – was a challenge for the selection of parts and for design verification. Indeed, IceTop may reach temperatures below -55 °C, beyond the design range of any electronic components. Reliability estimates were also challenging. Conventional models predict that the failure rate halves for each 10 °C drop in temperature; according to these models, IceCube will last forever.
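The rule of thumb quoted above can be turned into a quick estimate. The 25 °C reference temperature in this sketch is an assumption chosen for illustration, not a figure from the IceCube reliability analysis:

```python
def failure_rate_factor(temp_c, ref_temp_c=25.0, halving_step_c=10.0):
    """Relative failure rate under the model in which the rate
    halves for every 10 degC drop below a reference temperature."""
    return 2.0 ** ((temp_c - ref_temp_c) / halving_step_c)

# At -55 degC, 80 degC below the assumed 25 degC reference, the model
# predicts a failure rate lower by 2**8 = 256 -- hence the joke that
# such models imply IceCube will last forever.
factor = failure_rate_factor(-55.0)
```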

The physics of IceCube

IceCube will study many physics topics, but the major objective is high-energy neutrino astronomy. Any source that accelerates protons or heavier ions (cosmic rays) also produces neutrinos. The accelerated particles will collide with other nuclei, producing hadronic showers. Pions and kaons in the shower will decay, emitting neutrinos. Cosmic rays have been observed with energies up to 3 × 10²⁰ eV and the neutrino spectrum should extend to a few per cent of this energy. The neutrino flux is model-dependent, but most calculations predict that a 1 km³ detector should see at least a handful of events each year.

There are several likely astrophysical sources of neutrinos. These include active galactic nuclei (AGNs), gamma-ray bursters (GRBs) and supernova remnants. AGNs are galaxies with massive black holes at their centres, which can power a jet of relativistic particles. Calculations based on the observed flux of photons at energies of tera-electron-volts show that IceCube should observe neutrinos from AGNs. GRBs are mysterious objects that produce bursts of high-energy gamma rays. They are associated with objects in galaxies, including hypernovae (very large supernovae) and colliding neutron stars. Some calculations suggest that IceCube should see a handful of neutrinos from a single GRB, which would be a striking result. Supernova remnants such as the Crab Nebula are the likely source of most cosmic rays of moderate energy in our galaxy. If this is correct, they must also produce neutrinos.

Neutrinos also probe cosmic rays more directly. Ultra-high-energy (above 5 × 10¹⁹ eV) protons interact with relic microwave photons from the big bang. These protons are excited into Δ resonances, which decay to lower-energy nucleons and pions. Subsequent pion and neutron decays then produce neutrinos. The proton energy-loss limits the range of very energetic protons; this is known as the Greisen-Zatsepin-Kuzmin (GZK) cutoff. Photo-dissociation plays a similar role for heavier ions, limiting their range and producing neutrinos. By measuring the ultra-high-energy neutrino spectrum, IceCube will probe the cosmic-ray composition and the possible evolution (with redshift) of energetic cosmic-ray sources.
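The scale of the cutoff follows from simple kinematics: for a head-on collision with a microwave photon of energy E_γ, exciting a Δ resonance requires a proton energy of roughly (m_Δ² − m_p²)/(4E_γ). A rough sketch (the choice of photon energy is an assumption; the quoted cutoff near 5 × 10¹⁹ eV emerges only from a full integration over the thermal photon spectrum):

```python
# Order-of-magnitude threshold for p + gamma_CMB -> Delta(1232).
M_DELTA_EV = 1.232e9    # Delta(1232) mass in eV
M_PROTON_EV = 0.9383e9  # proton mass in eV
E_GAMMA_EV = 1.1e-3     # an energetic CMB photon (kT is about 2.35e-4 eV)

# Head-on threshold for an ultra-relativistic proton:
e_threshold = (M_DELTA_EV**2 - M_PROTON_EV**2) / (4 * E_GAMMA_EV)
# ~1e20 eV: the right order of magnitude for the GZK suppression
```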

Besides being two orders of magnitude larger, IceCube has several advantages over experiments such as AMANDA and the array in Lake Baikal. IceCube is optimized for higher-energy neutrinos (especially above 1 TeV), where the atmospheric neutrino background is lower. The high detector granularity will allow IceCube to study electron-neutrinos and tau-neutrinos as well as muon-neutrinos. The electron-neutrinos produce blob-like electromagnetic showers, which contrast strongly with long muon tracks; the latter can extend for many kilometres. Above 10¹⁵ eV, tau-neutrinos are identifiable through their distinctive “double-bang” signature – an initial shower from the tau-neutrino interaction and the single track of a tau particle, which eventually decays, producing a second shower.

IceCube will study many other physics topics. Over a decade, it will observe about 1 million atmospheric neutrinos, enough to search for deviations from the standard three-flavour scenario for neutrino oscillations. The IceCube collaboration will also look for violations of the Lorentz and equivalence principles, and will search for neutrinos produced by the annihilation of weakly interacting massive particles that have been gravitationally captured by the Earth or the Sun. Because of the very low dark-noise rates (about 800 Hz per DOM), IceCube can detect bursts of low-energy neutrinos from collapsing supernovae. The detector will also contribute to glaciology, studying the dust layers that record the Earth’s weather over the past 200,000 years.

• The IceCube collaboration consists of more than 150 scientists, engineers and computer scientists from the US, Belgium, Germany, Japan, the Netherlands, New Zealand, Sweden and the UK. IceCube is funded by a $242 million Major Research Equipment Grant from the US National Science Foundation, plus approximately $30 million from European funding agencies.

Further reading

J Ahrens et al. 2004 Astropart. Phys. 20 507.
E Andres et al. 2001 Nature 410 6827.
See also www.icecube.wisc.edu.

Physics in the Italian Alps

Now in its 19th year, the Rencontres de Physique de la Vallée d’Aoste is known for being a vibrant winter conference, where presentations of new results and in-depth discussions are interlaced with time for skiing. Taking place in La Thuile, a village on the Italian side of Mont Blanc, it consistently attracts a balanced mix of young researchers and seasoned regulars from both theoretical and experimental high-energy physics. The 2005 meeting, which took place from 27 February to 5 March, was no exception.

CCEalp1_05-05

As well as the standard sessions on particle physics, cosmology and astrophysics typical for such a conference, the organizers always try to include a round-table session on a topical subject, as well as a session on a wider-interest topic that tackles the impact of science on society. This year, the first of these sessions was Physics and the Feasibility of High-Intensity, Medium-Energy Accelerators, and the second was The Energy Problem.

Dark energy, WIMPs and cannon balls

An increasing number of experiments are trying to answer questions in high-energy physics by taking to the skies, blurring the distinction between particle physics and astronomy. The first session of the conference presented an impressive array of experiments and results, ranging from gravitational-wave detection to gamma-ray astronomy. The team working on the Laser Interferometer Gravitational-Wave Observatory (LIGO), with two fully functioning antennas 3000 km apart, now understands the systematics and has begun the fourth period of data-taking with improved sensitivity.

In gamma-ray astronomy, ground-based detectors – which detect the Cherenkov light emitted when gamma-ray-induced particle showers traverse the atmosphere – are constantly improving. The High Energy Stereoscopic System (HESS) in Namibia became fully operational in 2004 with a threshold of 100 GeV, while new detectors with thresholds as low as 20 GeV are in the pipeline. Satellite-based gamma-ray detectors have also provided some excitement, with the Energetic Gamma Ray Experiment Telescope (EGRET) observing an excess of diffuse gamma rays above 1 GeV, uniformly distributed over all directions in the sky.

CCEalp2_05-05

This excess could be interpreted as due to the annihilation of neutralinos. The neutralino is the supersymmetric candidate of choice as a weakly interacting massive particle (WIMP) – a popular option for the dark matter of the universe. This prompted Dmitri Kazakov of the Institute for Theoretical and Experimental Physics (ITEP), Moscow, to state that “dark matter is the supersymmetric partner of the cosmic microwave background”, since neutralinos can be thought of as spin-½ photons.

The Gamma-Ray Large Area Space Telescope (GLAST) satellite, launching in 2007, will offer an important improvement in gamma-ray astronomy, with sensitivity to 10,000 gamma-ray sources compared with EGRET’s 200.

The DAMA/NaI collaboration raised some eyebrows. It reported an annual modulation of 6.3σ significance in data observed over seven years in its nuclear-recoil experiment at the Gran Sasso National Laboratory, which stopped taking data in 2002. This modulation could be interpreted as due to a WIMP component in the galactic halo, which is seen from Earth as a “wind” with different speeds, depending on the annual cycle. The collaboration’s study of possible backgrounds has not identified any process that could mimic such a signal, but other experiments have not observed a similar effect. The new set-up, DAMA/LIBRA, which is more than twice as big and started taking data in 2003, might shed some light.

Another way of looking for WIMPs is through their annihilations, which produce antimatter. Antimatter is not produced in large quantities in standard astrophysical processes, so any observed excess would be exciting news for WIMP searchers. The Payload for Antimatter Matter Exploration and Light-Nuclei Astrophysics (PAMELA) satellite, due to be launched later this year, will provide valuable data on antiproton and positron spectra.

Alvaro De Rújula of CERN, using traditional (and increasingly rare) coloured transparencies written by hand, gave an account of his theory of gamma-ray bursts (GRBs), which has now developed into a theory of cosmic rays. Central to the theory are the cosmic “cannon balls”, objects ejected from supernovae with a density of one particle per cubic centimetre, and with a mass similar to that of the planet Mercury but a radius similar to that of the orbit of Mars. These cannon balls, moving through the interstellar medium at high speeds (with initial γ factors of the order of 1000), not only explain GRBs and their afterglows in a simple way, but also explain all features of cosmic-ray spectra and composition, at least semi-quantitatively, without the need to resort to fanciful new physics. What the theory does not attempt to explain, however, is how cannon balls are accelerated in the first place.

Dark energy was reviewed by Antonio Masiero of the University of Padova. Masiero pointed out that theories that do not associate dark energy with the cosmological constant do exist. One can assume, for instance, that general relativity does not hold over very long distances, or that there is some dynamical explanation, like an evolving scalar field that has not yet reached its state of minimum energy (known as a quintessence scalar field), or even that dark energy is tracking neutrinos. With the latter assumption, he came to the interesting conclusion that the mass of the neutrinos depends on their density, and therefore that neutrino mass changes with time. The cosmological constant or vacuum-energy approach, however, offers the less exotic explanation of dark energy.

Finally, Andreas Eckart of the University of Cologne reviewed our knowledge of black holes, with emphasis on the massive black hole at the centre of our own galaxy, Sagittarius A*. He played an impressive time sequence of observations taken over 10 years of the vicinity of this black hole, showing star orbits curving around it.

The golden age of neutrino experiments

The neutrino session began with Guido Altarelli of CERN, who reviewed the subject in some depth. Although impressive progress has been made during the past decade, there are unmeasured parameters that the new generation of experiments must address. The Antarctic Muon and Neutrino Detector Array (AMANDA), which uses the clean ice of the South Pole for neutrino detection, reported no signal from its search for neutrino point-sources in the sky, but the collaboration is already excited about its sequel, IceCube.

The Sudbury Neutrino Observatory (SNO) collaboration has added salt to its apparatus, to increase the detection efficiency by nearly a factor of three compared with the earlier runs. Analysis yields slightly smaller errors on Δm₁₃ than K2K (KEK to Kamioka), the long-baseline experiment in Japan, which reported on the end of data-taking. K2K is now handing over to the Main Injector Neutrino Oscillation Search (MINOS) in the US, which had recorded the first events in its near detector just in time for the conference. MINOS is similar in conception to K2K, but has a magnetic field in its fiducial volume – the first time in such an underground detector – and it will need three years of data-taking to provide competitive results.

The director of the Gran Sasso National Laboratory, Eugenio Coccia, gave a status report of the activities of the laboratory, which is undergoing an important safety and infrastructure upgrade following a chemical leak. The laboratory is the host of a multitude of experiments on neutrino and dark-matter physics. These include the Imaging Cosmic And Rare Underground Signals (ICARUS) and Oscillation Project With Emulsion Tracking Apparatus (OPERA) experiments for the future CERN Neutrinos to Gran Sasso (CNGS) project and Borexino, which is the only experiment other than KamLAND in Japan that can measure low-energy solar neutrinos. The laboratory also houses neutrinoless double-beta-decay experiments.

Strong, weak and electroweak matters

In the session on quantum chromodynamics, Michael Danilov of ITEP had the unenviable task of reviewing the numerous experiments that have looked for pentaquarks. In recent years, there have been 17 reports of a pentaquark signal and 17 null results. Danilov justified his sceptical approach by pointing out various problems with the observed signals. The small width of the Θ+ is very unusual for strong decays. Moreover, this state has not been seen at the Large Electron Positron (LEP) collider, although this fact can be circumvented by assuming that the production cross-section falls with energy. However, the Belle experiment at KEK does not see the signal either, weakening the cross-section argument. The Θc is seen by the H1 experiment at HERA, but not by ZEUS or by the Collider Detector at Fermilab (CDF). Finally, many experiments have not seen the Ξ signal. Although Danilov thinks that the statistical significance of the reported signals has been overestimated, it is still too large to be a statistical fluctuation. The question will only be settled by high-statistics experiments coming soon.

Amarjit Soni of Brookhaven summarized our knowledge of charge-parity (CP) violation by emphasizing the success of the B-factories, the fact that the Cabibbo-Kobayashi-Maskawa paradigm is confirmed, and that we now know how to determine the unitarity triangle angles α and γ, as well as the previously known angle β.

The electroweak session began with a report on new results from LEP, which shows no sign of having said its final word yet. The running of αQED has been the subject of a new analysis of Bhabha events at LEP. The results from the OPAL experiment, recently submitted for publication, give the strongest direct evidence for the running of αQED ever achieved in a single experiment, with a significance above 5σ. Regarding the W mass, the combined error for LEP now stands at 42 MeV, while at the Tevatron, Run II data are being analysed and the error from CDF from 200 pb⁻¹ of data (a third of the data collected so far) is already smaller than their published Run I result. The Tevatron collaborations expect to achieve a 30-40 MeV error on the W mass with 2 fb⁻¹ of data. The search is on for the Higgs particle at Fermilab, with a new evaluation of the Tevatron’s reach: for a low-mass Standard Model Higgs, the integrated luminosity needed for discovery (5σ) is 8 fb⁻¹; evidence (3σ) needs 3 fb⁻¹, while exclusion up to 130 GeV needs 4 fb⁻¹.

From high intensity to future physics

The round-table discussion on physics and the feasibility of high-intensity, medium-energy accelerators was chaired by Giorgio Chiarelli of the University of Pisa, and after a short introduction he asked the panel members for their views. Pantaleo Raimondi of Frascati gave an overview of B and φ factories and Gino Isidori, also of Frascati, pointed to a series of interesting measurements that could be performed by a possible upgrade to the Double Annular Ring For Nice Experiments (DAFNE) set-up at Frascati, where the time schedule would be a key point.

Francesco Forti of Pisa discussed the possibility of a “super B-factory”. He noted that by 2009, 1 ab⁻¹ of B-physics data will be available around the world, and to have a real impact any new machine would need to provide integrated luminosities of the order of 50 ab⁻¹. Roland Garoby of CERN talked about a future high-intensity proton beam at CERN, where the need for a powerful proton driver, a necessary building block of future projects, has been identified. Finally, Franco Cervelli of Pisa reviewed the high-intensity frontier, including prospects for the physics of quantum chromodynamics, kaons, the muon dipole-moment and neutrinos. A lively debate followed.

In the interesting science and society session on alternative energy sources, Duarte Borba of the Instituto Superior Tecnico of Lisbon gave a detailed account of ITER, the prototype nuclear-fusion reactor that is expected to be the first of its kind to generate more energy than it consumes. ITER is designed to fuse deuterium (obtained from water) with tritium produced in situ from lithium bombarded with neutrons, creating helium and releasing energy, carried mainly by the neutrons, that is captured through heat exchangers. It is hoped that this ambitious project, with its many engineering challenges, will pave the way for commercial fusion-power plants.

This talk was followed by presentations on geothermal, solar, hydroelectric and wind energy, covering a wide spectrum of renewable energy resources. It was clear from the presentations that the problem of future energy production is complicated, and a clear winner has yet to emerge from these alternative energy sources.

In the session on physics beyond the Standard Model, Andrea Romanino of CERN did not make many friends among the community working towards the Large Hadron Collider (LHC) at CERN. He stated that “split supersymmetry” – a variation of supersymmetry (SUSY) that ignores the naturalness criterion – pushes the SUSY scale (and any SUSY particles) beyond reach of the LHC, although within reach of a future multi-tera-electron-volt collider.

Fabiola Gianotti of CERN appeared undeterred. She closed the session and the conference by giving a taste of the first data-taking period to come at the LHC. She reminded the audience that for Standard Model processes at least, one typical day at the LHC (at a luminosity of 10³³ cm⁻² s⁻¹) is equivalent to 10 years at previous machines.

• The conference series is organized by Giorgio Bellettini and Giorgio Chiarelli of the University of Pisa and Mario Greco of the University of Rome.

First neutrinos head for MINOS

The Main Injector Neutrino Oscillation Search (MINOS) experiment was officially inaugurated in a ceremony at Fermilab on 4 March. MINOS is the latest weapon in the arsenal of neutrino-oscillation searches. Its main goal is to measure the largest difference in mass-squared between different neutrino species (Δm²₂₃) with an accuracy of 10% – more than a factor of two better than it is known today.

CCEnew5_04-05

MINOS takes over from the KEK to Kamioka (K2K) experiment in Japan, which has finished taking data with a similar set-up. The unique feature of MINOS, however, is its 1.5 T magnetic field. This enables the experiment to distinguish positively and negatively charged tracks and hence discriminate between neutrinos and antineutrinos.

MINOS uses a neutrino beam produced by Fermilab’s Neutrinos at the Main Injector (NuMI) facility, where 120 GeV protons from the Main Injector hit a graphite target, producing hadrons including pions. A “horn” focusing system selects positive pions, which then decay in a 700 m-long decay pipe. After passing through a beam absorber, the beam comprises mostly muon neutrinos. An important advantage of the system is that the energy of the neutrinos can be tuned by moving the horn focusing system.

The neutrino beam is aimed at the MINOS “far” detector, located in the Soudan Underground Laboratory in northeastern Minnesota, some 730 km from Fermilab. The laboratory is 700 m underground in an old iron mine. To reduce systematic errors, a “near” detector 1 km from the target measures the beam composition and neutrino energy spectrum directly. It is essentially a miniature of the 6000 t far detector.

An important milestone was reached on 4 December 2004, when the first beam reached the target hall. The horns were powered in January and the near detector has already recorded its first events.

• MINOS is a collaboration of 200 scientists, engineers, technical specialists and students from 32 institutions in Brazil, France, Greece, Russia, the UK and the US.

Exploiting the synergy between great and small

There are a number of astrophysical phenomena, notably in connection with cosmology and ultrahigh-energy cosmic rays, that open a new window onto particle physics and lead to a better microscopic understanding of matter, space and time. On the other hand, particle physics is often exploited in depth for an ultimate understanding of astrophysical phenomena, in particular the structure and evolution of the universe. These frontier-physics issues attracted a record number of 188 participants to Hamburg for the latest annual DESY Theory Workshop, held on 28 September – 1 October 2004 and organized by Georg Raffelt.

CCEexp1_04-05

The workshop started with the traditional day of introductory lectures aimed at young physicists, which covered the main topics of the later plenary sessions. Most of the participants jumped at this opportunity. At the end of the day, they had learned much about Big Bang cosmology, including the thermal history of the universe; about the evolution of small fluctuations in the early universe, and their imprints on the cosmic microwave background (CMB) radiation and the large-scale distribution of matter; and about how these initial fluctuations may emerge during an inflationary era of the universe. They were also up to date in ultrahigh-energy cosmic-ray physics. Thus the ground was laid for the workshop proper.

Highlighting the dark

In recent years, significant advances have been made in observational cosmology, as several plenary talks emphasized. Observations of large-scale structure, deep-field galaxy counts and Type Ia supernovae favour a universe that is currently about 70% dark energy – accounting for the observed accelerating expansion of the universe – and about 30% dark matter. The position of the first Doppler peak in recent measurements of the CMB radiation, by for example the Wilkinson Microwave Anisotropy Probe (WMAP) satellite, strongly suggests that the universe is spatially flat. These values for the cosmological parameters, together with today’s Hubble expansion rate, are collectively known as the “concordance” model of cosmology, for they fit a wide assortment of cosmological data. Indeed, we have entered the era of precision cosmology, with the precision set to continue increasing in the coming decade as a result of further observational efforts. It is now the turn of theoretical particle physicists to explain these cosmological findings, in particular why the dominant contribution to the energy density of the present universe is dark and what it is made of microscopically.

Dark matter

Successful Big Bang nucleosynthesis requires that about 5% of the energy content of the universe is in the form of ordinary baryonic matter. But what about the remaining non-baryonic dark matter?

CCEexp2_04-05

This 25% cannot be accounted for in the Standard Model of particle physics: the only Standard Model candidates for dark matter, the light neutrinos, were relativistic at the time of recombination and therefore cannot explain structure formation on small galactic scales. Studies of the formation of structure – as observed today by the Sloan Digital Sky Survey, for example – from primordial density perturbations measured in the CMB radiation yield an upper bound of about 2% on the energy fraction in massive neutrinos. This translates into an upper bound of around 1 eV for the sum of the neutrino masses. Observations, by means of the forthcoming Planck satellite, of distortions in the temperature and polarization of the CMB will improve the sensitivity to the sum of the neutrino masses by an order of magnitude, to 0.1 eV. This is comparable to the sensitivity of the future Karlsruhe Tritium Neutrino Experiment (KATRIN), which will measure the neutrino mass via the tritium beta-decay endpoint spectrum, and of the planned second-generation experiments on neutrinoless double beta-decay.
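The step from a 2% energy fraction to a mass bound of around 1 eV uses the standard relation Ω_ν h² = Σm_ν / 93 eV. A quick check of the arithmetic (the Hubble parameter h ≈ 0.7 used here is an assumption, consistent with the concordance values):

```python
# Translate a bound on the neutrino energy fraction into a mass bound.
OMEGA_NU_MAX = 0.02      # upper bound on the neutrino energy fraction (from the text)
H = 0.7                  # assumed Hubble parameter, in units of 100 km/s/Mpc
EV_PER_OMEGA_H2 = 93.14  # sum(m_nu) = 93.14 eV * Omega_nu * h^2

sum_mnu_max_ev = OMEGA_NU_MAX * H**2 * EV_PER_OMEGA_H2
# ~0.9 eV: "around 1 eV", as stated above
```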

In theories beyond the Standard Model, there is no lack of candidates for the dominant component of dark matter. Notable viable candidates are the lightest supersymmetric partners of the known elementary particles, which arise in supersymmetric extensions of the Standard Model: the neutralinos, which are spin-½ partners of the photon, the Z-boson and the neutral Higgs boson, and the gravitino, the spin-3/2 partner of the graviton. Showing that one of these particles accounts for the bulk of dark matter would not only answer a key question in cosmology, but would also shed new light on the fundamental forces and particles of nature.

CCEexp3_04-05

While ongoing astronomical observations will measure the quantity and location of dark matter to greater accuracy, the ultimate determination of its nature will almost certainly rely on the direct detection of dark-matter particles through their interactions in detectors on Earth. Second-generation experiments such as the Cryogenic Dark Matter Search II (CDMS II) and the Cryogenic Rare Event Search with Superconducting Thermometers II (CRESST II), which are currently being assembled, will provide a serious probe of the neutralino as a dark-matter candidate.

Complementary, but indirect, information can be obtained from searches for neutrinos and gamma rays from neutralino-antineutralino annihilation, coming from the direction of particularly dense regions of dark matter, for example in the central regions of our galaxy, the Sun or the Earth. Ultimately, however, the proof of the existence of dark matter and the determination of its particle nature will have to come from searches at accelerators, notably CERN’s Large Hadron Collider (LHC). Even the gravitino, which is quite resistant to detection in direct and indirect dark-matter searches because it interacts only very feebly through the gravitational force, can be probed at the LHC.

Dark energy

In contrast with dark matter, dark energy has so far no explanation in particle physics. Apart from the observed accelerated expansion, the fact that we seem to be living at a special time in cosmic history, when dark energy appears only recently to have begun to dominate dark and other forms of matter, is also puzzling. Explanations put forth for dark energy range from the energy of the quantum vacuum to the influence of unseen space dimensions. Popular explanations invoke an evolving scalar field, often called “quintessence”, with an energy density varying in time in such a way that it is relevant today. Such an evolution may also be linked to a time variation of fundamental constants – a hot topic in view of recent indications of shifts in the frequencies of atomic transitions in quasar absorption systems, which hint that the electromagnetic fine-structure constant was smaller 7-11 billion years ago than it is today.

Depending on the nature of dark energy, the universe could continue to accelerate, begin to slow down or even recollapse. If this cosmic speed-up continues, the sky will become essentially devoid of visible galaxies in only 150 billion years. Until we understand dark energy, we cannot comprehend the destiny of the universe. Determining its nature may well lead to important progress in our understanding of space, time and matter.

The first order of business is to establish further evidence for dark energy and to discern its properties. The gravitational effects of dark energy are determined by its equation of state, i.e. the ratio of its pressure to its energy density. The more negative its pressure, the more repulsive the gravity of the dark energy. The dark energy influences the expansion rate of the universe, which in turn governs the rate at which structure grows, and the correlation between redshift and distance. Over the next two decades, high-redshift supernovae, counts of galaxy clusters, weak-gravitational lensing and the microwave background will all provide complementary information about the existence and properties of dark energy.
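The link between pressure and repulsion can be made explicit with the acceleration equation, ä/a ∝ −(ρ + 3p): a single component with equation of state w = p/ρ drives accelerated expansion when w < −1/3. A minimal sketch (units and normalization are arbitrary; this is an illustration, not a cosmology code):

```python
def accelerates(w, rho=1.0):
    """True if a single component with equation of state w = p/rho gives
    positive acceleration, since a_dotdot/a is proportional to -(rho + 3p)."""
    p = w * rho
    return (rho + 3.0 * p) < 0.0

# Matter (w = 0) decelerates the expansion; a cosmological constant
# (w = -1) accelerates it; the crossover sits at w = -1/3.
```

The more negative w, the larger the repulsive term, which is why measuring the equation of state discriminates between a cosmological constant (w exactly −1) and evolving alternatives such as quintessence.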

Inflationary ideas

The inflationary paradigm that the very early universe underwent a huge and rapid expansion is a bold attempt to extend the Big Bang model back to the first moments of the universe. It uses some of the most fundamental ideas in particle physics, in particular the notion of a vacuum energy, to answer many of the basic questions of cosmology, such as “Why is the observed universe spatially flat?” and “What is the origin of the tiny fluctuations seen in the CMB?”.

The exact cause of inflation is still unknown. Thermalization at the end of the inflationary epoch leads to a loss of details about the initial conditions. There is, however, a notable exception: inflation leaves a telltale signature of gravitational waves, which can be used to test the theory and distinguish between different models of inflation. The strength of the gravitational-wave signal is a direct indicator of what caused inflation. Direct detection of the gravitational radiation from inflation might be possible in the future with very-long-baseline, space-based, laser-interferometer gravitational-wave detectors. A promising shorter-term approach is to search for the signature of these gravitational waves in the polarized radiation from the CMB.

Matter matters

The ordinary baryonic matter of which we are made is the tiny residue of the annihilation of matter and antimatter that emerged from the earliest universe in not-quite-equal amounts. This tiny imbalance may arise dynamically from a symmetric initial state if baryon number is not conserved, and if interactions violate both C (charge conjugation) and the combination CP (P = parity), producing more baryons than antibaryons in an expanding universe.

There are a few dozen viable scenarios for baryogenesis, all of which invoke more or less physics beyond the Standard Model. A particularly attractive scenario is leptogenesis, according to which neutrinos play a central role in the origin of the baryon asymmetry. Leptogenesis predicts that the out-of-equilibrium, lepton-number violating decays of heavy Majorana neutrinos, whose exchange is responsible for the smallness of the masses of the known light neutrinos, generate a lepton asymmetry in the early universe that is transferred into a baryon asymmetry by means of non-perturbative electroweak baryon- and lepton-number violating processes. Leptogenesis works nicely within the currently allowed window for the masses of the known light neutrinos.

Heavenly accelerators

The Earth’s atmosphere is continuously bombarded by cosmic particles. Ground-based observatories have measured them in the form of extensive air showers with energies up to 3 × 10²⁰ eV, corresponding to centre-of-mass energies of 750 TeV, far beyond the reach of any accelerator here on Earth. We do not yet know the sources of these particles and thus cannot understand how they are produced.
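The quoted centre-of-mass energy follows from the standard fixed-target kinematics, since an air-shower primary strikes an essentially stationary nucleon in the atmosphere. A minimal sketch (assuming a proton primary; the function name is illustrative):

```python
import math

M_P_EV = 0.938e9  # proton rest mass-energy in eV

def sqrt_s_fixed_target(e_lab_ev: float, m_target_ev: float = M_P_EV) -> float:
    """Centre-of-mass energy (eV) for an ultra-relativistic projectile of
    lab energy e_lab_ev hitting a nucleon at rest: s = 2*E*m + 2*m^2."""
    return math.sqrt(2.0 * e_lab_ev * m_target_ev + 2.0 * m_target_ev ** 2)

# A 3e20 eV cosmic-ray proton on a nucleon at rest:
print(sqrt_s_fixed_target(3e20) / 1e12)  # ~750 (TeV)
```

The square-root dependence is the key point: gaining one order of magnitude in centre-of-mass energy requires two orders of magnitude in lab energy, which is why colliders cannot reach this regime.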

Astrophysical candidates for high-energy sources include active galaxies and gamma-ray bursts. Alternatively, a completely new constituent of the universe could be involved, such as a topological defect or a long-lived superheavy dark-matter particle, both associated with the physics of grand unification. Only by observing many more of these particles, including the associated gamma rays, neutrinos and perhaps gravitational waves, will we be able to distinguish these possibilities.

Identifying the sources of ultrahigh-energy cosmic rays requires several kinds of large-scale experiments, such as the Pierre Auger Observatory, currently under construction, to collect large enough data samples and determine the particle directions and energies precisely. Dedicated neutrino telescopes of cubic-kilometre size in deep water or ice, such as IceCube at the South Pole, can be used to search for cosmic sources of high-energy neutrinos. Extending their sensitivity to the ultrahigh-energy regime above 10¹⁷ eV will make it possible to probe neutrino-nucleon scattering at energies beyond the reach of the LHC.
