
Laser-pulse blasts set antiparticle production record

The latest record for antiparticle density created in the laboratory has come not from an accelerator facility but from the Lawrence Livermore National Laboratory’s Jupiter laser facility. Hui Chen and colleagues blasted picosecond laser pulses with intensities of 10²⁰ W cm⁻² from the Titan laser onto gold targets some 1 mm thick. Part of each laser pulse created a plasma and part drove the plasma’s electrons into the gold. The gold nuclei then slowed down the electrons, producing photons that converted into electron–positron pairs. The result was an estimated 10¹⁶ positrons per cubic centimetre.
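
In outline, the two-step conversion described above corresponds to bremsstrahlung followed by pair production in the field of a gold nucleus (a schematic sketch of the standard mechanism, not notation taken from the paper itself):

\[
e^- + Z \;\to\; e^- + Z + \gamma \qquad \text{(bremsstrahlung)},
\]
\[
\gamma + Z \;\to\; Z + e^+ + e^- \qquad \text{(Bethe–Heitler pair production)}.
\]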

In addition to being intrinsically interesting this work could aid better understanding of astrophysical phenomena such as gamma-ray bursts. It could also lead to new ways to produce positron sources, which at present are limited to positron-emitting radioisotopes and pair-creation from high-energy photons at accelerators.

First sector is closer to cool down…

Installation of the new helium pressure-release system for the LHC is progressing well. The first sector to be fully completed is 5-6, with all 168 individual pressure-release ports now in place. These ports will allow a greater rate of helium escape in the event of a sudden increase in temperature.

To install the pressure-release ports teams had to cut and open the “bellows” – the large accordion-shaped sleeves that cover the interconnections between two magnets. Once all of the ports were fitted, work on closing the bellows could begin. This marked the end of the consolidation work on this sector and the start of preparations to cool it down. By the end of March the first three vacuum subsectors had been sealed. Each subsector is a 200 m long section of the insulating vacuum chamber that surrounds the magnet cold mass. Once sealed, each subsector is tested for leaks before the air is pumped out.

Meanwhile, teams are working through the night and on weekends to install the replacement magnets in the damaged area of sector 3-4 at a rate of six to seven per week. At the same time, the pace of interconnection work has increased sharply over the past few weeks. For example, within a fortnight, the number of joints being soldered rose from two to eight a week. Elsewhere, a magnet in sector 1-2 that was found to have high internal resistance has now been replaced.

• For up-to-date news, see The Bulletin at http://cdsweb.cern.ch/journal/.

…while the injection chain sees beam again

On 18 March beam commissioning started in Linac 2, the first link in CERN’s accelerator complex. This marks the start of what will be the longest period of beam operations in the laboratory’s history, with the accelerators remaining operational throughout winter 2009/2010 to supply the LHC. This will limit the opportunities for maintenance, so teams are bringing forward work that they would normally do in the winter shutdown and completing as much of it as possible during the consolidation work on the LHC.

The injection chain for the LHC also contains more venerable accelerators, which have undergone considerable refurbishment over recent years. The PS, which turns 50 this year, was already showing signs of its age back in 2003, when long-term radiation damage to electrical insulation caused a fault in two magnets and a busbar connection. Since then there has been a huge campaign to refurbish more than half of the PS magnets, with the 51st and final refurbished magnet being installed in the tunnel on 3 February this year. In addition, the power supplies for the auxiliary magnets have been completely replaced and this year new cables have been laid.

The Magnet Group has also thermally tested almost every part of the machine – the first thorough survey of its kind in the history of the PS. After leaving the magnets to run for several hours the team used a thermal camera to check for poor connections, which would lead to slight heating.
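
The reasoning behind the thermal survey is simply resistive heating: a joint with contact resistance R carrying current I dissipates

\[
P = R I^{2},
\]

so a poor connection shows up as a slightly warmer spot once the magnets have been running for several hours.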

The SPS has also undergone considerable refurbishment on top of the normal shutdown activities over the past few years. The final 90 dipole magnets have been repaired this year, ending the three-year project to refurbish the cooling pipes in 250 of the dipole magnets. Also, most of the cabling to the short straight sections has been replaced.

ALICE prepares for jet measurements

The ALICE experiment has reached another milestone with the successful installation of the first two supermodules of the electromagnetic calorimeter (EMCal).

ALICE is designed to study matter produced in high-energy nuclear collisions at the LHC, in particular using lead ions. The goal is to investigate thoroughly the characteristics of hot, dense matter as it is thought to have existed in the early universe. Experiments at RHIC at Brookhaven have shown that an important way to probe the matter formed in heavy-ion collisions is to study its effect on high-energy partons (quarks and gluons) produced early in the collision. As the partons propagate through the resulting “fireball” their energy loss depends on the density and interaction strength of the matter they encounter. The high-energy partons become observable as jets of hadrons when they hadronize and the energy loss becomes evident through the decreased energy of the products that emerge from the fragmentation process.

Although the ALICE experiment has excellent momentum-measurement and identification capabilities for charged hadrons, it previously lacked a large-acceptance electromagnetic calorimeter to measure the neutral energy component of jets. The EMCal, a lead-scintillator sampling calorimeter with “Shashlik”-style optical-fibre read-out, will provide ALICE with this capability. It consists of identical modules each comprising four independent read-out towers of 6 cm × 6 cm. Twelve modules attached to a back-plate form one strip-module, and 24 strip-modules inserted into a crate comprise one EMCal supermodule with a weight of about 8 t.
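
From the numbers quoted above, the tower count per supermodule follows directly:

\[
4~\text{towers/module} \times 12~\text{modules/strip-module} \times 24~\text{strip-modules} = 1152~\text{towers per supermodule}.
\]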

The EMCal is a late addition to ALICE, arriving in effect as a first upgrade. Indeed, full approval, with construction funds, did not come until early 2008. The calorimeter covers about one-third of the acceptance of the central part of ALICE, where it must fit into the existing detector – between the magnet coil and the layer of time-of-flight counters – by means of a novel independent support structure. Installation of the 8 t supermodules requires a system of rails with a sophisticated insertion device to bridge across to the support structure. The full EMCal will consist of 10 full supermodules and two partial supermodules.

NSCL researchers constrain nuclear symmetry energy at low density

By analysing collisions between several combinations of tin nuclei, researchers at the Michigan State University National Superconducting Cyclotron Laboratory (NSCL) have refined the understanding of nuclear symmetry energy. Their work marks the first successful theoretical explanation of common observables that are related to symmetry energy in heavy-ion experiments. The results should help in discerning the properties of neutron stars, particularly in the crust region.

The nuclear attraction between a neutron and a proton is, on average, stronger than that between two protons or two neutrons. The nuclear contribution to the difference between the binding energy of a system of all neutrons and another with equal numbers of protons and neutrons is known as the symmetry energy. To allow for this difference, formulae to calculate nuclear masses include a symmetry-energy term. This term often takes a form that assumes the symmetry energy to be independent of density, even though its value inside the nucleus, at normal density, should exceed its value at the surface, where the density is lower and the ratio of proton to neutron densities differs from that for the nuclear interior.
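
For reference, the symmetry-energy term in a typical semi-empirical mass formula takes the density-independent form (a standard textbook expression, not quoted from the NSCL paper)

\[
E_{\text{sym}}(N,Z) \;\simeq\; a_{\text{sym}}\,\frac{(N-Z)^{2}}{A}, \qquad A = N + Z,
\]

with a coefficient a_sym of roughly 20–25 MeV; it is exactly this assumption of density independence that measurements at lower densities put to the test.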

The symmetry energy of a stable nucleus reflects typical nuclear densities of about 2–3 × 10¹⁴ g/cm³; it contributes modestly to the binding energy but influences significantly the stability of nuclei against beta decay. Despite the sensitivity of nuclear masses to its average value, the precise understanding of the dependence of symmetry energy on density has proved elusive, leading to large uncertainties in theoretical predictions for properties of nuclei that are very rich in neutrons. The effects of symmetry energy loom even larger in environments that have unusual ratios of protons to neutrons and much larger ranges of density, such as in neutron stars. There, the dependence of the symmetry energy upon density is one of the most uncertain parts of the mathematical palette describing the forces at play.

Now, Betty Tsang, Bill Lynch, Pawel Danielewicz and colleagues have helped to constrain understanding of the density dependence of symmetry energy by studying how it affects heavy-ion reactions at NSCL’s Coupled Cyclotron Facility (Tsang et al. 2009). In two experiments, the team directed various beams of tin nuclei at stationary targets of tin. The four combinations comprised a beam of ¹²⁴Sn (50 protons and 74 neutrons) on a target of ¹²⁴Sn, ¹¹²Sn (62 neutrons) on ¹¹²Sn, ¹²⁴Sn on ¹¹²Sn, and ¹¹²Sn on ¹²⁴Sn. This allowed the researchers to create and study nuclear matter with different neutron-to-proton ratios over a range of density, which could be varied by adjusting the energy of the beam and the centrality of the collisions.

The team collected data on several observables, including isospin diffusion, which probes the neutron-to-proton ratio of neutron-rich projectile nuclei after collisions with neutron-deficient target nuclei. During grazing collisions at relative velocities of 0.3 c, a neck region with reduced density can form between projectile and target nuclei through which neutrons and protons can diffuse. The stronger the symmetry energy is in this neck region, the more likely the neutron-to-proton ratios in the projectile and target nuclei will equilibrate and become equal. A second observable involves comparisons of the energy spectra of neutrons and protons in central head-on collisions. In this case the symmetry energy expels neutrons from the central overlap region of the projectile and target nuclei; the ratio of neutron-to-proton emission then provides a probe of the variation in symmetry energy as the system compresses and expands during the collision.
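
For context, isospin diffusion in such mixed systems is commonly quantified with an isospin transport ratio built from any isospin-sensitive observable x (a standard construction from the heavy-ion literature, not spelled out in this article):

\[
R_{i}(x) \;=\; \frac{2x - x_{124+124} - x_{112+112}}{x_{124+124} - x_{112+112}},
\]

which stays near ±1 if no diffusion occurs and tends towards 0 when the neutron-to-proton ratios of projectile and target fully equilibrate.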

By comparing the experimental data to results obtained with theoretical models developed by their Chinese colleagues, YingXun Zhang and Zhuxia Li at the China Institute of Atomic Energy, the team obtained constraints on the density dependence of symmetry energy at densities ranging from normal down to around one-third of normal nuclear-matter density. The results will help to describe the inner crust of neutron stars, where the density of nuclear matter is in the 1–2 × 10¹⁴ g/cm³ range. The role of symmetry energy at the cores of such stars, where the density of nuclear matter reaches 8 × 10¹⁴ g/cm³, is currently associated with the largest uncertainty in descriptions of neutron stars.

PAMELA finds an anomalous cosmic positron abundance

The collaboration for the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) experiment has published a measurement of the cosmic-positron abundance in the 1.5–100 GeV range. The high-energy excess that they identify, with better statistics than any previous observation, could arise from nearby pulsars or dark-matter annihilation.

PAMELA, which went into space on a Russian satellite launched from the Baikonur cosmodrome in June 2006, uses a spectrometer – based on a permanent magnet coupled to a calorimeter – to determine the energy spectra of cosmic electrons, positrons, antiprotons and light nuclei. The experiment is a collaboration between several Italian institutes with additional participation from Germany, Russia and Sweden.

Preliminary, unofficial results from the PAMELA mission appeared last autumn on preprint servers together with speculation that PAMELA had found the signature of dark-matter annihilation. The paper by Oscar Adriani from the University of Florence and collaborators now published in Nature is more cautious with the dark-matter interpretation of the positron excess, identifying pulsars as plausible alternatives. The data presented include more than a thousand million triggers collected between July 2006 and February 2008. Fine tuning of the particle identification allowed the team to reject 99.9% of the protons, while selecting more than 95% of the electrons and positrons. The resulting spectrum of the positron abundance relative to the sum of electrons and positrons represents the highest statistics to date.
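
The measured quantity is the positron fraction,

\[
\frac{\phi(e^{+})}{\phi(e^{+}) + \phi(e^{-})},
\]

where φ denotes the flux of each species at a given energy.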

Below 5 GeV, the obtained spectrum is significantly lower than previously measured. This discrepancy is believed to arise from modulation of the cosmic rays induced by the strength of the solar wind, which changes periodically through the solar cycle. At higher energies the new data unambiguously confirm the rising trend of the positron fraction, which was suggested by previous measurements. This appears highly incompatible with the usual scenario in which positrons are produced by cosmic-ray nuclei interacting with atoms in the interstellar medium. The additional source of positrons dominating at the higher energies could be the signature of dark-matter decay or annihilation. In this case, PAMELA has already shown that dark matter would have a preference for leptonic final states. Adriani and colleagues deduce this from the absence of a similar excess of the antiproton-to-proton abundance, a result that they published earlier this year. They suggest that the alternative origin of the positron excess at high energies is particle acceleration in the magnetosphere of nearby pulsars producing electromagnetic cascades.

The authors state that the PAMELA results presented here are insufficient to distinguish between the two possibilities. They seem, however, confident that various positron-production scenarios will soon be testable. This will be possible once additional PAMELA results on electrons, protons and light nuclei are published in the near future, together with the extension of the positron spectrum up to 300 GeV thanks to on-going data acquisition. Complementary information will also come from the survey of the gamma-ray sky by the Fermi satellite.

Why antihydrogen and antimatter are different

“Those who say that antihydrogen is antimatter should realize that we are not made of hydrogen and we drink water, not liquid hydrogen.” These are words spoken by Paul Dirac to physicists gathered around him after his lecture “My life as a Physicist” at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice in 1981 – 53 years after he had, with a single equation, opened new horizons to human knowledge. To obtain water, hydrogen is, of course, not sufficient; oxygen with a nucleus of eight protons and eight neutrons is also needed. Hydrogen is the only element in the Periodic Table to consist of two charged particles (the electron and the proton) without any role being played by the nuclear forces. These two particles need only electromagnetic glue (the photon) to form the hydrogen atom. The antihydrogen atom needs two antiparticles (antiproton and antielectron) plus electromagnetic antiglue (antiphoton). Quantum electrodynamics (QED) dictates that the photon and the antiphoton are both eigenstates of the C-operator (see later) and therefore electromagnetic antiglue must exist and act like electromagnetic glue.

If matter were made with hydrogen, the existence of antimatter would be assured by the existence of the two antiparticles (antiproton and antielectron), the existence of the antiphoton being assured by QED. As Dirac emphasized, to have matter it is necessary to have another particle (the neutron) and another glue (the nuclear glue) to allow protons and neutrons to stay together in a nucleus. This problem first comes into play in heavy hydrogen, which has a nucleus – the deuteron – made of one proton and one neutron. For these two particles to remain together there needs to be some sort of “nuclear glue”. We have no fundamental theory (like QED) to prove that the nuclear antiglue must exist and act like the nuclear glue. It can be experimentally established, however, by looking at the existence of the first example of nuclear antimatter: the antideuteron, made with an antiproton, an antineutron and nuclear antiglue. If the antideuteron exists, all other antielements beyond heavy antihydrogen must exist. Their nuclei must contain antiprotons, antineutrons and nuclear antiglue. But if the antideuteron did not exist, nothing but light antihydrogen could exist: farewell anti-water and farewell all forms of antimatter.

Dirac’s statement takes into consideration half a century of theoretical and experimental discoveries, which have ultimately concluded that the existence of antimatter is supported exclusively by experiment. The CPT theorem implies that if matter exists then so should antimatter, but T D Lee has shown that the theorem is invalid at the Planck scale (around 10¹⁹ GeV) where all of nature’s fundamental forces converge (Lee 1995). Because this grand unification is the source of everything, if CPT collapses at the energy scale where it occurs, then we can bid farewell to all that derives from CPT.

CPT and the existence of antimatter

The CPT theorem states that physical laws are invariant under simultaneous transformations that involve inversions of charge (C), parity (P) and time (T). The first of these invariance operators to be discovered was C, by Hermann Weyl in 1931. This says that physical reality remains invariable if we replace the charges that are additively conserved by their corresponding anticharges – the first known example being that of the electron and the antielectron. The P operator, discovered by Eugene Wigner, Gian-Carlo Wick and Arthur Wightman, tells us that in replacing right-handed systems with left-handed ones, the results of any fundamental experiment will not change. The T operator, discovered by Wigner, Julian Schwinger and John Bell, established that inverting the time axis will also not alter physical reality.

The mathematical formulation of relativistic quantum-field theory (RQFT), which is supposed to be the basic description of nature’s fundamental forces, possesses the property of CPT invariance, whereby inverting all three does not change the physical results. In other words, if we invert all charges using C, the three space reference axes (x, y, z) using P, and the time axis using T, all will remain as before. However, matter is made of masses coupled to quantum numbers that are additively conserved: electric charges, lepton numbers, baryon numbers, “flavour” charges etc. If we were to apply the three CPT operators to matter in a certain state we would obtain an antimatter state. This means that if the CPT theorem is valid then the existence of matter implies the existence of antimatter and that the mass of a piece of matter must be identical to that of the corresponding piece of antimatter.
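
Schematically, the three operations act as

\[
C:\; q \to -q \;\;\text{(all additively conserved charges)}, \qquad
P:\; (x,y,z) \to (-x,-y,-z), \qquad
T:\; t \to -t,
\]

so that applying C, P and T together to a matter state of mass m turns it into an antimatter state with the same mass and opposite charges.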

Suppose that nature obeys the C invariance law; in this case, the existence of matter implies the existence of antimatter. If C invariance is broken, the existence of antimatter is guaranteed by CPT. Now, suppose that CP is valid; again, the existence of matter dictates the existence of antimatter. If CP is not valid, then the existence of antimatter is still guaranteed by CPT. If CPT collapses, however, only experimental physics can guarantee the existence of antimatter. This summarizes what effectively happened during the decades after Dirac’s famous equation of 1928 – until we finally understood that CPT is not an impervious bulwark governing all of the fundamental forces of nature.

Three years after Dirac came up with his equation, Weyl discovered C and it was thought at the time that the existence of the antielectron and the production of electron–antielectron pairs were the consequences of C invariance. The equality of the mean life of positive and negative muons was also thought to be an unavoidable consequence of the validity of C. These ideas continued with the discoveries of the antiproton, the antineutron and, finally, of the neutral strange meson called θ2. This apparent triumph of the invariance operators came in parallel with the success in identifying a “point-like” mathematical formulation that was capable of describing the fundamental forces of nature. Building on the four Maxwell equations, the marvellous construction of RQFT was finally achieved. This theory should have been able to describe not only the electromagnetic force (from which it was derived) but also the weak force and the nuclear force. Two great achievements reinforced these convictions: Enrico Fermi’s mathematical formulation of the weak force and the triumphant discovery of Hideki Yukawa’s “nuclear glue” – the famous π meson – thanks to Cesare Lattes, Hugh Muirhead, Giuseppe Occhialini and Cecil Powell (CERN Courier September 2007 p43).

These initial extraordinary successes were, however, later confronted with enormous difficulties. In QED, there were the so-called “Landau poles” and the conclusion that the fundamental “bare” electric charge had to be zero; for the weak forces, unitarity fell apart at an energy of 300 GeV; and in the realm of the nuclear force, the enormous proliferation of baryons and mesons was totally beyond understanding in terms of RQFT. This is when a different mathematical formulation, the “scattering matrix” or S-matrix, was brought in, and with it the total negation of the “field” concept. It required three conditions: analyticity, unitarity and crossing. So, why bother with RQFT if the S-matrix is enough? On the other hand, if RQFT does not exist, how do we cope with the existence of CPT invariance? This opened the field related to the breaking of the invariance laws, C, P, T.

The shock of CP violation

In 1953 Dick Dalitz discovered the famous θ–τ puzzle: two mesons, with identical properties, had to be of opposite parity. Intrigued by this “puzzle”, T D Lee and C N Yang analysed experimental results in 1956 and found that there was no proof confirming the validity of C and P in weak interactions. Within one year of their findings, Chien-Shiung Wu and her collaborators discovered that the invariance laws of C and P are violated in weak interactions. So how could we cope with the existence of antimatter? This is why Lev Landau proposed in 1957 that if both the C and P operators are violated then their product, CP, may be conserved; the existence of antimatter is then guaranteed by the validity of CP (Landau 1957).

There is one small detail that was always overlooked. In 1957, in a paper that not many had read (or understood), Lee, Reinhard Oehme and Yang demonstrated that, contrary to what had been said and repeated, the existence of the two neutral strange mesons, θ1 and θ2, was not a proof of the validity of C or P, or of their product CP (Lee, Oehme and Yang 1957).

I was in Dubna in 1964 when Jim Cronin presented the results on CP violation that he had obtained together with Val Fitch, James Christenson and René Turlay. On my right I had Bruno Touschek and on my left Bruno Pontecorvo. Both said to me of Cronin and his colleagues, “they have ruined their reputation”. The validity of Landau’s proposal of CP invariance, with antimatter as the mirror image of matter, was highly attractive; putting it in doubt found very few supporters. Dirac, however, was one of the latter and he fell into a spell of deep “scientific depression”. He, who was well known for his caution, had total belief in C invariance, which had led him to predict the existence of antiparticles, antimatter, antistars and antigalaxies. Now even CP was breaking.

If the CPT product is to remain conserved, the breaking of CP implies that of T. For some of the founding fathers of modern physics, however, invariance relative to time inversion at the level of the fundamental laws had to remain untouched. So, if CP breaks and T does not, then CPT must break. After all, why not? In fact, the bulwark of CPT was RQFT, but it already seemed as if this mathematical formulation had to be replaced by the S-matrix. The breaking of the invariance operators (C, P, CP) and the apparent triumph of the S-matrix were coupled at the time to experimental results that indicated no trace of antideuterons, even in the production of 10 million pions in proton collisions.

To obtain the first example of antimatter it seemed that CPT had to be proved right, which meant proving the violation of T. No one back in 1964 could imagine that physics would open the new horizons that we know today. The only actions left to us then were of a technological–experimental nature. It turned out that the discovery of true antimatter required the realization of the most powerful beam of negative particles at CERN’s PS, as well as the invention of a new technology capable of measuring, with a precision never achieved before, the time-of-flight of charged particles. This is how we came to discover an antideuteron produced, not after 10 million pions, but only after 100 million (Massam et al. 1965).

The crucial experiment

The search for the existence of the first example of nuclear antimatter needed a high-intensity beam of negative particles produced in high-energy interactions. This negative beam was dominated by pions, with a fraction of K mesons and a few antiprotons. It was necessary to separate particles with different masses, starting with pions and then going up in mass to K mesons, antiprotons and (it was hoped) antideuterons. To accomplish this the first step was a combined system of bending magnets coupled with magnetic quadrupoles – for focusing purposes – and a strong electrostatic separator. This high-intensity beam of negative “partially separated” particles was the result of a special project made and carried out with two friends of mine, Mario Morpurgo and Guido Petrucci. The second vital step was a sophisticated time-of-flight system capable of achieving the time resolution needed to detect one negative particle (the antideuteron) in a background of 100 million other negative particles (essentially, π mesons). The results, which showed the existence of a negative particle with mass equal to that of the deuteron, were obtained on 11 March 1965, the same day as the 41st birthday of the PS director, Peter Standley.

Dirac came out of his depression when he received a phone call from his friend Abdus Salam, saying: “Relax Paul, my friend Nino Zichichi has discovered the antideuteron”. Dirac called me and invited me for lunch at his place, and this started a friendship that led us to the realization of the Erice Seminars on Nuclear Wars.

To understand the importance of this discovery we need to have a clear idea of what is meant by “matter”. Particles are not sufficient to constitute matter; we also need “glues”. With electromagnetic glue we can make atoms and molecules; to make the nucleus, we need protons, neutrons and nuclear glue. To make antimatter requires antiprotons, antineutrons and nuclear “antiglue”; but we also need to know that nuclear antiglue allows these constituents of antimatter to stick together just as protons and neutrons do to form matter. A fundamental law is needed that establishes the existence of nuclear antiglue that is exactly identical to the nuclear glue in matter. This fundamental law is missing.

In fact, we know today that the strengths of all of the fundamental forces converge at the Planck energy, where CPT invariance breaks down. CPT results from the “point-like” mathematical formulation of RQFT, but it collapses at the energy scale at which the fundamental forces originate, i.e. at the Planck energy. Moreover, nothing changes if we replace the “points” with “strings”: relativistic quantum string theory results, but it cannot validate CPT either. This implies that no theory exists that can guarantee that if we have matter then antimatter must exist. This is why the certainty that all anti-atoms, with their antinuclei, must exist came from the experiment at CERN in March 1965.

In 1995, during his opening lecture for the symposium celebrating the 30th Anniversary of the Discovery of Antimatter in Bologna, T D Lee said: “Werner Heisenberg discovered quantum mechanics in 1925 and by 1972 he had witnessed almost all of the big jumps in modern physics. Yet he ranked the discovery of antimatter as the biggest jump of all. In fact in his book The Physicist’s Conception of Nature (1972), Heisenberg writes, ‘I think that this discovery of antimatter was perhaps the biggest jump of all big jumps in physics in our century.’”

• This article is based on the opening talk given at the event to celebrate the 50th anniversary of the Karlsruhe Nuclide Chart, held in Karlsruhe on 9 December 2008 (see www.nucleonica.net:81/wiki/index.php/Help:Karlsruhe_Nuclide_Chart). For the full article with complete references, see www.nucleonica.net:81/wiki/images/a/aa/05_Zichichi_Karlsruhe.pdf.

New Zealand meeting looks at dark matter

The 7th Heidelberg International Conference on Dark Matter in Astrophysics and Particle Physics – Dark 2009 – was held at Canterbury University in Christchurch on 18–24 January. The event saw 56 invited talks and contributions, which provided an exciting and up-to-date view of the development of research in the field. The participants represented well the distribution of dark-matter activities around the world: 25 from Europe, 11 from the US, 5 from Japan and Korea, 14 from Australia and New Zealand, and 1 from Iran. The programme covered the traditionally wide range of topics, so this report looks at the main highlights.

The conference started with an overview by Elisabetta Barberio of the University of Melbourne of searches for supersymmetry and dark matter at the LHC. To date, the only evidence for cold dark matter from underground detectors is from the DAMA/LIBRA experiment in the Gran Sasso National Laboratory, as Pierluigi Belli from the collaboration explained. This experiment, which looks for an expected seasonal modulation of the signal for weakly interacting massive particles (WIMPs), now reports a significance of 8.4 σ. Unfortunately, no other direct search for dark matter currently has the statistics to look for this signal. Nevertheless, Jason Kumar from Hawaii described how testing the DAMA/LIBRA result at the Super-Kamiokande detector might prove interesting.

Later sessions covered other searches for dark matter. Tarek Saab from Florida gave an overview of ongoing direct searches in underground laboratories, including recent results from the Cryogenic Dark Matter Search experiment in the Soudan mine, and Nigel Smith of the UK’s Rutherford Appleton Laboratory presented results from the ZEPLIN III experiment in the Boulby mine. Irina Krivosheina of Heidelberg and Nizhny Novgorod discussed the potential offered by using bare germanium detectors in liquid nitrogen or argon for dark-matter searches, on the basis of the results from the GENIUS-Test-Facility in the Gran Sasso National Laboratory. Chung-Lin Shan of Seoul National University reported on how precisely WIMPs can be identified in experimental searches in a model-independent way.

Searching for signals from dark-matter annihilation in X-rays and weighing supermassive black holes with X-ray emitting gas were subjects for Tesla Jeltema of the University of California Observatories/Lick Observatory and David Buote of the University of California, Irvine. Stefano Profumo of the University of California, Santa Cruz, provided an overview of fundamental physics with giga-electron-volt gamma rays. Iris Gebauer of Karlsruhe addressed the excess of cosmic positrons indicated by the Energetic Gamma Ray Experiment Telescope, which is still under discussion, as well as the new anomalies observed by the Payload for Antimatter Matter Exploration and Light-Nuclei Astrophysics (PAMELA; see “PAMELA finds an anomalous cosmic positron abundance”, above) satellite experiment and the Advanced Thin Ionization Calorimeter (ATIC) balloon experiment. These results and the limits that they set on some annihilating dark-matter (neutralino or gravitino) models were also discussed by Kazunori Nakayama of Tokyo and Koji Ishiwata of Tohoku.

Other presentations outlined results and prospects for the AMANDA, IceCube and ANTARES experiments, which study cosmic neutrinos – though there is still a long way to go before they have conclusive results. Emmanuel Moulin of the Commissariat à l’énergie Atomique/Saclay presented results from imaging atmospheric Cherenkov telescopes, in particular the recent measurements from HESS, which exploited the fact that dwarf spheroidal galaxies, such as Canis Major, are highly enriched in dark matter and are therefore good candidates for its detection. Unfortunately, the results do not yet have the sensitivity of the Wilkinson Microwave Anisotropy Probe in restricting either the minimal supersymmetric Standard Model or Kaluza–Klein scenarios.

Leszek Roszkowski of Sheffield gave an overview of supersymmetric particles (neutralinos) as cold dark matter, while scenarios of gravitino dark matter and their cosmological and particle-physics implications were presented by Gilbert Moultaka of the University of Montpellier and Yudi Santoso of the Institute for Particle Physics Phenomenology, Durham. Dharam Vir Ahluwalia of the University of Canterbury put the case for the existence of a local fermionic dark-matter candidate with mass-dimension one, on the basis of non-standard Wigner classes. However, as the proposed fields, outlined in detail by Ben Martin of Canterbury, do not fit into Steven Weinberg’s formalism of quantum-field theory, this suggestion led to debate among the other experts. Another interesting candidate for dark matter was presented by Norma Susanna Mankoc-Borstnik of the University of Ljubljana, who proposed a fifth family of fermions as the constituents of dark matter.

Dark energy and the cosmos

Dark energy was a major topic at the conference. Chris Blake of Swinburne University of Technology in Melbourne presented the prospects for the WiggleZ survey at the Anglo-Australian Telescope, the most sensitive experiment of its kind, and Matt Visser of Victoria University in Wellington gave a cosmographic analysis of dark energy. On the theoretical side there are diverging approaches, including attempts to explain the observations in a “radically conservative way”, without dark energy, as David Wiltshire of Canterbury University, Christchurch, explained.

A particular highlight was the presentation by Terry Goldman of Los Alamos, which discussed a possible connection between sterile-fermion mass and dark energy. His conclusion was that a neutrino with a mass of 0.3 eV could solve the problem of dark energy. This possibility was qualitatively supported by the results on non-extensive statistics in astroparticle physics that Manfred Leubner of the University of Innsbruck presented, in the sense that dark energy is expected to behave like an ordinary gas. Goldman’s suggestion is also of interest with respect to the final result of the Heidelberg–Moscow double-beta-decay experiment, reported by Hans Klapdor-Kleingrothaus, which indicates a Majorana neutrino mass of 0.2–0.3 eV.

Danny Marfatia of the University of Kansas discussed mass-varying neutrinos in his presentation about a phase transition in the fine-structure constant. He proposed that the coupling of neutrinos to a light scalar field might explain why the dark-energy density is of the same order as the matter density. Possible connections between dark matter and dark energy with models of warped extra dimensions and the hierarchy problem were outlined by Ishwaree Neupane of the University of Canterbury and Yong Min Cho of Seoul National University.

Dark mass and the centre of the galaxy was the topic of a special session in which Andreas Eckart of the University of Cologne presented recent results on the luminous accretion onto the dark mass at the centre of the Milky Way. Patrick Scott of Stockholm University discussed dark stars at the galactic centre, while Benoit Famaey of the Université Libre de Bruxelles and Felix Stoehr of the Space Telescope European Coordinating Facility/ESO in Garching discussed the distribution of dark and baryonic matter in galaxies. Primordial molecules and the first structures in the universe were the topics addressed by Denis Puy of the Université Montpellier II. Youssef Sobouti of the Institute of Advanced Studies on Basic Science in Zanjan, Iran, presented a theorem on a “natural” connection between baryonic dark matter and its dark companion, while Matthias Buckley of the California Institute of Technology put forward ideas about dark matter and “dark radiation”.

Gravity also came under scrutiny. David Rapetti of SLAC explored the potential of constraining gravity with the growth of structure in X-ray galaxy clusters, while Agnieszka Jacholkowska of IN2P3/Centre National de la Recherche Scientifique gave an experimental view of probing quantum-gravity effects with astrophysical sources. In a special session on general relativity, Roy Patrick Kerr of Canterbury University gave an interesting historical lecture entitled “Cracking the Einstein Code”.

To conclude, the lively and highly stimulating atmosphere of Dark 2009 reflected a splendid future for research in the field of dark matter in the universe and for particle physics beyond the Standard Model. The proceedings will be published by World Scientific.

Study group considers how to preserve data

High-energy-physics experiments collect data over long time periods, while the associated collaborations of experimentalists exploit these data to produce their physics publications. The scientific potential of an experiment is in principle defined and exhausted within the lifetime of such collaborations. However, the continuous improvement in areas of theory, experiment and simulation – as well as the advent of new ideas or unexpected discoveries – may reveal the need to re-analyse old data. Examples of such analyses already exist and they are likely to become more frequent in the future. As experimental complexity and the associated costs continue to increase, many present-day experiments, especially those based at colliders, will provide unique data sets that are unlikely to be improved upon in the short term. The close of the current decade will see the end of data-taking at several large experiments and scientists are now confronted with the question of how to preserve the scientific heritage of this valuable pool of acquired data.

To address this specific issue in a systematic way, the Study Group on Data Preservation and Long Term Analysis in High Energy Physics formed at the end of 2008. Its aim is to clarify the objectives and the means of preserving data in high-energy physics. The collider experiments BaBar, Belle, BES-III, CLEO, CDF, D0, H1 and ZEUS, as well as the associated computing centres at SLAC, KEK, the Institute of High Energy Physics in Beijing, Fermilab and DESY, are all represented, together with CERN, in the group’s steering committee.

Digital gold mine

The group’s inaugural workshop took place on 26–28 January at DESY, Hamburg. To form a quantitative view of the data landscape in high-energy physics, each of the participating experimental collaborations presented their computing models to the workshop, including the applicability and adaptability of the models to long-term analysis. Not surprisingly, the data models are similar – reflecting the nature of colliding-beam experiments.

The data are organized by events, with increasing levels of abstraction from raw detector-level quantities to N-tuple-like data for physics analysis. They are supported by large samples of simulated Monte Carlo events. The software is organized in a similar manner, with a more conservative part for reconstruction, reflecting the complexity of the hardware, and a more dynamic part closer to the analysis level. Data analysis is in most cases done in C++ using the ROOT analysis environment and is mainly performed on local computing farms. Monte Carlo simulation also uses a farm-based approach, but it is striking to see how popular the Grid is for the mass production of simulated events. The amount of data that should be preserved for analysis varies between 0.5 PB and 10 PB for each experiment, which is not huge by today’s standards but nonetheless a large amount. The degree of preparation for long-term data preservation varies between experiments, but it is obvious that no preparation was foreseen at an early stage of the programmes; any conservation initiatives will take place in parallel with the end of the data analysis.
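
For illustration, the analysis-level layer described here typically amounts to a ROOT event loop in C++; the sketch below is a minimal, hypothetical example (the file name events.root, tree name events and branch pt are invented for the purpose, not taken from any of the experiments):

#include "TFile.h"
#include "TTree.h"
#include <iostream>

// Minimal sketch of an N-tuple-style analysis loop with ROOT.
void read_events() {
    TFile *file = TFile::Open("events.root");                 // hypothetical preserved data file
    if (!file || file->IsZombie()) return;                    // guard against a missing or corrupt file
    TTree *tree = static_cast<TTree*>(file->Get("events"));   // hypothetical tree name
    if (!tree) { file->Close(); return; }

    float pt = 0.f;                                           // hypothetical per-event quantity
    tree->SetBranchAddress("pt", &pt);

    const Long64_t nEntries = tree->GetEntries();
    for (Long64_t i = 0; i < nEntries; ++i) {
        tree->GetEntry(i);                                    // load one event into the local variables
        // ... selection cuts and histogram filling would go here ...
    }
    std::cout << "Processed " << nEntries << " events" << std::endl;
    file->Close();
}

The preservation problem is that even a loop this simple silently depends on the ROOT version, the file format and the experiment libraries behind the tree.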

The main issue will be the communication between the experimental collaborations and the computing centres after final analyses

From a long-term perspective, digital data are widely recognized as fragile objects. Speakers from a few notable computing centres – including Fabio Hernandez of the Centre de Calcul de l’Institut National de Physique Nucléaire et de Physique des Particules, Stephen Wolbers of Fermilab, Martin Gasthuber of DESY and Erik Mattias Wadenstein of the Nordic DataGrid Facility – showed that storage technology should not pose problems with respect to the amount of data under discussion. Instead, the main issue will be communication between the experimental collaborations and the computing centres after the final analyses and/or after the collaborations themselves wind down, since roles for this phase have not been clearly defined in the past. The current preservation model, in which the data are simply saved on tapes, runs the risk that the data will disappear into cupboards while the read-out hardware may be lost, become impractical to use or obsolete. It is important to define a clear protocol for data preservation, the items of which should be transparent enough to ensure that the digital content of an experiment (data and software) remains accessible.

On the software side, the most popular analysis framework is ROOT, the object-oriented software and library that was originally developed at CERN. This offers many possibilities for storing and documenting high-energy-physics data and has the advantage of a large existing user community and a long-term commitment for support, as CERN’s René Brun explained at the workshop. One example of software dependence is the use of inherited libraries (e.g. CERNLIB or GEANT3), and of commercial software and/or packages that are no longer officially maintained but remain crucial to most running experiments. It would be an advantageous first step towards long-term stability of any analysis framework if such vulnerabilities could be removed from the software model of the experiments. Modern techniques of software emulation, such as virtualization, may also offer promising features, as Yves Kemp of DESY explained. Exploring such solutions should be part of future investigations.

Examples of previous experience with data from old experiments show clearly that a complete re-analysis has only been possible when all of the ingredients could be accounted for. Siggi Bethke of the Max Planck Institute of Physics in Munich showed how a re-analysis of data from the JADE experiment (1979–1986), using refined theoretical input and a better simulation, led to a significant improvement in the determination of the strong coupling constant as a function of energy. While the usual statement is that higher-energy experiments replace older, lower-energy ones, this example shows that measurements at lower energies can play a unique role in a global physical picture.

The experience at the Large Electron-Positron (LEP) collider, which Peter Igo-Kemenes, André Holzner and Matthias Schroeder of CERN described, suggested once more that the definition of the preserved data should definitely include all of the tools necessary to retrieve and understand the information so as to be able to use it for new future analyses. The general status of the LEP data is of concern, and the recovery of the information – to cross-check a signal of new physics, for example – may become impossible within a few years if no effort is made to define a consistent and clear stewardship of the data. This demonstrates that both early preparation and sufficient resources are vital in maintaining the capability to reinvestigate older data samples.

The next-generation publications database, INSPIRE, offers extended data-storage capabilities that could be used immediately to enhance public or private information related to scientific articles

The modus operandi in high-energy physics can also profit from the rich experience accumulated in other fields. Fabio Pasian of Trieste told the workshop how the European Virtual Observatory project has developed a framework for common data storage of astrophysical measurements. More general initiatives to investigate the persistency of digital data also exist and provide useful hints as to the critical points in the organization of such projects.

There is also an increasing awareness in funding agencies regarding the preservation of scientific data, as David Corney of the UK’s Science and Technology Facilities Council, Salvatore Mele of CERN and Amber Boehnlein of the US Department of Energy described. In particular, the Alliance for Permanent Access and the EU-funded project in Framework Programme 7 on the Permanent Access to the Records of Science in Europe recently conducted a survey of the high-energy-physics community, which found that the majority of scientists strongly support the preservation of high-energy-physics data. One aspect that was viewed positively in the survey responses was the question of open access to the data, alongside the organizational and technical matters; this is an issue that deserves careful consideration. The next-generation publications database, INSPIRE, offers extended data-storage capabilities that could be used immediately to enhance public or private information related to scientific articles, including tables, macros, explanatory notes and potentially even analysis software and data, as Travis Brooks of SLAC explained.

While this first workshop compiled a great deal of information, the work to synthesize it remains to be completed and further input in many areas is still needed. In addition, the raison d’être for data preservation should be clearly and convincingly formulated, together with a viable economic model. All high-energy-physics experiments have the capability of taking some concrete action now to propose models for data preservation. A survey of technology is also important, because one of the crucial factors may indeed be the evolution of hardware. Moreover, the whole process must be supervised by well defined structures and steered by clear specifications that are endorsed by the major laboratories and computing centres. A second workshop is planned to take place at SLAC in summer 2009 with the aim of producing a preliminary report for further reference, so that the “future of the past” will become clearer in high-energy physics.

Happy 20th birthday, World Wide Web

In March 1989 Tim Berners-Lee, a physicist at CERN, handed a document entitled “Information management: a proposal” to his group leader Mike Sendall. “Vague, but exciting”, were the words that Sendall wrote on the proposal, allowing Berners-Lee to continue with the project. Both were unaware that it would evolve into one of the most important communication tools ever created.

Berners-Lee returned to CERN on 13 March this year to celebrate the 20th anniversary of the birth of the World Wide Web. He was joined by several web pioneers, including Robert Cailliau and Jean-François Groff, who worked with Berners-Lee in the early days of the project, and Ben Segal, the person who brought the internet to CERN. In between reminiscing about life at CERN and the early years of the web, the four gave a demonstration of the first ever web browser running on the very same NeXT computer on which Berners-Lee wrote the original browser and server software.

The event was not only about the history of the web; it also included a short keynote speech from Berners-Lee, which was followed by a panel discussion on the future of the web. The panel members were contemporary experts who Berners-Lee believes are currently working with the web in an exciting way.

Berners-Lee’s original 1989 proposal showed how information could easily be transferred over the internet by using hypertext, the now familiar point-and-click system of navigating through information pages. The following year, Cailliau, a systems engineer, joined the project and soon became its number-one advocate.

The birth of the web
Berners-Lee’s idea was to bring together hypertext with the internet and personal computers, thereby creating a single information network that would help CERN physicists to share all of the computer-stored information not only at the laboratory but around the world. Hypertext would enable users to browse easily between documents on web pages using links. Berners-Lee went on to produce a browser-editor, with the goal of developing a tool to make a creative space in which to share and edit information and build a common hypertext. What should they call this new browser? “The Mine of Information”? “The Information Mesh”? When they settled on a name in May 1990 – before even the first piece of code had been written – it was Tim who suggested “the World Wide Web”, or “WWW”.

Development work began in earnest using NeXT computers delivered to CERN in September 1990. Info.cern.ch was the address of the world’s first web site and web server, which was running on one NeXT computer by Christmas of 1990. The first web-page address was http://info.cern.ch/hypertext/WWW/TheProject.html, which gave information about the WWW project. Visitors to the pages could learn more about hypertext, technical details for creating their own web page and an explanation on how to search the web for information.

Although the web began as a tool to aid particle physicists, today it is used in countless ways by the global community

To allow the web to extend, Berners-Lee’s team needed to distribute server and browser software. The NeXT systems, however, were far more advanced than the computers that many other people had at their disposal, so they set to work on a far less sophisticated piece of software for distribution. By the spring of 1991, testing was under way on a universal line-mode browser, created by Nicola Pellow, a technical student. The browser was designed to run on any computer or terminal and worked using a simple menu with numbers to provide the links. There was no mouse and no graphics, just plain text, but it allowed anyone with an internet connection to access the information on the web.

Servers began to appear in other institutions across Europe throughout 1991 and by December the first server outside the continent was installed in the US at the Stanford Linear Accelerator Center (SLAC). By November 1992 there were 26 servers in the world and by October 1993 the number had increased to more than 200 known web servers. In February 1993 the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign released the first version of Mosaic, which made the web easily available to ordinary PC and Macintosh computers.

The rest, as they say, is history. Although the web began as a tool to aid particle physicists, today it is used in countless ways by the global community. Today the primary purpose of household computers is not to compute but “to go on the web”.

Berners-Lee left CERN in 1994 to run the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology and help to develop guidelines to ensure long-term growth of the web. So what predictions do Berners-Lee and the W3C have for the future of the web? What might it look like at the age of 30?

In his talk at the WWW@20 celebrations Berners-Lee outlined his hopes and expectations for the future: “There are currently roughly the same number of web pages as there are neurons in the human brain”. The difference, he went on to say, is that the number of web pages increases as the web grows older.

One important future development is the “Semantic Web” – a place where machines can do all of the tedious work. The concept is to create a web whose pages machines can interpret as humans do. It will be a “move from using a search engine to an answer engine,” explains Christian Bizer of the web-based systems group at Freie Universität Berlin. “When I search the web I don’t want to find documents, I want to find answers to my questions!” he says. If a search engine can understand a web page then it can pick out the exact answer to a question, rather than simply presenting you with a list of web pages.

As Berners-Lee put it: “The Semantic Web is a web of data. There is a lot of data that we all use every day, and it’s not part of the web. For example, I can see my bank statements on the web, and my photographs, and I can see my appointments in a calendar, but can I see my photos in a calendar to see what I was doing when I took them? Can I see bank-statement lines in a calendar? Why not? Because we don’t have a web of data. Because data is controlled by applications, and each application keeps it to itself.”

“Device independence” is a move towards a greater variety of equipment that can connect to the web. Only a few years ago, virtually the only way to access the web was through a PC or workstation. Now, mobile handsets, smart phones, PDAs, interactive television systems, voice-response systems, kiosks and even some domestic appliances can access the web.

The mobile web is one of the fastest-developing areas of web use. Already, more global web browsing is done on hand-held devices, like mobile phones, than on laptops. It is especially important in developing countries, where landlines and broadband are still rare. For example, African fishermen are using the web on old mobile phones to check the market price of fish to make sure that they arrive at the best port to sell their daily catch. The W3C is trying to create standards for browsing the web on phones and to encourage people to make the web more accessible to everyone in the world.

• The full-length webcast of the WWW@20 event is available at http://cdsweb.cern.ch/record/1167328?ln=en.
