
CDF and D0 report single top quark events

Almost 14 years to the day after the announcement of the discovery of the top quark in 1995, the CDF and D0 collaborations at Fermilab have announced the observation of top quarks produced singly in proton–antiproton collisions, rather than in top antitop pairs. On 4 March, the two teams submitted their independent results to Physical Review Letters. Unlike pair-production of top quarks, which occurs through the strong interaction, the production of single top quarks occurs through the weak interaction and has important implications for possible new physics beyond the Standard Model.

Only one in every 20,000 million proton–antiproton collisions produces a single top quark, and to make matters worse, the signal of these rare occurrences is easily mimicked by other “background” processes that occur at much higher rates. Both teams have previously published evidence for single top production at Fermilab’s Tevatron, CDF last year and D0 in 2007. These earlier papers reported significance levels of 3.7 σ and 3.6 σ for CDF and D0, respectively. Now both teams report the first observation of the process with a significance of 5.0 σ, based on 3.2 fb⁻¹ of proton–antiproton collision data in CDF and 2.3 fb⁻¹ in D0.
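The jump from “evidence” (around 3.6 σ) to “observation” (5.0 σ) corresponds to a large drop in the probability that background fluctuations alone could mimic the signal. The standard conversion between a one-sided Gaussian significance and its tail probability can be sketched as follows (a generic statistical formula, not the collaborations’ own analysis code):

```python
from math import erfc, sqrt

def one_sided_p_value(n_sigma):
    """One-sided Gaussian tail probability for a given significance."""
    return 0.5 * erfc(n_sigma / sqrt(2))

# "evidence" threshold vs. the 5-sigma "observation" threshold
print(one_sided_p_value(3.6))  # ~1.6e-4
print(one_sided_p_value(5.0))  # ~2.9e-7
```

A 5 σ result thus corresponds to a chance of roughly one in 3.5 million that the background alone produced the excess.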

Examples of single top quark candidates in D0 (see other image) and CDF. In both events the top quark decays and produces a b quark jet, a muon and a neutrino. In the CDF event (this image), the arrow indicates the direction of the escaping neutrino.
Image credit: D0 and CDF.

The analyses also constrain the magnitude of |Vtb|, an important parameter of the Standard Model’s Cabibbo–Kobayashi–Maskawa (CKM) matrix, which describes how quarks can change from one type to another. If the CKM matrix describes the intermixing of only three generations of quarks – with top and bottom forming the third generation – the value of |Vtb| should be close to one. In the new analysis CDF finds |Vtb| = 0.91 ± 0.11 (stat.+syst.) ± 0.07 (theor.), while D0 reports |Vtb fL| = 1.07 ± 0.12, where fL is the strength of the left-handed coupling between the W boson and the top and bottom quarks.
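As a rough illustration of how such results might be combined, the sketch below takes an inverse-variance weighted average of the two quoted numbers. This is a simplification only: it ignores correlated systematics and the fact that D0 quotes |Vtb fL| rather than |Vtb| directly.

```python
from math import sqrt

# Reported central values; CDF's (stat.+syst.) and (theor.) uncertainties
# are added in quadrature here - a simplifying assumption
cdf_val, cdf_err = 0.91, sqrt(0.11**2 + 0.07**2)
d0_val, d0_err = 1.07, 0.12

# Naive inverse-variance weighted average (illustrative only)
w1, w2 = 1 / cdf_err**2, 1 / d0_err**2
avg = (w1 * cdf_val + w2 * d0_val) / (w1 + w2)
err = 1 / sqrt(w1 + w2)
print(f"|Vtb| ~ {avg:.2f} +/- {err:.2f}")  # close to 1, as the CKM picture expects
```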

In addition to its inherent importance, the discovery of single top quark production presented the collaborations with challenges similar to those of the search for the Higgs boson, in terms of extracting an extremely small signal from a large background. Advanced analysis techniques pioneered for the single top discovery are now in use in both collaborations for the Higgs boson search.

Fermi sees most powerful gamma-ray burst

The Fermi Gamma-ray Space Telescope has observed the evolution of a gamma-ray burst over six orders of magnitude in photon energy. The combination of its brightness and its remote distance makes it by far the most energetic gamma-ray blast ever seen. Furthermore, the observed delay of the highest-energy emission gives a lower limit on the strength of quantum-gravity effects.

Since the launch of the Swift satellite in November 2004, up to a few gamma-ray bursts (GRBs) are routinely detected every day. The phenomenon now seems commonplace and only the record-breaking bursts attract public attention.

After the “Rosetta stone” GRB 030329 and the “naked-eye” GRB 080319B, here comes the “extreme” GRB 080916C. This giant burst was observed by Fermi, which was launched into space last year. It is one of the rare bursts detected up to giga-electron-volt energies by the Large Area Telescope (LAT), the main instrument aboard Fermi. In five months the LAT has detected only 3 GRBs out of 58 that were in its field of view, according to the positions provided by the secondary instrument, the Gamma-ray Burst Monitor (GBM).

The burst of 16 September 2008, GRB 080916C, was the brightest observed so far and the only one with a distance determined by an observed redshift. The redshift of z = 4.35 ± 0.15, measured by the Gamma-Ray Burst Optical/Near-Infrared Detector (GROND) on the 2.2 m Max Planck Telescope at La Silla, in Chile, locates the collapsing-star event at a distance of 12.2 thousand million light-years. This cosmological distance means that GRB 080916C was intrinsically extremely luminous – at least twice as much as the previous record-holder, GRB 990123, which was observed by the Energetic Gamma-Ray Experiment Telescope aboard the Compton Gamma-Ray Observatory.

The Fermi LAT and Fermi GBM collaborations have jointly published a detailed analysis of the emission of this extreme burst. The combined GBM and LAT spectra – covering the range from 8 keV to 300 GeV – are consistent with a very simple spectral shape. Spectra were extracted for five distinct epochs during the evolution of the burst and all have the simple form of a Band function, which smoothly joins low- and high-energy power laws. A simple physical interpretation for such spectra is synchrotron radiation of charged particles in a magnetic field, but this cannot be confirmed, because the synchrotron self-Compton emission expected in this case could not be detected.

The most interesting result is probably the evidence of a consistently increasing delay of higher-energy radiation during the second peak of the GRB emission. This time lag can be intrinsic to the source or induced by quantum-gravity effects along the path from the remote source to the telescope. The delay by about 16 s of the most energetic photon – 13 GeV – with respect to the onset of the burst allows the researchers to derive a lower limit on the quantum-gravity mass scale only about one order of magnitude below the Planck mass. The question of whether the observed delay is intrinsic to the source or results from its long journey through the quantum foam of space–time will eventually be solved with the detection of several other bursts with known redshift and measurable time delays.
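A back-of-the-envelope version of this limit can be sketched from the numbers quoted above, assuming a linear photon dispersion so that the energy-dependent delay scales as Δt ≈ (E/M_QG c²)·(D/c). This deliberately ignores the redshift integration of the published analysis, which obtains a tighter limit:

```python
# Rough lower limit on the quantum-gravity mass scale from the
# photon delay (simplified: no cosmological-expansion correction)
E_photon_GeV = 13.0        # most energetic photon in the burst
delay_s = 16.0             # delay relative to the burst onset
seconds_per_year = 3.156e7
distance_ly = 12.2e9       # quoted light-travel distance

travel_time_s = distance_ly * seconds_per_year
M_qg_GeV = E_photon_GeV * travel_time_s / delay_s
M_planck_GeV = 1.22e19
print(M_qg_GeV, M_qg_GeV / M_planck_GeV)  # a few per cent of the Planck mass
```

Even this crude estimate lands within a couple of orders of magnitude of the Planck scale, which is what makes GRB timing such a powerful probe of quantum-gravity models.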

Dark-matter research arrives at the crossroads

There is overwhelming evidence that the universe contains dark matter made from unknown elementary particles. Astronomers discovered more than 75 years ago that spiral galaxies, such as the Milky Way, spin faster than allowed by the gravity of known kinds of matter. Since then there have been many more observations that point to the existence of this dark matter.

Gravitational lensing, for example, provides a unique probe of the distribution of luminous-plus-dark matter in individual galaxies, in clusters of galaxies and in the large-scale structure of the universe. The deflection of light depends only on the gravitational field between the emitter and the observer, and it is independent of the nature and state of the matter producing the gravitational field, so it yields by far the most precise determinations of mass in extragalactic astronomy. Gravitational lensing has established that, like spiral galaxies, elliptical galaxies are dominated by dark matter.

Strong evidence for the fact that most of the dark matter has a non-baryonic nature comes from the observed heights of the acoustic peaks in the angular power spectrum of the cosmic microwave background measured by the Wilkinson Microwave Anisotropy Probe, because the peaks are sensitive to the fraction of mass in the baryons. It turns out that only about 4% of the mass of the universe is in baryons, whereas about 20% is in non-baryonic dark matter – a finding that is also in line with inferences from primordial nucleosynthesis.

A host of candidates

This leaves some pressing questions. What is the microscopic nature of this non-baryonic dark matter? Why is its mass fraction today about 20%? How dark is it? How cold is it? How stable is it?

Progress in finding the answers to such questions provided the focus for the 2008 DESY Theory Workshop, which was held on 29 September – 2 October.

Organized by Manuel Drees of Bonn, it sought to combine results from a range of experiments and confront them with theoretical predictions. It is clear that the investigation of the microscopic nature of dark matter has recently entered a decisive phase. Experiments are being carried out around the globe to try to identify traces of the mysterious dark-matter particles. Since the different theoretical candidates appear to have quite distinctive signatures, there are good reasons to expect that from a combination of all of these efforts a common picture will materialize within the next decade.

Theoretical particle physicists have proposed a whole host of candidates for the constituents of non-baryonic dark matter, with fancy names such as axions, axinos, gravitinos, neutralinos and lightest Kaluza–Klein partners. The best-motivated of these occur in extensions of the Standard Model that have been proposed to solve other problems besides the dark-matter puzzle. The axion, for example, arose in extensions that aim to solve the strong CP problem. It later turned out to be a viable dark-matter candidate if its mass is in the micro-electron-volt range. Gravitinos and neutralinos, on the other hand, are the superpartners of the graviton and the neutral bosons, respectively. They arise in supersymmetric extensions of the Standard Model, which aim at a solution of the hierarchy problem and at a grand unification of the strong and electroweak interactions. In fact, neutralinos are natural candidates for dark matter because they have cross-sections of the order of electroweak interactions and their masses are expected to be of the order of the weak scale (i.e. 100 GeV). As a result, their relic density resulting from freeze-out in the early universe is just right to account for the observed amount of dark matter.
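This coincidence is often called the “WIMP miracle”. In the standard freeze-out picture the relic abundance scales inversely with the thermally averaged annihilation cross-section, roughly Ωh² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩; plugging in a typical weak-scale cross-section lands near the observed dark-matter density. A one-line sketch of this textbook estimate (the numbers are standard order-of-magnitude values, not a precise calculation):

```python
# "WIMP miracle" estimate: relic abundance from thermal freeze-out.
# A weak-scale annihilation cross-section <sigma v> ~ 3e-26 cm^3/s
# (a typical textbook value) is assumed here.
sigma_v = 3e-26                 # cm^3/s
omega_h2 = 3e-27 / sigma_v      # standard freeze-out scaling
print(omega_h2)  # ~0.1, close to the measured dark-matter density
```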

Neutralinos belong to the class of weakly interacting massive particles (WIMPs). Such particles seem to be more or less generic in extensions of the Standard Model at the tera-electron-volt scale, but their stability (or a long enough lifetime) has to be imposed. This is not necessary for super-weakly interacting massive particles (superWIMPs), such as sterile neutrinos, gravitinos, hidden-sector gauginos and the axino. For example, unstable but long-lived gravitinos in the 5–300 GeV mass range are viable candidates for dark matter and provide a consistent thermal history of the universe, including successful Big Bang nucleosynthesis.

Detecting dark matter

Owing to their relatively large elastic cross-sections with atomic nuclei, WIMPs such as neutralinos are good candidates for direct detection in the laboratory, yielding up to one event per day, per 100 kg of target material. The expected WIMP signatures are nuclear recoils, which should occur uniformly throughout the detector volume at a rate that shows an annual flux modulation by a few per cent. Intriguingly, the DAMA experiment in the Gran Sasso National Laboratory has seen evidence for such an annual modulation.
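The expected annual modulation arises because the Earth’s orbital velocity alternately adds to and subtracts from the Sun’s motion through the galactic halo, so the WIMP flux peaks around early June. A minimal sketch of this rate model follows; the amplitude, phase and normalization are illustrative assumptions, not DAMA’s measured values:

```python
from math import cos, pi

def wimp_rate(day_of_year, r0=1.0, amplitude=0.03, peak_day=152):
    """Event rate with a few-per-cent annual modulation.

    Peaks around 2 June (day ~152), when the Earth's orbital velocity
    adds to the Sun's motion through the halo. Illustrative numbers.
    """
    return r0 * (1 + amplitude * cos(2 * pi * (day_of_year - peak_day) / 365.25))

print(wimp_rate(152))  # maximum: r0 * (1 + amplitude)
print(wimp_rate(335))  # minimum, around early December
```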

However, there is some tension with other direct-detection experiments. Theoretical studies have revealed that interpretation in terms of a low-mass (5–50 GeV) WIMP is marginally compatible with the current limits from other experiments. In contrast to DAMA, which looks just for scintillation light, most of the latter exploit at least two observables out of the set (phonons, charge, light) to reconstruct the nuclear recoil energy.

Many different techniques based on cryogenic detectors (e.g. the Cryogenic Dark Matter Search), noble liquids (e.g. the XENON Dark Matter Project) or even bubble chambers, are currently employed to search for WIMPs via direct detection. Detectors with directional sensitivity (e.g. the Directional Recoil Identification From Tracks experiment) may not only have a better signal-to-background discrimination but may also be capable of measuring the local dark-matter phase-space distribution. In summary, these direct experiments are currently probing some of the theoretically interesting regions for WIMP candidates. The next generation of experiments may enter the era of WIMP (astro)physics.

The axion is another dark-matter candidate for which there are ongoing direct-detection experiments. Both the Axion Dark Matter Experiment (ADMX) in the US and the Cosmic Axion Research with Rydberg Atoms in a Resonant Cavity (CARRACK) experiment in Japan exploit a cooled cavity inside a strong magnetic field to search for the stimulation of a cavity resonance from a dark-matter axion–photon conversion in the microwave frequency region, corresponding to the expected axion mass. While they differ in their detector technology – ADMX uses microwave telescope technology whereas CARRACK employs Rydberg atom technology – both experiments are designed to cover the 1–10 μeV mass range. Indeed, if dark matter consists just of axions then it should soon be found in these experiments. The CERN Axion Solar Telescope, meanwhile, is looking for axions produced in the Sun.

There are also of course possibilities for indirect detection. Dark matter may not be absolutely dark. In fact, in regions where the dark-matter density is high (e.g. in the Earth, in the Sun, near the galactic centre, in external galaxies), neutralinos or other WIMPs may annihilate to visible particle–antiparticle pairs and lead to signatures in gamma-ray, neutrino, positron and antiproton spectra. Moreover, superWIMPs (e.g. gravitinos), may also leave their traces in cosmic-ray spectra if they are not absolutely stable.

Interestingly, the Payload for Antimatter Matter Exploration and Light-Nuclei Astrophysics (PAMELA) satellite experiment recently observed an unexpected rise in the fraction of positrons at energies of 10–100 GeV, thereby confirming earlier observations by the High Energy Antimatter Telescope balloon experiment. In addition, the Advanced Thin Ionization Chamber balloon experiment has reported a further anomaly in the electron-plus-positron flux, which can be interpreted as the continuation of the PAMELA excess to about 800 GeV. The quantification of these excesses is still quite uncertain, not least because of relatively large systematic uncertainties. It is well established that they cannot be explained by the standard mechanism, namely the secondary production of positrons arising from collisions between cosmic-ray protons and the interstellar medium within our galaxy. However, a very conventional astrophysical source for them could be nearby pulsars.

On a more speculative level, these observations have inspired theorists to search for pure particle-physics models that accommodate all results. Generically, interpretations in terms of WIMP annihilation seem to be disfavoured, because they require a huge clumpiness of the Milky Way dark-matter halo, which is at variance with recent numerical simulations of the latter. This constraint is relaxed in superWIMP scenarios, where the positrons may be produced in the decay of dark-matter particles (e.g. gravitinos).

It is clear that one of the keys to understanding the origin of the excess in the positron fraction is the accurate, separate measurement of positron and electron fluxes, which can be done with further PAMELA data and with the Alpha Magnetic Spectrometer satellite experiment. Furthermore, distinguishing different interpretations of the observed excesses requires a multimessenger approach (i.e. to search for signatures in the radio range, synchrotron radiation, neutrinos, antiprotons and gamma rays).

Fortunately the Fermi Gamma-Ray Space Telescope is in orbit and taking data. Together with other cosmic-ray experiments it will probe interesting regions of parameter space in WIMP and superWIMP scenarios of dark matter.

Dark matter at colliders

Clearly, at colliders the existence of a dark-matter candidate can be inferred only indirectly from the apparent missing energy, associated with the dark-matter particles, in the final state of the collision. However, such a measurement can be made with precision and under controlled conditions. To extract the properties, such as the mass, of dark-matter particles, these final-state measurements have to be compared with predictions from theoretical models. In a supersymmetric extension of the Standard Model, for example, with the neutralino as the lightest superpartner, experiments at the LHC would search for signatures from the cascade decay of gluinos and squarks into gluons, quarks, leptons and neutralinos. This would show up as large missing transverse-energy in events with some jets and leptons. The endpoints in kinematic distributions could then be used for the determination of the dark-matter candidate’s mass, which could be compared with the mass determined eventually by measurements of recoil energy in direct-detection experiments.
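The kinematic-endpoint method mentioned above has a well-known closed form for the classic cascade χ₂⁰ → slepton + lepton → χ₁⁰ + two leptons: the dilepton invariant-mass spectrum cuts off sharply at a value fixed by the three masses. A sketch of that standard formula, with purely illustrative masses (not a fit to any data):

```python
from math import sqrt

def mll_edge(m_chi2, m_slepton, m_chi1):
    """Endpoint of the dilepton invariant-mass spectrum in the cascade
    chi2 -> slepton + l -> chi1 + l + l (standard two-body kinematics)."""
    return sqrt((m_chi2**2 - m_slepton**2) * (m_slepton**2 - m_chi1**2)) / m_slepton

# Illustrative supersymmetric masses in GeV
print(mll_edge(180.0, 150.0, 100.0))  # ~74.2 GeV
```

Measuring several such endpoints in different decay chains over-constrains the mass spectrum, which is how the dark-matter candidate’s mass could be extracted at the LHC.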

This complementarity between direct, indirect and collider searches for dark matter is essential. Although collider experiments might identify a dark-matter candidate and precisely measure its properties, they will not be able to distinguish a cosmologically stable particle from one that is long-lived but unstable. In turn, direct detection cannot tell definitely what kind of WIMP has been observed. Moreover, in many superWIMP dark matter scenarios a direct detection is impossible, while detection at the LHC may be feasible. For example, if the lightest superpartner is a gravitino (or hidden gaugino) and the next-to-lightest is a charged lepton, experiments at the LHC may search for the striking signature of a displaced vertex plus an ionizing track.

In many cases, however, precision measurements from a future electron–positron collider seem to be necessary to exploit fully the collider–cosmology–astrophysics synergy. In addition “low-energy photon-collider” experiments – such as the Axion-Like Particle Search at DESY, the GammeV experiment at Fermilab and the Optical Search for QED magnetic birefringence, axions and photon regeneration at CERN, where the interactions of intense laser beams with strong electromagnetic fields are probed – may give valuable insight into the existence of very lightweight, axion-like, dark-matter candidates.

In summary, there is evidence for non-baryonic dark matter that is not made of any known elementary particle. We are still at the exploratory stage of determining its microscopic nature. Many ideas are currently being explored in theories and in experiments, and more will come. Nature has given us a few clues that we need to exploit. The data coming soon from accelerators, and from direct and indirect detection experiments, will be the final arbiter.

MINOS maps the deepest secrets of the upper atmosphere

A collaborative study by particle physicists and atmospheric researchers has found the first correlations between daily variations in cosmic-ray muons detected deep below ground and large-scale phenomena in the upper atmosphere. The effect suggests that underground muon-detectors could play a valuable role in identifying certain meteorological events and observing long-term trends.

Scott Osprey and colleagues from the UK’s National Centre for Atmospheric Science and Oxford University have worked with members of the Main Injector Neutrino Oscillation Search (MINOS) collaboration in analysing data collected between 2003 and 2007 by the MINOS Far Detector, located 705 m below ground in a disused iron mine at Soudan, in Minnesota. The MINOS experiment intercepts a neutrino beam that travels 735 km from Fermilab to Soudan and studies long-baseline neutrino oscillations; the penetrating muons appear as background noise.

The teams have found a close relationship between the rate of muons detected in MINOS and upper-air temperatures from the European Centre for Medium Range Weather Forecasts. In particular, they discovered strong correlations between the muon rate and the upper-air temperature during short-term events (of around 10 days) in the upper atmosphere, or stratosphere, in winter.

When primary cosmic rays strike the Earth’s atmosphere they interact, creating pions and kaons. These mesons in turn decay to produce muons – the most energetic of which penetrate deep below the Earth’s surface. The mesons can also interact before they decay, so the number of muons produced depends on the local density of the atmosphere and varies with temperature. An increase in temperature means a decrease in density and, hence, fewer mesons interact and instead decay, increasing the number of muons. Physicists have known of this effect since the Monopole Astrophysics and Cosmic Ray Observatory first observed a seasonal variation in the rate of muons a decade ago (Ambrosio et al. 1997).

Most of the mesons that give rise to the muons detected in MINOS occur at altitudes of around 15 km in the region known as the tropopause, where there is little variation in temperature. However, the mesons also occur in the mid-stratosphere – at altitudes where temperatures fluctuate, particularly in winter. For the analysis, the team defined an “effective” temperature based on an average temperature over the altitudes where mesons occur, weighted by the calculated distribution of meson production.
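The effective-temperature construction is a simple production-weighted average, and the observed rate change then scales with the fractional change in that temperature via a correlation coefficient αT of order one (MINOS measured a value close to 0.9 for its depth). A minimal sketch, with an invented temperature profile and weights for illustration:

```python
# Effective temperature: production-weighted average over altitude.
# Profile and weights below are invented for illustration only.
temps_K = [210.0, 220.0, 215.0, 225.0]   # temperatures at sample altitudes
weights = [0.4, 0.3, 0.2, 0.1]           # meson-production weights (sum to 1)

t_eff = sum(t * w for t, w in zip(temps_K, weights))

# Fractional muon-rate change for a 1% rise in effective temperature,
# assuming alpha_T ~ 0.87 (an assumed value of order the MINOS result)
alpha_t = 0.87
delta_rate = alpha_t * 0.01
print(t_eff, delta_rate)
```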

The results show a striking relationship between this temperature and the number of muons, with correlated changes occurring over periods of only a few days (Osprey et al. 2009). The data for the Northern Hemisphere winter of 2004–2005 are particularly interesting. The meteorological data indicate the occurrence of a major phenomenon, known as a sudden stratospheric warming, during February. This was linked to the break-up of the winter polar vortex, a polar cyclone that brings cooler weather and which extended over the MINOS site in early February. Prior to that, the 2004–2005 winter had seen the lowest recorded temperatures in the polar stratosphere, and ozone concentration in the polar vortex was anomalously low.

The results show that underground muon data contain information that could identify important short-term meteorological events, over and above the already known seasonal effect. This is interesting for atmospheric researchers, as it provides an independent technique to measure such phenomena. Moreover, physicists have cosmic-ray data from experiments dating back 50 years or more, covering periods when upper-air observations from weather balloons were less reliable than today.

Conference probes the dark side of the universe

During the past decade a consistent quantitative picture of the universe has emerged from a range of observations that include the microwave background, distant supernovae and the large-scale distribution of galaxies. In this “standard model” of the universe, normal baryonic matter contributes only 4.6% to the overall density; the remainder consists of dark components in the form of dark matter (23%) and dark energy (72%). The existence and dominance of dark energy is particularly unexpected and raises fundamental questions about the foundations of modern physics. Is dark energy merely Albert Einstein’s cosmological constant? Is it a new kind of field that evolves dynamically as the universe expands? Or is a new law of gravity needed?

In the search for answers to these questions, more than 250 participants, ranging from senior experts to young students, attended the 3rd Biennial Leopoldina Conference on Dark Energy held on 7–11 October 2008 at the Ludwig Maximilians University (LMU) in Munich. The meeting was organized jointly by the Bonn-Heidelberg-Munich Transregional Research Centre “The Dark Universe” and the German Academy of Sciences Leopoldina, with support from the Munich-based Excellence Cluster “Origin and Structure of the Universe”. The goal of the international symposium was to gain a better understanding of the nature of dark energy by bringing together observers, modellers and theoreticians from particle physics, astrophysics and cosmology to present and discuss their latest results and to explore possible future routes in the rapidly expanding field of dark-energy research.

Around 60 plenary talks at the conference were held in the central auditorium (Aula) of LMU Munich, with lively discussions following in poster sessions (where almost 100 posters were displayed) and during the breaks in the inner court of the university. There were fruitful exchanges between physicists engaged in a range of observations, from ground-based studies of supernovae to satellite probes of the cosmic microwave background (CMB), and theorists in search of possible explanations for the accelerated expansion of the universe, which was first reported in 1998. This acceleration has occurred in recent cosmic history, corresponding to redshifts of z ≤ 1.

An accelerating expansion

Brian Schmidt of the Australian National University in Canberra gave the observational keynote speech. He led the High-z Supernova Search Team that presented the first convincing evidence for the existence of dark energy – which works against gravity to boost the expansion of the universe – almost simultaneously with the Supernova Cosmology Project led by Saul Perlmutter of the Lawrence Berkeley National Laboratory and the University of California at Berkeley. Adam Riess, a member of the High-z team, presented constraints on dark energy from the latest supernovae data, including those from the Hubble Space Telescope at redshift z > 1. This is where the acceleration becomes a deceleration, owing to the lessening impact of dark energy at earlier times (figure 1).

Both teams independently discovered the accelerating expansion of the universe by studying distant type Ia supernovae. They found that the light from these events is fainter than expected for a given expansion velocity, indicating that the supernovae are farther away than predicted (figure 2, p18). This implies that the expansion is not slowing under the influence of gravity – as might be expected – but is instead accelerating because of some uniformly distributed, gravitationally repulsive substance accounting for more than 70% of the mass-energy content of the universe – now known as dark energy.
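The “fainter than expected” argument runs through the distance modulus, μ = m − M = 5 log₁₀(d_L/10 pc), which converts an apparent dimming into a larger luminosity distance. A sketch with illustrative numbers (not the actual supernova measurements):

```python
from math import log10

def distance_modulus(d_l_pc):
    """mu = m - M = 5 * log10(d_L / 10 pc) for a standard candle."""
    return 5 * log10(d_l_pc / 10)

# A supernova appearing 0.25 mag fainter than expected implies a
# larger luminosity distance (numbers illustrative):
d_expected = 1.0e9   # 1 Gpc, in parsecs
mu_expected = distance_modulus(d_expected)
d_actual = 10 * 10**((mu_expected + 0.25) / 5)
print(d_actual / d_expected)  # ~1.12: about 12% farther away
```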

Type Ia supernovae arise from runaway thermonuclear explosions following accretion onto a carbon–oxygen white dwarf star and after calibration have an almost uniform brightness. This makes them “standard candles”, suitable as tools for the precise measurement of astronomical distances. Wolfgang Hillebrandt of the Munich Max-Planck Institute for Astrophysics presented 3D simulations of type Ia supernova explosions. It is still a matter of debate how standard these so-called “standard candles” really are. Their colour–luminosity relationship is inconsistent with Milky Way-type dust and, as Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics mentioned, the role of dust is generally underestimated. Future supernova observations in the near infrared hold promise because, at these wavelengths, the extinction by dust is five times lower. Bruno Leibundgut of ESO said that infrared observations using the future James Webb Space Telescope will be crucial in solving the problem of reddening from dust.

As Schmidt pointed out, and others detailed in subsequent talks, measurements of the temperature fluctuations in the CMB provide independent support for the theory of an accelerating universe. These were first observed by the Cosmic Background Explorer in 1991 and subsequently in 2000 by the Boomerang and MAXIMA balloon experiments. Since 2003 the Wilkinson Microwave Anisotropy Probe (WMAP) has observed the full-sky CMB with high resolution. Additional evidence came from the Sloan Digital Sky Survey and 2-degree Field Survey. In 2005 they measured ripples in the distribution of galaxies that were imprinted in acoustic oscillations of the plasma when matter and radiation decoupled as protons and electrons combined to form hydrogen atoms, 380,000 years after the Big Bang. These are the “baryonic acoustic oscillations” (BAOs).

Dark-energy candidates

Eiichiro Komatsu of the Department of Astronomy at the University of Texas in Austin, lead author of WMAP’s paper on the cosmological interpretation of the five-year data, said that anything that can explain the observed luminosity distances of type Ia supernovae, as well as the angular-diameter distances in the CMB and BAO data, is “qualified for being called dark energy” (figure 3). Candidates include vacuum energy, modified gravity and an extreme inhomogeneity of space.

Although the latter approach was presented in several talks, the impression prevailed that the effects of dark energy are too large to be accounted for through spatial inhomogeneities and an accordingly adapted averaging procedure in general relativity. Komatsu – and many other speakers – clearly favours the Lambda-cold-dark-matter (ΛCDM) model, with a small cosmological constant Λ to account for the accelerated expansion. The dark-energy equation of state is usually taken to be w = p/ρ = –0.94 ± 0.1 (stat.) ± 0.1 (syst.), with a negative pressure p; a varying w is not currently favoured by the data. Several speakers presented various versions of modified gravity. Roy Maartens of the University of Portsmouth in the UK acknowledged that ΛCDM is currently the best model. As an alternative he presented a braneworld scenario in which the vacuum energy does not gravitate and the acceleration arises from 5D effects. This scenario is, however, challenged by both geometric and structure-formation data.
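The equation-of-state parameter w controls how the dark-energy density evolves as the universe expands: for a constant w, ρ ∝ a^(−3(1+w)), where a is the scale factor. A short sketch of this standard scaling shows why a value near −1 behaves like a cosmological constant:

```python
def density_scaling(a, w):
    """Energy density vs. scale factor for equation of state w = p/rho:
    rho ~ a**(-3*(1+w)), normalized to rho(a=1) = 1."""
    return a**(-3 * (1 + w))

# As the universe doubles in size (a: 1 -> 2):
print(density_scaling(2, 0.0))    # matter (w=0): dilutes as a^-3 -> 0.125
print(density_scaling(2, -1.0))   # cosmological constant (w=-1): stays 1
print(density_scaling(2, -0.94))  # measured w ~ -0.94: nearly constant
```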

Theoretical keynote-speaker Christof Wetterich of Heidelberg University emphasized that the physical origin, the smallness and the present-day importance of the cosmological constant are poorly understood. In 1988, almost simultaneously with but independently from Bharat Ratra and James Peebles, he proposed the existence of a time-dependent scalar field, which gives rise to the concept of a dynamical dark energy and time-dependent fundamental “constants”, such as the fine-structure constant. Although observations may eventually decide between dynamical or static dark energy, this is not yet possible from the available data.

Yet another indication for the accelerated expansion comes from the investigation of the weak-lensing effect, as Matthias Bartelmann of Heidelberg University and others explained. This method of placing constraints on dark energy through its effect on the growth of structure in the universe relies on coherent distortions in the shapes of background galaxies by foreground mass structures, which include dark matter. The NASA-DOE Joint Dark Energy Mission (JDEM) is a space probe that will make use of this effect, in addition to taking BAO observations and distance and redshift measurements of more than 2000 type Ia supernovae a year. The project is now in the conceptual-design phase and has a target launch date of 2016. ESA’s corresponding project – the Dark UNiverse Explorer – is part of the planned Euclid mission, scheduled for launch in 2017. There were presentations on both missions.

The first major scientific results from the 10 m South Pole Telescope (SPT) initial survey were the highlight of the report by John Carlstrom, principal investigator for the project. The telescope is one of the first microwave telescopes that can take large-sky surveys with precision. It will be possible to use the resulting size-distribution pattern together with information from other telescopes to determine the strength of dark energy.

Carlstrom described the detection of four distant, massive clusters of galaxies in an initial analysis of SPT survey data – a first step towards a catalogue of thousands of galaxy clusters. The number of clusters as a function of time depends on the expansion rate, which leads back to dark energy. Three of the detected galaxy clusters were previously unknown systems. They are the first clusters detected in a Sunyaev–Zel’dovich (SZ) effect survey, and are the most significant SZ detections from a subset of the ongoing SPT survey. This shows that SZ surveys, and the SPT in particular, can be an effective means of finding galaxy clusters. The hope is for a catalogue of several thousand galaxy clusters in the southern sky by the end of 2011 – enough to rival the constraints on dark energy that are expected from the Euclid Mission and NASA’s JDEM.

The conference was lively and social activities enabled discussions outside the conference auditorium, particularly during the lunch breaks in nearby Munich restaurants. The presentations and discussions all demonstrated that the search for definite signatures and possible sources of the accelerated expansion of the universe continues to flourish and has an exciting future ahead. The results on supernovae and the CMB have led the way, but there is still much to learn. In his conference summary, Michael Turner of the University of Chicago emphasized that “cosmology has entered an era with large quantities of high-quality data”, and that the quest to understand dark energy will remain a grand scientific adventure. Future observational facilities – such as the Planck probe of the CMB, which is scheduled for launch around Easter 2009, the all-sky galaxy-cluster X-ray mission eROSITA, ESA’s Euclid and NASA’s JDEM – are all designed to produce unprecedented high-precision cosmology results that will shed new light on dark energy.

CMS does a full cosmic-data run

Event display of a cosmic muon

On 11 November 2008 the CMS collaboration concluded a major, month-long data-taking run, bringing a two-year commissioning phase to a successful close. The aim of the Cosmic Run At Four Tesla (CRAFT) was to run CMS continuously as a complete experiment, 24 hours a day, to gain further operational experience even without LHC beams. Data from 300 million cosmic muons were recorded with the solenoid at its operating point of 3.8 T for detailed detector studies. By the end of the exercise more than 7 million tracks in the strip tracker and around 75,000 tracks in the pixel tracker were available for alignment and other studies. The data volume totalled an impressive 400 TB. Runs were reconstructed at the Tier-0 centre with a typical latency of six hours before shipping to several Tier-1 and Tier-2 centres.

The CMS data flow was stressed during CRAFT in a way similar to what is foreseen for LHC operations, with calibration and/or alignment sequences performed for the electromagnetic calorimeter, the tracker and the muon systems during the run. Random triggers added on top of the cosmic-muon triggers emulated the trigger rates that will be experienced at the LHC. The high-level trigger ran a menu similar to the one used for the LHC start-up, with the installed complement of nearly 4500 filter processors for the CMS filter farm (around 40% of the final number) being deployed for the first time at the end of the run. Along with the main cosmic data, special raw-data streams created for specific calibration and alignment purposes were shipped to the Tier-0 centre. Teams based at the CMS analysis facility in Meyrin and at remote centres, including DESY and Fermilab, checked the data quality offline and validated the online quality assignments of the data-quality monitoring system.

transverse-momentum distribution

The precision of tracker alignment previously obtained with data recorded without a magnetic field is now improving significantly with the data collected during CRAFT because the momentum measurement enables better control of the uncertainty that arises from multiple scattering. The run also allowed an initial alignment of the modules comprising the barrel pixel detector. The collaboration completed a first reprocessing of the CRAFT data, incorporating these newly determined calibration and alignment constants, in early December 2008. Several analysis teams will use these data to perform some basic physics measurements, including measurements of the charge ratio and momentum distribution of cosmic muons.

Residuals for the inner-barrel

The success of the continuous operation of CMS in “LHC-like” conditions marks the end of a commissioning phase that started two years ago: in November 2006 both the underground experimental cavern and the adjoining service cavern (which is now buzzing with all of the off-detector readout electronics) were empty. The CMS teams are looking back with understandable pride at what has been achieved since then and are looking forward to the challenges of operation with colliding LHC beams in 2009. The commissioning programme to improve the readiness for LHC physics will resume after the annual cooling maintenance, which is expected to take place in late January 2009.

Proton-rich nuclei shed light on heavy-element synthesis in cosmos

Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University (MSU) have measured the half-lives of 100Sn and 96Cd, two nuclei with equal numbers of protons and neutrons that are close to the proton drip line – the proton-rich limit of stability. The result for 100Sn narrows the error range of previous half-life measurements, while the half-life of 96Cd, measured here for the first time, clarifies the role of the isotope in the rapid proton-capture (rp) process – a key part of heavy-element synthesis in the cosmos. The result for 96Cd also implies a new, as-yet-unknown origin for 96Ru in the solar system, where its abundance has long remained unexplained.

Daniel Bazin and colleagues at MSU used the same fast-beam fragmentation scheme to create both species. Using the facility’s coupled cyclotrons, the team generated a primary beam of 120 MeV/nucleon 112Sn and fragmented it on a beryllium target. The resulting radioactive beam was filtered through the A1900 Fragment Separator and the newly commissioned RF Fragment Selector. Finally, the filtered secondary beam was implanted in NSCL’s Beta Counting System, a series of silicon beta-particle detectors flanked by detectors of the laboratory’s segmented germanium array. To track the beta-decay of implanted nuclei, Bazin’s team monitored decay events at the impact site and neighbouring pixels in the detector for 10 s after implantation.
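The half-life extraction from such implantation-decay correlations can be illustrated with a toy calculation. The sketch below simulates exponential decay times and recovers the half-life from their sample mean; it is a minimal illustration only (the real analysis must also model backgrounds, detector efficiency and daughter decays), and all numbers except the 1.03 s half-life quoted in the text are invented for the example.

```python
import numpy as np

def estimate_half_life(decay_times):
    """Half-life estimate for a pure exponential decay.

    For an exponential distribution, the maximum-likelihood estimate
    of the mean lifetime tau is the sample mean of the decay times;
    the half-life is then tau * ln(2).
    """
    tau_hat = np.mean(decay_times)
    return tau_hat * np.log(2)

# Toy data: decays of a species with a 1.03 s half-life (the value
# measured for 96Cd), observed within a 10 s correlation window.
rng = np.random.default_rng(42)
true_half_life = 1.03                      # seconds
tau = true_half_life / np.log(2)           # mean lifetime
times = rng.exponential(tau, size=100_000)
times = times[times < 10.0]                # 10 s window (introduces a small downward bias)

print(f"estimated half-life: {estimate_half_life(times):.2f} s")
```

Because the 10 s window truncates the longest decays, the simple mean slightly underestimates the true lifetime; a full analysis would fit the truncated distribution instead.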

CCnew12_01_09

For 100Sn, the team observed a half-life of 0.55 +0.70/–0.31 s. This result is consistent with previous measurements made at GSI and, combined with them, yields an average of 0.86 +0.37/–0.20 s. The increased precision may bolster understanding of this isotope, which is one of the few “doubly magic” nuclei close to the proton drip line. Its protons and neutrons both form a closed-shell configuration, which affords extra stability to the nucleus.

The measured half-life of 96Cd, which was previously unknown, was 1.03 +0.24/–0.21 s. This is within the range of several theoretical predictions, but it is too short to make 96Cd a critical “waiting point” in the rp process. This process, along with slow neutron capture and rapid neutron capture, probably accounts for many of the universe’s heavy elements. It occurs in supernovae, X-ray bursts and perhaps other astrophysical environments where seed nuclei join with free protons to form nuclei of increasing atomic number. Build-up stalls at specific stages when the binding of another proton is energetically unfavourable. Nuclei accumulate at these so-called waiting points, generating a spike in the observed isotope abundance. Such a spike exists at 96Ru, the product of beta-decay from 96Cd, which suggests a waiting point at 96Cd.

With the result for 96Cd, the half-lives of all expected waiting points along the proton drip line, up to the rp-process’s predicted endpoint, are now known experimentally. However, the half-life that Bazin and collaborators have measured is approximately a tenth of the value required to account for the observed abundance of 96Ru. There must be a different explanation – perhaps an unexplored astrophysical process.

Protons and neutrons cosy up in nuclei and neutron stars

Short range correlation reaction

The structure of nuclei is determined by the nature of the strong force: strong repulsion at short distances and strong attraction at moderate distances. This force, which binds the nucleons together while also keeping the structure from collapsing, makes the nucleus a fairly dilute system. This has allowed for calculations that treat the nucleus as a collection of hard objects in an average or mean field to describe many of the properties of nuclear matter. Of course, this simple picture of the nucleus is inaccurate – the nucleons should really be thought of as waves that can strongly overlap for short periods of time. Indeed, recent experiments have shown that about 20% of all nucleons in carbon are in such a state at any given time.

These states of strongly overlapping wave functions are commonly referred to as nucleon–nucleon short-range correlations (SRC). Calculations indicate that, for short periods, these correlations lead to local densities in the nucleus that are several times as high as the average nuclear density of 0.17 nucleons/fm3. Such densities are comparable to those predicted in the core of neutron stars, so the study of SRCs probes cold, dense nucleonic matter – whether in systems that are extremely small (such as helium nuclei) or extremely large (such as neutron stars).

The distinctive experimental features of two-nucleon SRCs are the large back-to-back relative momentum and small centre-of-mass momentum of the correlated pair, where large and small are relative to the Fermi-sea level of about 250 MeV/c. This is shown in figure 1, where a virtual photon is absorbed by one nucleon in a correlated pair, causing both nucleons to be emitted from the nucleus. The large strength of the nucleon–nucleon interaction at short distances means that the relative motion in the pair should be the same in all nuclei, although the absolute probability of a correlation grows with density – the probability that a nucleon is part of a pair reaches 25% for iron and heavier nuclei.
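In these terms, a candidate correlated pair is tagged by computing the pair’s relative and centre-of-mass momenta and comparing them with the Fermi momentum. A minimal sketch of this selection follows; the momentum vectors are made up purely for illustration, not taken from any of the experiments discussed.

```python
import numpy as np

K_FERMI = 250.0  # MeV/c, typical Fermi-sea level quoted in the text

def pair_kinematics(p1, p2):
    """Return (relative, centre-of-mass) momentum magnitudes in MeV/c
    for two nucleon momentum vectors p1, p2 given in MeV/c."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    p_rel = np.linalg.norm(p1 - p2) / 2.0   # relative momentum of the pair
    p_cm = np.linalg.norm(p1 + p2)          # centre-of-mass momentum
    return p_rel, p_cm

def looks_like_src_pair(p1, p2):
    """SRC signature: large back-to-back relative momentum and small
    centre-of-mass momentum, both relative to the Fermi sea."""
    p_rel, p_cm = pair_kinematics(p1, p2)
    return p_rel > K_FERMI and p_cm < K_FERMI

# Hypothetical nearly back-to-back pair: 400 MeV/c against 380 MeV/c
p_a = [400.0, 0.0, 0.0]
p_b = [-380.0, 30.0, 0.0]
print(looks_like_src_pair(p_a, p_b))   # a correlated-pair candidate
```

Two typical mean-field nucleons, by contrast, have a small relative momentum and fail the selection.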

Scaling effects

Isolating the signal of the SRC initial state has been difficult at low and medium energies because other processes (such as final-state interactions and meson-exchange currents) mimic this effect. Nevertheless, there has recently been progress using modern accelerators with high luminosity and high momentum transfer – as well as kinematics where competing mechanisms are suppressed. For electron scattering, this corresponds to luminosities of 1037 cm–2s–1; a four-momentum transfer, Q2, greater than 1.4 (GeV/c)2; and a focus on kinematics where Bjorken-x, Q2/2mν, is greater than 1, where m is the nucleon mass and ν is the beam energy minus the energy of the scattered electron. For elastic scattering from a free proton, Bjorken-x is exactly 1. At least two nucleons must be involved to have x > 1; x > 2 requires a system with at least three nucleons.
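These kinematic variables follow directly from the beam energy and the scattered-electron measurements. The sketch below (with an illustrative beam energy and scattering angle, not values from the experiments discussed) computes Bjorken-x and checks that elastic scattering off a free proton indeed gives x = 1.

```python
import math

M_P = 0.938272  # proton mass, GeV

def bjorken_x(E_beam, E_scattered, theta):
    """Bjorken x = Q^2 / (2 m nu) for electron scattering.

    E_beam and E_scattered are in GeV; theta is the electron
    scattering angle in radians. nu = E_beam - E_scattered is the
    energy transfer, and Q^2 = 4 E E' sin^2(theta/2).
    """
    q2 = 4.0 * E_beam * E_scattered * math.sin(theta / 2.0) ** 2
    nu = E_beam - E_scattered
    return q2 / (2.0 * M_P * nu)

# For elastic scattering off a free proton, E' is fixed by E and theta:
E, theta = 5.0, math.radians(25.0)
E_elastic = E / (1.0 + (2.0 * E / M_P) * math.sin(theta / 2.0) ** 2)
print(bjorken_x(E, E_elastic, theta))  # equals 1 up to floating-point rounding
```

Values of x above 1 then signal that the struck nucleon carried more momentum than a single free nucleon could, i.e. that more than one nucleon was involved.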

Cross-section ratio

One of the new results has come from inclusive data at high momentum-transfer, Q2 > 1.4 (GeV/c)2, and x > 1 from the Hall B CEBAF Large Acceptance Spectrometer at the US Department of Energy’s Jefferson Laboratory (K S Egiyan et al. 2006). The measurement was made to check the predicted universality of SRCs by measuring the ratio of the inclusive cross-sections off heavy nuclei to those of light nuclei at sufficiently large Q2 and x, where the scattering off slow nucleons in the nucleus does not contribute. The signal predicted to indicate dominance of such correlations is the scaling of the ratios – a weak dependence on x and Q2 for 1 < x <2 – which is clearly observed in the data. Continuing this line of reasoning would suggest that a second scaling region arising from three-nucleon correlations should be observed for x > 2. Indeed, a second scaling region does seem to be present, although the statistics are limited (figure 2). These results reflect the dominance of few-nucleon correlations in the high-momentum component of the nucleus.

While the inclusive data clearly suggest strong local correlations, it has taken exclusive data to confirm that the inclusive scaling arises from SRCs, as well as to measure directly what fraction of nucleon-pair types are involved. In exclusive experiments, using a high-momentum probe to remove one fast nucleon from the nucleus effectively breaks a pair and releases the second nucleon of the correlation. Brookhaven National Laboratory and Jefferson Lab have conducted such tests on the carbon nucleus with a hadronic and electromagnetic probe, respectively. They measured momentum transfers of greater than 1.5 GeV/c and a missing momentum greater than the Fermi momentum of 250 MeV/c.

Fraction of SRC pair combinations

Both experiments have shown that recoiling nucleons, with a momentum above the Fermi-sea level in the nucleus, are part of a correlated pair and both observed the same strength of proton–neutron correlations (Piasetzky et al. 2006; Subedi et al. 2008). This confirms that the process is accessing a universal property of nuclei unrelated to the probe. The Jefferson Lab’s experiment also observed the proton–proton pairs and used matched-acceptance detectors to determine the ratio of neutron–proton to proton–proton pairs as nearly 20, as figure 3 shows. Calculations explain the magnitude of this neutron–proton to proton–proton ratio as arising from the short-range tensor part, or nucleon–nucleon spin-dependent part, of the nucleon–nucleon force (Sargsian et al. 2005; Schiavilla et al. 2007; Alvioli et al. 2008).

Isolating the signatures of SRCs opens new avenues for the exploration of nucleon–nucleon interactions at short distances, particularly in addressing the long-standing question of how close nucleons have to approach before the nucleons’ quarks reveal themselves – the point at which nucleon degrees of freedom can no longer be used to describe the system.

These studies can also influence calculations of extremely massive systems. Without SRCs, a large object, such as a neutron star, could be well approximated as a Fermi gas predominantly of neutrons, with a small fraction of protons acting as a separate Fermi gas. With SRCs the protons and neutrons interact, strongly enhancing the high-momentum component of the proton momentum distribution and changing the physical properties of the system (figure 4).

Momentum space illustration

In the future, inclusive short-range-correlation experiments will improve the statistics of the x > 2 data to show definitively whether or not there is indeed a second scaling region. These will use targets such as 40Ca and 48Ca to measure the dependence on the initial-state proton–neutron ratio. Future exclusive experiments will focus on 4He (a nucleus where both full and mean-field calculations can come together) and push the limits of the recoil momentum to extend our understanding of the repulsive part of the nucleon–nucleon potential.

The Pauli principle faces testing times

The Pauli exclusion principle (PEP), and more generally the spin-statistics connection, plays a pivotal role in our understanding of countless physical and chemical phenomena, ranging from the periodic table of the elements to the dynamics of white dwarfs and neutron stars. It has defied all attempts to produce a simple and intuitive proof, despite being spectacularly confirmed by the number and accuracy of its predictions, because its foundation lies deep in the structure of quantum theory. Wolfgang Pauli remarked in his Nobel Prize lecture (13 December 1946): “Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had the feeling, and I still have it today, that this is a deficiency. The impression that the shadow of some incompleteness fell here on the bright light of success of the new quantum mechanics seems, to me, unavoidable.” Pauli’s conclusion remains basically true today.

CCspi1_01_09

The PEP was a major theme of SpinStat 2008, the workshop on “Theoretical and experimental aspects of the spin-statistics connection and related symmetries”, held in Trieste on 21–25 October at the Stazione Marittima conference centre. Some 60 theoretical and experimental physicists attended, as well as a number of philosophers of science. The aim was to survey recent work that challenges traditional views and to put forward possible new experimental tests, as well as new theoretical frameworks.

A single framework for discussion

On the theoretical side, several researchers are currently exploring theories that may allow a tiny violation of PEP, such as quon theory, the existence of hidden dimensions, geometric quantization and a new spin-statistics connection in the framework of quantum gravity. On the experimental side, several searches for possible small violations of the spin-statistics connection, for both fermions and photons, have been carried out over the past few years. Scientists have thus recently obtained new limits on the validity of PEP for nuclei, nucleons and electrons, as well as on the validity of Bose–Einstein statistics for photons. These results were presented during the workshop and discussed for the first time in a single framework, together with theoretical implications and future perspectives. The aim was to achieve a “constructive interference” between theorists and experimentalists that could lead towards new ideas for nuclear and particle-physics tests of the PEP’s validity, including the interpretation of existing results.

CCspi2_01_09

The workshop benefited from the presence of researchers who have devoted a life’s work to the thorough examination of the structure of the spin-statistics connection in the context of quantum mechanics and field theory. In addition, young scientists put forward suggestions and experimental results that may pave the way to interesting future developments.

Oscar W Greenberg of the University of Maryland opened the workshop with a review talk on theoretical developments, with special emphasis on quon theory – which characterizes particles by a parameter q, where q spans the range from –1 to +1 and thus interpolates between fermion and boson – in an effort to develop more general statistics. Greenberg is the originator of this concept and he continues to be a major contributor to its theoretical development, maintaining a high degree of interest in the field. Robert Hilborn of the University of Texas reviewed the past experimental attempts to find a violation. Other theoretical speakers included distinguished scientists such as Stephen Adler, Michael Berry, Aiyalam P Balachandran, Sergio Doplicher, Giancarlo Ghirardi, Nikolaos Mavromatos and Allan Solomon.

CCspi3_01_09

The experimental reports included presentations on spectroscopic tests of Bose–Einstein statistics of photons by Dmitry Budker’s group at the University of California and the Lawrence Berkeley National Laboratory, and studies of spin-statistics effects in nuclear decays by Paul Kienle’s group at the Stefan Mayer Institute for Subatomic Physics in Vienna. Other talks included results from the Borexino neutrino experiment and the DAMA/LIBRA dark-matter detector in the Gran Sasso laboratory, the KLOE experiment at Frascati, the NEMO-2 detector in the Fréjus underground laboratory and the dedicated VIP (VIolation of the Pauli exclusion principle) experiment in the Gran Sasso laboratory. Each talk was followed by lively discussions concerning the interpretation of the results. Michela Massimi of University College London closed the workshop with an excellent talk on historical and philosophical issues.

Another highlight was the event held for the general public: a reading of selected parts of the book by George Gamow and Russell Stannard, The New World of Mr Tompkins, where the professor depicted in Gamow’s book was played by a witty Michael Berry from the University of Bristol. This event was a success, especially among the young students who participated so enthusiastically.

Overall, the workshop showed that the field is full of new and interesting ideas. Although nobody expects gross violations of the spin-statistics connection, there could be subtle effects that may point to new physics in a context quite different from that of the LHC.

The workshop was sponsored jointly by the INFN and the University of Trieste. It received generous contributions from the Consorzio per la Fisica, the Hadron Physics initiative (Sixth Framework Programme of the EU) and Regione Friuli–Venezia Giulia.

Brookhaven finds more rare kaon decays

The E949 collaboration at Brookhaven National Laboratory has observed three new events of the rare kaon decay K+ → π+νν. This brings the total number observed to seven, four of which were found by E949 and three by its predecessor E787. The branching ratio from all seven candidate events is (1.73 +1.15/–1.05) × 10–10, which is consistent with the Standard Model prediction of (0.85 ± 0.07) × 10–10.

CCnew3_10_08

The decay, K+ → π+νν, which is one of the rarest and most challenging particle decays ever observed, is highly sensitive to physics beyond the Standard Model (SM). The uncertainty of the SM prediction, which involves second-order weak interactions – that is, the exchange of two weak force carrier bosons – is less than 10%. Any deviations uncovered by a precise measurement of this branching ratio could unambiguously signal the presence of new physics effects that are predicted in extensions to the SM.

The experimental signature for K+ → π+νν decay is the detection of a solitary positively charged pion, since the emitted neutrino and anti-neutrino pair interacts too weakly to be detected; unfortunately, the sought-after signal resembles many other kaon decay channels. To identify the pion unambiguously and to ensure that no other observable decay particles were present, the collaboration created one of the most efficient particle-detection systems ever built. They also employed unbiased “blind” analysis techniques, which were pioneered by E787 and are now frequently used in modern high-energy physics experiments.

The three new events, which were obtained in a sample of 1.7 × 1012 kaon decays, were observed in a low-energy pion region (see figure). This presented an even greater experimental challenge than the high-energy pion region, owing to additional processes that can mimic the K+ → π+νν decay signature. The total background expected was 0.93 ± 0.17 (stat.) +0.32/–0.24 (syst.) events, primarily from π+ scattering in the stopping target.
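The weight of three observed events against an expected background of about 0.93 can be gauged with a simple counting argument: the probability that the background alone fluctuates up to three or more events. The sketch below is illustrative only – the collaboration’s actual significance estimate uses a likelihood that also incorporates the background uncertainties.

```python
import math

def poisson_p_value(n_obs, mu_bkg):
    """P(N >= n_obs) for a Poisson-distributed background with mean
    mu_bkg, i.e. the chance that background alone produces at least
    the observed number of events."""
    p_below = sum(math.exp(-mu_bkg) * mu_bkg**k / math.factorial(k)
                  for k in range(n_obs))
    return 1.0 - p_below

# Three candidate events over an expected background of 0.93
print(f"background-only p-value: {poisson_p_value(3, 0.93):.3f}")
```

A p-value at the few-per-cent level is consistent with the cautious tone of the result: the events favour a signal but do not by themselves establish one.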

The result confirms detailed predictions of the SM at higher orders. Given the level of statistical uncertainty associated with the result, only limits on new physics beyond the SM can be inferred. However, a new generation of K+ → π+νν measurements at the NA62 experiment at CERN aims for a precision comparable to that of the current SM prediction.

• The E949 and E787 experiments at Brookhaven’s Alternating Gradient Synchrotron included more than 100 collaborators from Canada, China, Japan, Russia and the US.
