Metallic water becomes even more accessible

Water is well known for its astonishing range of unusual properties, and now Thomas Mattsson and Michael Desjarlais of Sandia National Laboratories in New Mexico have suggested yet another one. They found that water should have a metallic phase at temperatures of 4000 K and pressures of 100 GPa, which are a good deal more accessible than earlier calculations had indicated.

The two researchers used density functional theory to calculate from first principles the ionic and electronic conductivity of water across a temperature range of 2000–70,000 K and a density range of 1–3.7 g/cm³. Their calculations showed that as the pressure increases, molecular water turns into an ionic liquid, which at higher temperatures is electronically conducting, in particular above 4000 K and 100 GPa. This is in contrast to previous studies that indicated a transition to a metallic fluid above 7000 K and 250 GPa. Interestingly, this metallic phase is predicted to lie just next to insulating “superionic” ice, in which the oxygen atoms are locked into place but all the hydrogen atoms are free to move around.

Suitable conditions for metallic water should exist on the giant gas planets. In particular, the line of constant entropy (isentrope) on the planet Neptune is expected to lie in the region of temperature and pressure suggested by these studies for the metallic liquid phase.

Trieste focuses on hadrons

The fifth Perspectives in Hadronic Physics conference was held on 22–26 May at the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste. The latest in a series organized every second year by the ICTP and the Italian Istituto Nazionale di Fisica Nucleare (INFN), this year’s conference was also sponsored by the Consorzio per la Fisica, Trieste, and the Department of Physics, University of Perugia. It brought together around 100 theorists and experimentalists for more than 60 plenary talks, focusing on present and future theoretical and experimental activities in hadronic physics and relativistic particle–nucleus and nucleus–nucleus scattering.

A major success of the conference was the joint participation of the hadronic and heavy-ion communities. This was reflected in the wide range of topics, from the structure of the hadron at low virtuality, to the investigation of the states of matter under extreme conditions and the possible formation of quark–gluon plasma in high-energy heavy-ion collisions. This article presents a summary of the broad range covered by the speakers.

Hadrons, in vacuo and in the medium

The first part of the conference focused on the study through quantum chromodynamics (QCD) of free hadrons and the properties of hadrons in the medium – that is, in nuclear matter. It covered a broad spectrum of theoretical approaches and experimental investigations, including hadron structure and the quantities that describe it, namely form factors, structure functions and generalized parton distributions (GPDs). In this context there was much emphasis on the appreciable amount of experimental work undertaken at Jefferson Lab and at the Mainz Microtron (MAMI), for example, casting important light on the role of strange quarks in the nucleon.

MAMI has also obtained values of the masses and widths of mesons in the medium, which appear to differ appreciably from the free case. The possibility of an ω-nucleus bound state was suggested. Scalar and axial vector mesons can be generated dynamically within a chiral dynamics approach, which was presented in detail.

Information from Jefferson Lab on the nucleon-spin structure function from almost real photon scattering to the deep-inelastic scattering (DIS) region was reviewed at the meeting, and recent results were reported on the use of semi-inclusive DIS on the proton and the deuteron as a tool for investigating the up and down quark densities. Quark–hadron duality was discussed both in its theoretical and experimental aspects, illustrating how recent data from Jefferson Lab can be used to extract the higher twist contributions to the moments of parton distribution functions, which are sensitive to the quark–gluon correlations in the nucleon. Several talks presented recent results on the calculations of hadron form factors and cross-sections in terms of relativized quark models. These included the consideration of higher Fock components in the hadron wave functions.

Exclusive hard processes were discussed in terms of a new nonperturbative quantity that describes the hadron-to-photon transition, for instance in virtual Compton scattering in the backward region. The usefulness of this approach was illustrated in forward exclusive meson-pair production in γγ* scattering.

GPDs were the subject of detailed discussion at the meeting, with a report on the impressive experimental results from Hall A and the Deeply Virtual Compton Scattering Collaboration at Jefferson Lab. These experiments have accessed the twist-2 term in the proton, which is a linear combination of GPDs, and they find almost no dependence on momentum-transfer squared, Q². This is in good agreement with the theoretical expectation where the process described by the so-called “handbag” diagram dominates.

Turning to the calculation of GPDs, a meson-cloud model allows their computation at the hadronic scale, while GPDs for a nuclear target have been calculated in a constituent quark model, which shows nuclear effects to be much larger than expected. Theoretical results for Compton scattering and two-photon annihilation into pairs of hadrons within the handbag approach were compared with data from Jefferson Lab and from the Belle experiment at the KEKB facility. The meeting presented indications for the presence and role of two different non-perturbative scales in hadronic structure, while showing that complementary information on the 3D parton structure of the hadron is accessible by measuring multiple parton distributions in hadron–nucleus reactions.

Cold nuclear matter figured in several talks, which presented new experimental and theoretical results. These clearly demonstrated that our knowledge of nuclei has now reached the stage of quantitative access to nucleon–nucleon correlations.

Mechanisms for quark hadronization and hadron formation in the medium were another important topic. The HERMES collaboration at DESY reported a clear attenuation of various leading hadrons in heavy targets compared with the DIS process on deuterium. Theoretical interpretations of these nuclear effects are based either on inelastic hadron interactions or on quark energy loss. Successful interpretations of the data provide information on the time needed to produce a colour-neutral precursor, which eventually fragments into a leading hadron. Preliminary data at a lower photon energy from the CEBAF Large Acceptance Spectrometer at Jefferson Lab, particularly on the transverse-momentum broadening, are expected to shed new light on these effects owing to the finite formation time of hadrons.

Several presentations showed that eventually it will be possible to explore QCD where the strength of nonlinearities is substantially higher than at DESY’s HERA electron–proton collider. The meeting also discussed ultraperipheral collisions, which will allow the study of structure functions at low Q², and diffraction at very high energies.

From hadrons to quark–gluon plasma

The heavy-ion part of the conference began with an overview of saturation physics from the Relativistic Heavy Ion Collider (RHIC) to the Large Hadron Collider (LHC). Here the emphasis was on experimental signals for the so-called colour glass condensate, followed by the theoretical aspects of saturation and shadowing physics at small values of Bjorken x.

The modification of the jet shapes in the jet-quenching phenomenon at RHIC seems to provide an efficient tool for probing the soft gluon radiation induced by the produced medium. At both RHIC and the LHC, the ratios of heavy to light mesons at large transverse momentum offer solid possibilities for checking the formalisms for energy loss. Photon-tagged correlations have also proved to be efficient tools for extracting both the vacuum and the medium-modified fragmentation functions, in proton–proton and nucleus–nucleus scattering. The strength of the jet-quenching process depends on the medium transport coefficient q̂, but this dependence is weakened by the geometrical bias that favours the production of the hard parton at the periphery of the medium. Nonetheless, its precise value is of considerable importance for interpreting the present RHIC data and for foreseeing the amount of quenching in lead–lead collisions at the LHC. A recent non-perturbative estimate of this quantity, using the anti-de Sitter space/conformal field theory correspondence, was presented.

Recent measurements of J/ψ production in deuteron–gold and gold–gold collisions by the PHENIX collaboration at RHIC seem to be consistent with a weak shadowing effect together with the possible inelastic interaction of the J/ψ meson in cold nuclear matter. In the heavy-ion collisions, the J/ψ suppression at RHIC is remarkably similar to that observed at CERN’s Super Proton Synchrotron (SPS), despite the much larger energy density reached at RHIC. The reason for this is not yet clear and may be due to the formation of J/ψ states from the statistical recombination of charm quarks in the medium.

The soft-physics side reported recent numerical calculations on plasma instabilities, which attempt to determine the behaviour of an anisotropic non-abelian medium on long time scales. Observables measured in two-pion (Hanbury Brown–Twiss) interferometry and in pion spectra at RHIC are consistent with the emission of pions from a system that has a restored chiral symmetry. The recent preliminary data from the NA60 experiment on ρ-meson production in indium–indium collisions at SPS energies were discussed. While the measurements compare well with expectations for the broadening of the ρ width, these data tend to exclude the drop in mass expected from Brown–Rho scaling, which predicts the in-medium mass to be proportional to the q̄q condensate. More detailed presentations complemented overviews of heavy-ion collisions at intermediate energies and the physics programme for the ALICE experiment at the LHC.

The conference then focused on spectroscopic studies and the production of exotic states. The new exotic states discovered at around 4 GeV by Belle and the BaBar experiment at SLAC can be understood as diquark–antidiquark (qq–q̄q̄) states. The meeting also covered various problems related to dense hadronic matter – in particular, the high-temperature phase of QCD, bifurcations in the physics of strong gluon fields, the topological structure of dense hadronic matter, and the possibility of measuring the production of “strangelets” at the LHC using the Centauro And STrange Object Research (CASTOR) detector at CMS.

The conference also discussed the formation and properties of ultra-dense quark matter in stars, from several different aspects. These included the colour-superconducting quark matter that can be formed in the core of compact stars, the various many-body approaches to the treatment of the equation of state of nuclear matter at baryon densities several times that of normal nuclei, and the quark-deconfinement model of gamma-ray bursts.

The last part of the meeting focused on the presentation of the most relevant plans for future experimental facilities. These included overviews on the future Facility for Antiproton and Ion Research (FAIR) at GSI and the broad programme of physics at the Japan Proton Accelerator Research Complex (J-PARC), as well as physics at Jefferson Lab with the CLAS detector. The possibilities offered by a high-luminosity electron–ion collider were also discussed.

The conference closed with a talk from the 2005 physics Nobel prize winner, Roy Glauber from Harvard, who described his pioneering work on quantum optics and its relationship to heavy-ion physics. The hadronic and heavy-ion communities are now looking forward to the sixth Perspectives in Hadronic Physics ICTP conference.

Can experiment access Planck-scale physics?

Physics on the large scale is based on Einstein’s theory of general relativity, which interprets gravity as the curvature of space–time. Despite its tremendous success as an isolated theory of gravity, general relativity has proved problematic in its integration with physics as a whole, and in particular with the physics of the very small, which is governed by quantum mechanics. There can be no unification of physics that does not include both general relativity and quantum mechanics. Superstring theory and its recent extension to the more general theory of branes is a popular candidate for a unified theory, but the links with experiment are very tenuous. The approach known as loop quantum gravity attempts to quantize general relativity without unification, and has so far received no obvious experimental verification. The lack of experimental guidance has made the issue extremely hard to pin down.

One hundred years ago, when Max Planck introduced the constant named after him, he also introduced the Planck scales, which combined his constant with the velocity of light and Isaac Newton’s gravitational constant to give the fundamental Planck time around 10⁻⁴³ s, the Planck length around 10⁻³⁵ m and the Planck mass around 10⁻⁸ kg. Experiments on quantum gravity require access to these scales, but direct access using accelerators would require machines that reach an energy of 10¹⁹ GeV, well beyond the reach of any experiments currently conceivable.

For almost a century it has been widely perceived that the lack of experimental evidence for quantum gravity presents a major barrier to a breakthrough. One possible way of investigating physics at the Planck scale, however, is to use the kind of approach developed by Albert Einstein in his study of thermal fluctuations of small particles through Brownian motion, where he showed that the visible motion provided a window onto the invisible world of molecules and atoms. The idea is to access the Planck scale by observing decoherence in matter waves caused by quantum fluctuations, as first proposed using neutrons more than 20 years ago by CERN’s John Ellis and colleagues (Ellis et al. 1984). Since then, ultra-cold atom technologies have advanced considerably, and armed with the sensitivity of modern atomic matter-wave interferometry we are now in a position to consider using “macroscopic” instruments to access the Planck scales, a possibility that William Power and Ian Percival outlined more recently (Power and Percival 2000).

Our recent work represents a new approach to gravitationally produced decoherence near the Planck scale (Wang et al. 2006). It has been made possible by the recent discovery by one of us of the conformal structure – the scaling property of geometry – of canonical gravity, one of the earliest important approaches to quantum gravity. This leads to a theoretical framework in which the conformal field interacts with gravity waves at zero-point energy using a conformally decomposed Hamiltonian formulation of general relativity (Wang 2005). Working in this framework, we have found that the effects of ground-state gravitons on the geometry of space–time can lead to observable effects by causing quantum matter waves to lose coherence.

The basic scenario is that near the Planck scale, ground-state gravitons constantly stretch and squash the geometry of space–time causing conformal fluctuations in space–time. This process is analogous to the Brownian motion of a pollen particle interacting with ambient molecules of much smaller sizes. It means that information on gravitons near the Planck scale can be extracted by observing the conformal fluctuations of space–time, which can be done by analysing their blurring effects on coherent matter waves.

The curvature of space–time produces changes in proper time, the time measured by moving clocks. For sufficiently short time intervals, near the Planck time, proper time fluctuates strongly owing to quantum fluctuations. For longer time intervals, proper time is dominated by a steady drift due to smooth space–time. Proper time is therefore made up of the quantum fluctuations plus the steady drift. The boundary separating the shorter time-scale fluctuations from the longer time-scale drifts is marked by a cut-off time, τcut-off, which defines the borderline between the semi-classical and fully quantum regimes of gravity. It is given by τcut-off = λTPlanck, where TPlanck is the Planck time and λ is a theory-dependent parameter determined by the amplitude of the zero-point gravitational fluctuations. A lower limit on λ follows from noting that the quantum-to-classical transition should occur at length scales λLPlanck that are greater than the Planck length LPlanck by a few orders of magnitude, so we can expect λ > 10².
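
A quick numerical check of the scales quoted above is straightforward. The Python sketch below (illustrative only) evaluates the Planck time, length and mass from ħ, G and c, together with the cut-off time τcut-off for the lower limit λ = 10² suggested in the text.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.674e-11           # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)  # Planck time, ~5e-44 s (~10^-43 s)
l_planck = math.sqrt(hbar * G / c**3)  # Planck length, ~1.6e-35 m
m_planck = math.sqrt(hbar * c / G)     # Planck mass, ~2.2e-8 kg

lam = 1e2                    # lower limit on the theory-dependent parameter
tau_cutoff = lam * t_planck  # borderline between quantum and semi-classical regimes

print(f"T_Planck   ~ {t_planck:.2e} s")
print(f"L_Planck   ~ {l_planck:.2e} m")
print(f"M_Planck   ~ {m_planck:.2e} kg")
print(f"tau_cutoff ~ {tau_cutoff:.2e} s for lambda = {lam:.0e}")
```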

A matter-wave interferometer can be used to measure quantum decoherence due to fluctuations in space–time, and hence provide experimental guidance to the value of λ. In an atom interferometer an atomic wavepacket is split into two wavepackets, which follow different paths before recombining (see “Atom interferometer”). The phase change of each wavepacket is proportional to the proper time along its path, resulting in an interference pattern when the wavepackets recombine. The detection of the decoherence due to space–time fluctuations on the Planck scale would provide experimental access to quantum-gravity effects, analogous to the access to atomic scales provided by Brownian motion.

In our analysis we found an equation that gives λ in terms of three measurable quantities (Wang et al. 2006):

M is the mass of the quantum particle; T is the separation time before two wavepackets recombine; and Δ denotes the loss of contrast of the matter wave and is a measure of the decoherence (Wang et al. 2006). Existing matter-wave experiments set limits on the size of λ, their sensitivity depending on both Δ and M. Results using caesium atom interferometers (Chu et al. 1997) and also from a fullerene C70 molecule interferometer (Hackermüller et al. 2004), with its larger value of M, both set a lower bound for λ of the order of 10⁴, well within the theoretical limit of λ > 10². This suggests that the sensitivities of advanced matter-wave interferometers may well be approaching the fundamental level due to quantum space–time fluctuations. Investigating Planck-scale physics using matter-wave interferometry may therefore become a reality in the near future.

Further improved measurements will confirm and refine this bound on λ, pushing it to higher values. An atom interferometer in space, such as the proposed HYPER mission, could provide such improvements. However, the lower bound of λ calculated using current experimental data is already within the expected range. This is a very good sign and strongly suggests that the measured decoherence effects are converging towards the fundamental decoherence due to quantum gravity. Therefore, a space mission flying an atom-wave interferometer with significantly improved accuracy will be able to investigate Planck-scale physics.

As well as causing quantum matter waves to lose coherence at small scales, the conformal gravitational field is responsible for cosmic acceleration linked to inflation and the problem of the cosmological constant. Our formula, which relates the measured decoherence of matter waves to space–time fluctuations, is a “minimum” estimate in the sense that ground-state matter fields have not been taken on board. Their inclusion may further increase the estimated conformal fluctuations and result in an improved “form factor” in our formula. In this sense, the implications go beyond quantum gravity to more generic physics at the Planck scale. Furthermore, it opens up new perspectives on the interplay between the conformal dynamics of space–time and vacuum energy due to gravitons, as well as elementary particles. (A well known example of vacuum energy is provided by the Casimir effect.) These may have important consequences for cosmological problems such as inflation and dark energy.

Precision pins down the electron’s magnetism

The electron’s magnetic moment has recently been measured to an accuracy of 7.6 parts in 10¹³ (Odom et al. 2006). As figure 1a indicates, this is a six-fold improvement on the last measurement of this moment, made nearly 20 years ago (Van Dyck et al. 1987). The new measurement and the theory of quantum electrodynamics (QED) together determine the fine structure constant to 0.70 parts per billion (Gabrielse et al. 2006). This is nearly 10 times more accurate than has so far been possible with any rival method (figure 1b). Higher accuracies are expected, based upon the convergence of many new techniques – the subject of a half-dozen Harvard PhD theses during the past 20 years. A one-electron quantum cyclotron, cavity-inhibited spontaneous emission, a self-excited oscillator and a cylindrical Penning trap contribute to the extremely small uncertainty. For the first time, researchers have achieved spectroscopy with the lowest cyclotron and spin levels of a single electron fully resolved via quantum non-demolition measurements, and a cavity shift of g has been directly observed.

Unusual features

A circular storage ring is the key to these greatly improved measurements, but the storage ring is unusual compared with those at CERN, for example. To begin with, it uses only one electron, stored and reused for months at a time. The radius of the storage ring is much less than 0.1 µm, and the electron energy is so low that we use temperature units to describe it – 100 mK. Furthermore, the electron does not orbit in a familiar circular orbit even though it is in a magnetic field; instead, it makes quantum jumps between only the ground state and the first excited state of its cyclotron motion – non-orbiting stationary states. It also makes quantum jumps between spin-up and spin-down states. Blackbody photons stimulate transitions between the two lowest cyclotron states until we cool our storage ring to 100 mK to essentially eliminate them. The spontaneous emission of synchrotron radiation is suppressed by the electron’s low energy and by locating the electron in the centre of a microwave cavity. The damping time is typically about 10 seconds, about 10²⁴ times slower than for a 10⁴ GeV electron in the Large Electron–Positron collider (LEP). To confine the electron weakly we add an electrostatic quadrupole potential to the magnetic field by applying appropriate potentials to the surrounding electrodes of a Penning trap, which is also a microwave cavity (figure 2a).

The lowest cyclotron and spin energy levels for an electron in a magnetic field are shown in figure 2b. (Very small changes to these levels from the electrostatic quadrupole and special relativity are well understood and measured, though they cannot be described in this short report.) Microwave photons introduced into our trap cavity stimulate cyclotron transitions from the ground state to the first excited state. The long cyclotron lifetime allows us to turn on a detector to count the number of quantum jumps for each attempt as a function of cyclotron frequency νc (figure 3d). A similar quantum jump spectroscopy is carried out as a function of the frequency of a radiofrequency drive at a frequency νa = νs – νc, which stimulates a simultaneous spin flip and cyclotron excitation, where νs is the spin precession frequency (figure 3c). The lineshapes are understood theoretically. One-quantum cyclotron transitions (figure 3b) and spin flips (figure 3a) are detected with good signal-to-noise from the small shifts that they cause to an orthogonal, classical electron oscillation that is self-excited.

The dimensionless electron magnetic moment is the magnetic moment in units of the Bohr magneton, eℏ/2m, where the electron has charge –e and mass m. The value of g is determined by a ratio of the frequencies that we measure, g/2 = 1 + νa/νc, with the result that g/2 = 1.00115965218085(76) [0.76 ppt]. The uncertainty is nearly six times smaller than in the past, and g is shifted downwards by 1.7 standard deviations (Odom et al. 2006).
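
To illustrate how g follows from the two measured frequencies, here is a minimal sketch. The magnetic field value is an assumption chosen only to give frequencies of roughly the right size; it is not the experiment’s actual field.

```python
import math

e = 1.602176634e-19   # elementary charge, C
m = 9.1093837015e-31  # electron mass, kg
B = 5.4               # assumed magnetic field, tesla (illustrative only)

nu_c = e * B / (2 * math.pi * m)  # cyclotron frequency, ~150 GHz

# Invert the measured g/2 to get a plausible anomaly frequency...
g_half_measured = 1.00115965218085
nu_a = (g_half_measured - 1.0) * nu_c

# ...then run the logic the way the experiment does: two frequencies in, g out.
g_half = 1.0 + nu_a / nu_c
print(f"nu_c ~ {nu_c / 1e9:.0f} GHz, nu_a ~ {nu_a / 1e6:.0f} MHz")
print(f"g/2 = {g_half:.14f}")
```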

What can be learned from the more accurate electron g? The first result beyond g itself is the fine structure constant, α = e²/4πε₀ℏc – the fundamental measure of the strength of the electromagnetic interaction, and also a crucial ingredient in our system of fundamental constants. A Dirac point particle has g = 2. QED predicts that vacuum fluctuations and polarization slightly increase this value. The result is an asymptotic series that relates g and α:

g/2 = 1 + C₂(α/π) + C₄(α/π)² + C₆(α/π)³ + C₈(α/π)⁴ + … + aµτ + ahadronic + aweak   (Eq. 1)

According to the Standard Model, hadronic and weak contributions are very small and believed to be well understood at the accuracy needed. Impressive QED calculations give exact C₂, C₄ and C₆, a numerical value and uncertainty for C₈, and a small aµτ. Using the newly measured g in equation 1 gives α⁻¹ = 137.035999710(96) [0.70 ppb] (Gabrielse et al. 2006). The total uncertainty of 0.70 ppb is 10 times smaller than for the next most precise methods (figure 1b), which determine α from measured mass ratios and optical frequencies, together with rubidium (Rb) or caesium (Cs) recoil velocities.
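
The extraction of α amounts to inverting the series in equation 1 numerically. The sketch below does this with Newton’s method; the coefficients are published QED values truncated to a few digits, and the tiny µ/τ, hadronic and weak terms are lumped into a single assumed constant, so the final digits are only indicative.

```python
import math

# QED coefficients for a_e = g/2 - 1, truncated for brevity
C = [0.5, -0.328478965, 1.181241456, -1.9144]
a_extra = 1.7e-12  # assumed lump sum for the mu/tau, hadronic and weak terms

a_e = 1.00115965218085 - 1.0  # measured g/2 minus the Dirac value

def a_qed(x):
    """Anomaly as a function of x = alpha/pi."""
    return sum(c * x**(n + 1) for n, c in enumerate(C)) + a_extra

# Newton's method, seeded with the leading (Schwinger) term a_e ~ x/2
x = 2.0 * a_e
for _ in range(10):
    slope = sum((n + 1) * c * x**n for n, c in enumerate(C))
    x -= (a_qed(x) - a_e) / slope

print(f"1/alpha ~ {1.0 / (math.pi * x):.9f}")  # close to 137.035999710
```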

The second use of the newly measured electron g is in testing QED. The most stringent test of QED – which is one of the most demanding comparisons of any calculation and experiment – continues to come from comparing measured and calculated g-values, the latter using an independently measured α as an input. The new g, compared with equation 1 with α(Cs) or α(Rb), gives a difference δg/2 < 15 × 10⁻¹² (see Gabrielse 2006 for details and a discussion). The small uncertainties in g/2 will allow a 10 times more demanding test if ever the large uncertainties in the independent α values can be reduced. The prototype of modern physics theories is thus tested far more stringently than its inventors ever envisioned – as Freeman Dyson remarks in his letter at the beginning of the article – with better tests to come.

The third use of the measured g is in probing the internal structure of the electron – limiting the electron to constituents with a mass m* > m/√(δg/2) = 130 GeV/c², corresponding to an electron radius R < 1 × 10⁻¹⁸ m. If this test were limited only by our experimental uncertainty in g, then we could set a limit m* > 600 GeV/c². This is not as stringent as the related limit set by LEP, which probes for a contact interaction at 10.3 TeV. However, the limit is obtained quite differently, and is somewhat remarkable for an experiment carried out at 100 mK.
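
These numbers follow from the quoted relation with a couple of lines of arithmetic; a minimal check, taking δg/2 from the comparison limit in the previous paragraph:

```python
import math

m_e = 0.000511       # electron mass, GeV/c^2
hbar_c = 0.1973e-15  # hbar*c, GeV m

dg_half = 15e-12     # |delta g|/2 from the QED comparison
m_star = m_e / math.sqrt(dg_half)  # constituent mass scale, GeV/c^2
radius = hbar_c / m_star           # corresponding electron radius, m

print(f"m* > {m_star:.0f} GeV/c^2, R < {radius:.1e} m")  # ~130 GeV/c^2, ~1.5e-18 m

# Using the experimental uncertainty alone (0.76e-12) instead:
print(f"m* > {m_e / math.sqrt(0.76e-12):.0f} GeV/c^2")   # ~600 GeV/c^2
```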

The fourth use of the new electron g concerns measurements of the muon g – 2 as a way to search for physics beyond the Standard Model. Even though the muon g values have nearly 1000 times larger uncertainties than the new electron g, heavy particles – possibly unknown in the Standard Model – are expected to make a contribution that is much larger for the muon. However, this contribution would still be very small compared with the calculated QED contribution, which depends on α and must be subtracted out. The electron g provides α and a confidence-building test of QED, both needed for the large subtraction.

CERN has long embraced particle physics at whatever energy scales are most appropriate for learning about fundamental reality. It is impressive that CERN is replacing the highest-energy electron–positron collider, LEP, with the world’s highest-energy proton collider, the Large Hadron Collider. At CERN, however, the lowest-energy antiproton storage rings are also operating. One antiproton cooled to 4.2 K was used to show that the magnitudes of q/m for the proton and antiproton were the same to better than nine parts in 10¹¹ – the most stringent test of CPT invariance with a baryon system.

Now, these low-energy antiproton techniques are being used to make the coldest possible antihydrogen atoms, to be used for higher-precision tests of fundamental symmetries. It is fitting that the new measurement of the electron magnetic moment and the fine structure constant were carried out in the lab of a long-time CERN researcher, since they illustrate the power of low-energy techniques of the sort that we are applying to antihydrogen studies at CERN’s Antiproton Decelerator facility, the unique source of low-energy antiprotons.

D0 finds evidence for WZ pair production

The D0 Collaboration at Fermilab has announced the first measurement of the cross-section for WZ pair production in proton–antiproton collisions. The cross-section times branching ratio for the process is the smallest ever measured at a hadron collider. The data for this result were taken from more than 1 fb⁻¹ of total collision data at the Tevatron, and a sample of 1.5 thousand million events.

Making this measurement requires events in which both the W and the Z boson decay to leptons, but while such events provide the cleanest signature of WZ events, they constitute only 1.4% of all WZ decays. D0 found 12 events, each containing three charged leptons with high transverse momentum together with missing transverse energy (indicating an undetected neutrino), with an expected background of 3.6 ± 0.2 events. The probability that the background accounts for these 12 events is 4.1 × 10⁻⁴, which constitutes 3.3σ evidence for WZ pair production. With these events D0 measures the WZ production cross-section to be 4.0 +1.9/–1.5 pb, which is consistent with the Standard Model prediction of 3.6 ± 0.3 pb.
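
The quoted significance can be roughly reproduced as the one-sided probability for a Poisson background of mean 3.6 to fluctuate up to 12 or more events. The sketch below neglects the ±0.2 uncertainty on the background, which the full analysis folds in, so the numbers differ slightly from those quoted.

```python
from scipy.stats import norm, poisson

n_obs = 12  # observed candidate events
b = 3.6     # expected background (its uncertainty is neglected here)

# Probability for the background alone to fluctuate to >= n_obs events
p_value = poisson.sf(n_obs - 1, b)

# Convert to a one-sided Gaussian significance
z = norm.isf(p_value)
print(f"p = {p_value:.1e}, significance ~ {z:.1f} sigma")
```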

The coupling of the weak vector bosons is an important consequence of the non-Abelian nature of the Standard Model, and the rate for the associated production of W and Z bosons in proton–antiproton collisions allows this coupling to be probed. The kinematics of the Z boson decay can also be used to characterize the interaction between the W and Z and provide further constraints on the nature of the electroweak force. In addition, measuring the cross-section times branching ratio for Standard Model processes with such low rates is an important stepping stone in the search for the Higgs boson at the Tevatron.

Trap gives precise new value for fine structure constant

A team at Harvard University has made a new precise measurement of the electron magnetic moment, which in turn allows the fine structure constant to be determined with an uncertainty 10 times smaller than previously attained.

Gerald Gabrielse and colleagues have measured the value of the constant g of the electron, which relates its magnetic moment to the Bohr magneton, eℏ/2m, where e is the size of the charge on the electron, and m is the electron’s mass. For a Dirac point particle of spin 1/2, g should have a value of 2, but quantum electrodynamics (QED) predicts a value slightly higher, owing to vacuum fluctuations and polarization effects.

To measure g more precisely than before, the Harvard team has resolved the cyclotron and spin energy levels of an electron confined for several months in a cylindrical Penning trap cooled to 100 mK (Odom et al. 2006). The value they obtained is g/2 = 1.00115965218085(76); the uncertainty of 0.76 ppt is nearly six times smaller than that of the previous measurement, made nearly 20 years ago (Van Dyck et al. 1987).

Working with Cornell University and RIKEN in Japan, Gabrielse and colleagues have used this new value of g with a prediction from QED involving 891 eighth-order Feynman diagrams to determine a new value for the fine structure constant, α. They obtain α⁻¹ = 137.035999710(96), that is, with an uncertainty of 0.70 ppb – an uncertainty that is about 10 times smaller than for any rival method to determine α (Gabrielse et al. 2006).

ANTARES Collaboration detects its first muons

On 14 February 2006 the first fully instrumented ANTARES detector line was deployed and anchored at a predetermined position on the bottom of the Mediterranean Sea, about 40 km off the coast of Toulon and at a depth of 2500 m. On 2 March, a remote-controlled submarine connected the line to the junction box, the terminal at the end of the 40 km telecommunications cable that leads to the shore station at La Seyne-sur-Mer. On the same day the line recorded its first cosmic-ray tracks. This is the first of 12 lines to be deployed over the next 18 months, after many years of tests that have investigated conditions at the detector site and parts of the detector set-up. An instrumentation line has been taking data smoothly since April 2005 (Aguilar et al. 2006).

ANTARES is one of only a few detectors employing natural seawater or ice as the detector medium to search for neutrinos of extraterrestrial origin. These neutrinos may have been produced in high-energy events in the cosmos, travelling towards us undisturbed by intervening matter or magnetic fields. If their direction can be determined then their origin in the universe can be identified. High-energy neutrinos could also be indicators for certain types of dark matter.

The neutrino is weakly interacting and this sets the scale of the detectors. Only truly gargantuan sizes allow the detection of neutrinos with sufficient sensitivity to be useful. It was an idea of Moissey Markov in 1960 that gave impetus to the possibility of neutrino astronomy. He reasoned that if one concentrated on muon-neutrinos through the detection of a muon produced in a charged-current interaction, then the large range of the muon in matter would allow for large effective volumes. The direction of the muon is closely related to the direction of the neutrino, and if the detection medium is water or transparent ice then the muon can be tracked through its emission of Cherenkov radiation.

For ANTARES, the Mediterranean Sea and the rock below the seabed provide the interaction volume, and the water provides the detection medium. Because of its large scattering length for Cherenkov light, the seawater allows excellent timing and, consequently, good directional accuracy can be obtained. The Mediterranean was chosen because it is in the Northern Hemisphere and provides complementary sky coverage, including the centre of our galaxy, to the AMANDA and IceCube detectors that are operating in the Antarctic ice.

The overall detector consists of storeys suspended at intervals of 14.5 m along a 500 m vertical cable, which is anchored to the sea floor and held vertical by a buoy at the top of the cable. The storeys begin at 100 m above the seabed and there are 25 such storeys on a line. Placing more of these cables at distances of 70 m increases the volume of the detector.

Figure 1 shows a storey in situ in the sea, with the glass pressure spheres housing the 25 cm diameter photomultiplier tubes (PMTs) that are used to detect the Cherenkov light. The PMTs point downwards at an angle of 45° with respect to the vertical. Two are clearly visible, whereas the third is hidden behind the titanium cylinder that contains the readout and control electronics for the storey, together with an electronic compass and a tilt meter. A hydrophone, used for acoustic positioning, is located at the bottom of the storey.

The PMTs operate at a threshold of 0.4 photoelectrons. All the data produced by the tubes are transferred via an optical cable to the shore, where a farm of computers processes the data to extract interesting events. Because of radioactive potassium present in the seawater, each PMT has a base rate of about 60 kHz; bioluminescent life in the seawater may increase this rate. It is the task of the software running on the computer farm to recognize the presence of a muon track among the background hits. At present the software is able to perform this task up to about five times the base rate. However, conditions at the bottom of the sea can vary significantly. There was a period of relative calm just after deployment, followed by two months of high bioluminescent activity, making data-taking difficult; now the background activity has subsided and normal data-taking has resumed.

The present software selects slices in time and searches for the passage of muons through their time patterns in the PMTs along the string. The final stage of the process is a χ² minimization fit to the height versus time pattern. The reconstructed data set is dominated by down-going muons originating from high-energy cosmic-ray showers in the atmosphere. The main signature of a neutrino-induced muon is that it originates from below, in which case the Earth acts as a very effective filter against the directly produced muons.
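
As a toy illustration of this fit, consider a relativistic muon travelling straight down a single line. The storey geometry below follows the description earlier in the article (25 storeys spaced 14.5 m, starting 100 m above the seabed); the timing resolution and all other numbers are invented for the example, and the real reconstruction is of course far more sophisticated.

```python
import numpy as np

C_LIGHT = 0.299792458  # speed of light, m/ns

# Storey heights above the seabed
z = 100.0 + 14.5 * np.arange(25)

# Simulated hit times for a down-going muon at ~c, smeared by an
# assumed 1.5 ns PMT timing resolution
rng = np.random.default_rng(1)
t0, z0, sigma_t = 50.0, z.max(), 1.5  # ns, m, ns
t = t0 + (z0 - z) / C_LIGHT + rng.normal(0.0, sigma_t, z.size)

# For a track at fixed speed the model is linear in time, so the
# chi-square minimisation reduces to a straight-line least-squares fit
slope, intercept = np.polyfit(t, z, 1)
print(f"fitted vertical speed = {abs(slope):.3f} m/ns (c = {C_LIGHT:.3f} m/ns)")
```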

Figure 2 shows two examples of reconstructed tracks from data from the first ANTARES line. Figure 2a shows a vertical muon track, where the signal propagates down the line with the velocity of the muon, and figure 2b shows a slightly more inclined track identified by the change in the signal’s vertical propagation velocity, before and after the closest approach. So far several thousand tracks of down-going muons have been reconstructed and a few candidates for up-going muons have also been observed.

The ANTARES Collaboration is now in full swing analysing the data coming from this first detector line. The experience of the first line shows that we are on track towards a full neutrino telescope in the Mediterranean and we look forward to several years of data-taking.

Nuclear physics helps unravel the universe

Understanding our universe from basic physics is an ambitious goal involving many disciplines in physics. One key ingredient is nuclear astrophysics, with its focus on explaining energy production and chemical evolution in the universe – topics that are coupled through nuclear reactions that transform elements and may also release energy. The first overview of the synthesis of elements was about 50 years ago, with the work of Geoffrey and Margaret Burbidge, Willy Fowler and Fred Hoyle, and, independently, Al Cameron. Although there had been some important work earlier in the 20th century, this was the defining moment for nuclear astrophysics.

At the end of June, nearly 250 astronomers, astrophysicists, cosmologists and nuclear physicists met at CERN for the ninth Nuclei in the Cosmos meeting to summarize the status of the field. Organized by a team from the Isotope Separator On Line (ISOLDE) and Neutron Time-of-Flight (n_TOF) facilities at CERN, it was dedicated to the memory of Al Cameron, Ray Davis and John Bahcall, all of whom died recently and played major roles in helping to understand the production and role of nuclei in the cosmos.

Nuclei, of course, consist of smaller particles and the meeting reviewed recent developments in cosmology and their possible connection to particle physics. While at one time particle physics provided input for calculating the abundances of elements created during the early universe in Big Bang nucleosynthesis, nowadays the results from the Wilkinson Microwave Anisotropy Probe yield the baryonic density of the universe, which is used in calculating the abundances. However, some problems in reconciling observations and calculations for the primordial elements remain, in particular for the two stable lithium isotopes, ⁶Li and ⁷Li.

Analysing nuclear ashes

Optical observations of stars reveal their element abundances. Old stars are metal-poor (following the convention in astronomy that all elements above helium are metals) and the heavy elements in them appear to be made exclusively by rapid neutron capture – the r-process. Only later in galactic evolution does the s-process – slow neutron capture – begin to contribute as well. The relative abundances for elements above barium fit well with r-process abundances deduced for solar-system material, but for lighter elements there are differences that could indicate the presence of a second (“weak”) r-process. The coming years should bring clarification, as the amount of observational data will increase significantly owing to two large-scale surveys: the Hamburg/ESO R-process Enhanced Star survey and the Sloan Digital Sky Survey.

Another fruitful source of abundance data comes from presolar grains embedded in primitive meteorites. Here a recent breakthrough has been the extraction of isotopic ratios for many different grains. Such detailed information about isotope abundance helps in constraining the astrophysical conditions in which the grains were formed.

Refinements in knowledge of element abundances are not restricted to distant stars. An improved modelling of the solar atmosphere indicates that solar abundances of most “metals” should be decreased by more than a third. This change arises from a more careful and more dynamic treatment of the outer solar layers.

More direct evidence for nuclear processes in stars comes from the observation of the radioactive isotopes that are produced. High-resolution gamma-ray spectrometers have operated in space for some years, for example on the INTEGRAL satellite (figure 1). Several galactic radioactive decays have been observed through their gamma-emission lines, such as those of ²⁶Al, ⁴⁴Ti and ⁶⁰Fe, although the decay of ²²Na and positron emission both remain to be seen. Many of the radioactive isotopes found on Earth also stem from stellar events. A recent addition to the list is ⁶⁰Fe, which has been identified in deep-ocean material by the highly sensitive accelerator mass spectrometry technique, indicating that a supernova exploded near the Earth about 2.4 million years ago.
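
The dating works because a measurable fraction of the deposited ⁶⁰Fe has yet to decay. A one-line estimate follows; note that the ⁶⁰Fe half-life is an assumption here, with values in the literature ranging from about 1.5 Myr (in use around the time of this article) to a later remeasurement near 2.6 Myr.

```python
t_half = 1.5e6  # assumed 60Fe half-life, years (literature values vary)
t = 2.4e6       # time since the nearby supernova, years

surviving_fraction = 0.5 ** (t / t_half)
print(f"fraction of deposited 60Fe still undecayed: {surviving_fraction:.2f}")
```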

Understanding stellar events

Modelling the evolution of stars and their sometimes violent end requires the coordinated work of many people. It is rather like assembling a giant multi-dimensional jigsaw puzzle, but one in which the pieces have first to be found. Some researchers concentrate on finding these pieces, while others focus on how to fit them together to form a coherent picture. However, we have still not identified the important ingredients for all stellar events.

For normal, “quiet” stellar burning, the nuclear reactions take place between stable isotopes, but with extremely small cross-sections that are hard to reproduce in the laboratory. Major efforts during the past decade have improved the situation, and participants at the meeting heard of progress on one of the remaining challenges, the reaction ¹²C(α,γ)¹⁶O, which is a key reaction for the processes responsible for the production of many elements.

At higher temperatures reaction rates increase and reactions involving radioactive nuclei also need to be known. A major highlight at the meeting in terms of nuclear data is a recent good estimate of the reaction rate for the key radiative alpha-capture process on ¹⁵O, which has been pursued for more than 20 years. The direct measurement of radiative proton-capture on ²⁶gAl and alpha-capture on ⁴⁰Ca was also presented. Lastly, reaction rates for several neutron-induced reactions were reported; these will have important implications for understanding the synthesis of elements heavier than iron – the “neutron capture elements”.

Very high temperatures belong to the domain of cosmic explosions – novae, supernovae, X-ray bursts and gamma-ray bursts – which have a fascinating history and rightly continue to attract attention. Numerical results presented at the meeting indicate that first-generation nova explosions occur at higher temperatures than classical novae. They could therefore be more important players in the early universe, but more studies are needed to confirm this.

A highlight of the conference was the presentation of successful computer simulations of supernova explosions (figure 2). Here, the key new ingredient is to follow the two-dimensional hydrodynamic evolution for long times after the core collapse, when the inner part has become a highly non-spherical object with significant fluctuations. The violent conditions in a supernova are perfect for cooking elements, and the meeting heard about a new mechanism, the νp-process, in which the strong neutrino fluxes play a more versatile role in the nucleosynthesis than imagined earlier. An exciting report on quantitative calculations of the r-process suggested that nuclear fission, and in particular neutron-induced fission, might play a very important role in the dynamics of the later stages of the r-process.

It was clear from the meeting that nuclear astrophysics is rapidly evolving. The next meeting in the series will provide another snapshot of the status of the field, when it takes place in the US in 2008, hosted by the Michigan State University/National Superconducting Cyclotron Laboratory and the Joint Institute for Nuclear Astrophysics.

Workshop focuses on top-quark physics

Coimbra, in central Portugal, was the country’s capital from 1143 to 1255 and in historical importance ranks behind only Lisbon and Oporto. Its university was founded in 1290 and was the only one in Portugal until the beginning of the 20th century. Its ancient setting contrasted well with the central theme of TOP2006: the top quark, discovered only in 1995 in experiments at Fermilab’s Tevatron.

The workshop itself grew from the idea of developing a strong collaboration between theorists and experimentalists who are interested in studying the properties of the top quark. The first properties of this unique particle were measured during Run I of the Tevatron by the CDF and D0 experiments; with Run II more data are now becoming available. Though not yet sufficient to perform the precision tests required to challenge (once again) the Standard Model, the data acquired so far are already providing valuable information on top-quark physics. The knowledge of the physics of the top quark will then enter a totally new phase – the precision era – with the start-up of the Large Hadron Collider (LHC) at CERN, foreseen towards the end of 2007.

The top quark is the heaviest quark found (mt = 172.5 ± 2.3 GeV/c²) and is still believed to be a fundamental particle. It completes the third-generation structure of the Standard Model, as the isospin partner of the b (bottom) quark. Why it is so heavy and why its Yukawa coupling to the Higgs field (after spontaneous symmetry breaking) is of the order of 1 is a mystery. Its solution requires an answer to the question: does the top quark play a special role in the electroweak symmetry-breaking mechanism of the Standard Model?

Although mainly produced via the strong interaction at particle colliders (pair production via gluon–gluon fusion or qq̄ annihilation), the top quark decays through the weak force to a b quark and a W boson with a branching ratio of almost 100%. Because of their large mass and decay rate (Γ = 1.42 GeV at next-to-leading order), top quarks, unlike any other quark, are produced and decay as free particles. With a very short lifetime (around 10⁻²⁵ s), the top quark decays before hadronization can take place. For the same reason no toponium bound states with sharp binding energy are expected in the Standard Model; any evidence of a tt̄ bound state would be a sign of physics beyond the model. The flavour-changing neutral-current decays of the top quark are also highly suppressed in the Standard Model, with branching ratios at the level of around 10⁻¹² to 10⁻¹⁴; any evidence of decays such as t → qZ, qγ or qg would therefore constitute a sign of new physics.
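
The quoted lifetime follows directly from the width through τ = ħ/Γ; a one-line check:

```python
hbar = 6.582119569e-25  # reduced Planck constant, GeV s

gamma_top = 1.42            # top-quark width quoted above, GeV
tau_top = hbar / gamma_top  # ~4.6e-25 s, i.e. around 10^-25 s
print(f"top-quark lifetime ~ {tau_top:.1e} s")
```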

Top-quark properties

The first day of the workshop was dedicated to the current theoretical and experimental status of top-quark physics, in the morning and afternoon sessions, respectively. C P Yuan of Michigan State University recalled the need for a precise measurement of the top-quark mass to constrain the Higgs mass when combined with the measurement of the W mass. Reviewing current theoretical knowledge, he also underlined the importance of the rate of single top production at colliders (not yet observed) as a probe of the element Vtb in the Cabibbo–Kobayashi–Maskawa matrix, and stressed that the different channels (s, t and Wt) that contribute to single top production are important processes in the search for physics beyond the Standard Model.

Aurelio Juste from Fermilab reviewed the current experimental status of the top quark, starting from the total cross-section measurement at the Tevatron, which had a relative precision of around 25% in Run I, dominated by statistics. In Run II, with a luminosity of 2 fb⁻¹, this error is expected to be reduced to about 10%. The mass is by far the most precisely measured property of the top quark, with a relative error of less than 2%. The top charge, anomalous couplings and single top production were also discussed.

The second day examined the experimental methods used to select top quarks at colliders, and the leading-order and next-to-leading-order generators and theoretical methods available for understanding the data. Evelyn Thomson of the University of Pennsylvania presented the experimental methods that are used in the selection and analysis of top-quark decays at hadron colliders. In particular, she discussed the importance of the trigger; the difficult question of background rejection and estimation (for backgrounds such as W+jets and Z+jets); the need for a detailed calibration and determination of the jet energy scale (a major source of systematic error); and b-tagging, a key tool for reducing the background. She stressed the need to fine-tune the available Monte Carlo generators to reproduce data accurately. Available top-selection tools involve multivariate analysis and different statistical techniques.

Werner Bernreuther, of RWTH (Rheinisch-Westfälische Technische Hochschule) Aachen, described spin effects in hadronic top-pair production and polarized top decays, tt̄ spin correlations (which are transferred to the decay products), and the possible existence of heavy tt̄ resonances. As the top polarization is reliably calculable, it is well suited for experimental checks of the predictions of the Standard Model and its extensions. Bernreuther concluded that top-quark physics is an excellent probe for testing electroweak symmetry breaking and that it provides powerful observables for determining the structure of the tbW vertex. Sergey Slabospitsky of the Institute for High Energy Physics, Protvino, and Borut Kersevan of the Jožef Stefan Institute presented the status of the important event generators that are being developed and used at the Tevatron and the LHC to simulate top production and decays.

Top prospects

The prospects for top physics at the upcoming colliders were discussed on the third day of the workshop. In the morning, Dominique Pallin of Blaise Pascal University presented the expected performance of the LHC as a top factory. In particular, he showed the work going on for early top-quark studies, such as the measurement of the tt̄ production cross-section and the top mass, as well as the determination of the W and top polarizations, in the lepton+jets channel. The top quark is a very useful calibration tool for early data (for the jet energy scale, b-tagging, trigger etc), which can also be used to check detector performance. With the increase of luminosity at the LHC many precision measurements of top-quark properties will be possible.

In the afternoon, Lynne Orr of the University of Rochester gave a talk about top physics at the LHC and a future International Linear Collider (ILC). She described the electroweak symmetry breaking mechanism and the hierarchy problem. She also discussed top-quark physics in models beyond the Standard Model, which are possible solutions to this problem: supersymmetry, little Higgs, technicolour and its descendants, and modified space–time models with extra dimensions. Finally, the sensitivity to different top-quark couplings at the LHC and the ILC was reviewed. Brian Foster of Oxford University presented the status of the ILC.

Finally, John Womersley of the CCLRC Rutherford Appleton Laboratory presented a lively and appealing workshop summary talk, which also covered the status and the open questions in particle and astroparticle physics. All in all, the workshop was a fruitful opportunity for interesting discussions on the exciting subject of top-quark physics. The participants are looking forward to the next workshop, which will probably take place two years from now, when the latest results of the Tevatron’s Run II and the first results from the LHC on top-quark physics will be presented and discussed, posing new challenges to the Standard Model.

MAGIC discovers variable very-high-energy gamma-ray emission from a microquasar

The Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) Telescope has discovered variable very-high-energy gamma-ray emission from a microquasar. The telescope, on the island of La Palma, observed the microquasar called LS I +61 303 between October 2005 and March 2006. The observations show a clear variation with time and suggest that gamma-ray production may be a common property of microquasars.

Microquasars are gravitationally bound binary-star systems consisting of a massive ordinary star and a compact object of a few solar masses that is either a neutron star or a black hole. The two stars orbit a common centre and, when they are close enough, mutual tides can cause a sudden transfer of mass from the normal star onto the compact companion. Some of the gravitational energy released in this exchange gives rise to jets of particles ejected at close to the speed of light, together with spectacular emission of radiation. Microquasars appear to be scaled-down versions of quasars, but in this case the small mass of the compact object means that events occur on a much smaller timescale – days rather than years – making them interesting objects to study. They are also a possible source of high-energy cosmic rays.

MAGIC detected LS I +61 303, one of about 20 known microquasars, at a rate of one gamma ray per square metre per month (Albert et al. 2006). The telescope registers gamma rays through the Cherenkov radiation produced by the showers of particles created by the gamma rays as they enter the atmosphere.

LS I +61 303 was observed over six orbital cycles and a clear variability was found that is consistent with the orbital changes in aspect of the compact object (see figure). There is also evidence of periodicity. This shows that the very-high-energy gamma-ray emission is directly related to the interaction between the two stars.

Further reading

J Albert et al. 2006 Science 312 1771.
