
Is quantum theory exact or approximate?

Quantum mechanics has puzzled the scientific community from the beginning. One of the major sources of difficulties comes from the measurement problem: why do measurement processes always have definite outcomes, despite the fact that the Schrödinger equation allows for superpositions of states? And why are such outcomes random (distributed according to the Born rule), while the Schrödinger equation is deterministic? New experiments and observations could help to answer such questions by providing a more precise idea of the possible limits of validity of quantum theory (Adler and Bassi 2009).

Most solutions to the measurement problem look for a reinterpretation of the formalism of quantum mechanics. Models in which the wave function collapses spontaneously, however, follow a different route. They purposely modify the Schrödinger equation by adding new nonlinear and stochastic terms, which break quantum linearity above a scale fixed by new parameters. Physically, the wave function is coupled (nonlinearly) to a white-noise classical scalar field, which is assumed to fill space.

By modifying the Schrödinger equation, collapse models make predictions that differ from those of standard quantum mechanics and that can be, in principle, tested. The scale at which deviations from standard quantum behaviour can be expected gives indications of the sensitivity that experiments should reach if they are to provide meaningful tests of collapse models and quantum mechanics.

There have already been experiments that directly or indirectly test collapse models against quantum mechanics and others are proposed for the future. Probably the best known are the diffraction experiments with macromolecules (C60, C70, C30H12F30N2O4), which set an upper bound 13 decades above the most conservative value of the collapse parameter λ (related to the noise strength) and five decades above the strongest value suggested. Other tests include the decay of supercurrents and proton decay, but the upper bounds are even weaker than in the diffraction experiments. One interesting proposal is an experiment that includes a tiny mirror mounted on a cantilever, within an interferometer: it will set an upper bound of 9 (1) decades on the weakest (strongest) value of λ.

The strongest bound, however, comes from the spontaneous emission of X-rays from germanium-76, as predicted by the continuous spontaneous localization (CSL) model, the most popular collapse model. It sets an upper bound of only six decades on the weakest value of λ. The strongest value is disproved by these data, but the bound is weakened if non-white noise with a frequency cutoff is considered. The data coming from spontaneous X-ray emission are very raw, and several contributions from known sources (e.g. gamma-ray contamination, double beta decay) have not been subtracted. A dedicated experiment on spontaneous photon emission could set a much stronger upper bound and would represent the most accurate test of quantum mechanics against the rival theory. Such a project is under discussion between the University of Trieste and the INFN Laboratori Nazionali di Frascati.

Collapse models also make predictions that have cosmological implications. The apparent violation of energy conservation arising from the interaction with the collapsing noise places important upper bounds. The strongest comes from the intergalactic medium: requiring that the heating produced by the noise remains below experimental bounds places an upper bound of 8 (0) decades on the weakest (strongest) value of λ.

Telescopes pin down location of cosmic accelerator


Teams using imaging atmospheric Cherenkov telescopes to detect very high-energy gamma rays have joined forces with astronomers to reveal the precise location of particle acceleration in the nearby giant radio galaxy Messier 87 (M87). Collaborations on the High Energy Stereoscopic System (HESS), the Major Atmospheric Gamma-Ray Imaging Cherenkov (MAGIC) project and the Very Energetic Radiation Imaging Telescope Array System (VERITAS) have worked together with a team at the Very Long Baseline Array (VLBA) radio telescope in an unprecedented, co-ordinated, 120-hour observational campaign. Their simultaneous observations at the lowest and highest ends of the electromagnetic spectrum indicate that the active galactic nucleus in M87 accelerates charged particles to very high energies in the immediate vicinity of the central black hole (V A Acciari et al. 2009).

M87 is a giant radio galaxy, 54 million light-years from Earth, with a jet structure – a huge outflow from the central region, which is probably fuelled by accretion of matter onto a massive black hole. In the jet, charged particles can be accelerated to very high velocities, with the inevitable accompanying production of high-energy gamma rays. The first indications of very high-energy gamma radiation from M87 were found in 1998 with the High-Energy Gamma-Ray Astronomy (HEGRA) telescope array – the predecessor of HESS and MAGIC. These observations were confirmed by HESS in 2006 and revealed a fast variability of the gamma-ray flux on a timescale of a few days, implying an exceptionally compact gamma-ray source.

To pinpoint the source more closely, HESS, MAGIC and VERITAS jointly observed M87 from January to May 2008, collecting 120 hours’ worth of data. During this time the galaxy underwent two major outbursts of very high-energy gamma-ray emission. Over the same period, high-resolution radio observations by the 43 GHz M87 Monitoring Team at the VLBA, a system of radio telescopes spanning the US, indicated a strong increase of the flux from the innermost core of M87 in the immediate vicinity of the central black hole. The combination of observations at the two extremes of the electromagnetic spectrum indicates that the site of the high-energy gamma emission, and hence the particle acceleration, in M87 must lie close to the black hole.

NuTeV anomaly supports new effect in bound nucleons


A new theoretical calculation of the effects of the nuclear medium may account for the “NuTeV anomaly”, a puzzling experimental result that disagreed with the Standard Model. The solution may lie with the isovector nuclear force generated by excess neutrons or protons in iron, which produces a subtle change in the quark structure of all of the nucleons.

The NuTeV anomaly arose when the Neutrinos at the Tevatron (NuTeV) collaboration at Fermilab measured the ratio of neutral-current to charged-current reactions in the collisions of high-energy neutrinos (and antineutrinos) with a large steel target (Zeller et al. 2002). The measurements gave a value for the electroweak parameter sin²θW that was three standard deviations higher than predicted by the Standard Model. When analysing the data, however, the collaboration had to make a correction to compensate for the unequal numbers of protons and neutrons in the iron nuclei in the steel target. In the analysis, the effect of the extra neutrons was removed by subtracting the structure functions of a comparable number of free neutrons from the iron nucleus, assuming that the protons and neutrons bound inside the iron nucleus are identical to free protons and neutrons.

Changes in structure functions in bound nucleons are well known through the effect discovered by the European Muon Collaboration (EMC). Now theorists from Tokai University, the University of Washington and Jefferson Lab have revealed a novel isovector EMC effect, arising from a proton or neutron excess. This effect implies an additional correction, of a sign and magnitude that are essentially model independent, which removes at least half of the NuTeV anomaly (Cloët et al. 2009). Moreover, when the new effect is combined with the well known correction for charge symmetry violation in the nucleon itself, the NuTeV data turn out to be in excellent agreement with the Standard Model.

The NuTeV data may be seen as providing crucial evidence for a conceptual change in the understanding of nuclear structure, in which the quark structure of the bound nucleon is fundamentally modified by the medium. Independent experimental confirmation of the isovector EMC effect could be provided by charged-current studies on heavy nuclei at a future electron-ion collider and in parity-violating deep-inelastic scattering experiments at Jefferson Lab following the 12 GeV upgrade.

Cosmic leptons challenge dark-matter detection


Recent measurements of cosmic-ray leptons – electrons and positrons – have generated a buzz because they might point to unknown astrophysical or exotic cosmic phenomena. A new measurement of the cosmic-ray positron fraction, e+/(e− + e+), by the satellite-borne PAMELA detector shows an unambiguous rise between 10 GeV and 100 GeV. This confirms previous claims by the High-Energy Antimatter Telescope (HEAT) and AMS-01 collaborations (figure 1). At the same time, the Advanced Thin Ionization Calorimeter (ATIC), Fermi Gamma-Ray Telescope and HESS collaborations have published new results on the summed flux of electrons and positrons, e− + e+, at higher energies, up to a few tera-electron-volts. Although there are still discrepancies between these three experiments, they could indicate the presence of a feature in the energy spectrum of e− + e+ between 600 GeV and 1 TeV. Whether it is a bold peak, as ATIC claims, or a more shy bump, as the Fermi data indicate, is still unclear (figure 2). Further work and cross-checks are necessary to reach a definite answer. Another issue concerns whether this feature arises from electrons only or from both electrons and positrons.
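The positron fraction quoted above is simply the ratio of the positron flux to the total electron-plus-positron flux. A minimal sketch of the definition follows; the flux values are made up for illustration, not PAMELA data points.

```python
# Sketch of the positron-fraction definition, e+/(e+ + e-).
# The flux values below are illustrative only - not measured data.

def positron_fraction(flux_positron, flux_electron):
    """Positron fraction e+/(e+ + e-) from the two measured fluxes."""
    return flux_positron / (flux_positron + flux_electron)

# hypothetical fluxes (arbitrary units) at some fixed energy bin
print(positron_fraction(1.0, 9.0))  # -> 0.1
```

A rising positron fraction with energy, as PAMELA reports, means the positron flux falls more slowly with energy than the electron flux – which is hard to arrange with secondary (spallation-produced) positrons alone.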


There is nevertheless the hint of a signal in this energy range, which is quite challenging to reproduce with conventional cosmic-ray models. A workshop held in Paris in May, “Testing Astroparticle with the New GeV/TeV Observations. Positrons And electRons: Identifying the Sources (TANGO in PARIS)”, provided the opportunity to discuss and confront the possible interpretations of these results.

Conventional cosmic-ray production

The current understanding is that most cosmic rays are produced in the remnants of supernovae – what is left after the cataclysmic ends to the lives of many stars. Some cosmic-ray species (positrons, antiprotons, boron etc.) do not exist in stars but are instead produced by spallation reactions of other cosmic rays with the interstellar medium. Once made, cosmic rays diffuse in the galactic magnetic field; they lose energy, are convected and eventually reach Earth.

Even taking into account the uncertainties underlying state-of-the-art cosmic-ray transport modelling, it is not possible to reproduce the PAMELA data, as figure 1 shows (T Delahaye et al. 2009). One solution is that the model for standard astrophysical positrons is mistaken in some way. For instance, the source distribution in the galaxy might be more complex than generally believed, and positron production by the spallation of cosmic-ray protons on interstellar matter might be higher than expected. Such an effect could arise from a local over-density of proton sources (the spiral arms) or of interstellar matter around supernova remnants. In these models, however, it is difficult not to over-produce other cosmic-ray species, such as antiprotons or boron, at the same time.


Another solution is that supernova remnants and spallation are not the only significant sources of high-energy charged particles, so that other astrophysical objects also contribute. Because electrons and positrons lose a lot of energy as they propagate in the galaxy, a single nearby source could explain the observed feature. Pulsars seem to be a good candidate because they may produce electrons and positrons in equal numbers, thus enriching the surrounding positron fraction. Unfortunately, the way that pulsars could produce electron–positron pairs and release them in the galaxy is not yet clear – making predictions difficult. Nevertheless, recent observations from Fermi have revealed that pulsars are more numerous than expected, so there is a high chance that we are missing many of them. Explaining the PAMELA/ATIC feature with pulsars is therefore feasible.

The most exciting solution would be that these excesses arise from the effects of dark matter, so allowing a first insight into physics beyond the Standard Model. Indeed, in such a scenario, the mass of our galaxy would be dominated by new non-standard particles, which would annihilate or decay into standard particles, contributing to the cosmic-ray flux.

While it is extremely appealing, the dark-matter solution is puzzling. The natural way to agree with constraints from cosmology (freeze-out of the dark-matter particles in the early universe) is to have a new particle with mass and couplings of the order of the electroweak scale. For such a particle, annihilation or decay into Standard Model particles would yield a cosmic-ray production rate too small to reproduce features as significant as the ones seen by PAMELA, ATIC, Fermi and HESS. To account for them, the dark-matter signal must be magnified with respect to the standard picture in some way, by a factor ranging from 100 to 1000, depending on the model. Models achieve this enhancement either through a particle-physics effect (for dark-matter particles with masses typically larger than a few tera-electron-volts) or as a consequence of local enhancements of the signal caused by dark-matter substructures.

Trouble appears when confronting this interpretation with channels where corresponding excesses should appear, such as cosmic antiprotons and photons. PAMELA recently published fresh measurements of the antiproton flux up to 100 GeV (figure 3), which show no specific feature. Antiprotons are interesting because the theoretical uncertainty associated with the background estimate is lower than that for positrons – and most models with new physics expect annihilations or decays of dark matter to produce antiprotons. It is therefore possible to put an upper limit on the signal enhancement necessary to explain the leptonic data (Donato et al. 2009). It turns out that the antiproton data are incompatible with the large enhancements that the lepton data require for conventional dark-matter candidates.

The only way out is either to have a very heavy particle (of mass larger than 10 TeV) or to suppress the hadronic annihilation or decay modes of the dark-matter particle. In the first case, an excess of antiprotons should appear in future higher-energy data; in the second, no hadrons are produced by this so-called “leptophilic” dark matter. In both cases the properties of the new particle are different from those usually expected. Within minimal supersymmetric dark-matter models, for instance, large masses imply a loss of naturalness, and direct electron/positron production in the annihilation is suppressed. In addition, when confronting the models that survive the antiproton constraints with photon observations, the net tightens even more. Indeed, all of these electrons and positrons should also be produced in places where large magnetic fields are present (e.g. at the galactic centre) and consequently produce sizable radio emission, which is in general above the measured values (at least in the most standard galactic models).

The previous considerations assume a particle-physics type enhancement – i.e. an overall enhancement of the production of exotic cosmic leptons, regardless of the location in the galaxy. However, one could ask whether these cosmic-ray features are the same everywhere in the galaxy. An interesting possibility is that a nearby clump of dark matter is responsible for some local excesses (Brun et al. 2009). In this case, the antiproton constraints may be less stringent and those from photon observations are avoided entirely. The main feature responsible for the local lepton anomalies would then be a nearby (closer than a few kiloparsecs), bright clump. (As electrons and positrons do not propagate over large distances, just one massive clump could contribute sufficiently.) In fact, dark-matter haloes are expected to form by successive mergers of small structures. Large haloes, such as that of our galaxy, should contain a lot of smaller subhaloes (up to 20% of the total halo mass). Large numerical simulations can model the formation of these structures and calculate the probability of finding a configuration that fulfils the requirements to account for the lepton excesses in a halo the size of the Milky Way. Unfortunately, this probability turns out to be extremely low: usually fewer than 1% of the simulations exhibit such a favourable scenario. If such a clump does exist, however, the gamma-ray satellite Fermi has enough sensitivity to detect the associated gamma-ray emission.

Epilogue?


It is definitely possible to reproduce the observed cosmic-ray data with the help of dark-matter signals. Within this hypothesis, however, there will always be some tension between the different channels and observables or quite a high level of fine tuning. It could be that we are circling the properties of dark-matter particles but it is more likely that the bulk of the observed leptons come from a nearby astrophysical source that produces a large fraction of electron–positron pairs. In this case, the signal would constitute an additional background for indirect searches for dark matter through lepton channels that had not previously been accounted for.

A big step forward will be the measurement of the small anisotropy in the arrival directions of the cosmic-ray leptons, if any. If it is observed and points towards a known pulsar, then the conclusion will be clear. It is also urgent to separate electrons from positrons at higher energies and to increase statistics in all channels. Future results from PAMELA, and especially AMS-02, on leptons and also on fluxes of all nuclei will be of great help in feeding the cosmic-ray propagation models. The indirect searches for dark matter through charged channels can then continue, in particular looking for fine structure in the spectra. It will then be interesting (and challenging) to interpret future data and weigh them against results from the LHC and direct-detection experiments.

Whatever the nature of the source, we might be witnessing the first direct observation of a nearby source of cosmic rays with energies in the range of giga- to tera-electron-volts. These are exciting times and we might have to wait a little longer for the solution to this cosmic puzzle. The answer(s) will certainly come from a convergence of information from different messengers. Thanks to its large field of view, the Fermi telescope should reveal something about a nearby source, should it be a pulsar or something more exotic. Eventually, future large neutrino and gamma-ray observatories (such as KM3NeT and the Cherenkov Telescope Array) will certainly offer a great opportunity to take a deeper look into this brainteaser.

• The presentation slides and videos of the TANGO talks are available at http://irfu.cea.fr/Meetings/TANGOinPARIS.

Gargamelle: the tale of a giant discovery

Example of the leptonic neutral current

On 3 September 1973 the Gargamelle collaboration published two papers in the same issue of Physics Letters, revealing the first evidence for weak neutral currents – weak interactions that involve no exchange of electric charge between the particles concerned. These were important observations in support of the theory for the unification of the electromagnetic and weak forces, for which Sheldon Glashow, Abdus Salam and Steven Weinberg were to receive the Nobel Prize in Physics in 1979. Their theory became a pillar of today’s Standard Model of particles and their interactions, but in the early 1970s, it was not so clear that it was the correct approach and that the observation of neutral currents was a done deal.

The story of the discovery has been told in many places by many people, including in the pages of CERN Courier, notably by Don Perkins in the commemorative issue for Willibald Jentschke, who was CERN’s director-general at the time of the discovery, and more recently in the issue that celebrated CERN’s 50th anniversary, in an article by Dieter Haidt, another key member of the Gargamelle Collaboration (CERN Courier October 2004 p21).

The huge bubble chamber, named Gargamelle after the giantess created 400 years earlier in the imagination of François Rabelais, took its first pictures in December 1970 and a study of neutrino interactions soon started under the leadership of André Lagarrigue. The first main quest, triggered by recent hints from SLAC of nucleon structure in terms of “partons”, was to search for evidence of the hard-scattering of muon-neutrinos (and antineutrinos) off nucleons in the 18 tonnes of liquid Freon inside Gargamelle. Charged-current (CC) events in which the neutrino transformed into a muon would be the key. So the collaboration, spread over seven institutes in six European countries, set to work on gathering photographs of neutrino and antineutrino interactions and analysing them for CC events to measure cross-sections and structure functions.

The priorities changed in March 1972, however, when the collaboration saw first hints that hadronic neutral currents might exist. It was then that they decided to make a two-prong attack in the search for neutral-current (NC) candidates. One line would be to seek out potential leptonic NC events, involving the interaction with an electron in the liquid; the other to find hadronic neutral currents in which the neutrino scattered from a hadron (proton or neutron). In both cases the neutrino enters invisibly, as usual, interacts and then moves on, again invisibly. The signal would be a single electron for the leptonic case, while for hadronic neutral currents the event would contain only hadrons and no lepton (figures 1 and 2).

Neutral Current Event

The leptonic NC channel was particularly interesting because previous neutrino experiments had shown that the background was very small and also because Martin Veltman and his student Gerard ‘t Hooft had recently demonstrated that electroweak theory was renormalizable. ‘t Hooft was able to calculate exactly the cross-sections for NC interactions involving only leptons, with the input of a single free parameter, sin²θW, where θW is the Weinberg angle. Theorists at CERN – Mary K Gaillard, Jacques Prentki and Bruno Zumino – encouraged the Gargamelle Collaboration to hunt down both types of neutral current.
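The single-parameter character of ‘t Hooft’s prediction can be illustrated with the standard textbook expressions for the chiral couplings of the electron. The sketch below is a schematic, textbook-level illustration (using an assumed modern value of sin²θW and dropping the overall normalization), not the calculation actually used by the collaboration:

```python
# Textbook sketch: relative leptonic NC cross-sections as functions of the
# single free parameter sin^2(theta_W). Overall G_F^2 * s / pi normalization
# is dropped; only the dependence on the Weinberg angle is shown.
# The value 0.23 is an assumed illustrative input.

SIN2_THETA_W = 0.23

g_L = -0.5 + SIN2_THETA_W   # left-handed chiral coupling of the electron
g_R = SIN2_THETA_W          # right-handed chiral coupling of the electron

# nu_mu e and nubar_mu e elastic scattering: the "wrong-helicity" coupling
# is suppressed by a factor of 1/3 from the angular integration
sigma_nu    = g_L**2 + g_R**2 / 3
sigma_nubar = g_L**2 / 3 + g_R**2

print(sigma_nu, sigma_nubar)
```

The point is that once sin²θW is fixed, both the neutrino and antineutrino leptonic NC rates are fully determined, which is what made the single-electron events such a clean test of the theory.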

Such leptonic NC interactions would, however, be extremely rare. By contrast hadronic NC events would be more common but it was not yet clear how the theory worked for quarks. In this case the process was not easy to calculate, although Weinberg published some estimates during 1972. In addition there was the problem of a background coming from neutrons that are produced in CC interactions in the surrounding material and could imitate a neutral current signal.


Over the following year various teams carefully measured and analysed candidate events from film produced previously in several runs. The first example of a single-electron event was found in December 1972 by Franz-Josef Hasert, a postgraduate student at Aachen. Fortunately he realized that an event marked by a scanner as “muon plus gamma ray” was in fact something more interesting: the clear signature of an electronic NC interaction written in the tracks of an electron knocked into motion by the punch of the unseen projectile (figure 1). This was a “gold-plated” event because it was found in the muon-antineutrino film in which any background is extremely small. Its discovery gave the collaboration a tremendous boost, strengthening the results that were beginning to roll in from the analyses of the hadronic NC events. However it was only one event, while by March 1973 there were as many as 166 hadronic NC candidates (102 neutrino events and 64 antineutrino events) although the question of the neutron background still hung over their interpretation.

Members of the team then began a final assault on the neutron background, which was finally conquered three months later, as Haidt and Perkins describe in their articles in CERN Courier. On 19 July 1973, Paul Musset presented the results of both hadronic and leptonic analyses in a seminar at CERN. The paper on the electron event had already been received by Physics Letters on 2 July (F J Hasert et al. 1973a); the paper on the hadronic events followed on 23 July (F J Hasert et al. 1973b). They were published together on 3 September.

Weak Neutral Current papers

It was an iconoclastic discovery, leaving many unconvinced. This was mainly because of the stringent limits on strangeness-changing neutral currents and the lack of understanding of the new electroweak theory. Gargamelle continued to increase the amount of data and by the summer of 1974, after the well known controversy described by Haidt and Perkins, several experiments in the US confirmed the discovery. From this time on the scientific community recognized that the Gargamelle Collaboration had discovered both leptonic and hadronic neutral currents.

Thirty-six years later the European Physical Society (EPS) has decided to award its 2009 High Energy and Particle Physics Prize to the Gargamelle Collaboration for the “Observation of weak neutral currents” (Prize time in Krakow at EPS HEP 2009). However, it somewhat confounded the collaboration in citing only the authors of the hadronic neutral-current paper, thus neglecting the contributions of the five who signed the electronic paper, but not the hadronic paper (Charles Baltay, Helmut Faissner, Michel Jaffre, Jacques Lemonne and James Pinfold). Though the collaboration is honoured to receive the prize, its members feel that the award should not rewrite history. They feel, and rightly so, that the two papers were of equal importance in the discovery of neutral currents. Also, like many other physicists and the EPS prize committee, they feel that it was perhaps the greatest discovery of CERN. The prize was collected on behalf of the collaboration at the EPS HEP 2009 Conference in Krakow by Antonino Pullia and Jean-Pierre Vialle. Sometime in September the medal will be attached to the Gargamelle chamber, which now stands in CERN’s grounds, and a reunion dinner for the collaboration will follow.

Element 112 receives official recognition


Element 112, discovered at GSI Darmstadt, has been officially recognized as a new element by the International Union of Pure and Applied Chemistry (IUPAC). IUPAC confirmed this recognition in an official letter to the head of the discovery team, Sigurd Hofmann. The letter also asks the discoverers to propose a name for the new element, which is the heaviest so far in the periodic table. Once the proposed name has been thoroughly assessed by IUPAC, the element will receive its official name.

A team of 21 scientists from Germany, Finland, Russia and Slovakia was involved in the experiments that discovered the new element. They created the first atom of 112 in 1996 when they directed a beam of zinc ions onto a target of lead at the accelerator at GSI; a second example followed in 2002. Subsequent accelerator experiments at the Japanese RIKEN accelerator facility produced more atoms of element 112, unequivocally confirming GSI’s discovery.

Since 1981, accelerator experiments at GSI have yielded six new chemical elements, which carry the atomic numbers 107 to 112. GSI has already named the officially recognized elements 107 to 111: element 107 is called bohrium, element 108 hassium, element 109 meitnerium, element 110 darmstadtium and element 111 roentgenium.

GSI reveals new magic numbers in nuclei

In two recent experiments at the accelerator facility at GSI Darmstadt, groups led by Reiner Krücken of the Technical University Munich and Rituparna Kanungo of St Mary’s University, Halifax, in collaboration with international teams, revealed further evidence for new magic shell closures at the limit of nuclear existence in the neutron-rich isotopes 24O and 54Ca.

The shell structure of atomic nuclei, with its magic numbers (2, 8, 20, 28, 50, 82, 126) of protons and neutrons corresponding to enhanced binding, is a cornerstone in understanding the structure and dynamics of nuclei. The explanation of the magic numbers in 1949 as a result of the strong spin-orbit interaction earned Maria Goeppert Mayer and J Hans D Jensen the Nobel Prize in Physics in 1963. Until recently these magic numbers were assumed to remain universal across the whole nuclear chart, but mounting experimental evidence and theoretical predictions indicate that the shell gaps associated with the numbers are not universal. Instead they can change locally under the influence of variations in the effective interaction of the nucleons in the nucleus. Such changes in the shell structure can have dramatic effects on the production of elements in stellar explosions.

The experiments used precise momentum measurements to study the dynamics of reactions where a single neutron is knocked out from a neutron-rich nucleus. The results provide crucial information about the energies and occupation of the neutron single-particle orbitals in the respective nuclei. In the experiment with 24O (8 protons and 16 neutrons), the measurements revealed the spherical nature of the shell closure for the 16 neutrons, thus establishing 24O as a doubly magic nucleus, with a new magic number of 16 (R Kanungo et al. 2009). The second experiment studied one-neutron knockout in 56Ti (22 protons and 34 neutrons). It confirmed that shell-model calculations predicting a new shell closure in 54Ca (20 protons and 34 neutrons) correctly describe the single-particle structure in the neighbouring nucleus 55Ti (P Maierbeck et al. 2009).

The experiments were highly challenging because 24O and 56Ti form unstable radioactive beams, which can only be produced with a yield of a few particles a second, compared with the 10⁹ ions a second that are typical of experiments with stable nuclei. The results also demonstrate the capability of the fragment separator, FRS, at GSI for high-precision momentum measurement with such extremely rare isotopes. This capability will be developed further in the near future at the Facility for Antiproton and Ion Research in Darmstadt.

XMM-Newton observes emission from matter around a black hole


A recent observation by the XMM-Newton satellite revealed two prominent emission lines in the X-ray spectrum of the Seyfert galaxy 1H 0707-495. These lines are attributed to iron fluorescence and appear skewed towards lower energies as expected from relativistic effects in the close vicinity of a black hole. This is the strongest evidence yet for matter swirling just outside the event horizon of a super-massive black hole.

Seyfert galaxies are the less luminous analogues of quasars. They are named after Carl Seyfert, who in 1943 published the properties of 12 galaxies with peculiar optical emission lines emanating from the nucleus. These lines are now known to be emitted by atoms in gas clouds located light-weeks away from super-massive black holes.

Another emission line, this time in X-rays, has fascinated astronomers for more than a decade. Emitted at an energy of 6.4–7.0 keV, it arises from the fluorescent de-excitation of K-shell electrons in iron atoms. Excitement arose in 1995 when the Japanese Advanced Satellite for Cosmology and Astrophysics observed such a line strongly skewed towards lower energies. This was consistent with the relativistic distortion expected for matter orbiting a black hole.

With the potential to probe the innermost stable orbit around a black hole, the precise characterization of the iron K line was an important scientific justification for ESA’s XMM-Newton satellite, launched in December 1999. The superior spectral resolution of this mission enabled the identification of a rapidly spinning black hole in the galactic source XTE J1650-500, based on the shape of the iron K line. But the detailed XMM-Newton spectra also brought some confusion to the field, with several studies showing evidence that the observations in some Seyfert galaxies can be interpreted without invoking a relativistically broadened iron line. The detection of similar-looking iron lines around neutron stars and even white dwarfs also puzzles the community.

Is the relativistic broadening scenario a misinterpretation of the data?

The latest, extremely accurate observations by XMM-Newton of the Seyfert galaxy 1H 0707-495, published by Andy Fabian of the Institute of Astronomy in Cambridge and collaborators, provide renewed and unprecedented evidence for the relativistic interpretation. Besides the usual iron K line, for the first time they detect a second line, attributed to iron L-shell transitions, at an energy just below 1 keV. Both lines are so strongly distorted towards lower energies that they imply a black hole spinning at almost the maximum rate. A measured delay of about 30 s in the variations of the iron L line with respect to the continuum emission gives additional evidence for the relativistic scenario. The two iron lines would thus originate from the illumination of the inner accretion disk, about one gravitational radius away from the horizon of the black hole, by an X-ray continuum source located a little further out.

Precise mass measurements may help decode X-ray bursts

Researchers at the Michigan State University (MSU) National Superconducting Cyclotron Laboratory (NSCL) have made precise mass measurements of four proton-rich nuclei, 68Se, 70Se, 71Br and an excited state of 70Br. The results may make it easier to understand type I X-ray bursts, the most common stellar explosions in the galaxy.

These bursts occur in the hot and dense environment that arises when a neutron star accretes matter from a companion star in a binary system. In these circumstances, rapid burning of hydrogen and helium occurs through a series of proton captures and beta decays known as the rp process, releasing an energy of 10³²–10³³ J in the form of X-rays in a burst lasting 10–100 s. Generally the capture–decay sequence happens in a matter of seconds or less, but “waiting points” occur at the proton dripline, where the protons become too weakly bound and slower beta decays intervene.

One of the major waiting points involves 68Se, which has 34 neutrons and 34 protons, and closely related nuclei. The lifetimes of these nuclei influence the light curve of the X-ray burst as well as the final mix of elements created in the burst process. The lifetimes of the waiting points in turn depend critically on the masses of the nuclei involved, which also influence the possibility for double-proton capture that can bypass the beta-decay process and hence the waiting point.

The experiment at NSCL, conducted by Josh Savory and colleagues, used the Low Energy Beam and Ion Trap facility, LEBIT, for the mass measurements of the four nuclei. The nuclides themselves were produced by projectile fragmentation of a 150 MeV/u primary 78Kr beam and separated in flight by the A1900 separator. LEBIT takes isotope beams travelling at roughly half the speed of light and then slows and stops the isotopes for highly accurate mass measurements via Penning-trap mass spectrometry.
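The principle behind the Penning-trap measurement is that a stored ion’s cyclotron frequency in a known magnetic field fixes its mass-to-charge ratio; comparing frequencies with a well-known reference ion in the same field cancels the field itself. A minimal sketch, with illustrative numbers (not LEBIT’s actual field or charge states):

```python
import math

Q_E = 1.602176634e-19   # elementary charge, C
U = 1.66053906660e-27   # atomic mass unit, kg

def cyclotron_freq_hz(mass_u, charge_state, b_tesla):
    """Cyclotron frequency nu_c = qB / (2*pi*m) of a trapped ion,
    for a mass in atomic mass units and an integer charge state."""
    return charge_state * Q_E * b_tesla / (2.0 * math.pi * mass_u * U)

# Illustrative: a singly charged A = 68 ion in a typical high-field trap.
b = 9.4  # T (assumed value for this sketch)
nu = cyclotron_freq_hz(68.0, 1, b)
print(f"nu_c ~ {nu / 1e6:.3f} MHz")
```

Because frequencies can be measured extremely precisely, sub-keV mass uncertainties like those quoted below become possible.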

The experiment reached uncertainties ranging from 0.5 keV for 68Se to 15 keV for 70mBr, with up to a 100-fold improvement in precision (for 71Br) in comparison with previous measurements. The team then used the new measurements as input to calculations of the rp process and found an increase in the effective lifetime of 68Se, together with more precise information on the luminosity of a type I X-ray burst and on the elements produced.

Borexino homes in on neutrino oscillations

The mystery of the “missing” solar neutrinos arose in the 1970s when the pioneering experiment by Raymond Davis and colleagues in the Homestake Mine in South Dakota detected only one-third or so of the number of electron-neutrinos from the Sun that they expected. It was 30 years before this puzzle was solved, when the Sudbury Neutrino Observatory (SNO) confirmed the proposal that the neutrinos change type on their way from the centre of the Sun, reducing the number of electron-neutrinos arriving at the Earth. Such oscillations from one type to another can only occur if the neutrinos detected are mixtures of states with some difference in mass, in turn implying that neutrinos must have mass – a finding that lies beyond the Standard Model of particle physics.

Solar neutrinos have for the past 40 years been detected either by exploiting radiochemical techniques or by the detection of Cherenkov radiation. The Homestake detector exemplified the radiochemical method, with electron-neutrinos interacting with 37Cl to produce 37Ar, which was then extracted and detected through its radioactive decay. SNO, on the other hand, used heavy water to detect Cherenkov radiation from charged particles that were produced by neutrino interactions in the liquid. The results from all of the various experiments are best described by the theoretical description of neutrino oscillation developed by Stanislav Mikheyev, Alexei Smirnov and Lincoln Wolfenstein (MSW), and in particular the solution with a large mixing angle (LMA) between the mass states.

Towards the MSW-LMA scenario

To explain the flux of electron-neutrinos relative to the total flux of solar neutrinos observed in SNO, as well as the results from Homestake and other experiments, the MSW-LMA mechanism requires two different regimes for neutrino oscillation: resonant, matter-enhanced oscillations in the dense core of the Sun for energies above 5 MeV (as in SNO); and vacuum-driven oscillations at low energies, below 2 MeV (as in the gallium radiochemical experiments GALLEX, its successor the Gallium Neutrino Observatory, and SAGE). Now, for the first time, the Borexino experiment at the Gran Sasso National Laboratories has found experimental evidence for the transition between these two oscillation regimes by detecting in real time both low-energy (0.862 MeV) and high-energy (3–16 MeV) solar neutrinos, from 7Be and 8B, respectively. Both nuclei are formed in branches of the principal chain of reactions that converts hydrogen to helium in the Sun’s core – the so-called proton–proton (pp) chain, which starts with the pp reaction, p + p → d + e⁺ + νe. While the 7Be neutrinos form 7% of the neutrinos that emanate from the Sun, the 8B neutrinos above 5 MeV correspond to only 0.006% of the total flux.

Borexino consists of an unsegmented liquid-scintillator detector with a target mass of 278 t of pseudocumene (C9H12) doped with 1.5 g/l of PPO (2,5-diphenyloxazole). The scintillator is contained inside a thin (125 μm) nylon vessel that is shielded against external background by a second nylon vessel and about 1 kt of buffer, which consists of pseudocumene mixed with 5 g/l of a light quencher (dimethylphthalate). A total of 2212 8-inch photomultipliers mounted on a 13.7 m diameter stainless-steel sphere (SSS) detect the scintillation light. The SSS works as a containment vessel for both the scintillator and the buffer. It is installed inside a tank containing 2100 t of high-purity water.

The 7Be measurement

One of the main research goals for Borexino is the detection of the solar neutrinos emanating from the electron-capture reaction of 7Be, which occurs in 15% of the conversions through the proton–proton chain. The 7Be neutrinos are monoenergetic (0.862 MeV, with a 90% branching ratio) and in Borexino they are detected via elastic scattering of neutrinos off electrons. The 7Be solar neutrinos offer a unique way to tag events: the kinematic Compton-like edge at 0.665 MeV. This is an important feature because solar-neutrino interactions cannot otherwise be disentangled from the residual beta-decay radioactivity arising from natural contaminants present in the scintillator. Figure 1 shows the expected solar-neutrino spectrum in Borexino, emphasizing the signal from the 7Be neutrinos.

The intrinsic radiopurity of the scintillator is the main experimental challenge for such a detector. In Borexino, after five years of R&D, we developed purification methods that allowed us to achieve excellent purity, with intrinsic 238U and 232Th contamination levels of less than 1 part in 10¹⁷. This level of radiopurity – a record in the field – allows us to study neutrino interactions in real time at, and below, 1 MeV. It also opens up new research windows, such as:

• the possibility of detecting, in real time, neutrinos from the pep reaction and the CNO chain

• measuring low-energy 8B neutrinos through the reaction 13C(νe,e)13N

• searching for rare processes with very high sensitivity, such as probing the Pauli exclusion principle – at the level of lifetimes greater than 10³⁰ y – by searching for non-Paulian transitions in 12C nuclei (Derbin 2008).

Borexino has been taking data since May 2007. After a few months, a clear signal in the energy spectrum of events detected in the fiducial mass of about 80 t revealed the first detection of 7Be solar neutrinos (Borexino Collaboration, Arpesella et al. 2008). This observation allowed the first direct determination of the electron-neutrino survival probability, Pee, below 1 MeV. The MSW-LMA model predicts two regimes for Pee: below 1 MeV, with Pee ≈ 0.6; and above 2 MeV, with Pee ≈ 0.3. Prior to Borexino, only radiochemical experiments could probe the energy region below 1 MeV, and they all measured an integrated solar-neutrino flux above a certain threshold – the threshold for the electron-neutrino capture interaction. The observation of 7Be neutrinos by Borexino provides a result of Pee = 0.56±0.10 at 0.862 MeV, which is in good agreement with the MSW-LMA prediction (Borexino Collaboration, Alimonti et al. 2008).
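The two quoted regimes follow from the standard two-flavour limits of the MSW-LMA solution – a minimal sketch, assuming a typical LMA solar mixing angle of about 33.5° (a value not given in this article):

```python
import math

# Assumed solar mixing angle (typical LMA best-fit value):
theta12 = math.radians(33.5)

# Low energies: vacuum-averaged oscillations,
# Pee = 1 - (1/2) sin^2(2 theta12).
pee_vacuum = 1.0 - 0.5 * math.sin(2.0 * theta12) ** 2

# High energies: adiabatic, matter-dominated conversion in the core,
# Pee = sin^2(theta12).
pee_matter = math.sin(theta12) ** 2

print(f"Pee (vacuum, low energy)   ~ {pee_vacuum:.2f}")
print(f"Pee (matter, high energy)  ~ {pee_matter:.2f}")
```

The two numbers land close to the ~0.6 and ~0.3 quoted in the text, which is why the low- and high-energy measurements together probe the vacuum–matter transition.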

This measurement casts light on another unresolved aspect of the physics of the solar core: the ratio of helium production via the pp chain to that via the cycle involving carbon, nitrogen and oxygen (the CNO cycle). Taken all together, the integrated rates measured by Homestake and the gallium experiments are a function of the fluxes of solar neutrinos from pp, 7Be, the CNO cycle and the decay of 8B. Therefore, using the Borexino result on 7Be neutrinos, it is possible to study the correlation between the pp and CNO fluxes. Figure 2 shows contours at the 68%, 90% and 99% confidence levels for the combined estimate of the pp and CNO fluxes, normalized to the predictions of the Standard Solar Model (SSM). The 8B flux is fixed by the Cherenkov experiments (Super-Kamiokande and SNO).

As figure 2 shows, the measurement of 7Be neutrinos is important for the study of a fundamental parameter: the flux of pp neutrinos, the most abundant solar neutrinos produced in the core of the Sun. The theory of beta decay, with some extension, allows the calculation of the basic p + p → d + e⁺ + νe cross-section, which at 1 MeV is around 10⁻⁴⁷ cm². Measuring such a small value is beyond the reach of current technology, so the cross-section for this important process – which drives the evolution of the Sun – can only be determined theoretically. A check of the flux predicted by the SSM for pp neutrinos is therefore important.

Figure 2 makes use of the luminosity constraint – a specific linear combination of solar-neutrino fluxes that corresponds to the measured photon luminosity of the Sun, assuming that nuclear-fusion reactions are responsible for generating energy inside the Sun. This yields fpp = 1.005 (+0.008/−0.020) with the luminosity constraint and fpp = 1.04 (+0.13/−0.19) without it. These are the best measurements of the pp solar-neutrino flux. The result on fCNO translates into a CNO contribution to the solar luminosity of <5.4% (90% CL); the current SSM predicts a contribution of order 1%.
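The power of the luminosity constraint can be seen in a back-of-the-envelope estimate: if the pp chain supplies essentially all of the Sun’s output, each ~26.7 MeV termination (4p → 4He) emits two neutrinos, almost completely fixing the pp-neutrino flux at Earth. A rough sketch with standard textbook numbers, ignoring the few per cent of energy carried off by the neutrinos themselves:

```python
import math

L_SUN = 3.846e26          # W, solar luminosity
AU = 1.496e11             # m, Earth-Sun distance
MEV = 1.602176634e-13     # J per MeV

# Fusion terminations per second needed to power the Sun:
terminations_per_s = L_SUN / (26.7 * MEV)

# Two neutrinos per termination, diluted over a sphere of radius 1 AU:
flux_m2 = 2.0 * terminations_per_s / (4.0 * math.pi * AU**2)
print(f"pp-neutrino flux ~ {flux_m2 * 1e-4:.1e} per cm^2 per s")
```

The result, around 6×10¹⁰ cm⁻² s⁻¹, is why the constrained fpp comes out so close to unity with such small errors.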

Borexino has also recently performed a measurement of the 8B solar-neutrino flux above 3 MeV, which was possible because of the high radiopurity achieved. Prior to Borexino, 8B neutrinos were measured above 5 MeV using Cherenkov detectors. The results from these experiments agree well with Borexino’s measurement.

The measurement of the 8B flux allows a determination of the corresponding value of Pee at an effective energy (taking into account the spectrum of 8B neutrinos) of 8.6 MeV. By detecting 8B neutrinos, Borexino has therefore measured Pee simultaneously at 0.862 MeV and at 8.6 MeV (figure 3). Because the dominant systematic effects are common to the low- and high-energy measurements, the result establishes a difference of about 2σ between Pee for 7Be and for 8B neutrinos: the measured ratio of the survival probabilities is 1.60±0.33 (Borexino Collaboration, Bellini et al. 2008). Using other solar-neutrino observations, it is also possible to determine Pee for pp neutrinos, which figure 3 also shows. Combined, these results confirm for the first time, within today’s accuracy, the vacuum–matter transition predicted by the MSW-LMA scenario.
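The quoted significance is easy to check from the ratio alone – treating the published value and its error as Gaussian:

```python
# Measured ratio of survival probabilities Pee(7Be)/Pee(8B) and its
# uncertainty, as quoted in the text:
ratio, sigma = 1.60, 0.33

# No oscillation-regime change would give a ratio of 1, so the
# significance of the difference is the deviation from unity:
significance = (ratio - 1.0) / sigma
print(f"deviation of the ratio from unity: {significance:.1f} sigma")
```

This reproduces the roughly 2σ difference stated above.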

• Borexino at the Gran Sasso Laboratory is an international collaboration funded by INFN (Italy); NSF (US) for Princeton University, Virginia Tech, University of Massachusetts Amherst; BMBF and DFG (Germany) for MPI für Kernphysik Heidelberg, TU München; Rosnauka (Russia) for RRC Kurchatov Institute and JINR; MNiSW (Poland) for the Institute of Physics, Jagiellonian University; and Laboratoire APC Paris.
