

A global network listens for ripples in space–time

Albert Einstein predicted the existence of gravitational waves, faint ripples in space–time, in his general theory of relativity. They are generated by catastrophic events involving astronomical objects that typically have the mass of stars. The most predictable generators of such waves are likely to be binary systems of black holes and neutron stars that spiral inwards and coalesce. However, there are many other possible sources, such as: stellar collapses that result in neutron stars and black holes (supernova explosions); rotating asymmetric neutron stars, such as pulsars; black-hole interactions; and the violent physics at the birth of the universe.

Much as a modulated radio signal can carry the sound of a song, these gravitational-wave ripples precisely reproduce the movement of the colliding masses that generated them. A gravitational-wave observatory that senses the space–time ripples is therefore actually transducing the motion of faraway stars. The great challenge is that these ripples correspond to a strain of space (a change in length per unit length) of the order of 10⁻²² to 10⁻²³ – tremors so small that they are buried by the natural vibrations of everyday objects. As inconspicuous as a raindrop in a waterfall, they are difficult to detect.


During the past few years, a handful of gravitational-wave projects have dared to make a determined attempt to detect these ripples. The Italian–French Virgo project (figure 1), the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US, the British–German GEO600 project and the TAMA project in Japan have all constructed gravitational-wave observatories of impressive size and ambition. Several years ago, the GEO team joined the LIGO Scientific Collaboration (LSC) to analyse the data collected from the two interferometers. Recently the Virgo Collaboration has been reinforced by a Dutch group, from Nikhef, Amsterdam.


The gravitational-wave observatories are based on kilometre-length, L-shaped Michelson interferometers in which powerful and stable laser beams precisely measure the differential length of the two arms using heavy suspended mirrors that act as test masses (see figure 2). The space–time waves “drag” the test masses in the interferometer, in effect transducing (at a greatly reduced scale) the movement of the dying stars millions of parsecs away, much in the way that ears transduce sound waves into nerve impulses.

The main problem in gravitational-wave detection is that the largest expected waves from astrophysical objects will strain space–time by a factor of only 10⁻²², resulting in movements of 10⁻¹⁸ to 10⁻¹⁹ m of the test masses on the kilometre-length interferometer arms. This is smaller than a thousandth of a proton diameter – a signal so small that Einstein, having predicted the existence of gravitational waves, also predicted that they would never be detected.
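
To make the scale concrete, a back-of-the-envelope calculation (a sketch only, using round numbers taken from the figures quoted above) converts a strain of 10⁻²² on a 4 km arm into a mirror displacement:

# Illustrative only: convert a gravitational-wave strain into the displacement
# of the test masses on a kilometre-scale interferometer arm.
arm_length_m = 4.0e3           # LIGO arm length (Virgo's arms are 3 km)
strain = 1.0e-22               # typical expected strain h = delta_L / L
proton_diameter_m = 1.7e-15    # approximate value, for comparison

displacement_m = strain * arm_length_m
print(f"mirror displacement: {displacement_m:.1e} m")
print(f"in proton diameters: {displacement_m / proton_diameter_m:.1e}")
# prints ~4.0e-19 m, i.e. a few ten-thousandths of a proton diameter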

To surmount this challenge requires seismic isolators with enormous rejection factors to ensure that the suspended mirrors are “quieter” than the predicted signal that they lie in wait to perceive (the typical motion of the Earth’s crust, in the absence of earthquakes, is of the order of a micrometre at 1 Hz). The instruments use extremely stable and powerful laser beams to measure the mirror separations with the required precision, but without “kicking” the mirrors with a radiation pressure exceeding the sought-after signal. The mirrors themselves are marvellous, state-of-the-art constructions, and the thermal motion of their surfaces is more hushed than the signal amplitude. All of this is housed in large-diameter (1.2 m for Virgo and LIGO) ultra-high-vacuum (UHV) tubes that bisect the countryside. In fact, the vacuum pipes represent the largest UHV systems on Earth, and their volume dwarfs that of the longest particle-collider pipes.


The detectors are extremely complex and difficult to tune. The installation of the LIGO interferometers finished in 2000 and complete installation of Virgo followed in 2003. Both instruments then went through several years of tuning, with LIGO reaching its design sensitivity about a year ago and Virgo fast approaching its own design target (figure 3).


On 22 May, a press conference in Cascina, Pisa, announced the first Virgo science run, as well as the first joint Virgo–LIGO data collection. That week also saw the first meeting of Virgo and the LSC in Europe. It was a momentous occasion for the entire field of gravitational-wave detection. Although the LIGO network was already into its fifth science run, which had started in November 2005, many in the community saw the announcement as marking the birth of a global gravitational-wave detector network.


The collaborative first and fifth science runs of Virgo and LIGO, respectively, ended on 1 October. The effort proved to be a tremendous success, demonstrating that it is possible to operate gravitational-wave observatories with excellent sensitivity and solid reliability for prolonged periods. The accumulated LIGO data amount to an effective full year of observation at design sensitivity with all three LIGO interferometers in coincident operation, and the four-month-long joint run produced coincidence data between the two observatories with high efficiency. Although no gravitational waves were detected, the collected data are being analysed and will produce upper limits and other astrophysically significant results.

Running gravitational-wave detectors in a network has a fundamental importance related to the goal of opening up the field of gravitational-wave astronomy. Gravitational waves are best thought of as fluctuating, transverse strains in space–time. In an oversimplified analogy, they can be likened to sound waves, that is, to pressure waves of space–time travelling in vacuum at the speed of light. Like sound waves, gravitational waves come in the acoustic frequency band and, since gravitational-wave interferometers act essentially as microphones and lack directional specificity, the waves can be “heard” rather than “seen”. This means that a single observatory may detect a gravitational-wave burst but would have difficulty pinpointing its source. Just as two ears on opposite sides of the head are necessary to locate the origin of a sound, a network of several detectors at distant points around the Earth can triangulate the sources of gravitational waves in space. This requires a global network and is critical for pinpointing the location of the source in the sky so that other instruments, such as optical telescopes, can provide additional information about the source. In addition, signals as weak as gravitational waves require coincidence detection at several distant locations to confirm their validity by rejecting spurious events generated by local noise.
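
As a toy illustration of the triangulation idea (not the collaborations' actual sky-localization method), the arrival-time difference between two sites constrains the angle between the source direction and the baseline joining them; the sketch below assumes plane-wave propagation at the speed of light and an approximate 3000 km baseline:

import math

# Toy two-detector localization: a plane wave arriving at an angle theta to
# the baseline of length d produces a time delay dt = (d / c) * cos(theta)
# between the two sites.  Inverting the measured delay gives a ring of
# possible directions on the sky; a third, distant detector breaks the
# degeneracy.
C = 3.0e8            # speed of light in m/s
BASELINE_M = 3.0e6   # assumed detector separation, roughly 3000 km

def source_angle_deg(delay_s):
    """Angle (degrees) between the baseline and the source direction."""
    cos_theta = C * delay_s / BASELINE_M
    if abs(cos_theta) > 1.0:
        raise ValueError("delay exceeds the light travel time between sites")
    return math.degrees(math.acos(cos_theta))

print(source_angle_deg(5.0e-3))   # a 5 ms delay corresponds to 60 degrees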

Before Virgo joined, the LSC already consisted of two major gravitational-wave observatories in the US – one in Livingston, Louisiana, and the other in Hanford, Washington – as well as the smaller European GEO600 observatory in Germany. The addition of the Virgo interferometer to this network has greatly reinforced its detection and pointing capabilities. The Livingston observatory hosts a single interferometer with a pair of 4 km arms. Hanford has two instruments: a 4 + 4 km interferometer like that at Livingston and a smaller 2 + 2 km one. GEO has a single 0.6 + 0.6 km interferometer, while Virgo operates a 3 + 3 km one. The introduction of Fabry–Perot cavities in the arms boosts the sensitivity of the three larger interferometers, extending the effective lengths of the arms to hundreds of kilometres. GEO600 increases its effective arm length to 1.2 km by using a folded beam.


Japan has a somewhat smaller (0.3 + 0.3 km) interferometer, known as TAMA, located in Mitaka, near Tokyo. It is currently being refurbished with advanced seismic isolators and should soon join the growing gravitational-wave network. Japan is also considering the construction of the Large-scale Cryogenic Gravitational-Wave Telescope (LCGT). This would be a 3 + 3 km, underground interferometer with cryogenic mirrors, which would later become part of the global array of gravitational-wave interferometers. Australia is also developing technologies for gravitational-wave instruments and is planning to build an interferometer.

The effectiveness of gravitational-wave observatories is characterized both by their “reach” and by their duty cycle. The reach is conventionally defined as the maximum distance at which the inspiral signal of two neutron stars (each of 1.4 solar masses) would be detectable with the available sensitivity. In gravitational-wave astronomy, improvements in sensitivity pay off more than they do in optical astronomy: doubling the sensitivity doubles the reach, resulting in an eightfold increase in the observed cosmic volume and hence in the expected event rate. Similarly, because of the coincidence requirements between multiple interferometers, the duty cycle of a gravitational-wave observatory is more important than for a stand-alone optical instrument.
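
The cubic scaling behind these statements is simple enough to spell out (a sketch only):

# The surveyed volume, and hence the expected event rate, grows as the cube
# of the reach, which itself scales linearly with sensitivity.
def volume_gain(sensitivity_factor):
    return sensitivity_factor ** 3

for factor in (2, 10):
    print(f"x{factor} sensitivity -> x{volume_gain(factor)} volume and event rate")
# x2 sensitivity -> x8 volume and event rate
# x10 sensitivity -> x1000 volume and event rate (relevant to the upgrades discussed below)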

During the fifth science run, the two larger LIGO interferometers showed a detection range for the inspiral of two neutron stars of 15–16 Mpc, while the half-length interferometer had a reach of 6–7 Mpc. Virgo, with its partial commissioning, achieved a reach of 4.5 Mpc during the recent first joint run with LIGO. Virgo did not yet reach its design sensitivity in this run. However, its seismic isolation system gave it superior immunity to seismic disturbances, resulting in longer “locks” (with a record of 95 hours) and an excellent observational duty factor. (The interferometer can only take data when it is “locked” – when all of the mirrors are controlled and held in place to within a small fraction of a wavelength of light.) LIGO reached a duty cycle of 85–88% at the end of its fifth science run, while Virgo reached the same level on its first run.

The duty cycle of the triple coincidence of Virgo, LIGO–Hanford and LIGO–Livingston exceeded 58% (or 54% when including the smaller, second Hanford interferometer) and 40% in conjunction with GEO600. This was an amazing achievement given the tremendous technical finesse required to maintain all of these complex instruments in simultaneous operation.

The gold-plated gravitational-wave event would be the detection of a neutron star (NS) or black hole (BH) inspiral. However, even at the present design sensitivity, the Virgo–LIGO network has a relatively small chance of detecting such events. Currently, LIGO can expect a probability of only a few per cent per year of detecting an NS–NS inspiral, with perhaps a larger probability of detecting NS–BH and BH–BH inspirals, and an unknown probability of detecting supernova explosions (only if asymmetric and in nearby galaxies) and rotating neutron stars (only if a mass-distribution asymmetry is present).

As scientists analyse the valuable data just acquired from the successful science run, the three main interferometers will now undergo a one- to two-year period of moderate enhancements and final tuning, and then resume operation for another year of joint data acquisition with greater sensitivity and an order of magnitude better odds of detection. At the end of that period, the interferometers will undergo a more drastic overhaul to boost their sensitivity by an order of magnitude with respect to the present value. A tenfold increase in sensitivity will result in a thousandfold increase of the listened-to cosmic volume, and correspondingly, up to a thousand times improvement in detection probability.

At that point (expected in 2015–16), the network will be sensitive to inspiral events within at least 200 Mpc and we can expect to detect and map several such events a year, based on our current understanding of populations of astrophysical objects. This will mark the beginning of gravitational-wave astronomy, a new window to explore the universe in conjunction with the established electromagnetic-wave observatories and the neutrino detectors.

Beyond this already ambitious programme, the gravitational-wave community has begun tracing a roadmap to design even more powerful observatories for the future. Interferometers based on the surface of the Earth can operate with high sensitivity only above 10 Hz, as they are limited by the seismically activated fluctuations of Earth’s Newtonian attraction. This limits detection to the ripples generated by relatively small objects (tens to hundreds of solar masses) and to “modest” distances (redshift z = 1). Third-generation observatories built deep underground, far from the perturbations of the Earth’s surface, would be able to detect gravitational waves down to 1 Hz, and be sensitive enough to detect the lower-frequency signals coming from more massive objects, such as intermediate-mass black holes. Finally, space-based gravitational-wave detection interferometers such as the Laser Interferometer Space Antenna (LISA) are being designed to listen at an even lower frequency band. LISA would detect millihertz signals coming from the supermassive black holes lurking at the centre of galaxies. The aim is to launch the interferometer around 10 years from now, as a collaboration between ESA and NASA.

Although gravitational waves have not yet been detected, the gravitational-wave community is poised to prove Einstein right and wrong: right in his prediction that gravitational waves exist, wrong in his prediction that we will never be able to detect them.

Physics in the multiverse

Is our entire universe a tiny island within an infinitely vast and infinitely diversified meta-world? This could be either one of the most important revolutions in the history of cosmogonies or merely a misleading statement that reflects our lack of understanding of the most fundamental laws of physics.


A self-reproducing universe. This computer-generated simulation shows exponentially large domains, each with different laws of physics (associated with different colours). Peaks are new “Big Bangs”, with heights corresponding to the energy density. (Image credit: simulations by Andrei and Dimitri Linde.)

The idea in itself is far from new: from Anaximander to David Lewis, philosophers have exhaustively considered this eventuality. What is especially interesting today is that it emerges, almost naturally, from some of our best – but often most speculative – physical theories. The multiverse is no longer a model; it is a consequence of our models. It offers an obvious understanding of the strangeness of the physical state of our universe. The proposal is attractive and credible, but it requires a profound rethinking of current physics.

At first glance, the multiverse seems to lie outside of science because it cannot be observed. How, following the prescription of Karl Popper, can a theory be falsifiable if we cannot observe its predictions? This way of thinking is not really correct for the multiverse for several reasons. First, predictions can be made in the multiverse: it leads only to statistical results, but this is also true for any physical theory within our universe, owing both to fundamental quantum fluctuations and to measurement uncertainties. Secondly, it has never been necessary to check all of the predictions of a theory to consider it as legitimate science. General relativity, for example, has been extensively tested in the visible world and this allows us to use it within black holes even though it is not possible to go there to check. Finally, the critical rationalism of Popper is not the final word in the philosophy of science. Sociologists, aestheticians and epistemologists have shown that there are other demarcation criteria to consider. History reminds us that the definition of science can only come from within and from the praxis: no active area of intellectual creation can be strictly delimited from outside. If scientists need to change the borders of their own field of research, it would be hard to justify a philosophical prescription preventing them from doing so. It is the same with art: nearly all artistic innovations of the 20th century have transgressed the definition of art as would have been given by a 19th-century aesthetician. Just as with science and scientists, art is internally defined by artists.

For all of these reasons, it is worth considering seriously the possibility that we live in a multiverse. This could help in understanding the two problems of complexity and naturalness. The fact that the laws and couplings of physics appear to be fine-tuned to such an extent that life can exist, and that most fundamental quantities assume extremely “improbable” values, would appear obvious if our entire universe were just a tiny part of a huge multiverse where different regions exhibit different laws. In this view, we are living in one of the “anthropically favoured” regions. This anthropic selection has strictly no teleological or theological dimension and absolutely no link with any kind of “intelligent design”. It is nothing other than the obvious generalization of the selection effect that already has to be taken into account within our own universe. When dealing with a sample, it is impossible to avoid wondering whether it accurately represents the full set, and this question must of course be asked when considering our universe within the multiverse.

The multiverse is not a theory. It appears as a consequence of some theories, and these have other predictions that can be tested within our own universe. There are many different kinds of possible multiverses, depending on the particular theories, some of them even being possibly interwoven.


The most elementary multiverse is simply the infinite space predicted by general relativity – at least for flat and hyperbolic geometries. An infinite number of Hubble volumes should fill this meta-world. In such a situation, everything that is possible (i.e. compatible with the laws of physics as we know them) should occur, because an event with a non-vanishing probability has to happen somewhere if space is infinite. The structure of the laws of physics and the values of fundamental parameters cannot be explained by this multiverse, but many specific circumstances can be understood through anthropic selection. Some places are, for example, less homogeneous than our Hubble volume, so we cannot live there: they are less life-friendly than our universe, where the primordial fluctuations are perfectly adapted as the seeds for structure formation.

General relativity also faces the multiverse issue when dealing with black holes. The maximal analytic extension of the Schwarzschild geometry, as exhibited by conformal Penrose–Carter diagrams, shows that another universe could be seen from within a black hole. This interesting feature is well known to disappear when the collapse is considered dynamically. The situation is, however, more interesting for charged or rotating black holes, where an infinite set of universes with attractive and repulsive gravity appear in the conformal diagram. The wormholes that possibly connect these universes are extremely unstable, but this does not alter the fact that this solution reveals other universes (or other parts of our own universe, depending on the topology), whether accessible or not. This multiverse is, however, extremely speculative as it could be just a mathematical ghost. Furthermore, nothing allows us to understand explicitly how it formed.

A much more interesting pluriverse is associated with the interior of black holes when quantum corrections to general relativity are taken into account. Bounces should replace singularities in most quantum gravity approaches, and this leads to an expanding region of space–time inside the black hole that can be considered as a universe. In this model, our own universe would have been created by such a process and should also have a large number of child universes, thanks to its numerous stellar and supermassive black holes. This could lead to a kind of cosmological natural selection in which the laws of physics tend to maximize the number of black holes (simply because such universes generate more universes of the same kind). It also allows for several possible observational tests that could refute the theory and does not rely on the use of any anthropic argument. However, it is not clear how the constants of physics could be inherited from the parent universe by the child universe with small random variations, and the detailed model associated with this scenario does not yet exist.

One of the richest multiverses is associated with the fascinating meeting of inflationary cosmology and string theory. On the one hand, eternal inflation can be understood by considering a massive scalar field. The field will have quantum fluctuations, which will, in half of the regions, increase its value; in the other half, the fluctuations will decrease the value of the field. In the half where the field jumps up, the extra energy density will cause the universe to expand faster than in the half where the field jumps down. After some time, more than half of the regions will have the higher value of the field simply because they expand faster than the low-field regions. The volume-averaged value of the field will therefore rise and there will always be regions in which the field is high: the inflation becomes eternal. The regions in which the scalar field fluctuates downward will branch off from the eternally inflating tree and exit inflation.
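
The volume-weighting argument can be mimicked with a deliberately crude Monte Carlo (arbitrary units and parameters, not a solution of the real stochastic-inflation equations): regions where the field jumps up expand faster and soon dominate the total volume, so the volume-averaged field keeps rising.

import math
import random

# Crude toy model of eternal inflation.  Each region carries a scalar-field
# value phi and a volume weight.  Every step, phi random-walks up or down by
# a fixed quantum kick, and the region's volume grows faster for larger phi,
# mimicking a higher expansion rate at higher energy density.
random.seed(1)
regions = [{"phi": 1.0, "vol": 1.0} for _ in range(10000)]
KICK, GROWTH = 0.05, 0.3   # arbitrary units

for _ in range(200):
    for r in regions:
        r["phi"] += random.choice((+KICK, -KICK))
        r["vol"] *= math.exp(GROWTH * r["phi"])

total_vol = sum(r["vol"] for r in regions)
plain_avg = sum(r["phi"] for r in regions) / len(regions)
volume_avg = sum(r["phi"] * r["vol"] for r in regions) / total_vol
print(f"plain average of phi: {plain_avg:.2f}")
print(f"volume-weighted average of phi: {volume_avg:.2f}")
# The volume-weighted average exceeds the plain one: the fastest-expanding,
# highest-field regions dominate, so somewhere inflation never ends.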


A three-dimensional representation of a four-dimensional Calabi–Yau manifold. This describes the geometry of the extra “internal” dimensions of M-theory and relates to one particular (string-inspired) multiverse scenario. (Image credit: simulation by Jean-François Colonna, CMAP/École Polytechnique.)

On the other hand, string theory has recently faced a third change of paradigm. After the revolutions of supersymmetry and duality, we now have the “landscape”. This metaphoric word refers to the large number (maybe 10⁵⁰⁰) of possible false vacua of the theory. The known laws of physics would just correspond to one specific island among many others. The huge number of possibilities arises from different choices of Calabi–Yau manifolds and different values of generalized magnetic fluxes over different homology cycles. Among other enigmas, the incredibly strange value of the cosmological constant (why are the first 119 decimals of the “natural” value exactly compensated by some mysterious phenomena, but not the 120th?) would simply appear as an anthropic selection effect within a multiverse where nearly every possible value is realized somewhere. At this stage, every bubble-universe is associated with one realization of the laws of physics and itself contains an infinite space where all contingent phenomena take place somewhere. Because the bubbles are causally disconnected forever (owing to the fast “space creation” by inflation), it will not be possible to travel between them and discover new laws of physics.

This multiverse – if true – would force a profound change in our deep understanding of physics. The laws would reappear as kinds of phenomena; the ontological primacy of our universe would have to be abandoned. At other places in the multiverse there would be other laws, other constants, other numbers of dimensions; our world would be just a tiny sample. It could be, following Copernicus, Darwin and Freud, the fourth narcissistic injury.

Quantum mechanics was probably among the first branches of physics leading to the idea of a multiverse. In some situations, it inevitably predicts superposition. To avoid the existence of macroscopic Schrödinger cats simultaneously living and dying, Bohr introduced a reduction postulate. This has two considerable drawbacks: first, it leads to an extremely intricate philosophical interpretation where the correspondence between the mathematics underlying the physical theory and the real world is no longer isomorphic (at least not at any time), and, second, it violates unitarity. No known physical phenomenon – not even the evaporation of black holes in its modern descriptions – does this.


These are good reasons for considering seriously the many-worlds interpretation of Hugh Everett. Every possible outcome of every event exists in its own history or universe, via quantum decoherence instead of wavefunction collapse. In other words, there is a world where the cat is dead and another where it is alive. This is simply a way of trusting strictly the fundamental equations of quantum mechanics. The worlds are not spatially separated, but exist rather as kinds of “parallel” universes. This tantalizing interpretation solves some paradoxes of quantum mechanics but remains vague about how to determine when the splitting of universes happens. This multiverse is complex and, depending on the quantum nature of the phenomena leading to the other kinds of multiverses, it could lead to higher or lower levels of diversity.

More speculative multiverses can also be imagined, associated with a kind of platonic mathematical democracy or with nominalist relativism. In any case, it is important to underline that the multiverse is not a hypothesis invented to answer a specific question. It is simply a consequence of a theory usually built for another purpose. Interestingly, this consequence also solves many complexity and naturalness problems. In most cases, it even seems that the existence of many worlds is closer to Ockham’s razor (the principle of simplicity) than the ad hoc assumptions that would have to be added to models to avoid the existence of other universes.

Given a model, for example the string-inflation paradigm, is it possible to make predictions in the multiverse? In principle, it is, at least in a Bayesian approach. The probability of observing vacuum i (and the associated laws of physics) is simply P_i = P_i^prior × f_i, where P_i^prior is determined by the geography of the landscape of string theory and the dynamics of eternal inflation, and the selection factor f_i characterizes the chances for an observer to evolve in vacuum i. This distribution gives the probability for a randomly selected observer to be in a given vacuum. Clearly, predictions can only be made probabilistically, but this is already true in standard physics. The fact that we can observe only one sample (our own universe) does not change the method qualitatively and still allows the refuting of models at given confidence levels. The key points here are the well known peculiarities of cosmology, even with only one universe: the observer is embedded within the system described; the initial conditions are critical; the experiment is “locally” irreproducible; the energies involved have not been experimentally probed on Earth; and the arrow of time must be conceptually reversed.
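
A minimal numerical sketch of this bookkeeping, with an invented toy landscape of three vacua (the real priors and selection factors are, as discussed below, essentially unknown):

# Toy Bayesian weighting over vacua: the probability of a randomly chosen
# observer finding themselves in vacuum i is proportional to the prior
# (volume) weight times the selection factor f_i.  All numbers are invented.
priors = {"vacuum_A": 0.70, "vacuum_B": 0.25, "vacuum_C": 0.05}
selection = {"vacuum_A": 1e-6, "vacuum_B": 1e-3, "vacuum_C": 1e-2}

weights = {v: priors[v] * selection[v] for v in priors}
total = sum(weights.values())
posterior = {v: w / total for v, w in weights.items()}

for vacuum, prob in posterior.items():
    print(f"{vacuum}: {prob:.3f}")
# A vacuum that is rare by volume (vacuum_C) can dominate once observer
# selection is folded in -- which is why estimating f_i matters so much.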

However, this statistical approach to testing the multiverse suffers from severe technical shortcomings. First, while it seems natural to identify the prior probability with the fraction of volume occupied by a given vacuum, the result depends sensitively on the choice of the space-like hypersurface on which the distribution is to be evaluated. This is the so-called “measure problem” in the multiverse. Second, it is impossible to give any sensible estimate of f_i. This would require an understanding of what life is – and even of what consciousness is – and that simply remains out of reach for the time being. Except in some favourable cases – for example, when all the universes of the multiverse present a given characteristic that is incompatible with our universe – it is hard to refute explicitly a model in the multiverse. But difficult in practice does not mean intrinsically impossible. The multiverse remains within the realm of Popperian science. It is not qualitatively different from other proposals associated with the usual ways of doing physics. Clearly, new mathematical tools and far more accurate predictions in the landscape (which is basically totally unknown) are needed for falsifiability to be more than an abstract principle in this context. Moreover, falsifiability is just one criterion among many possible ones and it should probably not be over-determined.


When facing the question of the incredible fine-tuning required for the fundamental parameters of physics to allow the emergence of complexity, there are a few possible ways of thinking. If one does not want to invoke God or rely on an unbelievable stroke of luck that led to extremely specific initial conditions, there are mainly two remaining hypotheses. The first is to consider that since complexity – and in particular, life – is an adaptive process, it would have emerged in nearly any kind of universe. This is a tantalizing answer, but our own universe shows that life requires extremely specific conditions to exist. It is hard to imagine life in a universe without chemistry, maybe without bound states or with other numbers of dimensions. The second idea is to accept the existence of many universes with different laws, where we naturally find ourselves in one of those compatible with complexity. The multiverse was not imagined to answer this specific question but appears “spontaneously” in serious physical theories, so it can be considered as the simplest explanation of the puzzling issue of naturalness. This of course does not prove the model to be correct, but it should be emphasized that there is absolutely no “pre-Copernican” anthropocentrism in this thought process.

It could well be that the whole idea of multiple universes is misleading. It could well be that the discovery of the most fundamental laws of physics will make those parallel worlds totally obsolete in a few years. It could well be that with the multiverse, science is just entering a “no through road”. Prudence is mandatory when physics tells us about invisible spaces. But it could also very well be that we are facing a deep change of paradigm that revolutionizes our understanding of nature and opens new fields of possible scientific thought. Because they lie on the border of science, these models are dangerous, but they offer the extraordinary possibility of constructive interference with other kinds of human knowledge. The multiverse is a risky thought – but, then again, let’s not forget that discovering new worlds has always been risky.

When white dwarfs collide…

A peculiar supernova discovered last year could be the first that is known to have originated from the coalescence of two white dwarfs. These compact stars orbiting each other slowly spiralled inward until they merged, triggering the giant explosion.

A supernova is the explosion of a star that, for several days, becomes as luminous as about a thousand million stars similar to the Sun. Supernovae are bright enough to be detected in remote galaxies (CERN Courier October 2007 p13). Astronomers classify them according to whether their spectrum shows evidence of hydrogen (Type II) or not (Type I). This difference in spectrum reflects a completely different explosion mechanism. Type II supernovae (SN II) originate from the core collapse of massive, short-lived stars running out of nuclear fuel, whereas the most common Type I supernovae (SN Ia) occur when catastrophic nuclear fusion blasts apart a white dwarf that has accreted too much gas from a normal companion star.

White dwarfs are the remaining cores of stars not massive enough to end their lives in SN II explosions. In about five thousand million years, the Sun will become such a compact star, as small as the Earth and composed mainly of carbon and oxygen. The possibility of producing a supernova through the merger of two white dwarfs had remained purely theoretical until now. However, there is strong evidence that a supernova discovered on 26 September 2006, called SN 2006gz, has such a peculiar origin. This is at least the conclusion of a team of astronomers led by Malcolm Hicken, a graduate student at the Harvard-Smithsonian Center for Astrophysics (CfA). They found three observational features suggesting that this explosion – first classified as a Type Ia supernova – was caused by a different mechanism. The most important evidence is that SN 2006gz has the strongest signature of unburnt carbon ever reported. Merging white dwarfs are expected to have carbon in their outer layers that would be pushed off by the explosion from the inside. The spectrum of SN 2006gz also shows evidence for silicon that would have been compressed by the shock wave rebounding from the surrounding layers of carbon and oxygen. Additionally, SN 2006gz was brighter than expected, indicating that its progenitor exceeded the 1.4 solar-mass Chandrasekhar limit – the upper bound for a single white dwarf. Only one other potential example of a super-Chandrasekhar supernova has been seen (SN 2003fg), but that supernova did not show the carbon and silicon spectral features of SN 2006gz, which are predicted for merging white dwarfs by computer models.

As a first example of a different kind of supernova, the observations of SN 2006gz will allow astronomers to distinguish these more powerful explosions more clearly from the single white-dwarf blasts. This is particularly important for cosmology, as the similarities among normal Type Ia supernovae were used to reveal the accelerated expansion of the universe thought to be driven by dark energy (CERN Courier September 2003 p23).

J-PARC accelerates protons to 3 GeV

On 31 October a team at the Japan Proton Accelerator Research Complex (J-PARC) accelerated a proton beam to the design energy of 3 GeV in the new Rapid-Cycling Synchrotron (RCS). This is an important step for this joint project between KEK and the Japan Atomic Energy Agency.

The team began beam commissioning of the RCS during the run that started on 10 September. The linac was once again in operation and on 2 October the beam was successfully transported from the linac to the RCS. Two days later, the H⁻ beam was transported to the H0 dump located at the injection section of the RCS, without the charge-exchange foil in place.

The charge-exchange foil was installed during the following scheduled two-week shutdown. On 25 October the proton beam, produced by stripping the two electrons from the H⁻ ions in the foil, was transported through one arc of the RCS and extracted to the beam-transport line to the muon and neutron production targets, known as 3NBT. As the targets are not yet ready, the beam currently goes to a 4 kW beam dump just beyond the extraction system. The following day, the beam circulated in the RCS and was extracted to 3NBT. Finally, on 31 October, the team accelerated a beam in the RCS to the design energy of 3 GeV and extracted it to the 3NBT dump via the kicker system.

One aim during commissioning has been to minimize the radioactivation of the accelerator components, because the team will have to replace items such as the charge-exchange foil-replacement system after the beam commissioning. To achieve this, the team carried out the commissioning with single shots of the linac beam with a peak current of 5 mA and a pulse length of 50 μs. This allowed the team to accumulate useful beam data “shot by shot” with minimal radioactivation of the accelerator components.
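
For a sense of scale, the quoted single-shot parameters translate into a particle count as follows (a rough estimate assuming a uniform pulse and ignoring chopping and injection losses):

# Rough protons-per-shot estimate from the quoted linac pulse parameters
# (uniform 5 mA over 50 microseconds assumed; losses ignored).
PEAK_CURRENT_A = 5.0e-3
PULSE_LENGTH_S = 50.0e-6
ELEMENTARY_CHARGE_C = 1.602e-19

charge_c = PEAK_CURRENT_A * PULSE_LENGTH_S
protons_per_shot = charge_c / ELEMENTARY_CHARGE_C
print(f"{protons_per_shot:.2e} protons per shot")   # about 1.6e12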

OPERA takes first photographs

The first neutrino event of the 2007 run of the CERN Neutrinos Gran Sasso (CNGS) facility was recorded on 2 October, when one of the many millions of neutrinos in the beam from CERN interacted in the OPERA detector in the Gran Sasso National Laboratory, 730 km away in Italy. The interaction occurred in one of nearly 60,000 “bricks” already installed in the detector and provided the first detailed event image in high-precision emulsion.


There is now plenty of evidence that neutrinos oscillate between three “flavour” states, associated with the charged leptons: electron, muon and τ. Several experiments have observed the disappearance of the initial neutrino flavour but “direct appearance” of a different flavour remains a major missing piece of the puzzle. The CNGS beam consists of muon-neutrinos, and the observation in OPERA of a few τ-neutrino interactions among many muon-neutrino events will provide the long-awaited proof of neutrino oscillation.

In 2006, OPERA collected about 300 neutrino events during the commissioning run of the CNGS facility (Acquafredda et al. 2006). However, these did not include information about the event-vertex recorded in the thousands of small “bricks”, each made of a sandwich of lead plates and nuclear emulsion films, which make up the “heart” of OPERA. The emulsion technique allows the collaboration to measure the neutrino interaction vertices with high precision. Installation of the bricks continues daily and the total is nearing the halfway mark, ultimately reaching 150,000 bricks with a total mass of 1300 tonnes.


The event of 2 October was the first to be registered in a brick, and some 37 more events occurred in the following days. An automated system immediately removed the bricks containing these events from the detector. They were then dispatched to the various laboratories of the OPERA collaboration, which are equipped with the automatic microscopes required to scan the emulsion films and make relevant measurements. Figure 2 shows the microscope display for one of these events, representing a volume of only a few cubic millimetres but rich in valuable information for the OPERA physicists.

This is a crucial milestone in an enterprise that started about 10 years ago. The OPERA detector was designed and realized by a large team of researchers from all over the world (Belgium, Bulgaria, Croatia, France, Germany, Israel, Italy, Japan, Korea, Russia, Switzerland, Tunisia and Turkey), with strong support from CERN, INFN, Japan and the main European funding agencies. Numerous hi-tech industrial companies were also involved in the supply of the many parts of the equipment necessary for building the large detector.

Second LHC transfer line passes beam test with flying colours

Three years after the initial commissioning of the SPS-to-LHC transfer line, TI 8, the second transfer line, TI 2, has passed its first test with beam. Just as for TI 8 in October 2004, a low-intensity proton beam travelled down the entire new line at the first attempt.


The two LHC injection lines have a combined length of 5.6 km and comprise around 700 warm (normal-conducting) magnets. While around 70 magnets were recuperated from earlier installations at CERN, the majority were produced by the Budker Institute of Nuclear Physics in Novosibirsk, as part of Russia’s contribution to the LHC project.

TI 2 leads from the extraction in long straight section 6 (LSS6) of the SPS to the injection point on the LHC for clockwise beam, which is near the interaction region for the ALICE experiment at Point 2. To transfer beam to the LHC, LSS6 has a new fast extraction, which underwent commissioning in 2006 and further testing in 2007. Following the extraction, the beam passes for 150 m through the TT60 line, formerly used for transfer to the West Area, before entering the TI 2 line in its purpose-built 3 m diameter tunnel.

Installation of the TI 2 beamline started at the beginning of 2005 in the upstream part of the new tunnel, followed by initial hardware commissioning that summer. However, as the downstream part of the TI 2 tunnel served as a transport path for the main LHC magnets, it had to remain free of beamline elements until the LHC magnets were all in place. With the descent of the last magnet in April 2007, installation of the TI 2 line resumed, reaching completion at the beginning of August. This was followed by some eight weeks of hardware commissioning for the whole beamline.

The beam test, which took place over a 22-hour period, started early on 28 October. The commissioning team first prepared a single-bunch beam of 5 × 10⁹ protons and set the TI 2 line to the SPS energy before tuning the SPS extraction. Then, when they retracted the beam dump near the extraction point, the beam travelled without any steering straight through the 2.7 km of beamline components to the temporary dump installed near the end of the TI 2 tunnel. During the time remaining, the team made a range of basic measurements; an initial analysis shows that the basic parameters look good, indicating that there are no major errors in the line.

LHCb installs its fragile precision silicon detector

One of the most fragile detectors for the LHCb experiment has been successfully installed in its final position. Installing the Vertex Locator (VELO) in the underground experimental cavern at CERN proved to be a challenging task for the collaboration.

The VELO is a precise particle-tracking detector that surrounds the collision point inside the LHCb experiment. At its heart are 84 half-moon-shaped silicon sensors, each connected to its electronics via a system of 5000 bond wires. These sensors are located close to the collision point, where they will play a crucial role in detecting b quarks.

The sensors are grouped in pairs to make a total of 42 modules arranged in two halves around the beamline in the VELO vacuum tank. A 0.3 mm thick aluminium sheet provides a shield between the silicon modules and the primary beam vacuum, with no more than 1 mm of leeway to the silicon modules. Custom-made bellows enable the VELO to retract from its normal position 5 mm from the beamline to a distance of 35 mm. This flexibility is crucial during the commissioning of the LHC beam.

The VELO project involves several institutes of the LHCb collaboration, including Nikhef, EPFL Lausanne, Liverpool, Glasgow, CERN, Syracuse and MPI Heidelberg. In particular, the sensor modules were constructed at the University of Liverpool, and Nikhef provided the special foil that interfaces with the LHC vacuum.

Robert Aymar seals the final main magnet interconnection

At a brief ceremony on 7 November, deep in the LHC tunnel, CERN’s director-general, Robert Aymar, sealed the last interconnection between the collider’s main magnet systems. This is the latest milestone in commissioning the LHC, which is scheduled to start up in 2008. The ceremony marks the end of a two-year programme of work to connect all of the main dipole and quadrupole magnets in the machine, a complex task that included both electrical and fluid connections.


The 27 km circumference LHC is divided into eight sectors, each of which can be cooled down to the operating temperature of 1.9 K and powered up independently. The first sector was cooled down and powered up in the first half of 2007 and has now been warmed up for minor modifications. This was an important learning process, allowing subsequent sectors to be commissioned more quickly. Four more sectors will be cooling down by the end of 2007, and the remaining three will begin the process early in 2008.

To cool the magnets, more than 10,000 tonnes of liquid nitrogen and 130 tonnes of liquid helium will be brought into use through a cryogenic system, which includes more than 40,000 leak-tight welds.

If all goes well, the first beams could be injected into the LHC in May 2008, and circulating beams established by June or July. With a project of this scale and complexity, however, the transition from construction to operation is a lengthy process. Every part of the system has to be brought on stream carefully, with each subsystem and component tested and repaired, if necessary. “If for any reason we have to warm up a sector, we’ll be looking at the end of summer rather than the beginning,” warns project leader Lyn Evans.

Neutrino mixing at Daya Bay

On 13 October, members of the Daya Bay Collaboration and government officials from China and the US Department of Energy held a groundbreaking ceremony for the Daya Bay Reactor Neutrino experiment at the Daya Bay Nuclear Power Facility, located in Shenzhen, Guangdong Province, about 55 km north-east of Hong Kong in Southern China. The experiment is poised to investigate the least well known sector of the recently discovered phenomenon of neutrino mixing.


In recent years, several experiments have discovered that the three flavours of neutrino can oscillate among themselves – a result of the mixing of mass eigenstates. Among the three mixing angles required to describe the oscillation, θ13 is the least well known. Besides determining the amount of mixing between the electron-neutrino and the third mass eigenstate, θ13 is a gateway to the future study of CP violation in neutrino oscillation.

To date, the best limit on θ13 is sin²2θ13 < 0.17, reported by the CHOOZ reactor-neutrino experiment using a single detector at a baseline of 1.05 km. However, the current understanding of neutrino oscillation indicates that the disappearance of reactor antineutrinos at a distance of about 2 km would provide an unambiguous determination of θ13. This is the goal of a new generation of reactor-neutrino experiments utilizing at least two detectors at different baselines. Such a near–far configuration eliminates most of the reactor-related systematic errors and some of the detector-related systematic uncertainties.


The Daya Bay experiment should discover neutrino oscillation due to θ13 mixing and measure sin²2θ13 with an unprecedented sensitivity of better than 0.01 at 90% CL – an order of magnitude better than the present limit. The experiment will look for electron-antineutrinos from the reactors via the inverse beta-decay reaction in a gadolinium-doped liquid-scintillator target (figure 1). In this reaction, an electron-antineutrino interacts with a proton (hydrogen in the scintillator), producing a positron and a neutron. The energy of the antineutrino is determined by measuring the energy loss of the positron in the scintillator. The collaboration will extract the value of sin²2θ13 by comparing the fluxes and energy distributions of the observed antineutrino events in the near and far halls (figure 2).
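
The near–far comparison rests on the standard two-flavour survival probability for reactor electron-antineutrinos, P ≈ 1 − sin²2θ13 sin²(1.267 Δm²31 L/E), with Δm² in eV², L in metres and E in MeV. The sketch below evaluates it with representative parameter values (assumed for illustration, not the collaboration's analysis) to show why the θ13-driven deficit is small at the near halls but close to maximal at the roughly 2 km far baseline:

import math

# Illustrative reactor-antineutrino survival probability
#   P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm2 * L / E)
# with dm2 in eV^2, L in metres and E in MeV.  Parameter values are
# representative assumptions, not Daya Bay's measured ones.
SIN2_2THETA13 = 0.10   # assumed mixing amplitude (CHOOZ limit: < 0.17)
DM2_31 = 2.5e-3        # |Delta m^2_31| in eV^2
E_MEV = 4.0            # typical detected reactor-antineutrino energy

def survival_probability(baseline_m):
    phase = 1.267 * DM2_31 * baseline_m / E_MEV
    return 1.0 - SIN2_2THETA13 * math.sin(phase) ** 2

for baseline in (500.0, 2000.0):   # near-hall-like and far-hall-like baselines
    print(f"L = {baseline:6.0f} m : P(survival) = {survival_probability(baseline):.4f}")
# The deficit is about 1.5% at 500 m but about 10% near 2 km, so comparing the
# near and far rates isolates the theta13-driven disappearance.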

The ceremony on 13 October marks the beginning of civil construction near the Daya Bay and Ling Ao reactors, the sources of the electron-antineutrinos for the experiment. When the Ling Ao II nuclear power plant is commissioned by 2011, the three pairs of reactors will be one of the most powerful nuclear-energy facilities in the world. Three underground experimental halls connected by long tunnels will be excavated in the nearby mountains, which will shield the experiment from cosmic rays. In each hall, the antineutrino detectors (two in each near hall and four in the far site) will be deployed in a water pool to protect the detectors from ambient radiation. Together with resistive plate chambers above, the water pool also serves as a segmented Cherenkov counter for identifying cosmic-ray muons.

The project is now ready to begin manufacturing and mass production of the detector components. The first experimental hall is scheduled to be ready by the end of 2008. Commissioning of the detectors in this hall will take place in 2009. Construction will continue for about two years, with installation of the last detector scheduled for 2010.

• The Daya Bay Collaboration consists of 35 institutions with more than 190 collaborators from three continents. The project is supported by funding agencies in China and the US, and is one of the largest co-operative scientific projects between the two countries. Additional funding is being provided by other countries and regions, including Hong Kong, Taiwan, the Czech Republic and Russia.

Pierre Auger Observatory pinpoints source of mysterious highest-energy cosmic rays

The Pierre Auger Collaboration has discovered that active galactic nuclei are the most likely candidate for the source of the ultra-high-energy (UHE) cosmic rays arriving on Earth. Using the world’s largest cosmic-ray observatory, the Pierre Auger Observatory (PAO) in Argentina, the team of 370 scientists from 17 countries has found that the sources of the highest-energy particles are not distributed uniformly across the sky. Instead, the results link the origins of these mysterious particles to the locations of nearby galaxies that have active nuclei at their centres (Pierre Auger Collaboration 2007).


Low-energy charged cosmic rays (by far the majority) lose their initial direction when travelling through galactic or intergalactic magnetic fields, and therefore cannot reveal their point of origin when detected on Earth. UHE particles, by contrast, with energies of more than 40 EeV (4 × 10¹⁹ eV), are only slightly deflected, so they come almost straight from their sources. These are the particles that the Auger Observatory was built to detect.

When UHE cosmic rays hit nuclei in the upper atmosphere, they create cascades of secondary particles that can spread across an area of around 30 km² by the time they arrive at the Earth’s surface. The PAO records these extensive air showers using an array of 1600 particle detectors placed 1.5 km apart in a grid spread across 3000 km². A group of 24 specially designed telescopes records the fluorescence light emitted when the air shower excites atmospheric nitrogen, while the water tanks of the surface array record shower particles arriving at the Earth’s surface by detecting Cherenkov radiation. The combination of particle detectors and fluorescence telescopes provides an exceptionally powerful instrument for determining the energy and direction of the primary UHE cosmic ray.

While the observatory has recorded almost a million cosmic-ray showers, the Auger team can link only the rare, highest-energy cosmic rays to their sources with sufficient precision. The observatory has so far recorded 81 cosmic rays with energy of more than 40 EeV – the largest number of cosmic rays at these energies ever recorded. At these ultra-high energies, there is only a degree or so of uncertainty in the direction from which the cosmic ray arrived, allowing the team to determine the location of the particle’s source.

The Auger Collaboration discovered that the 27 highest-energy events, with energy of more than 57 EeV, do not come from all directions equally. Comparing the clustering of these events with the known locations of 381 active galactic nuclei (AGNs), the collaboration found that most of these events correlated well with the locations of AGNs in some nearby galaxies, such as Centaurus A. Astrophysicists believe that AGNs are powered by supermassive black holes that are devouring large amounts of matter. They have long been considered sites where high-energy particle production might take place, but the exact mechanism of how AGNs can accelerate particles to such high energies is still a mystery.

These UHE events are rare and, even with its large size, the PAO can record only about 30 of them each year. The collaboration is already developing plans for the construction of a second, larger installation in Colorado. This will extend coverage to the entire sky while substantially increasing the number of high-energy events recorded; there are, it turns out, even more nearby AGNs in the northern sky than in the southern sky visible from Argentina.
