
Superstrings reveal the interior structure of a black hole

A research group at KEK has succeeded in calculating the state inside a black hole using computer simulations based on superstring theory. The calculations confirmed for the first time that the temperature dependence of the energy inside a black hole agrees with the power-law behaviour expected from calculations based on Stephen Hawking’s theory of black-hole radiation. The result demonstrates that the behaviour of elementary particles as a collection of strings in superstring theory can explain thermodynamical properties of black holes.


In 1974, Stephen Hawking at Cambridge showed theoretically that black holes are not entirely black. A black hole in fact emits light and particles from its surface, so that it shrinks little by little. Since then, physicists have suspected that black holes should have a certain interior structure, but they have been unable to describe the state inside a black hole using general relativity, as the curvature of space–time becomes so large towards the centre of the hole that quantum effects make the theory no longer applicable. Superstring theory, however, offers the possibility of bringing together general relativity and quantum mechanics in a consistent manner, so many theoretical physicists have been investigating whether this theory can describe the interior of a black hole.

Jun Nishimura and colleagues at KEK established a method that efficiently treats the oscillations of elementary strings according to their frequency. They used the Hitachi SR11000 model K1 supercomputer installed at KEK in March 2006 to calculate the thermodynamical behaviour of the collection of strings inside a black hole. The results showed that, as the temperature decreased, the simulation reproduced the behaviour of a black hole predicted by Hawking’s theory (figure 1).
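For context, the power law being tested can be written down compactly. In the gauge/gravity-duality literature for this type of matrix model of strings (the formula below is quoted from that general literature as a reader’s aid, not from the KEK announcement itself), the energy of the dual black hole scales with temperature as:

```latex
% Expected black-hole scaling for the maximally supersymmetric matrix model:
% E = internal energy, N = matrix size, T = temperature, c = numerical constant.
\[
  \frac{E}{N^2} \;\simeq\; c\, T^{14/5}
\]
% The Monte Carlo simulation checks whether the collection of fluctuating
% strings reproduces this temperature dependence as T decreases.
```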


This demonstrates that the mysterious thermodynamical properties of black holes can be explained by a collection of strings fluctuating inside. The result also indicates that superstring theory will develop further to play an important role in solving problems such as the evaporation of black holes and the state of the early universe.

Constructing HERA: rising to the challenge

Inside the HERA tunnel

Ideas for an electron–proton collider based on storage rings first arose after the famous experimental results on deep inelastic electron–proton scattering from SLAC in 1969, which indicated a granular structure for the proton. Using two storage rings to collide electrons and protons head-on, rather than directing an electron beam at a proton target, would allow for higher centre-of-mass energies. This would in turn result in a better resolution for measurements of the internal structure of the proton. So, in the early 1970s, several laboratories – Brookhaven, CERN, DESY, Fermilab, IHEP (Moscow), Rutherford Laboratory, SLAC and TRIUMF – began to think about building an electron–proton collider.

Bjørn Wiik

At DESY, Bjørn Wiik in particular was a major advocate for the construction of an electron–proton collider. In 1972, Horst Gerke, Helmut Wiedemann, Günter Wolf and Wiik wrote a first report in which they proposed using the existing double-storage ring DORIS for electron–proton collisions. Then, in 1981, after several workshops organized by the European Committee for Future Accelerators (ECFA), DESY submitted a proposal to the government of the Federal Republic of Germany (FRG) for the construction of a completely new machine called HERA. It was to be an electron–proton collider with a circumference of 6.3 km and it had the strong support of the European high-energy physics community and ECFA.

Early discussions on electron–proton colliders had already considered the use of superconducting magnets for the proton ring. Then, in the 1980s, this demanding technology became feasible for large systems, thanks to the courageous and pioneering work at Fermilab on superconducting magnets for the construction of the Tevatron. When it came into operation in 1983, the Tevatron was the world’s first high-energy superconducting synchrotron.

DESY had no major experience in this technology, so in 1979 Hartwig Kaiser and Siegfried Wolff were sent to work with colleagues at Fermilab and profit from their know-how. The successful dipole and quadrupole magnets developed at Fermilab naturally influenced the design of the superconducting accelerator magnets for HERA, and the first dipoles built at DESY were basically copies of the Fermilab magnets. However, with increasing experience, the physicists and engineers at DESY started to add major improvements of their own, leading to the characteristic design of the HERA magnets, which proved extremely successful over the lifetime of the accelerator. As the superconducting magnet ring was the most challenging part of HERA, this article will focus on its design in particular.

The superconducting coil is the most critical component of a superconducting magnet. Coils several metres long are fabricated with cross-sections accurate to a few hundredths of a millimetre. This demanding task was solved at Fermilab by using laminated tooling for the production and curing of the coils. These are surrounded by collars punched from stainless-steel sheets, which provide the precise coil geometry and sustain the huge magnetic forces. Only special types of steel, which do not become brittle or magnetic at cryogenic temperatures, are suitable. For the coils of the HERA dipoles, the collars are made from an aluminium alloy with high yield-strength, thus eliminating magnetic effects.

The first electron–proton collisions in HERA

In the HERA dipoles, this collaring is reinforced by the iron yoke, which, unlike its Fermilab counterpart, is located inside the cryostat. This “cold iron” concept has several advantages. First, it leads to an additional gain of 12% in the central magnetic field, as the iron is closer to the coil. Second, the cryogenic load at 4 K is reduced as a result of the longer support rods. Finally, a passive protection scheme with parallel diodes can protect the coil against damage from the stored energy should it become normally conducting (quench). The resulting larger cold mass leads to longer cool-down and warm-up times of about five days. However, this turned out to be no drawback as there were only a few occasions during the whole lifetime of HERA, outside regular shutdowns, when the magnets had to be warmed up. Hartwig Kaiser, Karl Hubert Mess and Peter Schmüser were the main people responsible for this development.

In a superconducting magnet ring, the protection of the coils against quenches is of utmost importance and is a challenging technology in itself. It involves both the detection of a quench (by monitoring the voltage over the coils) and the installation of quench heaters to force the quenching coil to become normally conducting, thus distributing the energy deposited by the magnet current over its whole length. As many magnet coils are powered serially in long strings, the current coming from the power supply has to be bypassed around the quenching magnet and its stored energy safely dissipated in a resistive load. A switch is required to bypass the magnet. At Fermilab, this was in the form of a thyristor mounted outside the vacuum vessel, which had to be triggered in case of a quench. The current leads to the thyristor were connected to the coil at 4 K, thus adding to the cryogenic load.
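To get a feel for the energy scales that make quench protection so critical, here is a back-of-the-envelope sketch in Python; the inductance and dump-resistance values are round illustrative numbers, not HERA design parameters, and only the roughly 5000 A operating current quoted below is taken from the article.

```python
# Back-of-the-envelope quench energetics. L_coil and R_dump are illustrative
# round numbers, not HERA design values; ~5000 A is the current quoted in
# the article.
L_coil = 0.05   # magnet inductance in henries (assumed)
I_op = 5000.0   # operating current in amperes
R_dump = 0.1    # resistive dump load in ohms (assumed)

# Magnetic energy stored in the coil, which must be dissipated safely:
E_stored = 0.5 * L_coil * I_op**2        # E = L*I^2/2
print(f"stored energy: {E_stored / 1e3:.0f} kJ")   # -> 625 kJ

# Once the current is switched into the dump resistor it decays
# exponentially with time constant tau = L/R:
tau = L_coil / R_dump
print(f"current decay time constant: {tau:.1f} s")  # -> 0.5 s
```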

For the HERA magnets, Mess applied a different idea, first considered at Brookhaven, in which a “cold” diode inside the cryostat at 4 K automatically switches the current of about 5000 A in case of a quench. This was one of the most innovative and courageous technological steps of the HERA project. First, a suitable diode had to be found. Of course, no commercially available diodes were made for such an application. Mess did eventually find one that promised to be up to the task, but only after extensive searching and testing. Then, the mechanical mounting and electrical connections of the diodes had to be devised in such a way as to guarantee their reliable operation inside the helium, where they were exposed to rapid and extreme thermal cycles. Comprehensive tests of all of the diodes were carried out to qualify them and validate the design of the mounting – an example of innovative engineering at its best.

To save costs and keep the cryogenic load at 4 K as low as possible, the various corrector magnets were connected via superconducting cables inside the 4 K helium pipe over an octant of the ring. The cables were held in a special fixture between the magnets and had to be soldered together before the 4 K helium tubes were joined by a welding sleeve. To make sure that all of the 20 or so cables were correctly connected, clever clamping devices – which supplied electrical contact to all of the wires simultaneously – were installed at two intersections. By applying voltages to the various contacts, a computerized central measuring station determined in a time-effective way whether the cables were connected correctly. There were many other cases that required ingenious ideas, such as the problem of persistent currents in the superconducting coils, which Schmüser and his students solved, but unfortunately it is impossible to cover these in a short article.
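The automated continuity check lends itself to a short sketch; the contact names, the 20-cable count per joint and the dict-based wiring map below are illustrative assumptions based on the description above, not the actual DESY implementation.

```python
# Sketch of an automated cable-continuity check in the spirit of the
# computerized measuring station described above. All names and data
# structures are illustrative.

expected_wiring = {f"A{i}": f"B{i}" for i in range(1, 21)}  # 20 cables

def measure(contact_a, harness):
    """Apply a test voltage at contact_a and report which B-side contact
    shows the signal; 'harness' stands in for the soldered joints."""
    return harness.get(contact_a)

def check_joint(harness):
    """Compare every measured connection against the wiring plan."""
    return [(a, b_expected, measure(a, harness))
            for a, b_expected in expected_wiring.items()
            if measure(a, harness) != b_expected]

# Example: a single swapped pair of cables is flagged immediately.
actual = dict(expected_wiring)
actual["A3"], actual["A4"] = actual["A4"], actual["A3"]
print(check_joint(actual))  # [('A3', 'B3', 'B4'), ('A4', 'B4', 'B3')]
```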

Schematics of the Tevatron dipole

There were many systems for the superconducting magnet ring where little or no experience existed at DESY. One example is the huge cryogenic system with the cryogenic plant and the magnet cryostats, various cryogenic boxes, and transfer lines and pipes for cold and room-temperature helium, respectively. One pipe, the quench gas-collection pipe, was connected to the 4 K helium ­volume of the dipole magnets but separated by a special valve, named the Kautzky valve after its inventor at Fermilab. This valve opens automatically when the pressure inside the cryostat exceeds a preset value. It is sealed by a conical plastic piece inside a conical body. However, some of these valves would start to rattle during a quench, indicating that they were closing and opening in rapid succession. This effect quite often cracked the plastic cone, so the valves would no longer seal for normal operation and had to be exchanged. Despite intensive studies and tests of various materials – the high radiation level in the HERA tunnel meant that the Teflon of the original design could not be used – the problem was never solved. It did not become an operational problem thanks to the small number of quenches. This was an example where the work on HERA did not evolve from the heritage of Fermilab.

One clear evolutionary step, however, was the strong involvement of industry in the production of the superconducting magnets for HERA. For European industry in particular, HERA presented a unique opportunity: it was the first time that companies could gain experience in superconducting technologies and cryogenics on such a large scale. This step was beneficial for both DESY and the industrial companies, and also for later projects using superconducting-magnet technology.

Another step forward, this time in terms of financing and organizing large research projects, was the construction of HERA in collaboration with research laboratories from other countries: the so-called “HERA model”. It is to the credit of both Wiik and Volker Soergel that they brought the collaboration together with contributions from Canada, France, Israel, Italy, the Netherlands and the US, with additional manpower provided by institutes in China, Czechoslovakia, Poland, Switzerland, the UK and the USSR, as well as institutes from both the FRG and the German Democratic Republic. The particularly large contribution by Italy of half of the superconducting dipoles cannot be overemphasized, and was to the great merit of Antonino Zichichi, who made this happen.

At DESY, we clearly stood on the shoulders of Fermilab’s pioneering work when realizing HERA, and the experiences and technological advancements made at HERA were valuable for later projects, such as RHIC at Brookhaven and the LHC at CERN. When DESY began the adventure of constructing the superconducting magnet ring for HERA, several people were worried that there would be problems with such a novel system and that its operation would become very difficult. Fortunately, none of the worries were substantiated and the operation of the “cold” ring essentially went without problems. I am sure that people at CERN now have similar worries concerning the LHC. I would like to express my best wishes to them, with the hope that they might be as fortunate and successful with the LHC as we were with HERA. 

HERA leaves a rich legacy of knowledge

The HERA facility at DESY was unique: it was the only accelerator in the world to collide electrons (or positrons) with protons, at centre-of-mass energies of 240–320 GeV. In collisions such as these, the point-like electron “probes” the interior of the proton via the electroweak force, while acting as a neutral observer with regard to the strong force. This made HERA a precision machine for QCD – a “super electron microscope” designed to measure precisely the structure of the proton and the forces within it, particularly the strong interaction. HERA’s point-like probes also gave it an advantage over proton colliders such as the LHC: while protons can have a much higher energy, they are composite particles dominated by the strong force, which makes it much more difficult to use them to resolve the proton’s structure. The results from HERA, many of which are already part of textbook knowledge, promise to remain valid and unchallenged for quite some time.
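For head-on beams the centre-of-mass energy follows directly from the two beam energies; a quick check with HERA’s nominal beam energies (27.5 GeV electrons or positrons on 820 GeV, later 920 GeV, protons) reproduces the upper end of the quoted range:

```python
import math

def sqrt_s(e_lepton_gev, e_proton_gev):
    """Centre-of-mass energy for two head-on beams, neglecting particle
    masses: s = 4 * E1 * E2."""
    return math.sqrt(4.0 * e_lepton_gev * e_proton_gev)

print(f"{sqrt_s(27.5, 820):.0f} GeV")  # ~300 GeV, earlier running
print(f"{sqrt_s(27.5, 920):.0f} GeV")  # ~318 GeV, the top of HERA's range
```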

Into the depths of the proton

The proton’s structure can be described by using various structure functions, each of which covers different aspects of the electron–proton interaction. HERA was the world’s only accelerator where physicists could study the three structure functions of the proton in detail. During the first phase of operation (HERA I), the colliding-beam experiments H1 and ZEUS already provided surprising new insights into F2, which describes the distribution of the quarks and antiquarks as a function of the momentum transfer (Q2) and the momentum fraction (x) of the proton’s total momentum. When HERA started up in 1992, physicists already knew that the quarks in the proton emit gluons, which give rise to other gluons and to quark–antiquark pairs in the virtual “sea”. However, the general assumption was that, apart from the three valence quarks, there were only very few quark–antiquark pairs and gluons in the proton.
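For reference, the kinematic variables that appear throughout these measurements are defined in the standard deep-inelastic-scattering way:

```latex
% Standard DIS kinematics. k, k' = incoming and outgoing electron momenta,
% P = proton momentum, q = k - k' = exchanged-boson momentum.
\begin{align*}
  Q^2 &= -q^2                        && \text{(squared momentum transfer; sets the resolution scale)} \\
  x   &= \frac{Q^2}{2\,P \cdot q}    && \text{(momentum fraction carried by the struck parton)} \\
  y   &= \frac{P \cdot q}{P \cdot k} && \text{(inelasticity)}, \qquad Q^2 \approx s\,x\,y \ \text{(neglecting masses)}.
\end{align*}
```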

Thanks to HERA’s high centre-of-mass energy, H1 and ZEUS pushed forward to increasingly shorter distances and smaller momentum fractions, and measured F2 over a range that spans four orders of magnitude in x and Q2 – two to three orders of magnitude more than were accessible with earlier experiments (figure 1). What the two experiments discovered came as a great surprise: the smaller the momentum fraction, the greater the number of quark–antiquark pairs and gluons that appear in the proton (figure 2). The interior of the proton therefore looks much like a thick, bubbling soup in which gluons and quark–antiquark pairs are continuously emitted and annihilated. This high density of gluons and sea quarks, which increases at small x, represented a completely new state of the strong interaction – one that had never been investigated before.

The proton sea, however, comprises not only up, down and strange quarks. Thanks to the high luminosity achieved during HERA’s second operating phase (HERA II), the experiments for the first time revealed charm and bottom quarks in the proton, with charm quarks accounting for 20–30% of F2 in some areas, and bottom quarks accounting for 0.3–1.5%. It appears that all quark flavours are produced democratically at extremely high momentum transfers, where even the mass of the heavy quarks becomes irrelevant. The analysis of the remaining data will further enhance the precision and lead to a better understanding of the generation of heavy quarks, which is particularly important for physics at the LHC.

During HERA II, H1 and ZEUS also used longitudinally polarized electrons and positrons. This boosted the experiments’ sensitivity to the structure function xF3, which describes the interference effects between the electromagnetic and weak interactions within the proton. These effects are normally difficult to measure, but they grow with the polarization of the beam particles, making them clearly visible.

Shortly before HERA’s time came to an end, the accelerator ran at reduced proton energies for several months (460 and 575 GeV, instead of 920 GeV). Measurements at different energies, but under otherwise identical kinematic conditions, isolate the third structure function FL, which provides information on the behaviour of the gluons at small x. These measurements are without parallel and are particularly important for the understanding of QCD.
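The logic of these reduced-energy runs can be made explicit with the standard neutral-current cross-section formula (Z exchange and radiative corrections omitted for clarity):

```latex
% Reduced neutral-current cross-section in terms of F_2 and F_L:
\[
  \sigma_r(x, Q^2, y) \;=\; F_2(x, Q^2) \;-\; \frac{y^2}{Y_+}\, F_L(x, Q^2),
  \qquad Y_+ = 1 + (1 - y)^2 .
\]
% At fixed x and Q^2, lowering the proton energy lowers s and therefore
% raises y = Q^2/(s x); measuring sigma_r at two or three values of y then
% separates F_2 and F_L by a straight-line fit in y^2/Y_+.
```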

HERA provided another surprise soon after it went into operation. In events at the highest Q2, a quark is violently knocked out of the proton. In 10–15% of such cases, instead of breaking up into many new particles, the proton remains completely intact. This is about as surprising as if 15% of all head-on car crashes left no scratch on the cars. Such phenomena were familiar at low energies, and were generally described using the concepts of diffractive physics, which involve the pomeron, a hypothetical neutral particle with the quantum numbers of the vacuum. However, early HERA measurements showed that this concept did not hold up, failing completely in the hard diffraction range.

To conform with QCD, at least two gluons must be involved in a diffractive interaction to make it colour-neutral. Could hard diffraction therefore be related to the high gluon density at small x? The H1 and ZEUS results were clear: the colour-neutral exchange is indeed dominated by gluons. These observations at HERA led to the development of an entire industry devoted to describing hard diffraction, and the analyses and attempts at interpretation continue unabated. There have been some successes, but the results are not yet completely understood. It is therefore important to analyse the HERA data from all conceivable points of view to assess all theoretical interpretations appropriately.

The fundamental forces of nature

A special characteristic of the strong interaction is its unusual behaviour with respect to distance. While the electromagnetic interaction becomes weaker with increasing distance, the opposite is true for the strong force. It is only when the quarks are close together that the force between them is weak (asymptotic freedom); the force becomes stronger at greater distances, thus more or less confining the quarks within the proton. While other experiments have also determined the strong coupling constant αs as a function of energy, H1 and ZEUS for the first time demonstrated the characteristic running of αs over a broad range of energies in a single experiment (figure 3). Thus, the HERA results impressively confirmed the special behaviour of the strong force that David Gross, David Politzer and Frank Wilczek predicted in 1973 – a prediction for which they won the Nobel Prize for Physics in 2004.
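The “running” displayed in figure 3 is the logarithmic fall-off predicted by QCD; at one loop it reads:

```latex
% One-loop running of the strong coupling.
% n_f = number of active quark flavours, Lambda = QCD scale parameter.
\[
  \alpha_s(Q^2) \;=\; \frac{12\pi}{(33 - 2 n_f)\,\ln\!\left(Q^2 / \Lambda^2\right)} .
\]
% alpha_s shrinks as Q^2 grows (asymptotic freedom) and grows large as Q^2
% approaches Lambda^2, where confinement sets in.
```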

Although the collaborations used HERA mostly for QCD studies, the aim of studying the electroweak interaction was part of the proposal for the machine. For instance, H1 and ZEUS measured the cross-sections of neutral and charged-current reactions as a function of Q2. At low-momentum transfers, i.e. large distances, the electromagnetic processes occur significantly more often than the weak ones because the electromagnetic force acts much more strongly than the weak force. At higher Q2, and thus smaller distances, both reactions occur at about the same rate, i.e. both forces are equally strong. H1 and ZEUS thus directly observed the effects of electroweak unification, which is the first step towards the grand unification of the forces of nature.
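The convergence of the two rates is, at heart, a propagator effect; schematically, at leading order:

```latex
% Schematic Q^2 dependence of the two processes (structure functions and
% couplings to quarks factored out):
\[
  \frac{d\sigma_{\mathrm{NC}}}{dQ^2} \;\propto\; \frac{\alpha^2}{Q^4},
  \qquad
  \frac{d\sigma_{\mathrm{CC}}}{dQ^2} \;\propto\; \frac{G_F^2\, M_W^4}{\left(Q^2 + M_W^2\right)^2} .
\]
% For Q^2 << M_W^2 the photon-exchange (NC) rate towers over the W-exchange
% (CC) rate; once Q^2 is of order M_W^2 (around 6500 GeV^2), the two become
% comparable -- the unification H1 and ZEUS observed directly.
```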

The longitudinal polarization of the electrons in HERA II also opened up new possibilities for studying the electroweak force. For example, theory predicts that because only left-handed neutrinos exist in nature, the transformation of a right-handed electron into a right-handed neutrino via the weak interaction should be impossible. H1 and ZEUS measured the charged currents as a function of the various polarization states, and proved that there are indeed no right-handed currents in nature, even at the high energies of HERA (figure 4).

Particle collisions at the highest Q2 are comparatively rare. Yet it is here, at the known limits of the Standard Model, that any new effects should appear. Thanks to the higher luminosity of HERA II, the collaborations can study this realm with enhanced precision. They have to date not observed any significant deviations from the Standard Model. The results from HERA substantially broaden the Standard Model’s range of validity and restrict the possible phase space for new phenomena, so refining the insights of the Standard Model all the way up to the highest momentum transfers.

The nucleon-spin puzzle

Another important contribution to our understanding of the proton is provided by a further HERA experiment, HERMES, which was designed to study the origin of nucleon spin. In the mid-1980s, experiments at CERN and SLAC discovered that the three valence quarks account for only around a third of the total nucleon spin. Starting in 1995, the HERMES collaboration aimed to find out where the other two-thirds come from, by sending the longitudinally polarized electrons or positrons from HERA through a target cell filled with polarized gases.

During HERA I, HERMES completed its first task, which was to determine the individual quark contributions to the nucleon spin. Using measurements on longitudinally polarized gases, the HERMES collaboration provided the world’s first model-independent determination of the separate contributions made to the nucleon spin by the up, down and strange quarks (figure 5). The results revealed that the largest contribution to the nucleon spin comes from the valence quarks, with the up quarks making a positive contribution, the down quarks a negative one. The polarizations of the sea quarks are all consistent with zero. The HERMES measurements therefore proved that the spin of the quarks generates less than half of the spin of the nucleon, and that the quark spins that do contribute come almost exclusively from the valence quarks – a decisive step toward the solution of the spin puzzle.

The HERMES team then turned its attention to gluon spin, making one of the first measurements to give a direct indication that the gluons make a small but positive contribution to the overall spin. The analysis of the latest data will yield more detailed information. Until recently, it was impossible to investigate the orbital angular momentum of the quarks experimentally. Now, using deeply virtual Compton scattering (DVCS) on a transversely polarized target, the HERMES team has made the first model-dependent extraction of the total orbital angular momentum of the up quarks. Analysis of the data taken with a new recoil detector in 2006–2007 will refine knowledge of DVCS and enable HERMES to make a key contribution to improving the models of generalized parton distributions, in the hope of soon pinning down the total orbital angular momentum of the up quarks.

Parton distribution functions characterize the nucleon by describing how often the partons – quarks and gluons – will be found in a certain state. There are three fundamental quark distributions: the quark number density, which the H1 and ZEUS experiments have measured with high precision; the helicity distribution, which was the main result of measurements by HERMES with longitudinally polarized gases; and the transversity distribution, which describes the difference in the probabilities of finding quarks in a transversely polarized nucleon with their spin aligned to the nucleon spin, and quarks with their spin anti-aligned. Using data on transversely polarized hydrogen, the HERMES collaboration can now determine this transversity distribution for the first time. The measurements also provide access to the Sivers function, which describes the distribution of unpolarized quarks in a transversely polarized nucleon. As the Sivers function should vanish in the absence of quark orbital angular momentum, its measurement marks an additional important step in the study of orbital angular momentum in the nucleon. Analysis of the initial data shows that the Sivers function seems to be significantly positive, which indicates that the quarks in the nucleon do in fact possess a non-vanishing orbital angular momentum.

Although HERMES focuses on nucleon spin, the physics programme for the experiment extends much further, including, for example, studies of quark propagation in nuclear matter and quark fragmentation, tests of factorization and searches for pentaquark exotic baryon states. Analysis of the data collected up until the shutdown in June 2007 will provide unique insights here as well.

The LHC and beyond

In 2008, the LHC will start colliding protons at centre-of-mass energies about 50 times higher than those at HERA. The results provided by HERA are essential for the interpretation of the LHC data: the proton–proton collisions at the LHC are difficult to describe, involving composite particles rather than point-like ones. It is therefore crucial to have the most exact understanding possible of the collisions’ initial state. This comes from HERA, for example, in the form of precise parton distribution functions of the up, down and strange quarks, and also the charm and bottom quarks (figure 6). An accurate knowledge of these distributions is vitally important, particularly for predictions of Higgs particle production at the LHC.

Many of these LHC-relevant measurements could only be carried out at HERA. To support the transfer of knowledge and create a long-term connection that takes account of the overlapping physics interest at HERA and the LHC, DESY and CERN have intensified their co-operation in this area. Many researchers from HERA, along with many students and PhD candidates, are already participating in the LHC experiments.

Over the past 15 years, HERA has enabled us to uncover a wealth of different – and partly unexpected – aspects of the proton and the fundamental forces. The analysis of the data recorded up until HERA’s closure in June 2007 is expected to last well into the next decade. The HERA collaborations will be melding these aspects into a vast and cohesive whole – a comprehensive description of strongly interacting matter at small distance scales and short time scales. Given HERA’s unique nature, this picture will endure for a long time and define for years, and possibly decades, our understanding of the dynamics of the strong interaction.

With their results, the HERA teams are now handing the baton over to the LHC collaborations, and also to the theorists. From the outset, the results from HERA have stimulated a large amount of theoretical work, particularly in the field of QCD, where an intensive and fruitful collaboration between theory and experiments has arisen. Thus, the knowledge of the proton and the fundamental forces gained from HERA forms the basis, not only for future experiments, but also for many current developments in theoretical particle physics – a rich legacy indeed.

CPT ’07 goes in quest of Lorentz violation

Lorentz symmetry and the closely related CPT symmetry, which combines charge conjugation (C), parity reversal (P) and time reversal (T), are well tested properties of nature. Nevertheless, efforts to find experimental evidence of Lorentz and CPT violation have increased in number, motivated in part by the quest for a theory to unite quantum mechanics and gravity. Further impetus has come from the introduction of a framework for Lorentz and CPT violation known as the Standard Model Extension (SME), which encompasses the panorama of physical theories. Since its development by Alan Kostelecký and co-workers at Indiana University in the 1990s, the SME has been used widely to guide experimental efforts and allow comparisons of results from different experiments.


This field is the topic of a series of meetings that have run triennially since 1998 at the Indiana University physics department, bringing together researchers to share results and ideas. In 2007, the fourth meeting on Lorentz and CPT Symmetry (CPT ’07) was held on 8–11 August, with contributed and invited talks, and a poster session during the conference reception.

The meeting opened with a welcome from Bennett Bertenthal, dean of the university’s College of Arts and Sciences. Ron Walsworth of Harvard University and the Harvard–Smithsonian Center for Astrophysics gave the first scientific talk, in which he reflected on the progress in experimental studies of Lorentz violation since 1997, when the SME coefficients first appeared in their current form. He also discussed his group’s current work to upgrade its noble-gas maser, with the aim of improving the sensitivity to a variety of SME coefficients.

Accelerator-based tests

The SME has opened a rich variety of possibilities for Lorentz violation in the context of neutrino oscillations. Recent work has shown that some, or perhaps even all, of the oscillation effects seen in existing data may be attributable to Lorentz violation. Talks included a presentation by Rex Tayloe of the MiniBooNE collaboration at Fermilab, who provided an overview of the recent data and considered their relation to earlier results from the Liquid Scintillator Neutrino Detector (LSND) experiment at Los Alamos. He also discussed the three-parameter “tandem” model based on SME coefficients.

The physics of neutral-meson oscillations provides an abundance of theoretical possibilities for Lorentz and CPT violation that can be tested in current and planned experiments. Antonio Di Domenico of the KLOE collaboration showed the first results of a search for CPT-violating effects using K mesons at the DAΦNE collider in Frascati. New results also came from David Stoker of the University of California, Irvine, who presented the first constraints on all four coefficients for CPT violation in the Bd system, based on results from the BaBar experiment at SLAC.


Possible signals of Lorentz violation in the muon system hinge on variations in the anomaly frequency, which could be detected by performing instantaneous frequency comparisons and sidereal-variation searches. The results described by Lee Roberts from Boston University and the Muon (g-2) Collaboration at Brookhaven represent the highest-sensitivity test of Lorentz and CPT violation for leptons, and improve previous results with muonium, electrons and positrons by about an order of magnitude.
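In practice, a sidereal-variation search amounts to fitting the measured frequency for a daily modulation locked to the stars rather than the Sun. A minimal synthetic sketch (all numbers invented for illustration, not Muon (g-2) data):

```python
import numpy as np

# Synthetic sidereal-variation search: fit a frequency time series to a
# constant plus a modulation at the sidereal frequency. All values invented.
OMEGA_SID = 2 * np.pi / 86164.1   # rad/s; one sidereal day is ~86164 s

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10 * 86164.1, 500))    # measurement times (s)
f = 1000.0 + 1e-6 * np.cos(OMEGA_SID * t) + rng.normal(0, 1e-5, t.size)

# Linear least squares in the basis {1, cos(w t), sin(w t)}:
A = np.column_stack([np.ones_like(t), np.cos(OMEGA_SID * t), np.sin(OMEGA_SID * t)])
(f0, c_amp, s_amp), *_ = np.linalg.lstsq(A, f, rcond=None)

print(f"sidereal modulation amplitude: {np.hypot(c_amp, s_amp):.2e}")
# An amplitude significantly different from zero would indicate a preferred
# direction in space -- Lorentz violation in the SME parameterization.
```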

Lorentz and CPT violation may be detectable using antihydrogen spectroscopy. Such tests would involve looking for sidereal changes in the spectra, or looking for direct differences between the spectra of antihydrogen and conventional hydrogen. Theoretical considerations have shown that the hyperfine transitions are of particular interest in these tests. Ryugo Hayano, spokesperson for the ASACUSA collaboration at CERN, provided details of his group’s progress towards tests of this type (see “ASACUSA moves towards new antihydrogen experiments“). Niels Madsen of Swansea gave an overview of the status of the ALPHA experiment at CERN, which has the potential to test Lorentz and CPT symmetry in a variety of ways using trapped antihydrogen.

Gravitational and astrophysical effects

There have been extensive studies of signals of Lorentz violation in the gravitational sector during the past few years, and the results include the identification of 20 coefficients for Lorentz violation in the pure-gravity sector of the minimal SME. The meeting featured presentations of the first ever measurements of SME coefficients in the gravitational sector by two experimental groups. Holger Müller of Stanford University announced measurements of seven such coefficients for Lorentz violation, based on work with Mach–Zehnder atom interferometry. Other first results were unveiled by James Battat of Harvard University, placing limits on six gravitational-sector SME coefficients from the analysis of more than three decades of archival lunar-laser-ranging data from the McDonald Observatory in Texas and the Observatoire de la Côte d’Azur in France. The Apache Point Observatory in New Mexico should achieve further improvements in lunar-laser ranging, down to a sensitivity level of 1 mm, as Tom Murphy of the University of California at San Diego explained.

The fundamental nature of Lorentz symmetry means that there are subtle conceptual issues to be addressed. Roman Jackiw of MIT considered one example, and showed that symmetry breaking may be a mask for co-ordinate choice in a diffeomorphism invariant theory. Robert Bluhm of Colby College in Maine provided a comprehensive discussion of the Nambu–Goldstone and massive fluctuation modes about the vacuum in gravitational theories. Other theoretical topics included approaches to deriving the Dirac equation in theories that violate Lorentz symmetry, presented by Claus Lämmerzahl of Bremen University, and Chern–Simons electromagnetism, which Ralf Lehnert of MIT described.

Satellites offer unique opportunities to probe Lorentz symmetry in a low-gravity environment. Tim Sumner of Imperial College gave an overview of approaches to space-based experiments with high-sensitivity instruments, and looked at ESA’s upcoming plans. James Overduin of Stanford University likewise reviewed the ongoing analysis of data from the Gravity Probe B satellite.

Cosmological and astrophysical sources also provide a number of intriguing possibilities for testing the fundamental laws and symmetries of nature. Matt Mewes of Marquette University presented recent work using the cosmic microwave background to place limits on Lorentz-violation coefficients in the renormalizable and non-renormalizable sectors of the SME. The Pioneer anomaly – apparent deviations in the paths of spacecraft in the outer solar system, such as Pioneer 10 and 11 – provides a possibility for new physics, as Michael Nieto of Los Alamos described, giving perspectives on the underlying physics that may be responsible for the observations. Synchrotron radiation and inverse Compton scattering from high-energy astrophysical sources may also show sensitivity to a variety of SME coefficients, as Brett Altschul of the University of South Carolina explained.

Atomic-physics tests

There have been tests of the electromagnetic sector of the SME for several years using low-energy experiments that include optical and microwave cavity oscillators, torsion pendulums, atomic clocks, and interferometric techniques. Experimental innovations have led to steadily improving resolutions and better access to the geometrical components of the SME coefficient space.

Achim Peters of the Humboldt University in Berlin announced improvements by a factor of 30 on certain photon-sector SME coefficients using a cryogenic precision optical resonator on a rotating turntable. Another test in the photon sector has been performed by Michael Tobar of the University of Western Australia, who has used a Mach–Zehnder interferometer to improve the sensitivity to one particular coefficient by six orders of magnitude. The team at Princeton University has recently developed innovations for a second-generation comagnetometer. Sylvia Smullin and group leader Mike Romalis described this work, and also presented the results from the experiment’s first-generation predecessor.

The Eöt-Wash torsion pendulum group at the University of Washington in Seattle has made major contributions to the search for Lorentz violation, including several of the tightest constraints on SME coefficients in the electron sector, which they recently generated using a spin-polarized torsion pendulum with a macroscopic intrinsic spin. Group member Blayne Heckel described preliminary results achieving yet greater sensitivity to a number of electron coefficients using a further refined version of the apparatus.

In all, the 2007 meeting on CPT and Lorentz symmetry highlighted the intense efforts of the physics community in testing Lorentz symmetry and other fundamental properties of nature. Should even minuscule traces of Lorentz violation be found, it would be a paradigm-changing event, leading to profound alterations to our current theories describing the forces of nature.

Camera captures image of two-proton decay

In work that harks back to the early days of nuclear physics, an international team of researchers at Michigan State University’s National Superconducting Cyclotron Laboratory (NSCL) has used a novel detector incorporating a CCD camera to record optically the tracks of charged particles emitted in the two-proton decay of iron-45 (45Fe). The technique has allowed the first measurement of correlations between the two protons, demonstrating that the process is indeed a three-body decay. Besides shedding light on a novel form of radioactive decay, the technique could lead to additional discoveries about short-lived rare isotopes, which may hold the key to understanding processes inside neutron stars and determining the limits of nuclear existence.


Although it is more than 100 years since Henri Becquerel opened the door to nuclear physics with his discovery of radioactivity, there are still open questions that continue to nag experimentalists. One such example is the mechanism underlying the two-proton emission of neutron-deficient nuclei, first observed in the 1980s.

Now Krzysztof Miernik and colleagues from Poland, Russia and the US have taken several steps towards an answer, by peering closely at the radioactive decay of a rare iron isotope at the edge of the known nuclear map (Miernik et al. 2007). The researchers set out to obtain a better understanding of two-proton emission from 45Fe, which has a nucleus of 26 protons and 19 neutrons; in comparison, the stable form of iron most abundant on Earth has 30 neutrons. One possibility was that the neutron-deficient 45Fe might occasionally release a diproton – an energetically correlated pair of protons. It was also possible that the two protons, whether emitted in quick succession or simultaneously, were unlinked.

The experiment’s key device was the novel imaging detector built by Marek Pfützner and colleagues from Warsaw University – the Optical Time Projection Chamber (OTPC). This consists of a front-end gas chamber that accepts and slows down rare isotopes in a beam from the NSCL Coupled Cyclotron Facility. Electrons from the ionized tracks drift in a uniform electric field to a double amplification structure, where UV emission occurs. A luminescent foil converts these photons to optical wavelengths, for detection by a CCD camera. In this way, the camera records the projection of the particle tracks on the luminescent foil. A photomultiplier tube also detects the photons from the foil to provide information on the drift time of the electrons, and hence the third dimension, normal to the plane of the CCD.
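The way the two readouts combine into a three-dimensional track can be summarized in a few lines; the drift velocity and image scale below are placeholder values, not the published detector parameters.

```python
# How the OTPC yields 3D track points: the CCD image supplies (x, y) on the
# luminescent foil, and the photomultiplier's drift-time measurement
# supplies the depth z. Both calibration constants are placeholders.
DRIFT_VELOCITY_CM_PER_US = 1.0   # assumed electron drift velocity
CM_PER_PIXEL = 0.05              # assumed CCD image scale

def track_point(pixel_x, pixel_y, drift_time_us):
    """Convert one (CCD pixel, drift time) sample into a 3D point in cm."""
    x = pixel_x * CM_PER_PIXEL
    y = pixel_y * CM_PER_PIXEL
    z = drift_time_us * DRIFT_VELOCITY_CM_PER_US  # depth along the drift field
    return (x, y, z)

# Two protons from one decay appear as two tracks sharing a vertex, so the
# opening angle and energy sharing between them can be reconstructed.
print(track_point(120, 85, 3.2))  # -> (6.0, 4.25, 3.2)
```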

Analysis of these images ruled out the proposed diproton emission and indicated that the correlations between emitted protons are best described by a three-body decay. A theory of this process has been described by Leonid Grigorenko, a physicist at JINR, and a co-author of the paper.

The experiment itself recalls the early days of experimental nuclear physics in which visual information served as the raw data, with tracks recorded in photographic emulsion. Indeed, this was the process that lay behind Becquerel’s discovery of radioactivity. The new result may represent the first time in modern nuclear physics that fundamental information about radioactive decays has been captured in a camera image and in a digital format. Usually, nuclear physics experiments provide digitized data and numerical information of various types, but not images.

The drip line: nuclei on the edge of stability

What combinations of neutrons and protons can form a bound nucleus? The long-elusive answer continues to stimulate nuclear physicists. Even now, decades after most of the basic properties of stable nuclei have been discovered, a fundamental theory of the nuclear force is still lacking, and theoretical predictions of the limits of nuclear stability are unreliable. So, the task of finding these limits falls to experimentalists – who continue to find surprises among the heaviest isotopes of elements immediately beyond oxygen.

At the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University we recently discovered several new neutron-rich isotopes: 44Si, 40Mg, 42Al and 43Al (Tarasov et al. 2007; Baumann et al. 2007). These are at the neutron drip line – the limit in the number of neutrons that can bind to a given number of protons. The result confirms that these exotic neutron-rich nuclei gain stability from an unpaired proton, which narrows the normal gaps between shells and provides the opportunity to bind many more neutrons (Thoennessen 2004). This feature was firmly established in 2002 by the significant difference between the heaviest isotopes of oxygen (24O, with 16 neutrons) and fluorine (31F, with 22 neutrons). However, our observation of such ostensibly strange behaviour is still novel, since in stable nuclei the attractive pairing interaction generally enhances the stability of “even–even” isotopes with even numbers of protons and neutrons.


The recent experiment at NSCL clearly identified three events of 40Mg in addition to many events of the isotopes previously observed, namely 31F, 34Ne, 37Na and 44Si (figure 1). It also confirmed that 30F, 33Ne and 36Na are unbound as there were no events; the lack of events corresponding to 39Mg indicates that it too is unbound. Furthermore, the 23 events of 42Al establish its discovery. The data also contain one event consistent with 43Al. Owing to the attractive neutron pairing interaction, the firm observation of the odd–odd isotope 42Al (29 neutrons) supports the existence of 43Al (30 neutrons) and lends credibility to the interpretation of the single event as evidence for the existence of this nucleus.


The discovery of the even–even isotope 40Mg (28 neutrons) is consistent with the predictions of two leading theoretical models, as well as with the experimentally confirmed staggered pattern of the drip line in this region (figure 2). It is interesting to note that if this experiment had not observed 40Mg, the drip line might have been considered to have been determined up to magnesium. However, with the observation of 40Mg, the question remains open as to whether 31F, 34Ne, 37Na and 40Mg are in fact the last bound isotopes of fluorine, neon, sodium and magnesium, respectively.

More important than the observation of the even–even 40Mg is the discovery of the odd–odd 42Al, which two leading theoretical models predicted to be unbound. The latest observation breaks the pattern of staggering at the drip line, somewhat akin to the situation at fluorine. In fact, it now appears possible that heavier nuclei up to 47Al may also be bound.

For many decades, the point at which the binding energy for a proton or a neutron goes to zero has been a clear-cut benchmark for models of the atomic nucleus. The drip line is the demarcation line between the last bound isotope and its unbound neighbour and each chemical element has a lightest (proton drip line) and a heaviest (neutron drip line) nucleus.
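The “binding energy goes to zero” criterion is conventionally expressed through separation energies:

```latex
% Drip-line criterion in terms of binding energies B(Z, N).
% One-neutron separation energy:
\[
  S_n(Z, N) \;=\; B(Z, N) - B(Z, N-1) .
\]
% A nuclide is bound against neutron emission while S_n > 0; the neutron
% drip line is crossed where adding one more neutron makes S_n < 0. The
% proton drip line is defined analogously via S_p = B(Z, N) - B(Z-1, N).
```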

The proton drip line is relatively well established for most of the elements because the Coulomb repulsion among protons has a dramatic destabilizing effect on nuclei with significantly fewer neutrons than protons. On the other hand, the neutron-binding energy only gradually approaches zero as the neutron number increases. Subtle quantum-mechanical effects such as neutron pairing and energy-level bunching end up determining the stability of the heaviest isotope of each element. The weak binding of the most neutron-rich nuclei leads to the phenomena of neutron “skins” and “halos”, which give these nuclei some unusual properties.

The only method available at present to produce nuclei at, or near, the neutron drip line is through the fragmentation of stable nuclei followed by the separation and identification of the products in less than a microsecond (Thoennessen 2004). The fragmentation reactions produce a statistical distribution of products with a large range of excitation energies. The excitation energy is dissipated by particle emission through strong decay (primarily neutrons and protons) and then by electromagnetic decay before the fragments reach the detectors. The Coulomb force also favours the emission of neutrons, suppressing the production of the most neutron-rich products.

Current knowledge of the neutron drip line is limited to only the lightest nuclei. The portion of the chart of nuclides in figure 2 shows the known geography of the drip line and the variation in the predictions from two widely respected theoretical models. Researchers first observed the heaviest bound oxygen isotope, 24O, in 1970. However, it was not until much later that experiments showed that the nuclei 25O through 28O are unbound with respect to prompt neutron emission. Only in 1997 did nuclear physicists consider the drip line for oxygen to be established. Subsequently, the isotopes 31F, 34Ne and 37Na have been observed. Although no experiment has established that 33F, 36Ne and 39Na are unbound, these heavier isotopes probably do lie beyond the neutron drip line. These earlier experiments also failed to observe the even–even nucleus 40Mg, and researchers even speculated that 40Mg might be unbound.

On the theoretical side, the finite-range droplet model (FRDM) uses a semi-classical description of the macroscopic contributions to the nuclear binding energy, which is augmented with microscopic corrections arising from local single-particle shell structure and the pairing of nucleons (Möller et al. 1995) – this gives the solid black line in figure 2. Another theoretical framework, the fully microscopic Hartree–Fock–Bogoliubov model (HFB-8), is a state-of-the-art quantum-mechanical calculation that places the nucleons in a mean field with a Skyrme interaction and pairing (Samyn et al. 2004). This is the dashed green line in figure 2. Although in many cases both models correctly predict the location of the neutron drip line, they cannot account for the detailed interplay of valence protons and neutrons, even among the oxygen and fluorine isotopes. The discrepancies between the models are still more apparent in the magnesium to silicon region.

The recent observations at NSCL required high primary beam intensity, high collection efficiency, high efficiency for identification and – perhaps most importantly – a high degree of purification, as the sought-after rare isotopes are produced infrequently, in approximately 1 in 10¹⁵ reactions. Currently, the worldwide nuclear science community is anticipating several new facilities, including the Facility for Antiproton and Ion Research in Germany, the Radioisotope Beam Factory in Japan and the Facility for Rare-Isotope Beams in the US. The facilities are needed for many reasons, including advancing the study of rare isotopes and investigating the limits of existence of atomic nuclei.
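To see what odds of 1 in 10¹⁵ imply in practice, a rough rate estimate helps; the reaction rate assumed below is an invented round number, not an NSCL operating figure.

```python
# Rough yield estimate for a 1-in-1e15 production probability. The assumed
# reaction rate is an invented round number, not an NSCL operating figure.
reactions_per_second = 1e9      # assumed rate of fragmentation reactions
production_probability = 1e-15  # quoted odds of producing the rare isotope
seconds_per_day = 86400

events_per_day = reactions_per_second * production_probability * seconds_per_day
print(f"expected yield: ~{events_per_day:.2f} events/day")  # -> ~0.09
# Hence the premium on beam intensity, collection efficiency and purity:
# even a long run may deliver only a handful of candidate events.
```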

The result from NSCL is one among many that hint at the scientific surprises awaiting in the ongoing pursuit of exotic, neutron-rich nuclei. A thorough and nuanced understanding of the nuclear force may still elude nuclear science, but the drip line beyond oxygen – even if further out than previously expected – continues to beckon.

TD Lee: a bright future for particle physics

On 10 December 1957, two young Chinese-Americans arrived in Stockholm to collect the Nobel Prize in Physics – the first Chinese to achieve this award. Tsung-Dao Lee and Chen Ning Yang received their share of the best-known prize in science for work done in the summer of 1956, in which they proposed that the weak force is not symmetric with respect to parity – the reversal of all spatial directions. The bestowal of the Nobel was a fitting end to a tumultuous year for physics, which began with results from an experiment on nuclear beta-decay in which Chien-Shiung Wu and collaborators had shown that Lee and Yang were correct: nature violates parity symmetry in weak interactions.


Half a century later, Lee continues to focus on understanding the basic constituents of matter and, in particular, symmetry in fundamental particles, though much has changed in the intervening years. “Our concept of what matter is made of, 50 years later, is very different,” he points out. “Today, we now know that all matter is made of 12 particles: six quarks and six leptons. The constituents of all matter – not dark matter, not dark energy but our kind of matter – every star, our Milky Way, all the galaxies in the universe are made of these 12.”

These 12 particles, divided into four families, each with three particles of the same charge, form the basis of the current Standard Model of particle physics. They are what students first learn about the subject. In 1957, however, physicists had a clear knowledge of only two of these – the electron and the muon, both charged leptons. Quarks lay in the future, and the neutrino associated with nuclear beta-decay had been detected for the first time only the previous year. Since then, the field has blossomed with the discovery of a total of six kinds of quark, a third charged lepton (the τ) and three kinds of neutrino. Of the picture five decades ago, Lee explains: “We knew a form of neutrino, but we didn’t know how to make a coherent mixture of all these three.” Now, one of Lee’s main interests lies in the phenomenon of mixing in the leptons and quarks, described by two 3 × 3 matrices, which he calls the cornerstones of particle physics.
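The two matrices Lee refers to are the CKM matrix for quarks and the PMNS matrix for leptons, which relate the flavour states participating in weak interactions to the mass eigenstates:

```latex
% Flavour mixing in the quark and lepton sectors:
\[
  \begin{pmatrix} d' \\ s' \\ b' \end{pmatrix}
  = V_{\mathrm{CKM}} \begin{pmatrix} d \\ s \\ b \end{pmatrix},
  \qquad
  \begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix}
  = U_{\mathrm{PMNS}} \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}.
\]
% Each is a unitary 3x3 matrix; the "coherent mixture" of three neutrinos
% Lee mentions is exactly the superposition encoded in U_PMNS.
```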

Lee is fiercely proud of the progress in particle physics, and believes that the second half of the 20th century was as rich as the latter part of the 19th century. The 1890s saw the discovery of the electron, and Ernest Rutherford opened the door to a new world with his work on alpha, beta and gamma radiation. Here already, Lee observes, was much of 20th-century physics – alpha, beta and gamma decay, respectively, occur through the strong, weak and electromagnetic interactions, which underpin the Standard Model. “Now, 100 years later, we realize that all of our kind of matter is made of 12 particles, divided into four families, each of three particles of the same charge – that is fantastic.” He believes that the field is poised to lead to more great physics out of a better understanding of these basic constituents.

Lee’s contributions to particle physics during the past 50 years have been equally impressive. Growing up in China, he proved to be an excellent student and in 1946 he was able to go to the University of Chicago on a Chinese government scholarship. He gained his PhD under Enrico Fermi in 1950 and in 1953 was appointed assistant professor of physics at Columbia University, where he remains an active member of the faculty. His work in particle physics ranges from the ethereal, almost insubstantial world of the weakly interacting neutrinos to the rich, dense soup of the strongly interacting quark–gluon plasma.

At Columbia in the late 1950s, Lee’s enthusiasm for studying weak interactions at higher energies than in particle decays helped to inspire an experimentalist colleague, Melvin Schwartz, to work out how to make a beam of neutrinos. This led to the famous experiment at Brookhaven National Laboratory in 1962, which showed that there are two different neutrinos associated with the electron and the muon. Two years later, Brookhaven was again the scene of a groundbreaking experiment, when James Cronin, Val Fitch and colleagues discovered that the combined symmetry of charge-conjugation and parity (CP) is violated in the decays of neutral kaons. This phenomenon was eventually understood in the context of six kinds of quark and their 3 × 3 mixing matrix – a major focus of Lee’s current work.


At around this time, Lee also made a seminal contribution to field theory, which would ultimately be an important part of QCD, the theory of strong interactions. What is now known as the Kinoshita–Lee–Nauenberg theorem deals with a problem of infrared divergences in gauge theories. In QCD, this underlies our understanding of the production of jets from quarks and gluons – a topic of key importance at particle colliders, from the early days of SPEAR at SLAC to the LHC, about to start up at CERN.

However, it is in the physics of hot, dense QCD matter in the form of a quark–gluon plasma that Lee has made one of his most important marks, pushing people to realize that it was indeed possible to observe this exciting new state of matter. In 1974, at a time when experimentalists were concentrating on smaller and smaller scales, he put forward the idea that “It would be interesting to explore new phenomena by distributing high energy or high nuclear density over relatively large volume.” In particular, he saw the possibility of restoring broken symmetries of the vacuum in collisions of heavy nuclei. This was one of the inspirations for those who pushed for the RHIC collider at Brookhaven, and Lee has watched with excitement as results on the strongly interacting quark–gluon plasma have emerged over the past few years. He also sees a possible link between the physics of heavy-ion collisions and the physics of dark energy. Both could involve a collective field – a scalar – that in the presence of a matter field can generate a negative pressure. “I believe the heavy-ion programme at the LHC will be very important to explore this possibility.”

During Lee’s recent visit to CERN, he saw the enormous effort now going into preparations for the LHC. So what does he think the experiments there will find? While he expects the LHC to make important discoveries, including evidence for new particles such as the Higgs boson, his continuing thoughts about symmetry in the universe lead to more personal predictions.


He believes that asymmetries in parity, charge conjugation and time reversal are not asymmetries of the fundamental laws of physics. Instead, he thinks it is likely that they are “asymmetries of the solutions, namely the Big Bang universe we live in – it is the solution that is not symmetrical”. In other words, he sees CP violation as an effect of spontaneous symmetry-breaking. In this case, says Lee, there is a possibility of finding right-handed W and Z particles to match the left-handed Ws and Z already known. Other new particles could be massive partners for the massless graviton, just as the massless photon has heavy partners in the W and Z. “They will have to be uncovered, and the LHC might also be the first window on that.”

The promise that the LHC holds for the future fits well with Lee’s overall view of the state of particle physics. “It will be a turning point. By what we discover here we will also know what to do as a next step. We expect that the LHC will give us a world of discoveries that will set the route for our future explorations,” he says. Half a century after his Nobel prize, he retains an inspiring optimism: “I believe that the beginning of the 21st century will be as important for physics as the beginning, the first 50 years, of the 20th century, and the LHC is going to be the machine to make the first discovery – so it is very lucky to be here.”

Cockcroft’s subatomic legacy: splitting the atom

In April 1932 John Cockcroft and Ernest Walton split the atom for the first time, at the Cavendish Laboratory in Cambridge in the UK. Only weeks earlier, James Chadwick, also in Cambridge, discovered the neutron. That same year, far away in California, Carl Anderson discovered the positron while working on cosmic rays. So 1932 was a veritable annus mirabilis in which experiments discovered, and worked with, nucleons; exploited Albert Einstein’s relativity and mass–energy equivalence principle; took advantage of the newly emerging quantum mechanics and its prediction of “tunnelling” through potential barriers; and even verified the existence of “antimatter” predicted by Paul Dirac’s relativistic quantum theory of the electron. It is hard to think of a more significant year in the annals of science.

CCtri_10_07

Ernest Rutherford (centre) encouraged Ernest Walton (left) and John Cockcroft (right) to build a high-voltage accelerator to split the atom. Their success marked the beginning of a new field of subatomic research. (Image credit: AIP Emilio Segrè Visual Archives.)

The experiment by Cockcroft and Walton split the nucleus at the heart of the atom with protons that were lower in energy than had seemed possible, by virtue of quantum-mechanical tunnelling – a phenomenon then new to physics. In 1928 George Gamow had applied the new quantum mechanics to show how particles could tunnel through potential barriers, and how this could explain the decay of nuclei through alpha emission. He also realized that tunnelling could lower the energy required for an incident positively charged particle to overcome the Coulomb barrier of a target nucleus. It was this insight that underpinned the commitment of Cockcroft and Walton.
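To see why this insight mattered, it helps to put rough numbers in. The short sketch below (in Python) estimates the Coulomb barrier for protons on lithium and the tunnelling probability from the textbook low-energy Gamow factor, exp(−2πη); the nuclear-radius constant and the sample energies are assumptions chosen for illustration, not a reconstruction of Gamow's own calculation.

    import math

    # Order-of-magnitude tunnelling estimate for protons on lithium-7,
    # using the low-energy Gamow factor P ~ exp(-2*pi*eta). The radius
    # constant r0 and the simple formula are textbook approximations.

    ALPHA = 1 / 137.036          # fine-structure constant
    MP_C2 = 938.272e3            # proton rest energy in keV
    Z_P, Z_LI, A_LI = 1, 3, 7

    def barrier_height_kev(r0_fm=1.2):
        """Coulomb barrier at the nuclear surface, V = Z1*Z2*e^2/r."""
        r = r0_fm * (1 + A_LI ** (1 / 3))    # nuclear separation in fm
        return 1440.0 * Z_P * Z_LI / r       # e^2/(4*pi*eps0) = 1440 keV*fm

    def gamow_probability(e_kev):
        """Tunnelling probability ~ exp(-2*pi*eta), with eta the
        Sommerfeld parameter Z1*Z2*alpha*c/v."""
        v_over_c = math.sqrt(2 * e_kev / MP_C2)   # non-relativistic speed
        eta = Z_P * Z_LI * ALPHA / v_over_c
        return math.exp(-2 * math.pi * eta)

    print(f"Coulomb barrier ~ {barrier_height_kev():.0f} keV")
    for e in (150, 300, 500):
        print(f"E = {e} keV: tunnelling probability ~ {gamow_probability(e):.1e}")

Even at 300 keV, only a quarter of the roughly 1.2 MeV barrier this estimate gives, the tunnelling probability comes out near one in a few hundred – small, but ample for a steady proton beam.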

The entire sequence of events that led to the pioneering experiment (the specification of particle-beam parameters based on contemporary theory and phenomenology; the innovation and development of the technologies needed to create such beams; and the use of those beams in experiments on a subatomic scale to reach a deeper understanding of the structure of matter) has been repeated many times as high-energy physics has advanced, accelerator by accelerator, towards the current Standard Model of particles and forces. That Cockcroft realized the immense potential of accelerators in research, and in particular for progress in fundamental physics, is manifest in his instrumental role in later years in establishing large accelerator laboratories, notably CERN in 1954.

CCcoc_10_07

Cockcroft was born on 27 May 1897 to a family of cotton manufacturers in Todmorden, a town straddling the Lancashire–Yorkshire border in northern England. His education was unusually varied. He studied mathematics at Manchester University in 1914–1915, until the First World War interrupted his studies with service in the Royal Field Artillery. After the war he returned to Manchester, this time to the College of Technology, to study electrical engineering, and then joined the Metropolitan-Vickers ("Metrovick") Electrical Company as an apprentice for two years. He subsequently went to St John's College, Cambridge, and took the Mathematical Tripos in 1924. This wide-ranging education served him well in later years; modern accelerator science and engineering relies on just such a breadth of skill and innovation.

Such a diverse and formidable combination of training in mathematics, physics and engineering, plus practical experience with a local electrical company, primed Cockcroft for his future success. He joined Ernest Rutherford, who had recently moved from Manchester to the Cavendish Laboratory and with whom he had worked as an apprentice back in Manchester. Initially Cockcroft worked with Peter Kapitsa in the high-magnetic-field laboratory, where he used his industrial links to obtain the necessary large-scale equipment.

CCcha_10_07

At the time, Cockcroft was in many ways the Cavendish Laboratory's only true "theoretician", bringing mathematical ability as well as pragmatic engineering skill to a group steeped in the experimentalist tradition of Rutherford. In a seminar at the Cavendish in 1928 the young Soviet physicist Georgij Gamov (who became better known as George Gamow) reported on his calculations of potential-barrier tunnelling and their successful application to alpha-decay. Cockcroft realized, before anyone else, what the theory implied for the reverse process: an energy of 300 keV might be sufficient for protons to penetrate a nucleus of boron, and even less for lithium. Encouraged by Rutherford, he initiated the high-voltage accelerator programme, and was joined by a research student, Ernest T S Walton from Ireland.

Walton, a Methodist minister's son, was born in 1903 and educated in Belfast and Dublin. Though the junior partner, he was very much the lead experimentalist. The aim was to build an accelerator reaching energies up to 1 MeV, to be sure of penetrating the nuclear potential barrier. Walton had abandoned work on a circular accelerator as his thesis topic and now pursued the linear solution with Cockcroft. They took advantage of strong links with Cockcroft's old employer in Manchester, Metrovick, which at the time was pioneering equipment for the UK electrical grid at transmission voltages up to 130 kV, and which supplied the high-voltage transformers for what became the Cockcroft–Walton generator. So even at the start of the nuclear age, academic–industrial collaboration underpinned progress.

There were formidable challenges to overcome in each component: motor, generator and transformer; rectifier; 40 kV proton source; glass vacuum vessel; and so on. To this day, working with such voltages, even below 500 kV, causes difficulties, as witnessed by the performance issues faced by DC photo-injectors at Jefferson Laboratory and Daresbury Laboratory. The story of scrounging for suitable ceramic tubes for the final Cockcroft–Walton generator is a saga in itself.

Records show that life at the Cavendish Laboratory under Rutherford began late in the day and finished strictly at 6.00 p.m.; Rutherford insisted that this was to preserve health and aid contemplation. Perhaps it partly explains the relatively slow progress between 1929 and the ultimate triumph in 1932 – although, by all accounts, it was also because Cockcroft and Walton, like all experimentalists, enjoyed the fun of building and perfecting their new experimental "toy". Another reason was the relocation of their laboratory and a rebuild of the apparatus to a nominal 800 keV rating, driven primarily by their own lack of confidence in the predictions of the new tunnelling calculations.

CClee_10_07

The day that transformed subatomic physics was 14 April 1932, when Cockcroft and Walton split the lithium nucleus with a proton beam. Accounts have it that Rutherford had become frustrated at the lack of results from the generator – Cockcroft and Walton's pride and joy – and insisted that they get some. Initially they used a beam of 280 keV, but later demonstrated atom-splitting with a beam energy below 150 keV. The experimenters closeted themselves in a lead-lined wooden hut in the accelerator room and peered through a microscope, counting by hand the scintillations due to alpha particles. If a zinc sulphide screen hanging on the wall glowed, they added a little more lead – so much for health and safety 75 years ago. Of course, they found scintillations, thereby observing incident protons splitting lithium nuclei into pairs of alpha particles.

Ironically, since Gamow's idea of barrier penetration proved to be correct, the experiment could have been performed at least a year earlier with a previous version of the apparatus. The same is true of a successful experiment in October 1932 at the Kharkov Institute in Ukraine, and of Ernest Lawrence's cyclotron in Berkeley, California, which reproduced the result soon after Cockcroft and Walton's announcement. (In early August 1931 Gamow, and later Cockcroft, had visited the Kharkov Institute and discussed the new idea.) Many laboratories repeated and extended the Cavendish work during the following six months, in a flood of experiments around the world. But it was Cockcroft and Walton who first split the atom, albeit later than might have been the case.

The so-called Cockcroft–Walton multiplier, based on a ladder-cascade principle that builds up voltage by switching charge through a series of capacitors, is still in use today. Only in 2005, for example, was the version used on the injector for ISIS, the spallation neutron source at the Rutherford Appleton Laboratory, replaced by a new 665 keV RF quadrupole. The old multiplier will soon be on display at the entrance to the UK's newly created Cockcroft Institute of Accelerator Science and Technology in Cheshire. The original version used by Cockcroft and Walton was, in fact, a refinement of a much earlier circuit by the German engineer M Schenkel, which Heinrich Greinacher had already improved – and which therefore could never be patented.
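The ladder-cascade principle is simple enough to capture in a few lines. The sketch below uses the standard textbook formulas for an ideal N-stage multiplier – a no-load output of 2N times the peak transformer voltage, reduced under load by the classic droop term – with component values invented purely for illustration, not those of the 1932 machine.

    # Ideal behaviour of an N-stage Cockcroft-Walton (Greinacher cascade)
    # voltage multiplier, using standard textbook formulas. The component
    # values below are illustrative, not those of the original apparatus.

    def cw_output(n, v_peak, i_load=0.0, freq=50.0, cap=1e-6):
        """No-load output is 2*N*Vpeak; a load current I causes a droop of
        (I/(f*C)) * (2N^3/3 + N^2/2 + N/6) in the classic analysis."""
        v_ideal = 2 * n * v_peak
        droop = (i_load / (freq * cap)) * (2 * n**3 / 3 + n**2 / 2 + n / 6)
        return v_ideal - droop

    # e.g. four stages driven by a 100 kV-peak transformer:
    print(f"no load  : {cw_output(4, 100e3) / 1e3:.0f} kV")
    print(f"1 mA load: {cw_output(4, 100e3, i_load=1e-3) / 1e3:.0f} kV")

The cubic droop term is why practical cascades keep the number of stages modest and drive them at high frequency or with large capacitors.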

CCson_10_07

Cockcroft and Walton naturally had close links with Chadwick, whose Nobel-prizewinning discovery of the neutron had occurred only a few weeks earlier in the same laboratory, making 1932 an extraordinary year for an extraordinary laboratory. Chadwick went on to build a synchrocyclotron at the University of Liverpool, a type of machine that CERN adopted for its first accelerator at its inception in the early 1950s.

Cockcroft took over the Magnet Laboratory in Cambridge in 1934 following the departure of Kapitsa, while Walton moved to Trinity College, Dublin. In 1939 Cockcroft started work on radar systems for defence, and in 1944 he became director of the Chalk River Laboratory in Canada. Two years later he was back in the UK as inaugural director of the Atomic Energy Research Establishment (AERE) at Harwell, where he played a major leadership role in ensuring the eventual operation of Calder Hall, the world's first industrial-scale nuclear power station, on the Windscale site. He was also influential with the newly independent Indian government: the diplomat Vijayalakshmi Pandit visited him in the UK for advice on the creation of an atomic-energy enterprise in India under the leadership and initiative of the physicist Homi J Bhabha.

The week of 8–12 October 2007 marked the 50th anniversary of an event as notorious in its day as Three Mile Island in Pennsylvania or Chernobyl would later be: the fire in the reactor at Windscale on the coast of north-west England. The environmental disaster followed a standard procedure to release the Wigner (thermal) energy stored in the graphite pile; the cause is still controversial. It would, however, have been much worse without a feature widely known as "Cockcroft's folly", added late in the construction of the reactor. Cockcroft, as head of AERE at the time, intervened to insist on filters on the chimneys. Because they were retrofitted, the filters sat at the top, giving the chimneys their distinctive shape with large concrete bulges. Cockcroft's intervention undoubtedly averted a much bigger disaster.

Cockcroft took charge of the AERE at a time when it was almost the sole repository of particle-accelerator expertise in Europe. In addition to early linear-accelerator construction, there was pioneering work on what was then a new and exciting device: the synchrotron. Several small 30 MeV rings were built, and larger ones designed, for the universities of Oxford, Glasgow and Birmingham. When planning started for CERN – not greeted with much enthusiasm by some in the UK, who already had machines of their own – it was Cockcroft who appointed Frank Goward from Harwell to assist Odd Dahl and Kjell Johnsen in the design of the PS. Soon afterwards he encouraged two other important figures from Harwell to join, with lasting impact on CERN: Donald Fry and John Adams.

In 1951, Cockcroft and Walton shared the Nobel Prize in Physics for the "transmutation of atomic nuclei by artificially accelerated particles". Why had it taken so long to recognize the achievement, when Lawrence had been rewarded relatively promptly, in 1939, for the invention of the cyclotron? The reason seems to be that there was a long list of giants still waiting to be recognized – Heisenberg among them – before Cockcroft and Walton could take their proper place. The award to Lawrence for the cyclotron had helped to establish the pattern of rewarding instrument-building for its own sake, introducing "innovation" into the criteria of the Nobel committee in addition to "discovery".

Cockcroft later held many important and influential scientific and administrative positions. He was president of the UK Institute of Physics and of the British Association for the Advancement of Science, and chancellor of the Australian National University in Canberra. His work was acknowledged in many ways, including honorary doctorates and membership of many scientific academies. In 1959 he was appointed the first master of Churchill College, Cambridge. He died, aged 70, on 18 September 1967 – a year after the celebration of Chadwick's 75th birthday at the newly created Daresbury Laboratory. It is at Daresbury that another important step forward for accelerator physics has begun, with the Cockcroft Institute named in honour of the accelerator "giant" who, with Walton, first split the atom 75 years ago.

EPS conference sets the scene for things to come

The English summer, renowned for being fickle, smiled kindly on the organizers of the 2007 European Physical Society (EPS) conference on High Energy Physics (HEP), which was held in Manchester on 19–25 July. In a city that is proud of both its industrial heritage and a bright commercial future, HEP 2007 surveyed the state of particle physics, which also seems to be at a turning point. While certain areas of the field pin down the details of the 20th-century Standard Model, others seek to prise open new physics as the LHC prepares to open a new frontier.

CCman_10_07

The conference had a packed programme of 12 plenary sessions and 69 parallel sessions. In his opening talk, CERN's John Ellis took a lead from Paul Gauguin's painting Where Do We Come From? What Are We? Where Are We Going?, interpreting its questions in terms of the status of the Standard Model (where do we come from?), searches beyond the Standard Model (where are we now?) and the search for a "theory of everything" (where are we going?). More than 400 talks covered all three aspects, in particular the status of the Standard Model and the current and future efforts to go beyond it. This report summarizes some of the highlights within these broad themes.

A beautiful model

The success of the Standard Model underpinned the 2007 award of the EPS High Energy and Particle Physics prize to Makoto Kobayashi of KEK and Toshihide Maskawa, then at Kyoto Sangyo University, for their work in 1972, which showed that CP violation arises naturally if there are six quarks, rather than the three then known. Kobayashi was at the conference to receive the prize and to give a personal view of the early work and of the current understanding of CP violation. The idea of six quarks began to attract attention with the discovery of the τ lepton in 1976. The rest, as they say, is history, and the Cabibbo–Kobayashi–Maskawa (CKM) matrix describing the mixing of six quarks is now a key part of the Standard Model.

CCkob_10_07

Moving to the present, Kobayashi pointed to the work of the experiments at the B-factories – BaBar at the PEP-II facility at SLAC and Belle at KEK-B. These have played a key role in pinning down the well known triangle that expresses the unitarity of the CKM matrix, showing that its three sides really do appear to close – a leitmotif that ran throughout the conference. Measurements of sin 2β (sin 2φ₁) now give a clear value of 0.668 ± 0.028 – a precision of 4% – and even measurements of the angle γ (φ₃) are becoming quite good, thanks to the performance of the B-factories.

Both facilities have provided high beam currents and small beam sizes, leading to extremely high luminosities. With a peak luminosity of 1.21 × 10³⁴ cm⁻² s⁻¹ – four times the design value – PEP-II has delivered a total luminosity of 460 fb⁻¹, but is now feeling the stress of the high currents. Nevertheless, there are plans to push to still-higher luminosity and deliver the maximum possible before the facility closes down at the end of September 2008. KEK-B, with a peak luminosity of 1.71 × 10³⁴ cm⁻² s⁻¹, has reached a total of 715 fb⁻¹, and there are plans to increase luminosity in this machine too, using the recently tested "crab crossing" technique to bring the angled beams into more head-on collision.
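For readers unused to the units, the quoted peak luminosities translate directly into data-collection rates: 1 fb = 10⁻³⁹ cm², so a collider running steadily at its peak delivers of the order of 1 fb⁻¹ per day. A minimal sketch:

    # Convert a peak luminosity in cm^-2 s^-1 into fb^-1 per day of
    # uninterrupted running (1 fb = 1e-39 cm^2).

    FB_PER_CM2 = 1e-39
    SECONDS_PER_DAY = 86400.0

    def fb_inv_per_day(peak_lumi):
        return peak_lumi * FB_PER_CM2 * SECONDS_PER_DAY

    for name, lumi in (("PEP-II", 1.21e34), ("KEK-B", 1.71e34)):
        print(f"{name}: {fb_inv_per_day(lumi):.2f} fb^-1/day at peak")

Totals of several hundred fb⁻¹ therefore represent years of running, much of it below peak.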

CCell_10_07

The extra luminosity is important now that the experiments are moving into a new phase: searching for new physics. This may manifest itself in small deviations from the Standard Model at the 1% level, although theoretical guidance is made difficult, not least by uncertainties in QCD. The charmless decay B → φK⁰ (at the quark level, b → ss̄s) currently shows a small systematic deviation from theory, although many agree with Kobayashi's opinion that it is premature to draw any conclusion. "Super B-factories", such as the one proposed at KEK, will probably be necessary to clarify this and other hints of new physics.

B physics is not the preserve of the B-factories alone, nor is interest in heavy flavours restricted to B physics. The CDF and DØ experiments at Fermilab's Tevatron have measured B_s oscillations for the first time, a 5σ effect with ΔM_s = 17.77 ± 0.10 ± 0.07 ps⁻¹. The result presents no surprises, but the award of the 2007 EPS Young Physicist prize reflected its importance: it went to Ivan Furic of Chicago, Guillelmo Gomez-Ceballos of Cantabria/MIT and Stephanie Menzemer of Heidelberg, for their outstanding contributions to the complex analysis that provided the first measurement of the B_s oscillation frequency. In the physics of the lighter charm particles, the BaBar and Belle experiments have made the first observations of D mixing, at the level of about 4σ, with no evidence for CP violation. Neither B_s nor D mixing is easy to measure, the first being very fast, the second very small; D mixing is, moreover, difficult to calculate, as the charm quark is neither heavy nor particularly light. The Standard Model, on the other hand, clearly predicts no observable CP violation in charm mixing. Elsewhere in the heavy-flavour landscape, CDF and DØ have found new baryons that help to fill the remaining spaces in the multiplets of various quark combinations.

The electroweak side of the Standard Model has known precision for many years, with the coupling constants α and G_F, and more recently the mass of the Z boson, M_Z, available as precise input parameters for calculations of a range of observables. Now, with a steadily increasing total integrated luminosity in Run II – 2.72 fb⁻¹ in DØ, for example, by the time of the conference – the mass of the W boson, M_W, is measured with similar precision at both the Tevatron and LEP, and is known to ±25 MeV. CDF and DØ also continue to pin down other observables, in particular in top physics, with studies of top decays and measurements of the top mass, M_t, whose latest value is 170.9 ± 1.8 GeV/c². DØ also has evidence for the production of single top quarks – produced from a W rather than a gluon – which gives a handle on |V_tb|² in the CKM matrix. Comparing M_W and M_t from the Tevatron with the results from LEP and the SLAC Linear Collider provides a powerful check on the Standard Model – M_t is measured at the Tevatron, whereas it was inferred at the e⁺e⁻ colliders – and constrains the mass of the Higgs boson. With no beam in the LHC until 2008, the Tevatron is currently the only hunting ground for the Higgs; with upgrades planned to take its total luminosity to at least 6 fb⁻¹, there are interesting times ahead.

While the Tevatron is still going strong, HERA – the first and only electron–proton collider – shut down for the last time this past summer, having written the “handbook” on the proton. HERA provided a unique view inside the proton through deep inelastic scattering, which is still being refined as analysis continues. Once the final pages are written they will provide vital input, in particular on the density of gluons, for understanding proton collisions at the LHC. This effort continues at the Tevatron, where the proton–antiproton collisions provide a complementary view to HERA, in particular regarding what is going on underneath the interesting hard scatters. Additionally, the HERMES experiment at HERA, COMPASS at CERN and experiments at RHIC are investigating the puzzle of what gives rise to the spin of the proton (or neutron) in terms of gluons or orbital angular momentum.

Measurements at HERA and the Tevatron have challenged the strong arm of the Standard Model by testing QCD with precision measurements that involve hadrons in the initial state, not just in the final state, as at LEP. In particular, they provide a testing ground for perturbative QCD (pQCD) in hard processes where the coupling strength is relatively weak, and show good agreement with theoretical predictions. The challenge now is to apply the theory to the more complex scenario of collisions at the LHC, in particular to calculate processes that will be the backgrounds to Higgs production and new physics.

QCD enters a particularly extreme regime in the relativistic collisions of heavy ions, where hundreds of protons and neutrons coalesce into a hot, dense medium. Results from RHIC at Brookhaven National Laboratory (BNL) already indicate the formation of deconfined quark–gluon matter behaving as an ideal fluid with small viscosity. Here the anti-de Sitter space/conformal field-theory (AdS/CFT) correspondence offers an alternative view to pQCD, with predictions for the higher energies of the LHC.

Beyond the Standard Model

Experiments at the Tevatron and HERA have all searched for physics beyond the Standard Model, and find nothing beyond the 2σ level. At HERA, however, the puzzle of the excess of isolated leptons remains: H1 still sees it in the full final luminosity (reported at the conference only three weeks after the shutdown), although ZEUS sees no effect. The excess will have to be seen elsewhere to demonstrate that it is new physics, and not nature being unkind.

While the high-energy collider experiments see no real signs of new physics, at least neutrino physics is beginning to provide a way beyond the Standard Model. Neutrinos have long been particles about which we know hardly anything, but as Ken Peach from the University of Oxford commented in his closing summary talk, at least now we “clearly know what we don’t know”. Research has established neutrino oscillations, and with them neutrino mass. However, we still need to know more about the amounts of mixing of the three basic neutrino states to give the flavour states that we observe, and about the mass scale of those basic states.

CCorc_10_07

Clarification in one area has come from the MiniBooNE experiment at Fermilab, which finds no evidence for the oscillations reported by the LSND experiment. However, there are signs of a new puzzle, as MiniBooNE sees an excess of events at neutrino energies of 300–475 MeV. The Main Injector Neutrino Oscillation Search (MINOS) collaboration presented a new result for mixing in the 2–3 sector, with Δm²₂₃ = 2.38 (+0.20/−0.16) × 10⁻³ eV² and sin²2θ₂₃ = 1.00 ± 0.08. For the 1–3 sector, however, there is still a desperate need for new experiments. The Karlsruhe Tritium Neutrino experiment will try to measure directly the mass of the electron neutrino (an incoherent sum of mass states), using the classic technique of measuring the endpoint of the tritium beta-decay spectrum, with a sensitivity of 0.2 eV. Neutrinoless double-beta-decay experiments provide another route to neutrino mass and could constrain the lightest state in the mass hierarchy. From what we already know from oscillations, one or two of the heaviest neutrino mass states (depending on the hierarchy) must have masses of at least about 0.05 eV. Much now depends on experiments to come.
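The MINOS numbers plug straight into the standard two-flavour survival probability, P(ν_μ→ν_μ) = 1 − sin²2θ · sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. The sketch below evaluates it for the 735 km Fermilab–Soudan baseline; the sample energies are chosen purely for illustration.

    import math

    # Two-flavour muon-neutrino survival probability with the central
    # values quoted above; the 735 km MINOS baseline and the sample
    # energies are used for illustration.

    DM2 = 2.38e-3        # eV^2
    SIN2_2THETA = 1.00
    L_KM = 735.0         # Fermilab-to-Soudan baseline

    def survival(e_gev):
        phase = 1.27 * DM2 * L_KM / e_gev
        return 1.0 - SIN2_2THETA * math.sin(phase) ** 2

    for e in (1.5, 3.0, 6.0):
        print(f"E = {e:.1f} GeV: P(nu_mu -> nu_mu) = {survival(e):.2f}")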

Dark matter in the cosmos seems to be another sure sign of physics beyond the Standard Model. Cosmology indicates that it is composed of non-baryonic particles and is mostly "cold" – low energy – and so cannot consist of the known lightweight neutrinos. Current direct searches for dark-matter particles are reaching cross-sections of around 10⁻⁴⁴ cm², and the next generation of experiments aims to reach a factor of 10 lower. Dark-matter annihilation can affect the gamma-ray sky, so the GLAST mission, due to be launched in December, could complement the searches for dark-matter candidates that will take place at the LHC.

The cosmos holds other mysteries for particle physics, in particular the long-standing question of the origin of ultra-high-energy cosmic rays. Clues to the location of the natural accelerators lie in the precise shape of the spectrum at high energies: are there particles with energies above the Greisen–Zatsepin–Kuzmin (GZK) cut-off? The Pierre Auger Observatory has ushered in a new age of hybrid detection, combining water-Cherenkov surface detectors with air-fluorescence telescopes. Together, the two techniques reveal both the footprint and the development of an extensive air shower, so reducing the dependence on interaction models. Auger now has more events above 10 EeV than previous experiments, and confirms the "ankle" and the steepening at the end of the spectrum (and, since the conference, the first evidence for sources of ultra-high-energy cosmic rays, see "Pierre Auger Observatory pinpoints source of mysterious highest-energy cosmic rays"). Understanding this spectrum depends on determining the mass of the incoming particles. Photons constitute less than 2% of the cosmic radiation at these energies; is the remainder all protons, or are there heavier components, as the Auger data hint?

Back on Earth, the LHC is uniquely poised to go beyond the Standard Model, as Ellis pointed out in his opening talk. So a key question for everyone is: when will the LHC start up? Lyn Evans, LHC project leader at CERN, brought the latest news, but first reminded the audience just how remarkable the project is. He began by paying homage to Kjell Johnsen, who had died on 18 July, the week before the conference. Johnsen led the project to build the world's first proton–proton collider, the Intersecting Storage Rings (ISR) at CERN. The LHC is a magnificent tribute to Johnsen, explained Evans, for without the ISR there would be no LHC. The idea of storing protons, without the synchrotron-radiation damping inherent in electron beams, was a leap of faith; respected people thought that it would never work.

Now the LHC is built, and the effort to cool down and power up the machine is under way. Unsurprisingly in a project so complex, problems arise, but they are being overcome; the schedule now foresees beam commissioning beginning in May 2008, with the aim of first collisions at 14 TeV two months later. The injection system can already supply enough beam for a luminosity of 10³⁴ cm⁻² s⁻¹, but in practice commissioning will start with only a few bunches per beam, to ensure the safety of the collimation and protection systems.

For the LHC experiment collaborations, commissioning will also start with an emphasis on safety. They will study the first collisions with minimum-bias triggers while they gain a full understanding of their detectors, before moving on to QCD dijet triggers to "rediscover" the physics of the Standard Model. With 1 fb⁻¹ of data collected, there will be the opportunity to begin searching for new physics, with signs of supersymmetry perhaps appearing early. A major goal will of course be to discover the Higgs boson – or whatever mechanism underlies electroweak symmetry-breaking. This is a key issue that the LHC should certainly resolve. Beyond it lie more exotic questions concerning extra dimensions and tests of string theory, for example, and even "unparticles" – denizens of a scale-invariant sector weakly coupled to the particles of the Standard Model, as recently proposed by Howard Georgi.

As the LHC nears completion, there is plenty of activity on projects to complement it. The largest is the proposed International Linear Collider (ILC), which would provide e⁺e⁻ collisions at a centre-of-mass energy of 500 GeV. The collaboration released its Reference Design Report in February, putting the estimated price tag for the machine at $6.4 thousand million. Like the LHC, it will be a massive undertaking, involving some 1700 superconducting cryomodules for acceleration. To reach still higher energies with an e⁺e⁻ collider, the Compact Linear Collider (CLIC) study is an international effort to develop technology for collisions at up to 3 TeV in the centre-of-mass. The key feature is a two-beam scheme, with a main beam and a drive beam, and normal-conducting accelerating structures. It will require an accelerating gradient of more than 100 MV/m to reach 3 TeV in a total length of less than 50 km. The aim is to demonstrate feasibility by 2010, with a technical design report in 2015.

In other areas, there are proposals for super B-factories and for neutrino factories to produce the intense beams needed to study rare and/or weak processes in both fields. The idea behind a neutrino factory is to generate large numbers of pions, which decay to muons that can be cooled and then accelerated before they in turn decay to produce the desired neutrinos. An important requirement will be a high-intensity proton driver to produce the pions in primary proton collisions. Such drivers have other uses, of course: the Spallation Neutron Source in Oak Ridge, for example, is operating with the world's first superconducting proton linac, currently delivering 0.4 × 10¹⁴ protons per pulse. Other issues for a future neutrino factory are the cooling and acceleration of the muons. The Muon Ionisation Cooling Experiment at the UK's Rutherford Appleton Laboratory will test one cooling concept, using liquid-hydrogen absorbers to reduce the muon momentum in all directions. The subsequent acceleration will have to be fast, before the muons decay, and here researchers are revisiting the idea of fixed-field alternating-gradient (FFAG) accelerators, which dates back to the early 1950s. To test the principle, a consortium at the Daresbury Laboratory in the UK plans to build the world's first non-scaling FFAG machine, a 20 MeV electron accelerator.

The design of particle detectors will have to adapt to the more exacting conditions at future machines: larger numbers of particles, higher particle densities and higher radiation doses. The issues include segmentation to cope with the high density of particles, speed to handle large events quickly, and thin structures to keep down the material budget. For the ILC, various collaborations are working on four concepts for the collider detectors; the aim is to select two of these in 2009 and to have engineering designs completed by 2010.

The next conference in the series is in Krakow, in 2009. It will be interesting to learn how the new ideas presented at HEP 2007 have advanced, to see the first steps across the new frontier with the LHC and to find out if we can see further towards where we are going.

• HEP 2007 was organized by the universities of Durham, Leeds, Lancaster, Liverpool, Manchester and Sheffield, together with the Cockcroft Institute and Daresbury Laboratory.

A global network listens for ripples in space–time

Albert Einstein predicted the existence of gravitational waves – faint ripples in space–time – in his general theory of relativity. They are generated by catastrophic events involving astronomical objects that typically have the mass of stars. The most predictable sources are likely to be binary systems of black holes or neutron stars that spiral inwards and coalesce. There are, however, many other possible sources: stellar collapses that result in neutron stars and black holes (supernova explosions); rotating asymmetric neutron stars, such as pulsars; black-hole interactions; and the violent physics of the early universe.

Just as a modulated radio signal can carry the sound of a song, these gravitational-wave ripples precisely reproduce the movement of the colliding masses that generated them; a gravitational-wave observatory that senses the ripples in space–time is in effect transducing the motion of faraway stars. The great challenge is that these ripples amount to a strain of space (a change in length per unit length) of the order of 10⁻²² to 10⁻²³ – tremors so small that they are buried by the natural vibrations of everyday objects. As inconspicuous as a raindrop in a waterfall, they are difficult to detect.

CCvir_10_07

During the past few years, a handful of gravitational-wave projects have dared to make a determined attempt to detect these ripples. The Italian–French Virgo project (figure 1), the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US, the British–German GEO600 project and the TAMA project in Japan have all constructed gravitational-wave observatories of impressive size and ambition. Several years ago the GEO team joined the LIGO Scientific Collaboration (LSC) to analyse the data collected from their interferometers jointly, and recently the Virgo Collaboration has been reinforced by a Dutch group from Nikhef, Amsterdam.

CCgwi_10_07

The gravitational-wave observatories are based on kilometre-scale, L-shaped Michelson interferometers, in which powerful, highly stable laser beams precisely measure the difference in length of the two arms using heavy suspended mirrors that act as test masses (see figure 2). A passing space–time wave "drags" the test masses, in effect transducing (at a greatly reduced scale) the movement of dying stars millions of parsecs away, much as ears transduce sound waves into nerve impulses.

The main problem in gravitational-wave detection is that the largest expected waves from astrophysical objects will strain space–time by a factor of only 10⁻²², resulting in movements of 10⁻¹⁸ to 10⁻¹⁹ m of the test masses on the kilometre-scale interferometer arms. This is less than a thousandth of a proton diameter – a signal so small that Einstein, after predicting the existence of gravitational waves, also predicted that they would never be detectable.
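The arithmetic behind these numbers is simply ΔL = h × L. A quick check of the orders of magnitude – the arm lengths are those of the real instruments, while the strain values span the range quoted above:

    # Mirror displacement dL = h * L for representative strains and the
    # arm lengths of the large interferometers, compared with a proton
    # diameter of ~1.7e-15 m.

    PROTON_DIAMETER_M = 1.7e-15

    for name, arm_m in (("LIGO", 4000.0), ("Virgo", 3000.0)):
        for h in (1e-22, 1e-23):
            dl = h * arm_m
            print(f"{name}: h = {h:.0e} -> dL = {dl:.1e} m "
                  f"({dl / PROTON_DIAMETER_M:.1e} proton diameters)")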

To surmount this challenge requires seismic isolators with enormous rejection factors, to ensure that the suspended mirrors are more "quiet" than the predicted signal they lie in wait to perceive (the typical motion of the Earth's crust, in the absence of earthquakes, is of the order of a micrometre at 1 Hz). The instruments use extremely stable and powerful laser beams to measure the mirror separations with the required precision, but without "kicking" the mirrors with a radiation pressure exceeding the sought-after signal. The mirrors themselves are marvels of state-of-the-art fabrication, with surface thermal motion more hushed than the signal amplitude. All of this is housed in large-diameter (1.2 m for Virgo and LIGO) ultra-high-vacuum (UHV) tubes that cut across the countryside. These vacuum pipes are, in fact, the largest UHV systems on Earth, and their volume dwarfs that of the longest particle-collider pipes.

CCsen_10_07

The detectors are extremely complex and difficult to tune. The installation of the LIGO interferometers finished in 2000 and complete installation of Virgo followed in 2003. Both instruments then went through several years of tuning, with LIGO reaching its design sensitivity about a year ago and Virgo fast approaching its own design target (figure 3).

CCarm_10_07

On 22 May, a press conference in Cascina, Pisa, announced the first Virgo science run, as well as the first joint Virgo–LIGO data collection. That week also saw the first meeting of Virgo and the LSC in Europe. It was a momentous occasion for the entire field of gravitational-wave detection. Although the LIGO network was already into its fifth science run, which had started in November 2005, many in the community saw the announcement as marking the birth of a global gravitational-wave detector network.

CC2bh_10_07

The collaborative first and fifth science runs of Virgo and LIGO, respectively, ended on 1 October. The effort proved a tremendous success, demonstrating that gravitational-wave observatories can be operated with excellent sensitivity and solid reliability for prolonged periods. The accumulated LIGO data amount to an effective full year of observation at design sensitivity with all three LIGO interferometers in coincident operation, and the four-month joint run produced coincidence data between the two observatories with high efficiency. Although no gravitational waves were detected, the collected data are being analysed and will yield upper limits and other astrophysically significant results.

Running gravitational-wave detectors as a network is fundamental to the goal of opening up gravitational-wave astronomy. Gravitational waves are best thought of as fluctuating, transverse strains in space–time; in an oversimplified analogy, they can be likened to sound waves – pressure waves of space–time travelling in vacuum at the speed of light. Like sound, gravitational waves come in the acoustic frequency band and, since gravitational-wave interferometers act essentially as microphones with little directional specificity, the waves are "heard" rather than "seen". This means that a single observatory may detect a gravitational-wave burst but would have difficulty pinpointing its source. Just as two ears on opposite sides of the head are needed to locate the origin of a sound, a network of several detectors at distant points around the Earth can triangulate the sources of gravitational waves on the sky. Such a global network is critical for pinpointing the location of a source so that other instruments, such as optical telescopes, can provide additional information about it. In addition, signals as weak as gravitational waves require coincident detection at several distant locations to confirm their validity, by rejecting spurious events generated by local noise.
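The triangulation works through arrival times. A wavefront arriving from an angle θ to the line joining two detectors reaches one site before the other by Δt = (d/c) cos θ, so one baseline confines the source to a ring on the sky, and a third site shrinks the rings to a patch. A minimal sketch, taking the roughly 3000 km Hanford–Livingston separation as a rounded, illustrative figure:

    import math

    # Time-delay triangulation: dt = (d/c) * cos(theta). One baseline
    # gives a ring on the sky; a third detector (e.g. Virgo) intersects
    # the rings. The 3000 km LIGO-LIGO separation is a rounded figure.

    C = 299_792_458.0      # speed of light, m/s
    BASELINE_M = 3.0e6     # ~3000 km between the two LIGO sites

    def source_angle_deg(dt_s):
        """Invert dt = (d/c)*cos(theta) for the angle to the baseline."""
        cos_theta = max(-1.0, min(1.0, dt_s * C / BASELINE_M))
        return math.degrees(math.acos(cos_theta))

    print(f"maximum possible delay: {BASELINE_M / C * 1e3:.1f} ms")
    for dt_ms in (0.0, 5.0, 10.0):
        print(f"dt = {dt_ms:4.1f} ms -> theta = {source_angle_deg(dt_ms / 1e3):5.1f} deg")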

Before Virgo joined, the LSC already consisted of two major gravitational-wave observatories in the US – one in Livingston, Louisiana, and the other in Hanford, Washington – as well as the smaller European GEO600 observatory in Germany. The addition of the Virgo interferometer to this network has greatly reinforced its detection and pointing capabilities. The Livingston observatory hosts a single interferometer with a pair of 4 km arms. Hanford has two instruments: a 4 + 4 km interferometer like that at Livingston and a smaller 2 + 2 km one. GEO has a single 0.6 + 0.6 km interferometer, while Virgo operates a 3 + 3 km one. The introduction of Fabry–Perot cavities in the arms boosts the sensitivity of the three larger interferometers, extending the effective lengths of the arms to hundreds of kilometres. GEO600 increases its effective arm length to 1.2 km by using a folded beam.

CCmir_10_07

Japan has a somewhat smaller (0.3 + 0.3 km) interferometer, known as TAMA, located in Mitaka, near Tokyo. It is currently being refurbished with advanced seismic isolators and should soon join the growing gravitational-wave network. Japan is also considering the construction of the Large-scale Cryogenic Gravitational-Wave Telescope (LCGT). This would be a 3 + 3 km, underground interferometer with cryogenic mirrors, which would later become part of the global array of gravitational-wave interferometers. Australia is also developing technologies for gravitational-wave instruments and is planning to build an interferometer.

The effectiveness of gravitational-wave observatories is characterized both by their "reach" and by their duty cycle. The reach is conventionally defined as the maximum distance at which the inspiral signal of two neutron stars (of 1.4 solar masses each) would be detectable at the available sensitivity. Improvements in sensitivity achieve more in gravitational-wave astronomy than they do in optical astronomy: doubled sensitivity means doubled reach, and hence an eightfold increase in the observed cosmic volume and in the expected event rate. Similarly, because of the coincidence requirements between multiple interferometers, the duty cycle of a gravitational-wave observatory matters more than that of a stand-alone optical instrument.
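That scaling is worth making explicit: reach grows linearly with sensitivity, while the surveyed volume – and with it the expected event rate – grows as its cube. A two-line illustration, starting from the roughly 15 Mpc neutron-star reach quoted below for the large LIGO interferometers:

    # Reach scales linearly with sensitivity; volume and event rate as
    # its cube. The 15 Mpc base reach is the figure quoted in the text.

    BASE_REACH_MPC = 15.0
    for factor in (2, 10):
        print(f"sensitivity x{factor:2d}: reach {BASE_REACH_MPC * factor:.0f} Mpc, "
              f"event rate x{factor ** 3}")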

During the fifth science run, the two larger LIGO interferometers showed a detection range for the inspiral of two neutron stars of 15–16 Mpc, while the half-length interferometer had a reach of 6–7 Mpc. Virgo, its commissioning still incomplete, achieved a reach of 4.5 Mpc during the recent first joint run with LIGO; it did not yet reach its design sensitivity in this run. However, its seismic-isolation system gave it superior immunity to seismic events, resulting in longer "locks" (with a record of 95 hours) and an excellent observational duty factor. (An interferometer can only take data when it is "locked" – when all of the mirrors are controlled and held in place to within a small fraction of a wavelength of light.) LIGO reached a duty cycle of 85–88% by the end of its fifth science run, while Virgo reached the same level on its first run.

The duty cycle of the triple coincidence of Virgo, LIGO–Hanford and LIGO–Livingston exceeded 58% (or 54% when including the smaller, second Hanford interferometer) and 40% in conjunction with GEO600. This was an amazing achievement given the tremendous technical finesse required to maintain all of these complex instruments in simultaneous operation.

The gold-plated gravitational-wave event would be the detection of a neutron-star (NS) or black-hole (BH) inspiral. However, even at the present design sensitivity, the Virgo–LIGO network has a relatively small chance of detecting such events. Currently, LIGO can expect a probability of only a few per cent per year of detecting an NS–NS inspiral, with perhaps a somewhat larger probability of detecting NS–BH and BH–BH inspirals, and an unknown probability of detecting supernova explosions (only if asymmetric and in nearby galaxies) and rotating neutron stars (only if a mass-distribution asymmetry is present).

As scientists analyse the valuable data just acquired, the three main interferometers will now undergo a one- to two-year period of moderate enhancement and final tuning, and then resume operation for another year of joint data-taking with greater sensitivity and an order of magnitude better odds of detection. At the end of that period, the interferometers will undergo a more drastic overhaul to boost their sensitivity by an order of magnitude with respect to the present value. A tenfold increase in sensitivity brings a thousandfold increase in the volume listened to and, correspondingly, up to a thousand times better detection probability.

At that point (expected in 2015–16), the network will be sensitive to inspiral events within at least 200 Mpc and we can expect to detect and map several such events a year, based on our current understanding of populations of astrophysical objects. This will mark the beginning of gravitational-wave astronomy, a new window to explore the universe in conjunction with the established electromagnetic-wave observatories and the neutrino detectors.

Beyond this already ambitious programme, the gravitational-wave community has begun tracing a roadmap towards even more powerful observatories. Interferometers on the surface of the Earth can operate with high sensitivity only above 10 Hz, as they are limited by seismically driven fluctuations of the Earth's Newtonian attraction. This restricts detection to the ripples generated by relatively small objects (tens to hundreds of solar masses) and to "modest" distances (redshift z = 1). Third-generation observatories built deep underground, far from the perturbations of the Earth's surface, would be able to detect gravitational waves down to 1 Hz, and so be sensitive to the lower-frequency signals coming from more massive objects, such as intermediate-mass black holes. Finally, space-based interferometers such as the Laser Interferometer Space Antenna (LISA) are being designed to listen in a still lower frequency band: LISA would detect the millihertz signals coming from the supermassive black holes lurking at the centres of galaxies. The aim is to launch it around 10 years from now, as a collaboration between ESA and NASA.

Although gravitational waves have not yet been detected, the gravitational-wave community is poised to prove Einstein right and wrong: right in his prediction that gravitational waves exist, wrong in his prediction that we will never be able to detect them.
