
The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics

by Leonard Susskind, Little Brown and Company. Hardback ISBN 9780316016407, $27.99.

Despite appearances, you will not encounter Stephen Hawking in an armoured wheelchair, Lenny Susskind wearing a short sword and a net, or Gerard ’t Hooft with a spear and a shield; all three in the gladiator’s arena. This book contains a lot of drama, but most of it happens in the heads of these physicists and in their discussions. All three, the main characters of the book, are good friends and respect each other profoundly.

In the 1970s Hawking studied quantum mechanics near black holes and made the remarkable discovery that they are not black after all. They radiate energy with an apparently thermal spectrum, the temperature of which is inversely proportional to the mass of the hole. For the black holes that occur in nature at the centres of galaxies, or as the final products of the deaths of massive stars, this radiation is completely negligible. So, what was the point? Elaborating on his computations, Hawking concluded that in this process any information gobbled up by the hole, once it passes the event horizon, is lost forever. There is no way to retrieve it.
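
To get a feel for how negligible this radiation is, the standard Hawking-temperature formula T = ħc³/(8πGMk_B) – which encodes the inverse proportionality to the mass mentioned above – can be evaluated in a few lines. The Python sketch below is an illustration added here, not something from the book; the chosen masses are merely representative of stellar and galactic-centre black holes.

```python
import math

# Physical constants (SI units)
HBAR  = 1.054_571_817e-34   # reduced Planck constant [J s]
C     = 2.997_924_58e8      # speed of light [m/s]
G     = 6.674_30e-11        # gravitational constant [m^3 kg^-1 s^-2]
K_B   = 1.380_649e-23       # Boltzmann constant [J/K]
M_SUN = 1.988_47e30         # solar mass [kg]

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar c^3 / (8 pi G M k_B); inversely proportional to the mass."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# A stellar-mass hole, a heavier stellar remnant and a galactic-centre-scale hole
for m_solar in (1.0, 10.0, 4.0e6):
    t = hawking_temperature(m_solar * M_SUN)
    print(f"M = {m_solar:9.0f} solar masses  ->  T = {t:.2e} K")
```

Even a one-solar-mass hole comes out at about 6 × 10⁻⁸ K, far colder than the 2.7 K microwave background, which is why the radiation is irrelevant for astrophysical black holes.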

This was the starting shot in the war, and what a shot it was. As Susskind explains in great detail, it rocked the boat of physics so badly that it almost caused it to sink.


The claim was made in the late 1970s but ’t Hooft and Susskind learnt about it in a special meeting in 1981, in the attic of Werner Erhard (of “est” fame). Many physicists at the time dismissed the problem, but our two heroes recognized the mortal blow that it represented to the heart of quantum mechanics. A basic feature in the quantum description of nature is the conservation of information. In more technical terms, we believe that no matter how complex a process, it will never violate the unitarity of quantum evolution in time. The formation of a black hole out of ordinary stuff – and its subsequent evaporation – should not represent an exception despite its complexity. Hawking put his finger on a fundamental issue that hindered the possible unification of general relativity and quantum mechanics, which was a major preoccupation of Albert Einstein and many after him.

Hawking had clearly won, by surprise, the first battle. This we learn at the beginning of the book. The rest describes Susskind’s strategy of attrition until he could claim victory a quarter of a century later.

In sharing the author’s path to victory you will learn a lot of deep physics: the basis of quantum mechanics; the fundamental characteristics of black holes; the need to use string theory and some of its tools developed in the 1990s – arcane notions such as the principles of black-hole complementarity, the discovery of D-branes by Joe Polchinski and, above all, the holographic principle that appeared first in the study of the problem by Susskind and ’t Hooft, but that was masterfully formulated in string theory by Juan Maldacena. There are many other heroes in this story – Strominger, Vafa, Sen, Witten, Callan, Horowitz, Giddings, Harvey, Thorlacius and Russo, among others – who all provided the ammunition necessary to demolish Hawking’s edifice, to the point that he surrendered by around 2003.

In parts three and four of the book, the going gets necessarily rough. The ideas are deeply unfamiliar and one may from time to time feel some form of mental saturation. Being a consummate storyteller, the author punctuates the more difficult passages with a good deal of irreverent and iconoclastic humour. Read the chapter “Ahab in Cambridge”. His description of life and academia in Cambridge, England, is hilarious. Indeed, throughout the book you will get a good number of laughs.

In all, the book presents a fascinating intellectual adventure in accessible terms, from which you can learn some of the most challenging ideas in modern theoretical physics. The author follows to the letter Einstein’s dictum of making things as simple as possible, but not simpler. It is original, honest, profound and fun. You could hardly ask for more.

CERN firms up the LHC schedule

In a workshop in Chamonix on 2–6 February, members of the LHC accelerator and experimental teams, as well as CERN’s management, met to formulate a realistic timetable to have the LHC running safely and delivering collisions. The main outcome is that there will be physics data from the LHC in 2009 and there is a strong recommendation to run the machine through the winter until the experiments have produced substantial quantities of data. Such extended running could achieve an integrated luminosity of more than 200 pb⁻¹ at 5 TeV per beam.

Meetings in Chamonix were a feature of the annual winter shutdown at CERN during the LEP era, providing a forum where intense discussions led to a clear consensus on objectives for the following year. CERN’s director-general, Rolf Heuer, intends for similar meetings to guide operations during the LHC era. The first occasion provided a tough start, as the participants had to agree on the best way to proceed following the incident in sector 3-4 that brought LHC commissioning to a halt last September.


The crucial improvement since the incident in sector 3-4 is a new resistance-measurement system which can detect nano-ohm resistances in the joints. This new system would have prevented September’s incident and will prevent all imaginable failures of a superconducting joint in the future. The work on this new detection and protection system was reviewed at the workshop and is already making good progress. Following completion of the design of the two principal electronics boards, the first orders were placed in early February. At the same time, manufacture of the cable segments began and installation started in sector 4-5.

For any “unimaginable” failure of a joint, the installation of new pressure-relief valves will reduce the amount of damage that occurs, compared with last year. The new valves will prevent pressure build-up and collateral damage by allowing a greater rate of helium release in the event of a sudden increase in temperature. Discussions in Chamonix centred on whether to install these pressure-relief valves in one go or to stage their installation over the next two shutdowns. There were many interesting exchanges on this topic and opinions were divided. The CERN management is to make the final decision on this in the week beginning 9 February.

Meanwhile, work continues apace on the repairs at the LHC. At the end of January, a dipole from sector 1-2, which had been identified as having an internal splice resistance of 100 nΩ, was opened up after removal from the tunnel and was found to have little solder on the splice joint. It is likely that a similar small resistance was at the root of the incident in sector 3-4. The LHC teams can now detect a single defective splice in situ when a sector is cold and they have identified another dipole showing a similar defect in sector 6-7. This sector will be warmed up and the magnet removed. Each sector has more than 2500 splices, but the resistance tests can only be conducted on cold magnets. Three sectors remain to be tested: sector 3-4, where the incident occurred, and the adjoining sectors, 2-3 and 4-5.

Tests on the magnets were among the important topics under discussion at Chamonix. The participants agreed on teams to work on the detailed analysis of the measurements made during the cold tests of magnets in building SM18 before their installation in the tunnel. New analysis techniques will be devised to provide a complete picture of the resistance in the joints of all magnets installed in the LHC. The aim is to allow early warning and early correction of any further suspicious splices.

• For up-to-date news, see The Bulletin at http://cdsweb.cern.ch/journal/.

Protons reach the J-PARC Hadron Experimental Hall

The Main Ring at the Japan Proton Accelerator Research Complex (J-PARC) has reached a new milestone with the successful extraction of a proton beam to the Hadron Experimental Hall and then to the beam dump.


J-PARC, a joint project of the Japan Atomic Energy Agency and the KEK laboratory, has been under construction at Tokai since 2001. With a 1600 m circumference and a 500 m diameter, the 50 GeV synchrotron of the Main Ring is the third and final stage in the accelerator complex. The first stage is the linac, followed by the 3 GeV synchrotron. The Main Ring will operate at 30 GeV in the first phase of the project.

The proton-beam tests at J-PARC started in November 2006 and by December 2008 had reached the initial goal of 30 GeV protons in the Main Ring. Then on 27 January, 30 GeV protons were extracted from the Main Ring to the secondary particle-production target, T1, located 250 m downstream in the Hadron Experimental Hall, and were transported onwards to the beam dump.

The Hadron Experimental Hall, which is one of two facilities at the Main Ring, will provide beams of secondary particles produced by the protons. These beams will be the most intense secondary-particle beams at this energy and should facilitate several different experiments, including precise measurements of CP violation in K mesons and studies of the collective motions of strange quarks in hypernuclei. Making an abundance of secondary particles available from the primary proton beam has required the development of various methods for handling the high-intensity primary beam, including the construction of a dense radiation shield and of magnets for the high-radiation area that are rugged and easily replaceable if problems arise.

Neutron and muon beams are already available in the Material and Life Science Facility at J-PARC. With the success of the Hadron Experimental Hall, an important goal is to send high-power neutrino beams to the Super-Kamiokande neutrino detector, 295 km away.

MSU will host new rare-isotope facility…

The US Department of Energy (DOE) has selected Michigan State University (MSU) to design and establish the Facility for Rare Isotope Beams (FRIB), a new research facility to advance the understanding of rare nuclear isotopes and nuclear astrophysics. It should take about a decade to design and build at an estimated cost of $550 million. FRIB will serve an international community of around 1000 researchers. MSU currently hosts the National Superconducting Cyclotron Laboratory (NSCL). Its director, Konrad Gelbke, will lead the team to establish the FRIB on the MSU campus.

The joint DOE–National Science Foundation Nuclear Science Advisory Committee (NSAC) first recommended as a high priority the development of a next-generation nuclear structure and astrophysics facility in its 1996 Long Range Plan. Since then, the FRIB concept has undergone numerous studies and assessments within DOE and by independent parties such as the National Research Council of the National Academy of Sciences. These studies – in addition to NSAC’s 2007 Long Range Plan – concluded that such a facility is a vital part of the US nuclear-science portfolio. It complements existing and planned international efforts, providing capabilities unmatched elsewhere.

The DOE issued a Funding Opportunity Announcement (FOA) on 20 May 2008 to solicit applications for the conceptual design and establishment of the FRIB, to enable fair and open competition between universities and national laboratories. The proposals received were subject to a merit-review process conducted by a panel of experts from universities, national laboratories and federal agencies. The appraisal included rigorous evaluation of the proposals based on the merit-review criteria described in the FOA, presentations by the applicants and visits by the merit-review panel to each applicant’s proposed site.

…and projects to upgrade the NSCL make excellent progress

Despite the winter weather, including more than 50 cm of snow in December, construction continued on the new office wing and the new experimental area for research with stopped and reaccelerated rare isotope beams at the NSCL at MSU. Construction milestones achieved by the end of 2008 included: completing the steelwork for the new experimental area; tearing down one of the original wings of the NSCL building to make space for the new offices; completing the office foundations and underground utilities; and drilling the well for the lift shaft. Work continues on the steel superstructure for the new office block and on masonry for the new experimental area, which is scheduled to be enclosed in February.


Indoors, meanwhile, faculty and staff at the NSCL are making strides towards implementing new research capabilities related to a new accelerated beam facility, ReA3. This upgrade, which includes a new linear accelerator and a new experimental area, is funded by MSU and should begin commissioning in early 2010.

ReA3 will provide unique low-energy, rare-isotope beams, which will be produced by stopping fast, separated rare isotopes in a gas-stopper and then reaccelerating them in a linear accelerator. It will make available reaccelerated beams of elements that are typically difficult to produce at facilities based on isotope separation on-line. Among the science opportunities that ReA3 will open is the possibility of measuring a remaining set of nuclear-reaction rates that are necessary for accurate models of nova nucleosynthesis and studying how exotic nuclei with large neutron halos interact at large intranuclear distances.

The balcony that will hold ReA3 and the electron-beam ion trap (EBIT) for charge breeding is complete and ready for devices to be mounted. Development continues on the various components to stop, transport, charge-breed and reaccelerate rare isotopes. These components include the linear gas stopper; the low-energy beamline system from the gas stopper to the EBIT and to the new stopped-beam area; the EBIT charge breeder and mass separator; and the linac.

MINOS maps the deepest secrets of the upper atmosphere

A collaborative study by particle physicists and atmospheric researchers has found the first correlations between daily variations in cosmic-ray muons detected deep below ground and large-scale phenomena in the upper atmosphere. The effect suggests that underground muon-detectors could play a valuable role in identifying certain meteorological events and observing long-term trends.

Scott Osprey and colleagues from the UK’s National Centre for Atmospheric Science and Oxford University have worked with members of the Main Injector Neutrino Oscillation Search (MINOS) collaboration in analysing data collected between 2003 and 2007 by the MINOS Far Detector, located 705 m below ground in a disused iron mine at Soudan, in Minnesota. The MINOS experiment intercepts a neutrino beam that travels 735 km from Fermilab to Soudan and studies long-baseline neutrino oscillations; the penetrating muons appear as background noise.

The teams have found a close relationship between the rate of muons detected in MINOS and upper-air temperatures from the European Centre for Medium Range Weather Forecasts. In particular, they discovered strong correlations between the muon rate and the upper-air temperature during short-term events (of around 10 days) in the upper atmosphere, or stratosphere, in winter.

When primary cosmic rays strike the Earth’s atmosphere they interact, creating pions and kaons. These mesons in turn decay to produce muons – the most energetic of which penetrate deep below the Earth’s surface. The mesons can also interact before they decay, so the number of muons produced depends on the local density of the atmosphere and varies with temperature. An increase in temperature means a decrease in density and, hence, fewer mesons interact and instead decay, increasing the number of muons. Physicists have known of this effect since the Monopole Astrophysics and Cosmic Ray Observatory first observed a seasonal variation in the rate of muons a decade ago (Ambrosio et al. 1997).


Most of the mesons that give rise to the muons detected in MINOS occur at altitudes of around 15 km in the region known as the tropopause, where there is little variation in temperature. However, the mesons also occur in the mid-stratosphere – at altitudes where temperatures fluctuate, particularly in winter. For the analysis, the team defined an “effective” temperature based on an average temperature over the altitudes where mesons occur, weighted by the calculated distribution of meson production.
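
As an illustration of this weighting scheme, the short Python sketch below computes an “effective” temperature as a production-weighted average over atmospheric levels. The pressure levels, temperatures and weights are invented placeholders for the purpose of the example, not MINOS or ECMWF values.

```python
# Hypothetical atmospheric levels: (pressure [hPa], temperature [K], weight)
# The weight stands in for the calculated meson-production rate at each level.
levels = [
    (30.0,  215.0, 0.05),   # mid-stratosphere
    (70.0,  212.0, 0.15),
    (100.0, 210.0, 0.30),   # near the tropopause, where most mesons are produced
    (150.0, 214.0, 0.30),
    (250.0, 222.0, 0.20),
]

total_weight = sum(w for _, _, w in levels)
t_effective = sum(w * t for _, t, w in levels) / total_weight

print(f"Effective atmospheric temperature: {t_effective:.1f} K")
```

A rise in this effective temperature corresponds to a lower average density at the production altitudes and hence, as described above, to a higher underground muon rate.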

The results show a striking relationship between this temperature and the number of muons, with correlated changes occurring over periods of only a few days (Osprey et al. 2009). The data for the Northern Hemisphere winter of 2004–2005 are particularly interesting. The meteorological data indicate the occurrence of a major phenomenon, known as a sudden stratospheric warming, during February. This was linked to break-up of the winter polar vortex, a polar cyclone that brings cooler weather and which extended over the MINOS site in early February. Prior to that, the 2004–2005 winter had seen the lowest recorded temperatures in the polar stratosphere, and ozone concentration in the polar vortex was anomalously low.

The results show that underground muon data contain information that could identify important short-term meteorological events, over and above the already known seasonal effect. This is interesting for atmospheric researchers, as it provides an independent technique to measure such phenomena. Moreover, physicists have cosmic-ray data from experiments dating back 50 years or more, covering periods when upper-air observations from weather balloons were less reliable than today.

The primary mechanism of blue stragglers is stellar cannibalism

The long-standing mystery of the origin of blue stragglers in globular clusters has been solved. Researchers have found that these overweight stars do not result primarily from collisions between stars but from stellar "cannibalism" in binary star systems where plasma is gradually pulled from one star to the other to form a more massive, bluer star.

A globular cluster is a spherical body composed of about 100,000 stars tightly bound by mutual gravity (CERN Courier July/August 2006 p10). About 150 of these clusters orbit our galaxy. They are usually composed of old stars, all born at the same time in a giant star-formation region (CERN Courier June 2006 p14). In an old cluster, the most massive stars should have exhausted their hydrogen fuel a long time ago and should have died in supernova explosions. Only low-mass stars, shining with redder light, should remain. However, globular clusters do contain several stars that are much too blue and too massive to have survived until now.

The origin of these "blue stragglers" that still fight for life has been a puzzle of stellar evolution for 55 years. As normal stellar formation cannot continue in globular clusters, owing to the quasi-absence of gas, there must be a different mechanism in their dense central cores that can continuously form massive stars. Two main theories emerged over time: that blue stragglers were created through collisions of two stars; or that one star in a binary system was "reborn" by pulling matter off its companion.

The new research by Christian Knigge from the University of Southampton in the UK, together with colleagues from McMaster University in Canada, has allowed the researchers to favour one of these scenarios. They compared the observed number of blue stragglers in 56 globular clusters with the predicted single–single stellar-collision rate based on the inner density of the clusters. They found no clear correlation, dispelling the theory that blue stragglers are created through collisions with other stars. On the other hand, they did find a strong relationship between the total mass contained in the core of the globular cluster and the number of blue stragglers observed within it. Because more massive cores also contain more binary stars, they inferred a connection between blue stragglers and binaries in globular clusters (Knigge et al. 2009).

This research provides strong evidence that the primary mechanism for the formation of blue stragglers is "stellar cannibalism" by the most massive star in a binary system as it pulls material from its lighter companion. What remains to be investigated is whether the binary parents of blue stragglers evolve mostly in isolation, or whether close encounters – failed collisions – with other stars in the cluster are required to form these binaries.

CERN sets course for new horizons

Rolf-Dieter Heuer is no stranger to CERN. He first joined the laboratory’s staff in 1984 to work on the OPAL experiment at LEP. Nor is he a stranger to top-level management in particle physics. Having been spokesman of the 330-strong OPAL collaboration from 1994, he took up a professorship at the University of Hamburg in 1998 and became research director for particle physics and astrophysics at DESY in 2004. For the past 10 years he has steered DESY’s participation in projects such as the LHC and a future international linear collider. He has also fostered the restructuring of German particle physics at the high-energy frontier. Now he faces new challenges and new opportunities as he takes over the reins at one of the world’s largest scientific research centres.

As Heuer begins his five-year mandate as CERN’s director-general, the first goal is clear: to see LHC physics in 2009. The immediate priority is to repair the machine following the damaging incident that occurred soon after the successful start-up last September. Heuer recalls how smoothly the machine operators established beam on 10 September, with the experiments timing in on the same day (CERN Courier November 2008 p26). “This was a big success,” he asserts. “When you look back to LEP, it’s amazing how fast it went.” For Heuer, the start-up demonstrated not only that the LHC works, but that it works well. “The LHC as a project is now completed,” he adds. In his view, the repairs underway are part of the continuing commissioning process and he has full confidence in the team to have the LHC operating again as expected later this year.

A machine for the world

Longer term, Heuer’s vision for CERN stretches to horizons beyond the LHC, not just in time but also in terms of the broader particle-physics arena. This wider view includes several aspects with a common underlying theme of communication, from external relations with other high-energy physics laboratories to the transfer of technology and knowledge to society. One of his first acts as director-designate was to propose a management structure that includes a highly visible external relations office. This is to be a conduit for communication with laboratories and institutes not only in CERN’s 20 member states but also in the other nations with which the organization has relations at one level or another.

Another way in which CERN reaches beyond its boundaries as a centre for particle physics is through knowledge and technology transfer (KTT). Here Heuer believes that there should be more emphasis on knowledge, which he feels has not been sufficiently exploited in the past. He stresses that the goal should not primarily be potential funding, but to make a big impact on global society. “It’s great to have additional funding, but that should be secondary. It’s not funding that should drive KTT,” he says.


However, Heuer’s most ambitious – and perhaps contentious – goals are arguably his aspirations to position CERN as a laboratory for the world. In some respects that process has already begun. “We are about to enter the terascale in particle physics,” he says. “The LHC will be the world machine for many years.”

A first priority will be to strengthen CERN’s intellectual contribution, so that it has a role beyond that of a service laboratory. “CERN has to provide a service,” Heuer explains. “But to provide the best results we need the best people and therefore there needs to be an intellectual challenge.” A first step will be to create a new centre at CERN for the analysis and interpretation of LHC data. The idea is to create close contact between staff and users, between experiment and theory. “It should be a focal point in addition to other centres,” says Heuer. In particular, he envisages “a centre that fosters open discussion between theorists and experimenters, where people can discuss and perhaps develop common tools”. He acknowledges that it will be a challenge, but as he says: “I want to challenge people.”

A global view

Out on the broader world stage, Heuer hopes to influence the current panorama in particle physics. He believes that it is important to combine the strengths of the particle-physics laboratories around the world and to co-ordinate the various programmes. In general, “we need breadth with coherence”, he says. Starting at home, there are plans for a workshop on “New Opportunities in the Physics Landscape at CERN” in May to look at the future for fixed-target experiments at CERN.

In this context, breadth also means to venture beyond the conventional boundaries of particle physics, particularly to overlaps with astroparticle physics and nuclear physics, where there are common aspects of experimental methods and theoretical ideas. “We need a closer dialogue with other communities,” he explains. “We should not separate fields too much. There are differences but we should emphasize the commonalities and aim for a ‘win–win’ scenario.”

Co-operation and collaboration are key words in Heuer’s view. “High-energy physics facilities are becoming larger and more expensive,” he points out, “and, to state it positively, funding is not increasing.” However, long-term stability in funding is going to be a necessary condition for the future survival of the field. “We need new approaches from funding agencies,” he says, “which look beyond national and regional boundaries.” One step could be for funding agencies to meet on a more global basis. Here the CERN Council provides a model that these agencies are already studying for, as Heuer notes, “it seems to work”.

More generally, keeping particle physics and CERN on track for a fulfilling future will no doubt require an organizational form that has yet to be defined. “We need to be open and inventive,” says Heuer. “A key word is partnership.” He argues that it will be crucial to retain excellent national and regional projects in addition to global initiatives to maintain expertise worldwide; for example, he believes that it is essential to have accelerator laboratories in all regions.

“May you live in interesting times” is supposedly a curse, but taken at face value it could also be a blessing. CERN, and Heuer as director-general, are certainly experiencing interesting times. The hope at CERN and in the wider particle-physics community must be that the future is not only interesting but global and bright.

Conference probes the dark side of the universe

During the past decade a consistent quantitative picture of the universe has emerged from a range of observations that include the microwave background, distant supernovae and the large-scale distribution of galaxies. In this “standard model” of the universe, normal baryonic matter contributes only 4.6% to the overall density; the remainder consists of dark components in the form of dark matter (23%) and dark energy (72%). The existence and dominance of dark energy is particularly unexpected and raises fundamental questions about the foundations of modern physics. Is dark energy merely Albert Einstein’s cosmological constant? Is it a new kind of field that evolves dynamically as the universe expands? Or is a new law of gravity needed?

In the search for answers to these questions, more than 250 participants, ranging from senior experts to young students, attended the 3rd Biennial Leopoldina Conference on Dark Energy held on 7–11 October 2008 at the Ludwig Maximilians University (LMU) in Munich. The meeting was organized jointly by the Bonn-Heidelberg-Munich Transregional Research Centre “The Dark Universe” and the German Academy of Sciences Leopoldina, with support from the Munich-based Excellence Cluster “Origin and Structure of the Universe”. The goal of the international symposium was to gain a better understanding of the nature of dark energy by bringing together observers, modellers and theoreticians from particle physics, astrophysics and cosmology to present and discuss their latest results and to explore possible future routes in the rapidly expanding field of dark-energy research.


Around 60 plenary talks at the conference were held in the central auditorium (Aula) of LMU Munich, with lively discussions following in poster sessions (where almost 100 posters were displayed) and during the breaks in the inner court of the university. There were fruitful exchanges between physicists engaged in a range of observations, from ground-based studies of supernovae to satellite probes of the cosmic microwave background (CMB), and theorists in search of possible explanations for the accelerated expansion of the universe, which was first reported in 1998. This acceleration has occurred in recent cosmic history, corresponding to redshifts of about z ≤ 1.

An accelerating expansion

Brian Schmidt of the Australian National University in Canberra gave the observational keynote speech. He led the High-z Supernova Search Team that presented the first convincing evidence for the existence of dark energy – which works against gravity to boost the expansion of the universe – almost simultaneously with the Supernova Cosmology Project led by Saul Perlmutter of the Lawrence Berkeley National Laboratory and the University of California at Berkeley. Adam Riess, a member of the High-z team, presented constraints on dark energy from the latest supernovae data, including those from the Hubble Space Telescope at redshift z > 1. This is where the acceleration becomes a deceleration, owing to the lessening impact of dark energy at earlier times (figure 1).

Both teams independently discovered the accelerating expansion of the universe by studying distant type Ia supernovae. They found that the light from these events is fainter than expected for a given expansion velocity, indicating that the supernovae are farther away than predicted (figure 2, p18). This implies that the expansion is not slowing under the influence of gravity – as might be expected – but is instead accelerating because of some uniformly distributed, gravitationally repulsive substance accounting for more than 70% of the mass-energy content of the universe – now known as dark energy.
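
The “fainter than expected” statement can be made quantitative with a short numerical sketch. The Python snippet below is purely illustrative (it is not the teams’ analysis code): it integrates the luminosity distance in a flat universe and compares a dark-energy model (Ωm = 0.3, ΩΛ = 0.7) with a matter-only one. The value assumed for H0 cancels in the magnitude difference.

```python
import math

C_KM_S = 299_792.458   # speed of light [km/s]
H0 = 70.0              # assumed Hubble constant [km/s/Mpc]; cancels in the magnitude difference

def luminosity_distance(z, omega_m, omega_lambda, n=100_000):
    """d_L = (1+z) (c/H0) * integral_0^z dz'/E(z') for a flat universe, by the midpoint rule."""
    dz = z / n
    integral = 0.0
    for i in range(n):
        zm = (i + 0.5) * dz
        e_of_z = math.sqrt(omega_m * (1.0 + zm) ** 3 + omega_lambda)
        integral += dz / e_of_z
    return (1.0 + z) * (C_KM_S / H0) * integral   # [Mpc]

z = 0.5
d_dark   = luminosity_distance(z, 0.3, 0.7)   # universe with dark energy
d_matter = luminosity_distance(z, 1.0, 0.0)   # matter-only (Einstein-de Sitter) universe
extra_dimming = 5.0 * math.log10(d_dark / d_matter)   # extra distance modulus in magnitudes

print(f"d_L with dark energy: {d_dark:.0f} Mpc")
print(f"d_L matter only:      {d_matter:.0f} Mpc")
print(f"Extra dimming at z = 0.5: {extra_dimming:.2f} mag")
```

In this toy comparison a supernova at z = 0.5 appears a few tenths of a magnitude dimmer in the dark-energy universe – the kind of systematic offset that both teams observed.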

Type Ia supernovae arise from runaway thermonuclear explosions following accretion onto a carbon/oxygen white dwarf star and, after calibration, have an almost uniform brightness. This makes them “standard candles”, suitable as tools for the precise measurement of astronomical distances. Wolfgang Hillebrandt of the Munich Max-Planck Institute for Astrophysics presented 3D simulations of type Ia supernova explosions. It is still a matter of debate how standard these so-called “standard candles” really are. Their colour–luminosity relationship is inconsistent with Milky Way-type dust and, as Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics mentioned, the role of dust is generally underestimated. Future supernova observations in the near infrared hold promise because, at these wavelengths, the extinction by dust is five times lower. Bruno Leibundgut of ESO said that infrared observations using the future James Webb Space Telescope will be crucial in solving the problem of reddening from dust.

As Schmidt pointed out, and others detailed in subsequent talks, measurements of the temperature fluctuations in the CMB provide independent support for the theory of an accelerating universe. These fluctuations were first observed by the Cosmic Background Explorer in 1992 and subsequently in 2000 by the Boomerang and MAXIMA balloon experiments. Since 2003 the Wilkinson Microwave Anisotropy Probe (WMAP) has observed the full-sky CMB with high resolution. Additional evidence came from the Sloan Digital Sky Survey and the 2-degree Field Survey. In 2005 they measured ripples in the distribution of galaxies that were imprinted by acoustic oscillations of the plasma when matter and radiation decoupled as protons and electrons combined to form hydrogen atoms, 380,000 years after the Big Bang. These are the “baryonic acoustic oscillations” (BAOs).

Dark-energy candidates

Eiichiro Komatsu of the Department of Astronomy at the University of Texas in Austin, lead author of WMAP’s paper on the cosmological interpretation of the five-year data, said that anything that can explain the observed luminosity distances of type Ia supernovae, as well as the angular-diameter distances in the CMB and BAO data, is “qualified for being called dark energy” (figure 3). Candidates include vacuum energy, modified gravity and an extreme inhomogeneity of space.


Although the latter approach was presented in several talks, the impression prevailed that the effects of dark energy are too large to be accounted for through spatial inhomogeneities and an accordingly adapted averaging procedure in general relativity. Komatsu – and many other speakers – clearly favours the Lambda-cold-dark-matter (ΛCDM) model, with a small cosmological constant Λ to account for the accelerated expansion. The dark-energy equation of state is usually taken to be w = p/ρ = –0.94 ± 0.1 (stat.) ± 0.1 (syst.), with a negative pressure, p; a varying w is not currently favoured by the data. Several speakers presented various versions of modified gravity. Roy Maartens of the University of Portsmouth in the UK acknowledged that ΛCDM is currently the best model. As an alternative he presented a braneworld scenario in which the vacuum energy does not gravitate and the acceleration arises from 5D effects. This scenario is, however, challenged by both geometric and structure-formation data.
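
For readers wondering why a negative pressure drives acceleration, the standard Friedmann acceleration equation (a textbook result, not tied to any particular talk at the meeting) makes the condition explicit:

```latex
\frac{\ddot{a}}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
  = -\frac{4\pi G}{3}\,\rho\,(1 + 3w),
\qquad
w \equiv \frac{p}{\rho c^{2}} .
```

The expansion therefore accelerates (ä > 0) whenever w < –1/3; a cosmological constant corresponds to w = –1, with which the quoted value of about –0.94 is consistent within its uncertainties.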

Theoretical keynote-speaker Christof Wetterich of Heidelberg University emphasized that the physical origin, the smallness and the present-day importance of the cosmological constant are poorly understood. In 1988, almost simultaneously with but independently from Bharat Ratra and James Peebles, he proposed the existence of a time-dependent scalar field, which gives rise to the concept of a dynamical dark energy and time-dependent fundamental “constants”, such as the fine-structure constant. Although observations may eventually decide between dynamical or static dark energy, this is not yet possible from the available data.


Yet another indication for the accelerated expansion comes from the investigation of the weak-lensing effect, as Matthias Bartelmann of Heidelberg University and others explained. This method of placing constraints on dark energy through its effect on the growth of structure in the universe relies on coherent distortions in the shapes of background galaxies by foreground mass structures, which include dark matter. The NASA-DOE Joint Dark Energy Mission (JDEM) is a space probe that will make use of this effect, in addition to taking BAO observations and distance and redshift measurements of more than 2000 type Ia supernovae a year. The project is now in the conceptual-design phase and has a target launch date of 2016. ESA’s corresponding project – the Dark UNiverse Explorer – is part of the planned Euclid mission, scheduled for launch in 2017. There were presentations on both missions.

The first major scientific results from the 10 m South Pole Telescope (SPT) initial survey were the highlight of the report by John Carlstrom, principal investigator for the project. The telescope is one of the first microwave telescopes that can take large-sky surveys with precision. It will be possible to use the resulting size-distribution pattern together with information from other telescopes to determine the strength of dark energy.


Carlstrom described the detection of four distant, massive clusters of galaxies in an initial analysis of SPT survey data – a first step towards a catalogue of thousands of galaxy clusters. The number of clusters as a function of time depends on the expansion rate, which leads back to dark energy. Three of the detected galaxy clusters were previously unknown systems. They are the first clusters detected in a Sunyaev–Zel’dovich (SZ) effect survey, and are the most significant SZ detections from a subset of the ongoing SPT survey. This shows that SZ surveys, and the SPT in particular, can be an effective means of finding galaxy clusters. The hope is for a catalogue of several thousand galaxy clusters in the southern sky by the end of 2011 – enough to rival the constraints on dark energy that are expected from the Euclid Mission and NASA’s JDEM.

The conference was lively and social activities enabled discussions outside the conference auditorium, particularly during the lunch breaks in nearby Munich restaurants. The presentations and discussions all demonstrated that the search for definite signatures and possible sources of the accelerated expansion of the universe continues to flourish and has an exciting future ahead. The results on supernovae and the CMB have led the way, but there is still much to learn. In his conference summary, Michael Turner of the University of Chicago emphasized that “cosmology has entered an era with large quantities of high-quality data”, and that the quest to understand dark energy will remain a grand scientific adventure. Future observational facilities – such as the Planck probe of the CMB, which is scheduled for launch around Easter 2009, the all-sky galaxy-cluster X-ray mission eROSITA, ESA’s Euclid and NASA’s JDEM – are all designed to produce unprecedented high-precision cosmology results that will shed new light on dark energy.

The light-pulse horizon

“According to the general theory of relativity, space without aether is unthinkable; for in such space there (not only) would be no propagation of light … But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time.” Albert Einstein, 1920.

Aether, the pure air breathed by gods, is not much in fashion in laboratories today. Physicists speak instead of the vacuum, in the context of quantum physics, and quantum vacuum fluctuations that fill space that is free from real matter. How light slips through these fluctuations was first studied in the 1930s by Werner Heisenberg, H Euler, W Kockel and Victor Weisskopf, and later by Julian Schwinger. Their work revealed the first “effective” interaction – the new and unexpected scattering of light on light, and of light on the background electromagnetic field. This interaction originates in quantum vacuum fluctuations into electron–positron pairs and makes the electric field unstable to pair production. Thus any macroscopic electric field is metastable because in principle, it can decay into particles.

The critical field strength for this instability, E₀, arises when a potential of V₀ = 2mc²/e = 1 MV (where m is the electron’s mass and e its charge) occurs over the electron’s Compton wavelength – that is, when E₀ = 1.3 × 10¹⁸ V/m. This leads to vacuum decay into pairs at timescales of less than attoseconds (10⁻¹⁸ s). The back-reaction of the particles that are produced screens the field source, giving an effective upper limit to the strength of the electric field. However, as the applied external field decreases in strength, its lifespan increases rapidly: for a field strength of E = 5 × 10¹⁶ V/m, the lifespan is similar to the age of the universe so that, for all practical purposes, present-day field configurations are stable.
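
As a cross-check of the quoted number, the critical (“Schwinger”) field can be written in terms of fundamental constants as E₀ = m²c³/(eħ). The short Python sketch below is an illustration added here, not part of the original article; it simply evaluates that expression.

```python
# Schwinger critical field E_0 = m^2 c^3 / (e * hbar), evaluated from SI constants
M_E  = 9.109_383_7015e-31   # electron mass [kg]
C    = 2.997_924_58e8       # speed of light [m/s]
Q_E  = 1.602_176_634e-19    # elementary charge [C]
HBAR = 1.054_571_817e-34    # reduced Planck constant [J s]

e_critical = M_E**2 * C**3 / (Q_E * HBAR)
print(f"Critical field strength: {e_critical:.2e} V/m")   # ~1.3e18 V/m, as quoted above
```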

The Compton wavelength of an electron is one three-millionth of a typical optical wavelength, so vacuum fluctuations do not greatly obstruct the propagation of light. Moreover, as Schwinger showed, a coherent ideal plane light-wave cannot scatter from itself (or be influenced by itself) no matter what the field intensity is. This is the only known form of light to which the vacuum is exactly transparent within the realm of quantum electrodynamics. For non-ideal plane waves, space–time translation-invariance symmetry and quantum coherence only partially protect the propagation of light pulses.


A laser pulse of several kilojoules and just a few wavelengths long is anything but a plane wave. Such a pulse pushes apart virtual electrons and positrons – in the near future, up to energies of many giga-electron-volts. If the virtual vacuum waves were to decohere, the light pulse would materialize into pairs. However, by quantum “magic” the deeply perturbed vacuum is restored after the pulse has passed. Thus a single pulse, even though it is not a plane wave, will at present-day intensities slip through the vacuum. Colliding light pulses provide a greater opportunity to interact with the vacuum structure because the magnetic field can be compensated and/or the electric wave-number doubled, thereby enhancing the light–vacuum interaction. Two superposed pulses do not so much interact with each other as interact together with the fluctuations in the vacuum.

High-intensity pulsed lasers also offer a radical approach to accelerating real particles to high energies. The electromagnetic fields of the laser pulses can be huge: current off-the-shelf, high-power lasers can deliver electric fields as great as 10 GeV/μm (10⁴ TeV/m). Metal will typically break down at fields of less than 100 MeV/m – a natural limit and the current standard for accelerator designs based on RF technology. The much higher fields available using lasers promise ultracompact accelerator technology, although the difficulties should not be underestimated. The shorter wavelengths involved imply far better control and precision than with RF acceleration. What helps to push laser technology ahead is the greater intensity of light that is available in comparison with RF. For this reason, laser-pulse technology is the most significant ingredient of laser acceleration, and great progress can now be achieved on timescales of a year.


This was not always so. Until the mid-1980s, efficient ultrashort pulse-amplification that would preserve the beam quality seemed to be unattainable, considering the damage caused to optical devices. A solution emerged in 1985 with the concept of chirped pulse amplification (CPA), in which a short pulse at an energy level as low as nanojoules is stretched by a large factor in time using dispersive elements, such as a pair of diffraction gratings (figure 1). This is possible because of the large number of Fourier frequencies that form the ultrashort pulse. Each frequency takes a different route and hence a different time to traverse the dispersive element.

Once the pulse has been stretched, the red part of the spectrum is ahead, followed by the blue. The stretching factor can be as large as 10⁶ yet the operation does not significantly change the total pulse energy. Consequently the pulse intensity drops by the same ratio, i.e. 10⁶, implying that the long pulse can be amplified safely, preserving the beam quality and laser components. This concept works so well that in modern CPA systems the pulse is stretched by a factor of 10⁶, amplified by 10¹², then compressed by a factor of 10⁶ back to its initial time structure. A nano- to microjoule primary pulse turns into a pulse of up to kilojoules comprising nearly 10²² photons of (sub)micron wavelength. In a nutshell, this pulse is a table-top particle accelerator. The interaction with matter of light pulses containing joules or even kilojoules of energy (compared with the less than microjoules of the most powerful particle accelerators) generates intense bursts of radiation (figure 2).
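
A quick order-of-magnitude check of the photon number quoted above: a pulse of energy E at wavelength λ carries E·λ/(hc) photons. The Python sketch below uses a kilojoule and one micron as assumed round numbers.

```python
H_PLANCK = 6.626_070_15e-34   # Planck constant [J s]
C_LIGHT  = 2.997_924_58e8     # speed of light [m/s]

pulse_energy = 1.0e3    # assumed pulse energy [J]: a kilojoule-class pulse
wavelength   = 1.0e-6   # assumed wavelength [m]: (sub)micron light

photon_energy = H_PLANCK * C_LIGHT / wavelength   # energy per photon [J]
n_photons     = pulse_energy / photon_energy      # photons per pulse

print(f"Photons per pulse: {n_photons:.1e}")   # ~5e21, i.e. close to 1e22
```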

Accelerating gradients

Nevertheless, laser particle acceleration has had its ups and downs. As the Woodward–Lawson theorem states, direct plane-wave laser acceleration of particles is not possible – you lose what you win in a perfect wave. However, if the intense pulse is so short that it “resonates” with the innate matter (plasma) oscillations, a huge accelerating gradient is possible. The energy imparted to the particle in each acceleration step can be directly derived from the wave amplitude of the pulse. The short-pulsed nature of the laser is also of great interest in this acceleration method, as is the possibility of using circularly polarized light.

Particle-acceleration schemes use lasers to generate wake waves in plasma in the relativistic regime for electrons in the optical field. Enrico Fermi once contemplated a 1 PeV (10⁹ MeV) accelerator girdling the Earth; laser acceleration may allow us to reach this energy on the scale of 1 km by employing a subpicosecond 15 MJ laser. The route to this goal would test ultrahigh-gradient acceleration theory at 10 TeV, which could be achievable with a laser of 15 kJ and a 50 fs pulse. Such an intense laser pulse is not yet available, but the proposed Extreme Light Infrastructure (ELI) should offer an opportunity to explore this domain. The peak power of ELI will be in the exawatt (10¹⁸ W) region – that is, 100,000 times the power of the global electricity grid – albeit only over several femtoseconds.
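
The scale of the jump is easy to quantify from the numbers already given in the article: reaching 1 PeV over 1 km implies an average gradient of 1 TeV/m, whereas the roughly 100 MeV/m breakdown limit quoted earlier for RF structures would require a machine thousands of kilometres long. A minimal sketch of that arithmetic:

```python
TARGET_ENERGY_EV  = 1.0e15    # 1 PeV, the energy Fermi contemplated
LASER_LENGTH_M    = 1.0e3     # 1 km, the laser-acceleration scale quoted in the text
RF_LIMIT_EV_PER_M = 1.0e8     # ~100 MeV/m, the RF breakdown limit quoted in the text

laser_gradient_ev_per_m = TARGET_ENERGY_EV / LASER_LENGTH_M        # required laser gradient
rf_length_km            = TARGET_ENERGY_EV / RF_LIMIT_EV_PER_M / 1.0e3

print(f"Required laser gradient: {laser_gradient_ev_per_m:.0e} eV/m (1 TeV/m)")
print(f"Length needed at the RF limit: {rf_length_km:,.0f} km")    # ~10,000 km
```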


Are there other ways to go from the laser pulse to an intense particle beam? If beam quality is not of great concern, it is possible to exploit the action of the pulse on a foil that is only a fraction of the wavelength thick. At the Trident Laser Facility at Los Alamos National Laboratory, Manuel Hegelich and his team shoot a high-contrast (no preceding light) pulse onto a thin, carbon-diamond nanofoil. Such a pulse is not reflected by the “pre-plasma” formed on the foil but propagates through the foil, where it picks up electrons. The cloud of relativistic, wave-riding electrons generates longitudinal electrical fields, which cause carbon ions to follow electrons, creating two “beams”. At ELI such a pulse–foil interaction could provide a source of high-energy relativistic heavy ions, because the pulse intensity that could be achieved would permit direct acceleration of ions in a relativistic regime (figure 3).

The plasma cloud emerging from the foil could form gamma-ray beams suitable for photonuclear physics. Einstein observed that a relativistic “flying mirror” (in this case the plasma) would “square” the relativistic Doppler effect, leading to a boost of photon energy, ω = 4γ²ω₀, where ω₀ is the original energy and γ the Lorentz factor (figure 4). This effect has been demonstrated, both by using the laser wakefield created on the surface of a solid and by using a relativistically moving plasma of thin foil propelled by the laser beam, from which another laser beam is reflected. It should soon lead to compact coherent X-ray and even gamma-ray light sources. Dietrich Habs and colleagues at the Munich-Centre for Advanced Photonics are pursuing an initial design effort. The gamma rays produced in this way are not only of high energy but also compressed by a factor 1/γ² into an ultrashort pulse. The coherent pulse contains increased electromagnetic fields, so the technology leads to ultrahigh electrical-field strengths where the decay of the vacuum becomes observable.

It appears that coherent reflection of a femtosecond pulse is possible from a flying mirror of dense plasma with γ = 10,000 – that is, from an electron cloud moving with an energy of 5 GeV. The resulting 400 MeV photon pulse would also be compressed from femtoseconds to 10⁻²³ s. Such a pulse could, in principle, be focused into a femtometre-scale volume, the size of a nucleon. On such a small distance scale, 10 kJ would be enough to reach temperatures in the 150 GeV range, which should allow the study of the melting of the vacuum structure of the Higgs field and the electroweak phase transition. Clearly this is on the far horizon, but there are other distance/temperature scales of interest on the journey there. Such a system would allow studies of electromagnetic plasma at megaelectron-volt temperatures and exploration of the quark–gluon plasma on a space–time scale at least 1000 times as great as can currently be achieved. This would be truly recreating a macroscopic domain of the early universe in the laboratory.
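
The numbers here follow directly from the flying-mirror relation ω = 4γ²ω₀. The Python sketch below assumes an incident optical photon of about 1 eV (a value not stated explicitly in the text) and a one-femtosecond incident pulse, and reproduces both the ~400 MeV photon energy and the compression to about 10⁻²³ s.

```python
GAMMA        = 1.0e4     # Lorentz factor of the plasma mirror (gamma = 10,000)
PHOTON_IN_EV = 1.0       # assumed incident optical photon energy [eV]
PULSE_IN_S   = 1.0e-15   # assumed incident pulse duration [s]: one femtosecond

photon_out_ev = 4.0 * GAMMA**2 * PHOTON_IN_EV   # "squared" relativistic Doppler boost
pulse_out_s   = PULSE_IN_S / GAMMA**2           # temporal compression by 1/gamma^2

print(f"Reflected photon energy: {photon_out_ev / 1.0e6:.0f} MeV")   # ~400 MeV
print(f"Reflected pulse length:  {pulse_out_s:.0e} s")               # ~1e-23 s
```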


Another fundamentally important aspect of the science possible with the extremely high fields in lasers concerns the immense acceleration, a, that electrons experience in the electromagnetic field of the pulses (e.g. up to a = 10³⁰ cm/s² for an electron in ELI). According to the equivalence principle, this corresponds to an equivalent external gravitational field, with its own event horizon. The effect for the accelerated electron is that the distance, d = c²/a, to this event horizon becomes as short as the electron’s Compton wavelength, in which limit experiments can probe the behaviour of quantum particles in the realm of strong gravity. Work is under way to demonstrate Unruh radiation, a cousin of Hawking radiation. (Hawking radiation is thermal radiation in strong gravity, while Unruh radiation arises in the presence of strong acceleration.) Such experiments would allow the study of the extent and validity of the special and general theories of relativity, as well as test the equivalence principle in the quantum regime.
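
A rough check of the horizon-distance argument, using the acceleration quoted above: for a = 10³⁰ cm/s² the distance d = c²/a comes out at the picometre scale, within an order of magnitude of the electron’s Compton wavelength. A minimal sketch:

```python
C_LIGHT  = 2.997_924_58e8       # speed of light [m/s]
ACCEL    = 1.0e30 * 1.0e-2      # 1e30 cm/s^2 from the text, converted to m/s^2
H_PLANCK = 6.626_070_15e-34     # Planck constant [J s]
M_E      = 9.109_383_7015e-31   # electron mass [kg]

d_horizon = C_LIGHT**2 / ACCEL                  # distance to the equivalent event horizon
lambda_compton = H_PLANCK / (M_E * C_LIGHT)     # electron Compton wavelength

print(f"Horizon distance:            {d_horizon:.1e} m")        # ~9e-12 m
print(f"Electron Compton wavelength: {lambda_compton:.1e} m")   # ~2.4e-12 m
```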

To conclude, high-intensity pulsed lasers, and in particular the proposed ELI facility, offer a novel approach to particle acceleration and widen the range of fundamental physics questions that can be studied (Mourou et al. 2006). Light pulses will be able to produce synchronized high-energy radiation and pulses of elementary particles with extremely short time structures – below the level of attoseconds. These unique characteristics, which are unattainable by any other means, could be combined to offer a new paradigm for the exploration of the structure of the vacuum and fundamental interactions. Ultra-intense light pulses will also address original fundamental questions, such as how light can propagate in a vacuum and how the vacuum can define the speed of light. By extension it will also touch on the question of how the vacuum can define the mass of all elementary particles. The unique features of ELI – its high field-strength, high energy, ultrashort time structure and impeccable synchronization – herald the entry of pulsed high-intensity lasers into high-energy physics. This is a new scientific tool with a discovery potential akin to what lay on the horizon of conventional accelerator technology in the mid-20th century.
