Radioactive beam research notches up 50 years

If you have access to a high-energy accelerator or a nuclear reactor it is relatively easy to produce a lot of radioactive nuclei. It is a much harder job to include selectivity so that just one specific isotope among those produced reaches detectors. This is particularly true for short-lived nuclei with half-lives of a second or less, or for nuclei with very low abundances. Spatial separation of production and detection sites is necessary, and the practical solution is to bring nuclei of interest from one site to the other in the form of a radioactive beam. Following a pioneering experiment carried out in 1951 at the Niels Bohr Institute (NBI) in Copenhagen, several ways have been developed to do this. A symposium celebrating the NBI experiment, and taking the opportunity to assess current state-of-the-art techniques, was held at the Royal Danish Academy in Copenhagen in November 2001.

Otto Kofoed-Hansen and Karl Ove Nielsen were the authors of that pioneering 1951 experiment at NBI. Their basic intent was to measure the recoil momentum resulting from the emission of a neutrino in beta decay. The best way to do this is with noble gas atoms, so Kofoed-Hansen and Nielsen set out to collect neutron-rich krypton isotopes produced in the fission of uranium. Their technique was similar in principle to the converter technique currently being suggested for use in the next generation of radioactive ion-beam facilities. Deuterons from the NBI cyclotron generated neutrons in an internal target. These bombarded an external uranium oxide target in which baking powder was mixed to create a gas flow out of the target. The krypton atoms produced flowed towards a nearby isotope separator. They were ionized, mass separated and then collected, allowing the decay measurements to be done.

Soon after, the NBI cyclotron was moved to a new area and the experiment closed. The idea was taken up again a decade later and finally led to the creation of the ISOLDE facility at CERN where experiments began in 1967. Nielsen was again actively involved in the start-up phase. Both he and Kofoed-Hansen, who continued in weak interaction physics, were active at CERN for many years.

Radioactive beam techniques

One of the two major techniques used for producing radioactive beams today, isotope separation online (ISOL), is a direct descendant of the NBI experiment. At the symposium, Juha Äystö of CERN and the University of Jyväskylä discussed the widespread use of ISOL around the world today. In an ISOL system, the nuclei produced are thermalized and extracted through an ion guide if they are still in an ionized state, or through an ion source if they are not. Target and ion source technology is a key element of ISOL systems, as Helge Ravn of CERN pointed out. Current developments are leading towards ever more selective systems and targets capable of coping with very high power. The hope is to go from present generation targets, which can take up to 30 kW, to targets that are able to withstand up to or above 1 MW. The converter method with neutrons offers one possible approach. An alternative was discussed by Alex Mueller of Orsay, who suggested using an electron beam converted into bremsstrahlung photons to induce photofission. Conventional fission in a high-flux reactor is also being explored in the Munich Accelerator for Fission Fragments (MAFF) project presented by Dietrich Habs of Garching.

The other major technique for production of a radioactive beam is separation in flight. At low energies, fusion (and transfer) reactions dominate – this is the way superheavy elements are made – but at energies above 100 MeV per nucleon, projectile fragmentation and fission are the important reaction mechanisms. The rather thick production targets currently in use lead to very efficient use of the primary beam, explained Brad Sherrill of Michigan State University, but the radioactive beam produced is quite extended in phase space.

Many experiments can still be done, but stopping in a gas cell and reacceleration to achieve better beam quality is being investigated at several laboratories, as outlined by Piet van Duppen of Leuven. This is one of the key ingredients in the US Rare Isotope Accelerator (RIA) project discussed by Jerry Nolen of Argonne (see Climbing out of the nuclear valley). Two major projects to upgrade existing fragmentation facilities were discussed. Isao Tanihata of RIKEN, the Japanese Institute of Physical and Chemical Research, presented his laboratory’s radioactive beam factory upgrade, and Walter Henning of GSI in Darmstadt presented his laboratory’s upgrade project.

CERN workshop marks a transition

Quark mixing is a topic that has attracted intense scrutiny ever since it was first proposed in the 1960s. Recently, CERN’s Large Electron-Positron collider (LEP) has made important measurements concerning mixing, particularly of hadrons containing b quarks (see LEP helps fill CKM matrix). Properties such as B hadron lifetimes, the oscillations between particles and antiparticles for neutral B mesons, and the couplings of the b quark to the other quark flavours were all studied at LEP, and the results have had a significant impact on our understanding of quark mixing.

The LEP experiments were the first to observe the time dependence of oscillations of the neutral B0 meson and measure its oscillation frequency precisely. However, the data used for most of these B physics analyses were taken a number of years ago. The LEP detectors have now been dismantled and the collaborations are reaching the end of their studies. The more complex analyses (for example, the search for high-frequency oscillations of neutral B0s mesons, which contain a strange quark) have just been completed. Meanwhile, the B-factories at SLAC in California, US, and KEK in Japan (see CP violation enters a new era) are dedicated to studying this physics and are producing enormous samples of B mesons. In particular, they have recently demonstrated CP violation in the B0 meson system. Experiments at the upgraded Tevatron at Fermilab in Chicago, US, also have a rich B physics programme and are starting to take data.

To mark this transition, a workshop on the subject of quark mixing was held at CERN in February. Its aim was to review the current status of theory and experiment and to provide an opportunity for the fruitful exchange of ideas between the theoretical and experimental communities. The meeting’s title – workshop on the CKM unitarity triangle – refers to the Cabibbo-Kobayashi-Maskawa (CKM) matrix that describes quark mixing in the Standard Model.

The concept of quark mixing was first developed by Cabibbo, who introduced a single mixing angle to describe transitions between up, down and strange quarks. With the discovery of the charm quark, this was extended to a matrix describing mixing between quarks of the first two generations. Kobayashi and Maskawa then generalized mixing for six quark flavours, described by the three-by-three CKM matrix. This gives the couplings Vij between the up-type quarks i = u,c,t and the down-type quarks j = d,s,b. The matrix should be unitary (a weakly decaying quark should transform into one of the other known quark flavours) and this unitarity condition leads to relationships between the elements of the matrix. The nine complex matrix elements can be described by four parameters – three mixing angles and a phase (one mixing angle and no phase would be required if there were only four quark flavours). The introduction of a phase is crucial, as Kobayashi and Maskawa recognized. That it allows for CP violation – matter-antimatter asymmetry – in the Standard Model led them to propose that mixing occurs between three generations of quarks – long before the members of the last generation (b and t) were discovered.
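For orientation, the couplings are conventionally arranged as a three-by-three matrix acting between the quark generations, with unitarity expressed as V†V = 1:

```latex
V_{\mathrm{CKM}} =
\begin{pmatrix}
V_{ud} & V_{us} & V_{ub} \\
V_{cd} & V_{cs} & V_{cb} \\
V_{td} & V_{ts} & V_{tb}
\end{pmatrix},
\qquad V^{\dagger} V = 1 .
```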

The unitarity triangle

A widely used parameterization of the quark mixing matrix, developed by Lincoln Wolfenstein of Carnegie Mellon University, US, expresses the elements as an expansion in powers of λ – the sine of the Cabibbo angle – which has a known value of 0.22 from kaon and hyperon decays. The remaining three parameters required to describe the mixing are denoted as A, ρ and η. In this parameterization, the element Vcb is given by Aλ² and has been well measured from the semileptonic decays of hadrons containing the b quark, leading to a value for A of 0.84. The values of the two parameters ρ and η remain to be determined.
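In this scheme the matrix takes the approximate form (the standard expansion, valid up to corrections of order λ⁴):

```latex
V \simeq
\begin{pmatrix}
1 - \lambda^{2}/2 & \lambda & A\lambda^{3}(\rho - i\eta) \\
-\lambda & 1 - \lambda^{2}/2 & A\lambda^{2} \\
A\lambda^{3}(1 - \rho - i\eta) & -A\lambda^{2} & 1
\end{pmatrix}
+ \mathcal{O}(\lambda^{4}),
```

in which the smallest, complex elements carry the physics of CP violation.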

One of the unitarity conditions of the CKM matrix is Vud V*ub + Vcd V*cb + Vtd V*tb = 0. Dividing by Vcd V*cb leads to a triangular relationship in the ρ, η plane, where the length of the base is unity. The left-hand side of the triangle is proportional to Vub/Vcb, which can be measured by studying semileptonic B meson decays into charmless final states, and the right-hand side is proportional to Vtd/Vcb, which can be extracted from the B0 oscillation frequency.
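Written out explicitly, the rescaled relation and the lengths of the two non-trivial sides are (to leading order in λ):

```latex
\frac{V_{ud}V^{*}_{ub}}{V_{cd}V^{*}_{cb}} + 1 + \frac{V_{td}V^{*}_{tb}}{V_{cd}V^{*}_{cb}} = 0,
\qquad
R_{u} = \left|\frac{V_{ud}V^{*}_{ub}}{V_{cd}V^{*}_{cb}}\right| = \sqrt{\rho^{2} + \eta^{2}},
\qquad
R_{t} = \left|\frac{V_{td}V^{*}_{tb}}{V_{cd}V^{*}_{cb}}\right| = \sqrt{(1-\rho)^{2} + \eta^{2}} .
```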

This triangle is known as the unitarity triangle, and it neatly summarizes the state of knowledge of this physics. Its angles correspond to the phases of the matrix elements involved, and are directly related to the CP asymmetries that are predicted to occur in B decays. In particular, the time-dependent asymmetry between the decays of B0 and anti-B0 mesons to a J/ψ and a Ks is expected to have the form sin2β sin(Δmd t), where Δmd is the B0 oscillation frequency and β is the angle between the right-hand side of the triangle and its base (2β being the phase of B0 mixing). This is the channel that the B-factories have used to observe CP violation in the B0 system. Many other decay modes (including those for B0s mesons) can be used to measure different CP asymmetries, and thus determine the other angles of the triangle. However, that programme of measurements lies in the future, in particular at the dedicated B physics experiments at hadron colliders – LHCb at CERN’s LHC, and BTeV at Fermilab’s Tevatron.
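Explicitly, the asymmetry referred to above is defined as a difference of time-dependent decay rates; in the Standard Model, neglecting small corrections,

```latex
A_{CP}(t)
= \frac{\Gamma(\bar{B}^{0}(t) \to J/\psi\, K_{S}) - \Gamma(B^{0}(t) \to J/\psi\, K_{S})}
       {\Gamma(\bar{B}^{0}(t) \to J/\psi\, K_{S}) + \Gamma(B^{0}(t) \to J/\psi\, K_{S})}
= \sin 2\beta \, \sin(\Delta m_{d}\, t) .
```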

First workshop

The workshop at CERN was intended to be the first of two meetings and concentrated on measurements of the sides of the triangle using existing data. The second workshop will cast an eye towards future B physics to be carried out at the LHC. The meeting was organized by Achille Stocchi and Marco Battaglia from the DELPHI experiment at LEP, who have played an important role in the working groups that were set up by the LEP collaborations to supervise the combination of their results.

The work of these groups has been widely appreciated and their data have been used by rapporteurs at conferences and featured in the Particle Data Group (PDG) review of particle properties. The groups have since expanded to include representatives from the SLD experiment at SLAC, CLEO at Cornell and Fermilab’s CDF, and now provide world averages. One of the aims of the workshop was to discuss how the work of these LEP-based groups should be taken over by representatives of the experiments that are now leading the field, in particular the B-factories. Weiming Yao from the PDG chaired a dedicated session on this issue, with discussion involving representatives from all of the experiments in a very constructive atmosphere. The B-factory experiments, BaBar at SLAC and Belle at KEK, expressed their commitment to taking the lead in future B physics averaging.

Relating measured quantities to the CKM matrix elements involves understanding how the quarks are bound into hadrons, since the couplings refer to quarks but the observed particles are hadrons. This is the realm of QCD, but in the non-perturbative regime in which an exact solution is not possible. A key theoretical framework in the field of unitarity triangle studies is lattice QCD, in which the equations can be solved by discretizing space-time onto a network of points. The accuracy of the predictions is then limited by the computing power available, which limits the size of the lattice and the spacing of its points; in the limit of zero lattice spacing the exact solution of QCD would be recovered. In addition, to save on computing resources, the pair production of light quarks from the vacuum has usually been neglected (the so-called quenched approximation). Lattice QCD has the attraction that most of the theoretical parameters required for unitarity triangle studies have been calculated in a consistent framework. First results without quenching, using two light quark flavours, are now becoming available. These can be used to estimate the effect of the quenched approximation, which appears to be encouragingly small for most variables studied. Discussion started at the workshop on creating a “lattice data group” with representatives from the numerous groups pursuing lattice QCD. This would aim to provide a consensus on the best use of lattice results, combining them where relevant along the lines of the PDG for experimental results.
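The discretization can be pictured in one line: derivatives of the quark fields are replaced by finite differences over the lattice spacing a (a schematic sketch only; a real QCD action also needs gauge-link variables to remain gauge invariant):

```latex
\partial_{\mu} \psi(x) \;\longrightarrow\; \frac{\psi(x + a\hat{\mu}) - \psi(x - a\hat{\mu})}{2a},
```

with the continuum theory recovered in the limit a → 0.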

The workshop was based around three working groups, the first of which concentrated on the left-hand side of the triangle and the second on the right-hand side. The third working group studied the thorny issue of how all the various measurements that are relevant to the unitarity triangle should best be combined. Converting the measured quantities into constraints on ρ and η is only possible with the guidance of theory, with corresponding uncertainties. How best to deal with theoretical errors has proved controversial in the past, because they do not generally follow the Gaussian distribution that straightforward error combination relies on.

There are two main schools of thought concerning the fit for the apex of the triangle – those following Bayesian statistics and those following Frequentist statistics. It was a major success of the workshop to bring proponents of these two approaches together and compare in detail the results when their fitting programs are applied to the same inputs. The main difference is that when an input parameter has a quoted statistical (Gaussian) component to its uncertainty and a systematic component that is taken to be flat between certain limits, the Bayesian approach convolves the two errors, whereas the Frequentist approach corresponds to adding them linearly, giving a more conservative result. If the two fitting programs are fed with the same input likelihoods, the allowed regions that result are very similar. The issue therefore becomes not one of which statistical school to believe, but rather of how to ensure that the theoretical uncertainties are correctly handled. The latest result of the unitarity triangle fit from the group using the Bayesian approach – performed during the workshop itself using all the agreed inputs from the other working groups – is shown in figure 1. This fit gives an indirect measurement of the angle β, corresponding to sin2β lying within the range 0.57-0.81 at 95% confidence level. It is in excellent agreement with the direct measurement from the B-factories, which is indicated by the coloured bands.
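The contrast between the two treatments of a Gaussian statistical error combined with a flat theory error can be sketched with a toy Monte Carlo. The numbers and the simple percentile comparison below are purely illustrative stand-ins for the real fitting codes:

```python
# Toy comparison of the two error treatments described above, for an input
# with a Gaussian (statistical) error and a flat (theory) error.
import numpy as np

rng = np.random.default_rng(0)
sigma_stat, half_width = 1.0, 1.0     # illustrative error sizes
n = 1_000_000

# Bayesian-style: convolve the Gaussian with the flat distribution.
convolved = (rng.normal(0, sigma_stat, n)
             + rng.uniform(-half_width, half_width, n))
print("convolution, 95% interval:", np.percentile(convolved, [2.5, 97.5]))

# Frequentist-style: add the Gaussian 95% half-width and the flat
# half-width linearly, which yields a wider (more conservative) interval.
linear = 1.96 * sigma_stat + half_width
print("linear addition, 95% interval:", [-linear, linear])
```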

The workshop was very well attended, with more than 200 participants and a strong international mix of delegates. The four days of meetings involved lively discussion, with sessions running late into the evening, and the summary session took place on a Saturday morning. The success of this first meeting has led to the proposal for the next meeting to be held by the Institute of Particle Physics Phenomenology in Durham, UK, in conjunction with the next in their series of workshops on heavy-flavour physics and CP violation. The second meeting will therefore take place in either Durham or the nearby Lake District next spring.

CP violation enters a new era

New results show that CP violation in the decays of particles containing heavy quarks is becoming precision physics. These studies could soon lead to new insights into quark physics, in particular the mystery of how a universe apparently composed only of matter emerged from a Big Bang which initially produced matter and antimatter in equal amounts.

What physicists call CP violation ultimately distinguishes matter from antimatter. If physics is CP-symmetric, the behaviour of left-handed particles is the same as that of right-handed antiparticles, and vice versa. Since 1964 physicists have known that this convenient symmetry does not quite work. The difference between matter and antimatter is not simply a case of human convention.

Why CP violation happens is still a mystery, but one early outcome – when only three types of quark were known – was the realization that whatever the explanation of CP violation, it requires at least six different types of quark. In a world containing only three or even four kinds of quark, there would be no way of distinguishing matter from antimatter.

For some 35 years, CP violation experiments were confined to the study of neutral kaon particles (containing the strange quark). Only this physics arena provided the right conditions for seeing the small effect (a few parts per thousand). However, in principle CP violation could also be visible with any quark-based neutral particle which is difficult to distinguish from its antiparticle.

Accumulated results suggested that the best place to look would be the decays of the neutral B particles, containing the fifth or “b” quark. New high-energy B-factories – the electron-positron colliders PEP-II and KEKB – were constructed at the Stanford Linear Accelerator Center (SLAC) and the Japanese KEK laboratory respectively, to provide optimal conditions for this new physics. At these machines, major new experiments – Belle at KEKB and BaBar at PEP-II – provided their first tentative results in 2000, which were updated last year (see How CP violation came to B).

Between them, these two experiments have gone on to accumulate some 100 million examples of B pairs. For B production, the machines are tuned to the energy region at and around the Υ(4S) resonance, with the best CP violation hunting ground being the decay of a neutral B particle into a J/psi and a neutral kaon.
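A rough counting shows how such samples accumulate. Taking σ(e⁺e⁻ → Υ(4S) → BB̄) ≈ 1 nb, a round figure assumed here purely for illustration, the number of pairs follows from the integrated luminosity:

```latex
N_{B\bar{B}} = \sigma \int L \, dt \;\approx\; 1\,\mathrm{nb} \times 100\,\mathrm{fb}^{-1} = 10^{8},
```

so the quoted 100 million pairs correspond to roughly 100 fb⁻¹ shared between the two machines.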

The underlying quark transitions responsible for CP violation are described by a triangle in a special parameter space. The larger the area of this triangle, the greater the CP violation, and experiments aim to measure the angles and sides of the triangle.
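This statement about the area can be made precise. Twice the area of the (un-rescaled) triangle equals the Jarlskog invariant J, a parameterization-independent measure of CP violation; in Wolfenstein parameters, to leading order,

```latex
J = \mathrm{Im}\left(V_{us} V_{cb} V^{*}_{ub} V^{*}_{cs}\right) \approx A^{2} \lambda^{6} \eta .
```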

The first angle to be measured, β, is conventionally expressed as sin2β. Now BaBar sees sin2β as 0.75 ± 0.09 ± 0.04 (previously 0.34 ± 0.20 ± 0.05), and Belle sees 0.82 ± 0.12 ± 0.05 (previously 0.58 +0.32/−0.34 +0.09/−0.10). B particle CP violation definitely happens, and the effect is larger than the earlier results suggested.
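As a simple illustration of how such numbers combine, a naive inverse-variance average of the two new measurements (treating the quoted statistical and systematic errors as uncorrelated and Gaussian – an assumption; the official averages are more careful) gives sin2β ≈ 0.78 ± 0.08:

```python
# Naive weighted average of the two sin(2*beta) measurements quoted above.
def combine(measurements):
    """Inverse-variance weighted mean of (value, stat, syst) tuples."""
    weighted = []
    for value, stat, syst in measurements:
        sigma2 = stat**2 + syst**2          # total variance per experiment
        weighted.append((value, 1.0 / sigma2))
    norm = sum(w for _, w in weighted)
    mean = sum(v * w for v, w in weighted) / norm
    return mean, norm ** -0.5

average, uncertainty = combine([
    (0.75, 0.09, 0.04),   # BaBar
    (0.82, 0.12, 0.05),   # Belle
])
print(f"sin(2*beta) = {average:.2f} +/- {uncertainty:.2f}")
```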

In addition, Belle at KEKB has seen CP violation in the decay of neutral B mesons into two charged pions – the decay rate of the neutral B particle via this route is not the same as that of the neutral B antiparticle. This is analogous to the signal seen at Brookhaven in 1964 in the decays of neutral kaons which provided the first evidence for CP violation.

Vital to these new measurements is the performance of the electron-positron colliders at KEK and SLAC. This performance is usually expressed as luminosity, which is proportional to the rate of electron-positron collisions. KEKB’s luminosity has reached a remarkable 7.25 × 10³³ cm⁻² s⁻¹, not far from the ambitious 10³⁴ cm⁻² s⁻¹ design luminosity, but already the highest collider luminosity ever reached anywhere. PEP-II has exceeded 4.6 × 10³³ cm⁻² s⁻¹. Improving on these performances requires hard work and ingenuity, particularly as they are already not far from fundamental limitations (the “beam-beam limit”), but machine physicists remain optimistic when such creditable performances have been attained so soon after commissioning such complicated new machines.

The runs are continuing, and the results will probably be updated this summer.

Snake charming induces spin-flip

As the venerable Cooler Ring at the Indiana University Cyclotron Facility in the US enters its final year, a Michigan-led team headed by Alan Krisch has notched up a polarized-beam milestone using the ring’s unique accelerator-physics capabilities. The team, which includes physicists from Michigan and Indiana in the US, IHEP-Protvino in Russia and KEK in Japan, has charmed a string of magnets known as a Siberian snake to spin-flip stored protons some 400 times with very little polarization loss.

Siberian snakes were invented at Novosibirsk in the late 1970s. A Siberian snake is a chain of quadrupoles and solenoids that can overcome any other spin-changing device or random spin-changing effects in any storage ring.

To achieve this latest result, however, the spin-flip team used a small RF-dipole magnet to charm the snake into occasionally surrendering its dominance of all spin motion. This could be done because the RF-dipole used was even more closely matched to the spin motion’s frequency than the Siberian snake.

The RF dipole’s polarization loss is about 12% in 400 spin-flips, giving a spin-flip efficiency of more than 99.9%. This could allow many spin-flips of polarized proton or electron beams while they are stored for billions of turns, offering the promise of greatly reduced systematic errors in spin-asymmetry experiments. This capability could be important for scattering experiments in storage rings that already have Siberian snakes.
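These two numbers are consistent, as a back-of-envelope check shows: if a fraction 0.88 of the polarization survives n = 400 flips, each costing the same factor ε, then

```latex
\varepsilon = 0.88^{1/400} \approx 0.99968,
```

i.e. an efficiency of about 99.97% per flip, comfortably above 99.9%.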

The same RF-dipole magnet could give a 99.9% spin-flip efficiency to the polarized protons accelerated at Brookhaven’s RHIC (p8) and perhaps one day at CERN’s LHC, DESY’s HERA, or Fermilab’s Tevatron. This is possible because an RF-dipole’s transverse magnetic field is essentially invariant under the Lorentz transformation from its stationary rest frame to any highly relativistic proton or electron rest frame, in which each spin in the beam seems to receive the magnetic field’s instructions.

The RF dipole was built by Michigan’s Spin Physics Center in the late 1980s to serve as an injection-kicker to the Cooler Ring. Recycled in 1999, the dipole was connected to a small RF voltage supply, which gave it a weak transverse RF magnetic field. This allowed some spin-flipping with the Siberian snake in the Cooler Ring, but efficiency was very low and most of the original polarization was lost after a few spin-flips.

The team then tried to improve the spin-flip efficiency by increasing the RF dipole’s strength. The first step in this direction was taken by student Boris Blinov, who devised a way to increase the strength of the RF dipole to around 220 gauss cm while increasing its turn-on time from microseconds to milliseconds. This removed the high-frequency transient fields that had previously destroyed the beam and led to a spin-flip efficiency of 99.63 ± 0.05%, measured in May 2001.

Then, in November and December 2001, the group used a much stronger voltage supply to increase the RF dipole’s strength to about 560 gauss cm. This adjustment led to a 99.93 ± 0.02% spin-flip efficiency in a stored beam of 120 MeV protons.

The spin-flip team will spend its final year at the Cooler Ring on other experiments, since 99.93% seems sufficient. Possible projects include trying to spin-flip polarized deuterons and studying third- and fourth-order snake depolarizing resonances. These are hard to find at low-energy storage rings, but may be all too easy to find in proton rings such as Brookhaven’s RHIC and DESY’s HERA.

Mad about physics in Antananarivo

The Malagasy capital of Antananarivo welcomed HEP-MAD ’01, Madagascar’s first conference on high-energy physics, at the end of September last year. The conference was organized by the Montpellier branch of France’s National Centre for Scientific Research (CNRS) and the town’s Malagasy Cultural Association, along with the University of Antananarivo and Madagascar’s National Institute of Nuclear Science and Technology.

Topics covered included introductory reviews of astrophysics, the status of electroweak theories, Higgs searches and precision tests of the Standard Model. Results on CP violation from CERN’s NA48 experiment were discussed, along with the most recent results from the BaBar and Belle experiments at the B-factories at SLAC in the US and KEK in Japan. These were followed by theoretical talks on heavy-quark decays and non-perturbative quantum chromodynamics (QCD). There were also discussions of QCD results from CERN’s LEP and DESY’s HERA colliders. Pre-conference presentations covered applications of the field in the environmental and medical domains.

The aim of the HEP-MAD conference was to stimulate the creation of a high-energy physics institute in Antananarivo, an aim that was successfully achieved. HEP-MAD will now be held every two years, next in September 2003. The proceedings of the conference will be published by World Scientific.

Superluminal phenomena shed new light on time

Quantum effects such as vacuum polarization in gravitational fields appear to permit “superluminal” photon propagation and give a fascinating new perspective on our understanding of time and causality in the microworld. To understand these new developments, we first need to question the origin of the received wisdom that superluminal motion necessarily leads to unacceptable causal paradoxes. In special relativity, the problem arises because while all observers agree about the time ordering of events linked by a subluminal signal, for a superluminal signal different observers disagree on whether the signal was received after or before it was emitted. In other words, viewed in a certain class of inertial frames, a superluminal signal travels backwards in time (figure 1). However, by itself this is not sufficient to establish the familiar causal paradoxes associated with time travel. A genuine causal paradox requires a signal to be sent from the emitter to a point in its own past light-cone – a time-reversed return path must also be possible. In special relativity, such a return path is guaranteed by the existence of global inertial frames. Crucially, a causal paradox requires both of these conditions to be met.

This is the loophole that may allow the possibility of superluminal propagation in general relativity. Einstein’s theory of gravity is based on the weak equivalence principle, which states that at each point in space-time there exists a local inertial frame – in other words a freely falling observer does not feel a gravitational force. This principle leads directly to the description of gravity by a curved space-time that is locally flat. In the conventional theory, however, this is supplemented by a further simplifying assumption, known as the strong equivalence principle (SEP), which requires that dynamical laws are the same in each of these local inertial frames.

While the SEP may be consistently imposed in classical physics, somewhat surprisingly it is violated in quantum theory (see Further information). In quantum electrodynamics (QED), Feynman diagrams involving a virtual electron-positron pair influence the photon propagator. This gives the photon an effective size of the order of the Compton wavelength of the electron. If the space-time curvature has a comparable scale, then an effective photon-gravity interaction is induced. This depends explicitly on the curvature, in violation of the SEP. The photon velocity is changed and light no longer follows the shortest possible path. Moreover, if the space-time is anisotropic, this change can depend on the photon’s polarization as well as direction. This is the quantum phenomenon of “gravitational birefringence”. The effective light-cones for the propagation of photons in gravitational fields no longer coincide with the geometrical light-cones fixed by the local Lorentz invariance of space-time, but depend explicitly on the local curvature.

Superluminal photons

Drummond and Hathrell first described this phenomenon in a seminal paper in 1980. But a further surprise was in store. When they computed the quantum modifications to the light-cones, they found that in many cases the photon velocity was superluminal. Indeed we now know that for propagation in vacuum space-times (solutions of Einstein’s field equations in regions with no matter present, such as the neighbourhood of the event horizon of black holes), there is a general theorem showing that if one photon polarization has a conventional subluminal velocity, the other polarization is necessarily superluminal. In fact, gravity affects the photon velocity in two distinct ways: the first through the energy momentum of the gravitating matter; and the second through the component of the curvature of space-time that is not determined locally by matter, the so-called Weyl curvature. It is this that produces birefringence.

Can superluminal photon propagation really be compatible with the principle of causality, or does it necessarily imply the existence of time machines? After all, such motion is genuinely backwards in time as viewed locally by a class of inertial observers. The question remains controversial, but the key is the SEP. In special relativity, a causal paradox requires both outward and return signals to be backwards in time in a global inertial frame. In general relativity, however, global Lorentz invariance is lost and the existence of a sufficiently superluminal return signal is not guaranteed. The quantum violation of the SEP certainly permits superluminal motion, but with photon velocities predetermined by the local curvature. Consistency with causality is therefore a global question. If the original space-time admits a global causal structure with respect to the geometrical light-cones, then causality will be respected even in the presence of superluminal photons if this structure is preserved with respect to the new light-cones. This rapidly leads to sophisticated issues of global topology in general relativity, but at this stage superluminal photons appear to be both consistent with causality and predicted by QED.

Black holes and cosmology

Since the original Drummond-Hathrell discovery, superluminal photons have been studied in a variety of curved space-times, ranging from the Schwarzschild, Reissner-Nordström or Kerr metrics describing black holes to the Bondi-Sachs space-time describing gravitational radiation from an isolated source and the Friedmann-Robertson-Walker (FRW) space-time of Big Bang cosmology. One of the most fascinating results to emerge involves the status of the event horizon surrounding a black hole. At first sight, it seems that if we can exceed the usual speed of light, it may be possible to escape from within the black hole horizon. If so, the location of the effective horizon would become fuzzy on a microscopic scale, with potentially far-reaching consequences for the quantum theory of black holes. Remarkably, however, it turns out that this possibility is not realized – while the light-cones of physical photons may differ from the geometrical light-cones everywhere else, they coincide exactly on the event horizon. Once again, the superluminal phenomenon evades a potentially paradoxical clash with the causal properties of space-time.

Another fascinating result involves the propagation of photons in the very early universe. Investigations of superluminal photons in the FRW space-time show that photon velocity increases rapidly at early times, independently of polarization. Recent work on the rather different subject of cosmologies in which the fundamental constant c varies over cosmological time has shown that an increase in the speed of light in the early universe can resolve the so-called “horizon problem”, which motivates the popular inflationary model of cosmology. Quantitative predictions of the size of quantum-induced superluminal photon velocities in the strong gravitational fields characterizing the inflationary epoch are currently beyond reach, but it is intriguing to reflect that quantum theory predicts that the physical speed of light increased sharply in the very early evolution of the universe.

Gravitational rainbow

The most recent research into superluminal photon propagation in QED has focused on the key issue of dispersion. In conventional optics, light passing through a refractive medium has a reduced phase velocity that depends on its frequency. This dispersive effect allows the group velocity of a wave pulse to differ from its phase velocity, and to be significantly greater or less than c. This is the origin of several striking recent experiments on the speed of light, notably those of Vestergaard Hau and colleagues at Harvard in which they reduce the group velocity of a light pulse almost to zero by shining tuned lasers on a cloud of ultracold sodium atoms.

For fundamental questions relating to causality, however, the relevant “speed of light” is not the group velocity, but the asymptotic value of the phase velocity at high frequency. The original analysis of Drummond and Hathrell determined the phase velocity in the low-frequency limit, so it is of critical importance to extend their work and discover the full dispersion relation for the quantum propagation of photons in a gravitational field. We need to find the frequency dependence of the refractive index for gravity – in other words, the gravitational rainbow. There is, however, a fundamental theorem of conventional optics that requires the refractive index at high frequency to be less than at low frequency. If this remains true in the gravitational context, then the original superluminal prediction would in fact be a lower bound on the crucial asymptotic phase velocity. However, the validity of this theorem in the presence of gravity has been questioned and a final resolution must rely on explicit computations of high-frequency propagation. Significant progress has recently been made, suggesting that the superluminal phenomenon can persist to high frequency, but research is ongoing and further surprises cannot be ruled out.
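The theorem in question follows from a Kramers-Kronig dispersion relation. In outline, for a medium that absorbs rather than amplifies light (Im n(ω) ≥ 0),

```latex
n(0) - n(\infty) = \frac{2}{\pi}\int_{0}^{\infty} \frac{\mathrm{Im}\, n(\omega)}{\omega}\, d\omega \;\ge\; 0,
```

so the high-frequency index cannot exceed the low-frequency one; the open question is whether the assumptions behind this relation survive in a gravitational background.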

Gravitational lensing

Theoretical evidence for superluminal phenomena is so far confined to the bizarre quantum microworld where virtual particles interact with a foamy, curved space-time. This is the regime where quantum field theory in curved space-time comes into its own and other phenomena arise that challenge our fundamental assumptions about the laws of nature, such as the famous prediction of Hawking radiation from microscopic black holes. But once the cat is out of the bag, it is hard to squeeze it back in. Once we have established that, in principle, superluminal light is possible and the SEP can be violated without compromising causality, it becomes an urgent question to ask whether nature has chosen to take advantage of this scenario on macroscopic, astrophysical scales. If so, how would we observe violations of the SEP in astronomy?

Gravitational birefringence produces a polarization-dependent shift Δφ = (f₂/R²)φ in the Einstein formula for the angle of deflection φ = 4M/R (in units where G = c = 1) of light with closest approach distance R to a spherically symmetric mass M. This would be seen if f₂ were characterized by an as yet unknown large scale rather than the quantum scale λc derived in Further information, and would produce a polarization dependence in the apparent position of the lensed images. Observation of this effect would be direct evidence for gravitational birefringence and imply a violation of the strong equivalence principle on astronomical scales. (Photo: Kavan Ratnatunga, Johns Hopkins University.)
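Restoring G and c, the deflection formula in the caption reads φ = 4GM/(c²R). As a numerical sanity check with round-number constants (illustrative only), light grazing the Sun should be bent by about 1.75 arcsec:

```python
# Order-of-magnitude check of the Einstein deflection phi = 4GM/(c^2 R)
# for light grazing the Sun. Round-number constants for illustration.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius (closest approach), m

phi = 4 * G * M_sun / (c**2 * R_sun)        # deflection in radians
arcsec = math.degrees(phi) * 3600           # convert to arcseconds
print(f"deflection = {phi:.2e} rad = {arcsec:.2f} arcsec")
```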

The clearest indication of a modified speed of light would be a change in the classic Einstein formula for the deflection of light by a massive object. This was the original prediction of general relativity that was triumphantly verified in 1919 by the eclipse expeditions of Eddington and colleagues to Príncipe and to Sobral in Brazil, when the deflection of light from distant stars by the Sun was observed during a solar eclipse. This effect is the origin of gravitational lensing (see Hubble image), which in recent years has been developed into a precise and sophisticated tool in astronomy and is used in searches for dark matter and protogalaxies. Gravitational birefringence on astrophysical scales would show up as polarization dependence in gravitational lensing, with the apparent positions of the lensed images changing with the polarization of the observed light. Polarization dependence in gravitational lensing would therefore be a smoking gun for interactions between light and gravity that violate the SEP, and its discovery would have profound implications for fundamental physics.

Further information

[Diagram: (a) a virtual electron-positron pair gives the photon an effective size; (b) the photon’s path deviates from the geodesic; (c) the modified light-cones permit superluminal motion.]

In QED, Feynman diagrams involving a virtual electron-positron pair effectively give the photon a “size” of the order of the Compton wavelength λc of the electron ((a) in diagram). This produces an interaction between the photon and gravity that distorts the photon’s trajectory through curved space-time so that it no longer follows the usual geodesic path ((b) in diagram). The effect changes the light-cones from k² = 0 to

k² = f₁ T_{μν} k^μ k^ν + f₂ C_{μρνσ} k^μ k^ν ε^ρ ε^σ,

where k and ε are the photon’s momentum and polarization. There are two distinct effects – one due to the energy momentum T_{μν} of matter, and a second, polarization-dependent interaction depending on the Weyl curvature C_{μρνσ} of the space-time. The remarkable feature of this formula is that it permits both k² > 0 and k² < 0, implying superluminal motion ((c) in diagram).

In the low-frequency limit, f₁ and f₂ are constants of the order of αλc², where α is the fine-structure constant. This determines the magnitude of the photon velocity shifts to be of the order of αλc²/L², where L is a typical curvature scale.
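To get a feeling for the size of the effect, one can evaluate the quoted scale αλc²/L² for a curvature scale typical of a solar-mass black hole horizon (an order-of-magnitude sketch; the choice L ≈ GM/c² ≈ 1.5 km is an assumption made here for illustration):

```python
# Rough numerical scale of the quantum velocity shift ~ alpha*(lambda_c/L)^2.
alpha = 1 / 137.0          # fine-structure constant
lambda_c = 3.86e-13        # reduced Compton wavelength of the electron, m
L = 1.5e3                  # curvature scale near a solar-mass horizon, m

shift = alpha * (lambda_c / L) ** 2
print(f"fractional velocity shift ~ {shift:.1e}")   # ~ 5e-34
```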

In general, f₁ and f₂ are functions depending on derivatives of the curvature. Determining their precise form is the subject of current research aimed at a complete determination of the dispersion relation for photon propagation in gravitational fields.

QCD comes to the home of Goethe and Schiller

Financed by the European Union, the research network “Quantum Chromodynamics at High Energies and the Deep Structure of Elementary Particles” supports the collaboration of quantum chromodynamics (QCD) theorists from eight European countries interested in fundamental aspects of QCD and its applications in experiments at accelerator laboratories. Following meetings in Durham (the home institute of network spokesperson James Stirling), Florence and Paris, Germany hosted the meeting in 2001.

Around 70 participants from Europe, the US and Russia presented and discussed recent results on QCD. A large number of presentations by young researchers showed how active the field is. Robert Klanner, research director of Hamburg’s DESY laboratory, opened the meeting with an outline of prospects for DESY’s upgraded HERA II collider, which started up in 2001. HERA II sees a factor of five increase in luminosity over the original collider. Klanner also reported on preparations for the future TESLA linear collider, which will lead to new challenges in QCD research.

HERA results on deep inelastic electron-proton scattering have always been a focus of the network’s interest. In particular, measurements of proton structure functions in the region of small momentum fraction (Bjorken-x) are crucial for the precise determination of parton densities, which will be necessary for understanding processes to be studied at CERN’s forthcoming Large Hadron Collider (LHC). In the small-Bjorken-x region, unravelling the interplay of effects arising from Balitsky, Fadin, Kuraev and Lipatov (BFKL) theory for radiative corrections to parton scattering, and the QCD evolution equations of Dokshitzer, Gribov, Lipatov, Altarelli and Parisi (DGLAP), is a challenge to theorists. As Guido Altarelli pointed out, a resummation of large logarithmic corrections improves the accuracy and can extend the region of validity of the evolution equations.
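For orientation, the DGLAP evolution of a parton density f_i(x, Q²) takes the schematic form (conventions vary between authors):

```latex
\frac{\partial f_{i}(x, Q^{2})}{\partial \ln Q^{2}}
= \frac{\alpha_{s}(Q^{2})}{2\pi} \sum_{j} \int_{x}^{1} \frac{dz}{z}\,
  P_{ij}(z)\, f_{j}\!\left(\frac{x}{z}, Q^{2}\right),
```

where the splitting functions P_ij(z) are computed in perturbation theory; BFKL theory instead resums logarithms of 1/x.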

Further developments

QCD also plays an important role in electron-positron scattering. One example is the search for the perturbative QCD pomeron that emerges from BFKL theory as being responsible for effects observed in small-angle collisions. Electron-positron scattering is the best environment to search for a BFKL pomeron, since photons radiated by the electron and positron provide the cleanest incoming states for hadronic processes. If both photons are highly virtual, perturbative QCD allows an absolute prediction for the total two-photon cross-section. Comparisons of leading-order BFKL calculations with data from CERN’s LEP electron-positron collider have been made. These leave no doubt that a consistent next-to-leading-order calculation is needed, both for the BFKL pomeron and for the photon impact factor. Whereas the former has been completed, the latter is still under investigation, and two independent groups reported results.

For hadron colliders, researchers are working hard to develop reliable and efficient event-generator programs. As Bryan Webber of Cambridge University emphasized, the Monte Carlo simulation of multijet final states will be a vital tool in the search for new physics at the LHC. The development of these computer algorithms moves particle physicists back into the front line of modern computing.

Higher-order corrections

Another branch of contemporary QCD-related research is the calculation of higher-order corrections to hard processes. With HERA giving high statistics data on proton structure functions, theorists have started extending the accuracy of analytical QCD calculations up to three loops. These ambitious calculations require new techniques and tools to be designed to allow the calculation of higher-order Feynman diagrams to be computerized as completely as possible.

The most challenging question in QCD is the transition from perturbative QCD to nonperturbative high-energy scattering. In deep-inelastic scattering, HERA has measured the transition from QCD parton physics at large momentum transfer to photoproduction at zero momentum transfer, and theorists are trying to analyse the onset of nonperturbative effects. An intriguing possibility is the existence of a novel state of QCD – saturation – characterized by high gluon density. Phenomenological support for this comes from a model based on this idea that has been very successful in describing HERA data in the transition region. An alternative, more conservative approach starts from the nonperturbative side where the photoproduction cross-section is consistent with the hadronic pomeron, and introduces a second “hard” pomeron to account for the observed stronger rise with energy at large momentum transfer. The investigation of the transition in QCD from perturbative to nonperturbative physics is likely to remain a point of interest for the next few years.

Weimar is a focal point for German and European history and culture; Germany’s greatest poet and dramatist, Johann Wolfgang von Goethe, lived there for more than 40 years. Friedrich von Schiller also settled there, and collaborated with Goethe to make the Weimar Theatre one of the most prestigious in the country. The composer Franz Liszt and the philosopher Friedrich Nietzsche are also among the town’s celebrated residents. Weimar has witnessed more than its fair share of historic events, such as the battles of Jena and Auerstedt and the meeting of Napoleon and Tsar Alexander I in nearby Erfurt. It lent its name to the Weimar Republic from 1919 until 1933, and it gave birth to the world-famous Bauhaus style of architecture. It is for this unique combination of reasons that the meeting organizers, Jochen Bartels and Johannes Blümlein, chose Weimar for the European QCD network meeting. Following extensive renovations for Goethe’s 250th anniversary in 1999, the town offered a pleasant and stimulating atmosphere for scientific discussions.

DESY workshop combines gravity and particle physics

The relationship between astrophysics, cosmology and elementary particle physics is fruitful and has been constantly evolving for many years. Important puzzles in cosmology can find their natural explanation in microscopic particle physics, and a discovery in astrophysics can sometimes give new insights into the structure of fundamental interactions. The inflationary universe scenario offers a good example. Inflation is a beautiful way to understand the cosmological flatness and horizon problems (see box overleaf) and apparently induces large-scale density fluctuations consistent with experimental observations. Inflation also predicts the existence of dark matter elementary particles together with a certain amount of dark energy manifested as the cosmological constant Λ. This has recently become clear through fascinating new experimental results. However, some of the pieces essential for building a theory that combines the physics of the macrocosmos with all microscopic phenomena in a complete and satisfactory way are still missing.

Quantum gravity

On the theoretical and conceptual level, the quest for a theory of quantum gravity is the most prominent and important problem facing theoretical physics. Quantizing gravity will be necessary to describe the physics at regions of very large space-time curvature – near or inside black holes, for example, or at extremely short time scales after the Big Bang. Any new theory that goes beyond the established Standard Model of particle physics and of cosmology must explain known facts in a broader and more unified perspective. At the same time it should not introduce more – and perhaps hidden – assumptions than there are facts in need of explanation. Finally, it must pass experimental tests and be verifiable (or falsifiable), at least in principle. Superstrings offer, perhaps for the first time, a promising avenue for constructing a viable theory of quantum gravity, since they contain gravity with a spin 2 graviton field as well as all the basic ingredients of the Standard Model.

The choice of topics – gravity and particle physics – for the 2001 DESY workshop (held in Hamburg) was largely influenced by impressive recent astrophysical observations showing that the overall mass and energy density of today’s universe is extremely close to its critical value (Ω = 1). Another main theme of the workshop was string theories, particularly the recently developed M-theory (often dubbed “the mother of all theories”) that underlies string theories. In string and M-theory, multidimensional surfaces, rather than just strings, are also allowed. These higher-dimensional membranes (or branes) and one particular type, Dirichlet, or D-branes, subject to a particular set of boundary conditions, have proved important in understanding black holes in string theory.

At DESY theory workshops, introductory lectures covering the main topics of the workshop are traditionally given on the first day. On this occasion, Costas Bachas of the Ecole Normale Supérieure in Paris presented string theory, string dualities, D-branes and M-theory. Slava Mukhanov of the Ludwig-Maximilians University in Munich discussed inflation. Stefan Theisen of the Max-Planck Institute (MPI) in Potsdam covered the holographic principle, which asserts that information contained in some region of space can be represented as a “hologram” – a theory that lives on the boundary of that region. Finally, Orsay’s Pierre Binétruy discussed the cosmological constant.

Cosmic inflation

Cosmic inflation in the early universe is one of the most appealing hypotheses in cosmology. Inflation stretches space to be flat, and leads naturally to the density of the universe, Ω, having its critical value of 1. It explains the large-scale smoothness of the cosmic microwave background (CMB) and inflates quantum fluctuations from microscopic scales to the cosmological scale, thereby creating density fluctuations. In the first talk of the workshop, Paolo de Bernardis of the University of Rome, La Sapienza, showed an impressive array of new experimental CMB data from the balloon experiment BOOMERanG. These are in complete agreement with the predictions of inflation. BOOMERanG and COBE show that the universe is indeed spatially flat. Moreover, the matter-energy density, ΩM, is clearly dominated by a large dark matter component. Most excitingly, ΩM is not enough to flatten the universe, but there is now convincing evidence for a non-vanishing contribution ΩΛ from dark energy, arising from a cosmological constant Λ.

One of the most burning problems is explaining the microscopic origin of the cosmological constant Λ, while at the same time understanding why Λ is so small compared with the natural scale of gravity. In this context it is very important to determine whether Λ is a static quantity, totally unchanged through time, or whether it is dynamic. Quintessence – a “fifth force” that changes with time – offers a concrete realization of this idea. It was introduced by Slava Mukhanov and by Heidelberg’s Christof Wetterich, who discussed how the cosmic coincidence problem (why the cosmological constant only recently started to dominate the expansion of the universe) can be explained by some kind of attractor mechanism.

Agreement between the theoretical idea of inflation and experiment is convincing. However, model building is still difficult and seems to require several assumptions and fine-tuning of parameters. This leads to the question of whether there are serious competitors for inflation, for example, in M-theory. This would be desirable since some basic arguments state that de Sitter space-times, which describe an exponentially growing universe, are difficult to implement in supergravity and superstring theories. As Fernando Quevedo of Cambridge discussed, there is a nice way to build inflationary models into brane-world models in string theory in such a way as to trigger the graceful exit from inflation. This leads to a hybrid inflationary scenario being realized in brane-world models. A more radical approach to explaining the flatness and horizon problems – one that really competes with inflation – was introduced by Burt Ovrut from the University of Pennsylvania. Taking its name from a Greek word meaning conflagration, the ekpyrotic universe theory explains the rapid expansion of the early universe as arising from the collision of branes. Through such a collision, a huge amount of energy is almost uniformly and homogeneously deposited on our universe. Despite offering a fascinating and challenging alternative to standard inflation, many aspects of the ekpyrotic universe need further investigation.

Challenging branes and strings

A particularly compelling picture of a 10-dimensional universe has been developed over recent years. In this picture, observable gauge interactions are confined to a possibly three-dimensional domain wall, whereas the gravitational force is mediated over the entire 10-dimensional space-time. This scenario would account for the vast difference between the observed strength of the gravitational interaction and nature’s other fundamental interactions. It also offers the exciting possibility that the extra dimensions can be much larger than previously assumed – up to almost 1 mm. If the extra dimensions are compact, their sizes are constrained by high-precision experiments that measure deviations from Newtonian gravity below 1 mm, as Joshua Long of Colorado University pointed out. Depending on the coupling strength of gravity inside the extra dimensions, the present experimental upper bounds for their size vary between 1 mm and several microns. New techniques are expected to push these bounds below 1 µm. In addition, Bonn’s Hans-Peter Nilles and Valery Rubakov of Moscow’s Institute of Nuclear Research discussed more theoretical issues and exotic effects arising from extra dimensions.
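Such experiments are commonly interpreted in terms of a Yukawa-type modification of the Newtonian potential, a standard parameterization quoted here for orientation:

```latex
V(r) = -\frac{G m_{1} m_{2}}{r}\left(1 + \alpha\, e^{-r/\lambda}\right),
```

where λ is the range of the new interaction (set by the size of the extra dimensions) and α its strength relative to ordinary gravity.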

One challenge in string theory is to construct brane-world models that come as close as possible to the Standard Model of elementary particles. Ralph Blumenhagen of Berlin’s Humboldt University suggested that intersecting brane worlds offer a promising approach. Stable intersecting brane-world models reproducing the Standard Model can be constructed, but issues such as the correct pattern of Yukawa couplings and gauge coupling unification still need to be addressed.

D-branes have provided many important theoretical insights into the nature of gravity and gauge theory. One of the most prominent consequences of D-brane physics is that a string theory in a so-called anti-de Sitter space-time (one with constantly negative curvature) is equivalent to a conformal field theory – in other words, there is a deep connection between string theory and quantum field theory, known as the AdS/CFT duality. Further presentations on D-branes and supergravity were given by Jan Plefka of the MPI in Potsdam, Klaus Behrndt of Humboldt University, Matthias Gaberdiel and Dan Waldram of the University of London, and Thomas Mohaupt from Jena. Another important aspect of D-branes is the field of non-commutative geometry, since under general boundary conditions the world volume co-ordinates of D-branes become non-commutative. Non-commutative field theories have therefore recently received much attention, and were discussed by Luis Alvarez-Gaumé of CERN and Volker Schomerus from the MPI in Potsdam.

String theories have made great advances of late, but there remain many unsolved problems, as Hermann Nicolai of the MPI in Potsdam pointed out in his workshop summary. At a fundamental level, is there really a unified description of all string theories in terms of M-theory? It is still not clear what M-theory really is. Is it 11-dimensional supergravity together with membranes and five-branes? Or is it given by matrix theory? Or are the fundamental degrees of freedom of M-theory related to the supermembrane? The workshop could not provide the answers. Furthermore, there remains the fundamental question of how a small but positive cosmological constant Λ can be consistently built into superstring theory or supergravity theories. Data from future facilities will be essential in advancing our understanding of these issues.

The theoretical physics community looks forward to seeing these data, and several talks alluded to what we may expect. In astrophysics and cosmology, the study of supermassive black holes in galactic centres poses many questions on how the first black holes were formed, what masses they have, and what their final destiny is. Ralf Bender of Munich discussed these issues. Cosmology with gravitational waves could open up a new avenue for deepening our understanding of the early universe. Bernard Schutz of the MPI in Potsdam presented the status of the four ground-based interferometric gravitational wave detectors: GEO600 (Germany), VIRGO (Italy), LIGO (US) and TAMA300 (Japan). More ambitious is the LISA project, due to be launched in 2011, in which three spacecraft in orbit around the Sun will form the interferometer. LISA may even provide new information on string cosmology and brane-world scenarios. Albrecht Wagner, head of the DESY directorate, discussed future colliders, such as the LHC and TESLA, whose input is urgently needed for further theoretical progress in particle physics.

Big Bang problems

Despite its success, there are three problems with the Big Bang model which were hotly debated at the DESY theory workshop.

* The horizon problem: Remote regions of the universe that have been out of causal contact – beyond each other’s horizon – are nevertheless similar.

* The flatness problem: The universe appears to be largely “flat” – the mass-energy density Ω is close to its critical value of 1, which would steer the universe to a fate between a big chill and a big crunch. Big Bang cosmology predicts that any deviation from flatness in the early universe should have increased as the universe expanded, which is difficult to reconcile with observation today (see the relation after this list).

* The monopole problem: Big Bang cosmology predicts that magnetic monopoles should be commonplace, yet so far not a single one has been seen.
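To make the flatness problem quantitative, the Friedmann equation can be rearranged into the standard textbook form

```latex
\Omega(t) - 1 = \frac{k}{a^{2} H^{2}},
```

where a is the scale factor, H the Hubble rate and k the spatial curvature. During decelerated expansion aH falls, so any initial deviation of Ω from 1 grows; an early epoch of accelerated expansion, such as inflation, instead drives Ω towards 1.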

Space-time symmetry is put to the test

The Bloomington campus of Indiana University, US, hosted its second meeting on CPT and Lorentz symmetry (CPT ’01) on 13-15 August 2001.

The meeting, which was attended by physicists from the US, Japan and Europe, focused on experimental and theoretical developments in the study of space-time symmetries. The first meeting solely on this topic was held in Bloomington in 1998.

Some of the most fundamental symmetries in physics are the space-time symmetries of Lorentz transformations – where the laws of physics are unchanged under boosts and rotations – and CPT – the combination of charge conjugation (C), parity inversion (P) and time reversal (T). Interest in these symmetries has flourished in recent years as attempts to find cracks in the Standard Model have intensified.

Lorentz symmetry, which states that reference frames are equivalent if either rotated or moving at a constant velocity with respect to each other, appears to be an exact symmetry in nature. So too does CPT symmetry. The 1954 CPT theorem of Bell, Lüders and Pauli states that any Lorentz-invariant field theory must be CPT-invariant. No experiment has detected a violation of either symmetry, but as experimental tests of CPT and Lorentz symmetry continue to improve, intriguing opportunities to uncover any asymmetry are arising.


CPT ’01 was opened by distinguished physicist Yoichiro Nambu of Chicago, US, who gave a historical perspective on the topic of CPT symmetry in physics.

For CPT symmetry to be broken in any theory, one of the preconditions of the CPT theorem must be removed. One possibility is to base the theory on extended objects – as in string theory, for example.

The Standard Model extension of Alan Kostelecky of Indiana University, US, uses this idea, together with spontaneous symmetry breaking, as the context in which the Standard Model Lagrangian is supplemented with general CPT- and Lorentz-violating terms. The extension retains all of the usual features of the Standard Model of particle physics, apart from the breaking of the two symmetries. A representative sample of such terms is sketched below.
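As an illustration of what such terms look like – the notation below follows the published Standard Model extension literature, but the specific pair shown is only a representative sample from the fermion sector, not the full Lagrangian:

```latex
% Two CPT- and Lorentz-violating additions for a single fermion field:
\[
  \mathcal{L} \;\supset\; -\,a_\mu\,\bar{\psi}\gamma^\mu\psi
                \;-\; b_\mu\,\bar{\psi}\gamma_5\gamma^\mu\psi ,
\]
% where a_mu and b_mu are tiny constant background four-vectors assumed to
% arise from spontaneous Lorentz breaking. A nonzero b_mu singles out a
% fixed direction in space-time - exactly the kind of structure that
% sidereal-variation experiments are designed to detect.
```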

Since the first meeting on Lorentz and CPT symmetry in 1998, when only a handful of experimental bounds were known, a steady stream of new limits on CPT and Lorentz symmetry has been flowing. Kostelecky presented an overview of the theory – developed over a period of 10 years – and discussed the variety of experiments at the high-energy and high-precision frontiers that could be in a position to detect effects. The Standard Model extension is giving new impetus to Lorentz and CPT tests by isolating specific types of signals in an explicit framework. The ideas are intriguing, since many experiments have never probed effects with the characteristics that are predicted in this theory. One of these effects is sidereal variations in frequencies that were previously thought constant.

Sidereal variations occur because the theory challenges the traditional idea that empty space is isotropic and structureless, suggesting instead that special directions exist. In fact, most experiments point in a particular direction, given, for example, by the orientation of a linear accelerator, the plane of a cyclotron, or the magnetic field in an atomic clock. It is customary to ignore this orientation, because space is considered directionally inert.

However, if empty space has a faintly resolvable structure, including directional dependence, it may be possible to find variations in measurements repeated over time as the orientation of the experiment changes with the rotation of the Earth. Several speakers at CPT ’01 reported attempts to isolate sidereal variations of this type. Impressive bounds on Lorentz and CPT violation have resulted.
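In practice, isolating such a signal amounts to fitting a measured frequency record for a modulation at the sidereal period of 23 h 56 min. The sketch below illustrates the idea only; the fit_sidereal helper, the toy data and the noise level are invented for this example and taken from no experiment reported at the meeting:

```python
import numpy as np

SIDEREAL_DAY = 86164.1  # seconds; Earth's rotation period relative to the stars

def fit_sidereal(t, f):
    """Least-squares fit of f(t) = A + B*cos(w t) + C*sin(w t), with w the
    sidereal angular frequency. The modulation amplitude sqrt(B**2 + C**2)
    bounds any orientation-dependent (Lorentz-violating) frequency shift."""
    w = 2 * np.pi / SIDEREAL_DAY
    design = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coeffs, *_ = np.linalg.lstsq(design, f, rcond=None)
    return coeffs

# Toy data: a constant clock frequency plus white noise over two weeks.
rng = np.random.default_rng(0)
t = np.linspace(0, 14 * 86400, 2000)
f = 1.0e9 + rng.normal(0.0, 1.0e-3, t.size)  # Hz

A, B, C = fit_sidereal(t, f)
print(f"sidereal amplitude bound: {np.hypot(B, C):.2e} Hz")
```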
Non-commutativity
An interesting theoretical development relating to the Standard Model extension is the realization that the model contains non-commutative field theory. Roman Jackiw of MIT, US, discussed Lorentz violation in non-commutative photodynamics. He also pointed out the relevance of non-commuting spatial variables in quantum mechanics. Other speakers discussed related issues, including a conservative bound on the non-commutativity parameter of (10 TeV)⁻². This has implications for Lorentz violation, since a non-commutative theory is recovered from the Standard Model extension by choosing suitable values for the parameters.
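For orientation, here is the generic starting point of non-commutative field theory together with the scale of the quoted bound (standard notation, added here only as an illustration):

```latex
% Non-commuting space-time co-ordinates and the scale of the bound:
\[
  [\hat{x}^\mu, \hat{x}^\nu] \;=\; i\,\theta^{\mu\nu},
  \qquad |\theta^{\mu\nu}| \;\lesssim\; (10\ \mathrm{TeV})^{-2},
\]
% theta^{mu nu} is a constant antisymmetric tensor. Because it picks out
% fixed directions in space-time, a nonzero theta necessarily violates
% Lorentz invariance, which ties these theories to the Standard Model
% extension discussed above.
```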

Robert Antonucci of the University of California at Santa Barbara, US, discussed implications for symmetry tests using polarization data from astronomical sources, such as quasars, and Indiana University’s Matthew Mewes presented a new bound based on polarization-axis comparisons of light from such sources. The result is one of the most stringent bounds on Lorentz symmetry to date, of three parts in 10³². This complements tests performed on systems within other sectors of the Standard Model extension.


Among several efforts worldwide to test fundamental symmetries are two CERN collaborations, ATHENA and ATRAP, which plan to trap cooled antihydrogen for high-precision spectroscopy. ATRAP spokesperson Gerald Gabrielse of Harvard University, US, reported that his experiment is making good progress. The idea is to compare antihydrogen spectral frequencies with the corresponding frequencies that are known to great precision for ordinary hydrogen; CPT symmetry requires that the comparison show no differences. Since summer 2000, when CERN’s Antiproton Decelerator began operation, the CERN groups have made steady progress in developing the technology to create antihydrogen in a trapped form.

Continuing the search

A number of neutral-meson high-energy experiments continue to search for violations of Lorentz and CPT symmetry. For the K, D and B mesons, Lorentz-violating effects depend on the particle momentum. It is therefore of interest to search for speed- and orientation-dependent signals. Hogan Nguyen of the KTeV collaboration at Fermilab, US, reported a new result bounding parameters for CPT violation at 10⁻²¹ GeV in the neutral-kaon system.

Rob Gardner of the FOCUS collaboration at Fermilab presented the first result of a search for sidereal variations in the oscillations of neutral D mesons. The result implies sensitivity to effects in the charm sector, bounding parameters at 10⁻¹⁵ GeV. Yoshihide Sakai of the BELLE collaboration at KEK, Japan, reported a recent result bounding CPT symmetry in the B-meson system.

New bounds in the lepton sector have been contributed by recent muon and muonium experiments. David Kawall of Yale University, US, reported on the muonium experiment at Los Alamos National Laboratory in New Mexico, US. Using hydrogen-like muonium “atoms” composed of a positive muon and an electron, a collaboration led by Vernon Hughes, also of Yale, studied the ground-state hyperfine transitions in this system using data taken over a two-year period.

Analysis of the high-precision data reveals no sidereal variations in any of the transition frequencies, thereby bounding the relevant parameter combinations in the Standard Model extension at 2 × 10⁻²³ GeV. This first-ever search for Lorentz violation in the muon sector provides a 10-fold improvement on the previous results. Mario Deile of Yale and David Hertzog of Illinois represented the Muon g-2 Collaboration at Brookhaven National Laboratory, US. Plans to use data from this experiment to seek out possible Lorentz and CPT violation signals in the context of the Standard Model extension were discussed. The sensitivities are expected to be competitive with those of the Los Alamos group.

Several high-precision spectroscopic measurements are also proving invaluable in testing the Standard Model extension. Nobel Laureate Hans Dehmelt’s Penning-trap group at the University of Washington, US, has placed several bounds on symmetry violation in the electron sector. These results were reviewed by Robert Bluhm of Colby College in Maine, US, who also described the theory behind other planned symmetry tests in atomic systems.

Some tests involve comparisons of particles and their antiparticles, which is possible with electrons and positrons for example, and also in muon experiments. Other tests involve monitoring atomic-clock and maser frequencies to identify Lorentz-violating variations. Ron Walsworth of the Harvard-Smithsonian Center for Astrophysics, US, discussed recent measurements and future possibilities with masers, and Mike Romalis of Princeton University, US, presented plans to build an innovative helium-potassium comagnetometer for future tests.

External influences

Resolving variations in frequencies is experimentally challenging because numerous environmental influences in a laboratory, such as temperature, vary on a daily basis – and the 24 h solar day is only about 4 min longer than the sidereal period being sought. Ensuring that an experiment is monitoring the right effect is therefore critically important. New approaches to these intricacies will become available in the near future, when precision atomic clocks and masers are due to fly on the International Space Station (ISS). It will then be possible to exploit the station’s short orbital period of about 90 min, as well as various other properties of the ISS platform.

Several scientists involved with ISS projects spoke at the Bloomington meeting, including Kurt Gibble of Penn State, US, who discussed the rubidium atomic-clock experiment (RACE), and Neil Ashby of Colorado, US, who presented the primary atomic reference clock in space (PARCS). Another experiment, SUMO, involves flying superconducting microwave oscillators on the ISS and was discussed by Joel Nissen of Stanford, US. It has the potential to test several aspects of fundamental symmetries. Also at the meeting was Lute Maleki, the Jet Propulsion Laboratory project manager for several ISS experiments. He outlined the novel SpaceTime experiment, which proposes to carry three oscillators on a high-speed sweep past the Sun.

One of the finest tests of CPT symmetry with electrons has been done by the Eöt-Wash group at the University of Washington, led by Eric Adelberger. The experimental apparatus, a torsion pendulum with an overall spin polarization, was described by Blayne Heckel. The results bound several CPT-violating parameters in the electron sector at about 10⁻²⁹ GeV.


Sidereal variations

The Standard Model extension discussed at the Indiana meeting predicts a background of minuscule directed quantities (tensors) that are fixed in space. The figures show these as red arrows filling the vacuum. Particles and antiparticles can interact differently with this background, so the combined symmetry CPT (the product of charge conjugation, parity and time reversal) can be violated. This is illustrated in Figure 1 by different properties of a basketball and an antibasketball in a laboratory on Earth. Figure 2 shows the situation 12 h later. The local direction of the arrows in the laboratory has changed because the Earth has rotated, so the CPT violation is different. Similarly, in real experiments, one way to observe CPT and Lorentz violation is to look for particle properties that vary with the Earth’s sidereal period. Results from several such experiments were reported at the meeting.

Superbends expand the scope of Berkeley’s ALS

A superconducting separator dipole

At first it was a perfect match. The physical constraints of its site at the Lawrence Berkeley National Laboratory on a hillside above the University of California’s Berkeley campus, the research interests of its initial proponents and the fiscal realities of the times all pointed to the same conclusion in the early 1980s: the Advanced Light Source (ALS) should be a third-generation, but low-energy, synchrotron radiation source designed for highest brightness in the soft-X-ray and vacuum-ultraviolet spectral regions.

While the ALS has turned out to be a world leader in providing beams of soft X-rays – indeed, furnishing these beams remains its core mission – there has nonetheless been a steadily growing demand from synchrotron radiation users for harder X-rays with higher photon energies. The clamour has been strongest from protein crystallographers whose seemingly insatiable appetite for solving structures of biological macromolecules could not be satisfied by the number of crystallography beamlines available worldwide.

The question was how to provide these X-rays cost-effectively without disrupting the thriving research programmes of existing ALS users. Superconducting bend magnets (superbends) provided the answer. The ALS adopted a proposal, originally made in 1993 by Alan Jackson of Berkeley and Werner Joho of Switzerland’s Paul Scherrer Institute, to replace some of the normal combined-function (gradient) magnets in the curved arcs of the storage ring with superconducting dipoles that could generate higher magnetic fields and, thus, synchrotron light with a higher critical energy.

A team headed by David Robin, the leader of the ALS Accelerator Physics Group, took on the pioneering task of retrofitting superconducting bend magnets into the magnet lattice of an operating synchrotron light source. In particular, three 5 Tesla superbends were to replace the 1.3 Tesla centre gradient magnets in Sectors 4, 8 and 12 of the 12-fold symmetric ALS triple-bend achromat storage-ring lattice. The long project culminated early last October when, after a six-week shutdown to install and commission the superbends, the ALS reopened for users with a new set of capabilities.

The superbends have extended the spectral range of the ALS to 40 keV for hard-X-ray experiments. They do not degrade the high brightness of the ALS in the soft-X-ray region, for which the ALS was originally designed, nor do they degrade other performance specifications, such as beam stability, lifetime and reliability. Nor do they require that any straight sections normally occupied by high-brightness undulators be sacrificed to obtain high photon energies by filling them with high-field, multipole wigglers. Superbend magnets are already serving the first of a new set of protein crystallography beamlines. Ultimately, 12 new beamlines for crystallography and other applications, such as microtomography and diamond-anvil-cell high-pressure experiments, will be constructed.
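The spectral gain is easy to estimate from the standard bend-magnet formula, in which the critical photon energy in keV is approximately 0.665 times the field in tesla times the square of the beam energy in GeV. A back-of-the-envelope check, not an official ALS calculation:

```python
def critical_energy_kev(field_tesla, beam_energy_gev):
    """Critical photon energy of bend-magnet synchrotron radiation:
    eps_c [keV] ~= 0.665 * B [T] * E**2 [GeV**2] (standard approximation)."""
    return 0.665 * field_tesla * beam_energy_gev**2

E = 1.9  # ALS beam energy in GeV
print(f"normal 1.3 T bend: {critical_energy_kev(1.3, E):.1f} keV")  # ~3.1 keV
print(f"5 T superbend:     {critical_energy_kev(5.0, E):.1f} keV")  # ~12 keV
# Usable flux extends to several times eps_c, consistent with the quoted
# 40 keV reach for hard-X-ray experiments.
```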

Superbend history

The ALS was originally based on an electron storage ring with a 198 m circumference and a maximum beam energy of 1.9 GeV to provide peak performance in the vacuum-ultraviolet and soft-X-ray spectral regions. One way for the ALS to respond to the demand that arose in later years for higher photon energies would have been to use some of its scarce straight sections for high-field, multipole wigglers. Indeed, in 1997 the ALS installed one such wiggler – a device that provides the hard X-rays for an extremely productive protein crystallography beamline (Beamline 5.0.2) operated by the Berkeley Center for Structural Biology.

However, the drawback of the wiggler route was immediately obvious: many wigglers would limit the number of high-brightness undulators that give the ALS its state-of-the-art, soft-X-ray performance and that justified its construction in the first place. Moreover, a wiggler cannot readily service more than one beamline capable of the demanding multiwavelength anomalous diffraction experiments that many crystallographers want to perform, whereas a bend magnet can. In the end, the ALS adopted the superbend alternative proposed by Jackson and Joho – a choice that brought along some imposing challenges.

Superconductivity is no stranger to synchrotron light sources, where superconducting bend magnets have been used in small (mini) synchrotrons dedicated to X-ray lithography. In addition, superconducting insertion devices in straight sections are, if not common, a venerable technology. Unlike wigglers and undulators in straight sections, however, superbends would be an integral part of the storage-ring lattice in a large multi-user facility and could not simply be turned off in the case of failure or malfunction. The stakes were therefore very high: the pay-off would be an expanded spectrum of photons to offer users; the risks included the possibility of ruining a perfectly good light source or, at the very least, causing unacceptable downtime.

Diagram showing flux and brightness

In 1993, newly hired accelerator physicist Robin was set to work on preliminary modelling studies to see how superbends could fit into the storage ring’s magnetic lattice and to determine whether the lattice symmetry would be broken as a result. He concluded that three superbends with fields of 5 Tesla, deflecting the electron beam through 10° each, could be successfully incorporated into the storage ring. Later, beginning in 1995, Clyde Taylor of Berkeley’s Accelerator and Fusion Research Division (AFRD) led a laboratory-directed R&D project to design and build a superbend prototype.

By 1998 the collaboration (which included the ALS Accelerator Physics Group, the AFRD Superconducting Magnet Program and Wang NMR Inc) had produced a robust magnet that reached the design current and field without quenching. The basic design has remained unchanged through the production phase. It includes a C-shaped iron yoke with two oval poles protruding into the gap. A mile-long length of superconducting wire made of niobium-titanium alloy in a copper matrix winds more than 2000 times round each pole. The operating temperature is about 4 K.

With the strong support of ALS advisory committees and Berkeley laboratory director Charles Shank, Brian Kincaid – at that time the ALS director – made the decision to proceed with the superbend upgrade, and his successor, Daniel Chemla, made the commitment to follow through. The superbend project team, now including members of Berkeley’s engineering division, held a kick-off meeting in September 1998 with Robin as project leader, Jim Krupnick as project manager and Ross Schlueter as lead engineer. Christoph Steier joined the team a year later as lead physicist.

Subsequently, the success of wiggler Beamline 5.0.2, combined with some pioneering work on normal bend-magnet beamlines by Howard Padmore and members of his ALS Experimental Systems Group, led to the formation of user groups from the University of California, the Howard Hughes Medical Institute and elsewhere that were willing to help finance superbend beamlines, further adding to the momentum of the project.

Superbend team work pays off

For the next three years, the superbend team worked towards making the ALS storage ring the best understood such ring in the world. In every dimension of the project, from beam dynamics to the cryosystem, from the physical layout inside the ring to the timing of the shutdowns, there was very little margin for error. To study the beam dynamics, the accelerator physicists adapted an analytical technique used in astronomy called frequency mapping (CERN Courier January 2001 p15). This provided a way to “experiment” with the superbends’ effect on beam dynamics both theoretically and experimentally before the superbends were installed.
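The essence of frequency mapping is to extract the betatron tune from turn-by-turn data as a function of starting amplitude, so that resonances and chaotic regions reveal themselves as structure in tune space. Below is a deliberately minimal illustration using a toy Hénon-style map rather than the real ALS lattice; every name and parameter is invented for the example:

```python
import numpy as np

def toy_lattice(x, p, nu, n_turns):
    """Track a toy one-dimensional lattice: a sextupole-like kick followed
    by a linear rotation through tune nu. Returns the turn-by-turn x data."""
    c, s = np.cos(2 * np.pi * nu), np.sin(2 * np.pi * nu)
    xs = np.empty(n_turns)
    for i in range(n_turns):
        xs[i] = x
        p_kicked = p + x**2                      # nonlinear kick
        x, p = c * x + s * p_kicked, -s * x + c * p_kicked
    return xs

def measured_tune(xs):
    """Estimate the tune as the dominant line in the FFT of the data."""
    spectrum = np.abs(np.fft.rfft(xs - xs.mean()))
    return np.argmax(spectrum) / len(xs)

# Tune versus amplitude: a flat curve means regular, nearly linear motion;
# jumps and drifts flag resonances and the onset of chaos.
for amplitude in [0.01, 0.03, 0.06, 0.1]:
    xs = toy_lattice(amplitude, 0.0, nu=0.31, n_turns=1024)
    print(f"x0 = {amplitude:4.2f} -> tune = {measured_tune(xs):.4f}")
```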

Another technical challenge was to design a reliable, efficient and economical cryosystem capable of maintaining a 1.5 ton cold mass at 4 K with a heat leakage of less than 1 watt. Wang NMR was contracted to construct the superbend systems (three plus one spare). Wang designed a self-sustaining cryogenic system based on a commercial cryocooler, leads made of high-temperature superconductors and a back-up cryogenic reservoir.
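To see why a sub-watt heat leak matters at 4 K, note that every watt reaching the liquid-helium bath boils off roughly 1.4 litres of liquid per hour. A rough check using handbook values for liquid helium (generic figures, not ALS engineering numbers):

```python
# Rough boil-off estimate for a 4 K liquid-helium system.
LATENT_HEAT = 20.7e3   # J/kg, latent heat of vaporization of helium at 4.2 K
DENSITY = 125.0        # kg/m^3, density of liquid helium

def boiloff_litres_per_hour(heat_leak_watts):
    """Litres of liquid helium evaporated per hour by a steady heat leak."""
    kg_per_s = heat_leak_watts / LATENT_HEAT
    return kg_per_s / DENSITY * 1000 * 3600

print(f"{boiloff_litres_per_hour(1.0):.1f} L/h per watt")  # ~1.4 L/h
```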


Following some preparatory work during previous shutdowns, the installation of the superbends began in August 2001. The initial installation plan was very tight. In one 11-day period, the superbend team removed three normal gradient magnets and a portion of the electron-beam injection line in straight section 1 just upstream of Sector 12; installed the superbends; modified cryogenic systems; and completed extensive control system upgrades. They also installed many other storage-ring items and prepared for start-up with a beam.

After the installation phase, the goal was to commission the ALS with superbends and return the beam to users by 4 October. This schedule allowed the month of September to commission the ring (with the exception of a four-day break for the installation of the front ends for two superbend beamlines) and a three-day period for beamline realignment. However, commissioning proceeded much faster than had been expected and it was less than two weeks after the start of the installation when the machine was ramped up to full strength, and the effects of the superbends on the performance of the storage ring were fully evaluated.

Because so much was at stake, the storage ring had been studied and modelled down to the level of individual bolts and screws to ensure a smooth, problem-free installation into the very confined space within the storage ring. This attention to detail also paid off in the rapid commissioning. To take one example, the superbends were very well aligned, as demonstrated by a stored beam with little orbit distortion and small corrector-magnet strengths.

At the end of the first day, a current of 100 mA and an energy of 1.9 GeV were attained. At the end of the first weekend, the injection rate and beam stability were near normal. By the end of the first week, the full 400 mA beam current was ramped to 1.9 GeV and studies of a new, low-emittance lattice with a non-zero dispersion in the straight sections (designed to retain the high brightness that the storage ring had without superbends) were begun. By the end of the second week, test spectra taken in some beamlines showed no change in quality due to the presence of superbends.

Since reopening for business in October, the ALS has not experienced any significant glitches that might be associated with such a major change. Overall, the ALS has made good on its promises to users: the superbends were installed and commissioned without disrupting or delaying research programmes, and they operate with no adverse effects on performance in the bread-and-butter soft-X-ray spectral region, as the measured storage-ring parameters demonstrate.

Superbend beamlines are already in operation and more are under construction or planned. Three superbend protein-crystallography beamlines are now taking data, and researchers at the first of these to come on line have already solved 15 structures. Three more crystallography beamlines are on the way. Non-crystallography beamlines currently in the works include one for tomography and one for high-pressure research with diamond-anvil cells – two areas for which superbends are even more advantageous than they are for protein crystallography, because they exploit more fully the higher photon energies that superbends can generate. Many other areas, including microfocus diffraction and spectroscopy, would also benefit enormously from the superbend sources.

In summary, a new era at Berkeley’s ALS is under way.
