New Year’s Day 2005 saw the official birth of the Laboratoire d’AstroParticule et Cosmologie (APC) in Paris. The APC’s core physics activities are high-energy astrophysics, cosmology and neutrino physics – fields that are linked by a high level of theoretical activity and innovative data-handling methods.
In gestation for more than four years, this multi-disciplinary, ultra-modern laboratory at the interface between particle physics and astrophysics has existed in the form of a “research federation” since January 2002. The APC was born from the converging scientific interests of the University of Paris VII Denis Diderot and the three research organizations that supported and moulded it: CNRS (via three of its scientific departments, IN2P3, INSU and SPM), CEA (Material Sciences Department) and the Paris Observatory. A scientific council has been in operation for several years, and in spring 2004, an evaluation committee gave the green light for the APC to be established.
Directed by Pierre Binétruy, the laboratory currently employs 65 researchers and academics from the Ile-de-France region, more than 50 engineers, technicians and administrative staff, and some 50 postdocs, visiting scientists and students. It should ultimately have a complement of around 200 people. The scientists come from varying backgrounds: physicists from the Collège de France’s PCC laboratory, which closed down on 31 December to be merged into the APC; particle and astroparticle physicists from CEA’s DAPNIA laboratory; astrophysicists from the Paris Observatory; and theorists from the Orsay Theoretical Physics Laboratory and the Paris Astrophysics Institute. These groups have already joined forces in a series of jointly run research projects, and this rapprochement should continue in 2005, albeit at separate geographical locations. The APC will enter a second phase at the beginning of 2006, when the laboratory moves into the new premises of the University of Paris VII’s physics department, currently under construction on the Left Bank.
The APC physicists have already initiated or joined a range of projects and experiments. These include Auger and EUSO in the field of cosmic rays; INTEGRAL, HESS, GLAST and X-shooter in gamma-ray astronomy; ANTARES in neutrino astronomy; Borexino and Double Chooz in neutrino physics; Planck and BRAIN for the study of the cosmological background; SNLS for the study of supernovae; and LISA for gravitational waves. An annual international scientific workshop has been organized since 2001; the next is scheduled for June 2005.
New measurements of unstable nuclei caught in the act of decay have gone a long way towards resolving long-standing questions about the triple-alpha process that creates carbon nuclei. The results, from experiments at CERN’s ISOLDE facility and the IGISOL facility at the University of Jyväskylä in Finland, were recently published in Nature (Fynbo et al. 2005).
Although physicists first discovered how stars build up atomic nuclei in the 1950s, they have been unable to pin down key properties of carbon nuclei that determine the triple-alpha rate over a wide temperature range. Now, thanks to the isotope separation on-line (ISOL) method for creating and isolating isotopes (largely developed at CERN) and to developments in particle detectors, a team of researchers has been able to tackle this problem.
The collaboration – involving more than 30 people at eight European universities and institutes, including CERN – first isolated the short-lived isotopes ¹²B and ¹²N. These transform via beta decay into ¹²C, populating resonances in carbon that then break up into three alpha particles. By measuring precisely the timing and energies of the alpha particles emitted by the samples, the team was able to infer the energy, spin and parity of the carbon nuclei just before decay.
The triple-alpha process proceeds at all only because ¹²C can exist at particular energies close to the combined energy of ⁸Be and ⁴He: two alpha particles first fuse into short-lived ⁸Be, which must capture a third alpha particle before it decays. At the temperatures found in most stars – 10⁸ to 10⁹ K – carbon’s so-called Hoyle resonance at 7.65 MeV determines the triple-alpha rate. But at higher and lower temperatures, other resonances – some observed, some theorized – come into play.
In the recent experiments, the researchers were able to pin down the spin and parity of the wide resonance near 10 MeV. First observed in 1958, it had been measured at 10.1 MeV (width 3 MeV), but its spin-parity could be determined only as either 0⁺ or 2⁺. The new study puts this resonance at 11.23(5) MeV with width 2.5(2) MeV, and finds its spin-parity to be 0⁺.
The group also looked for a long-theorized 2⁺ resonance at 9.1 MeV, width 0.56 MeV. If it existed, it would play a dominant role at temperatures of more than 10⁹ K, but there was no evidence for it. However, to fit the data, the team needed to introduce a 2⁺ resonance at 13.9(3) MeV with width 0.7(3) MeV. They were then able to calculate a revised triple-alpha rate over a wide temperature range, from 10⁷ to 10¹⁰ K. Compared with the previously calculated rate, the new rate is significantly faster at low temperatures (10⁷ to 10⁸ K); the same in the middle range dominated by the Hoyle resonance; and slower at high temperatures (10⁹ to 10¹⁰ K).
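To see why shuffling the resonance parameters reshapes the rate in this way, note that in the narrow-resonance approximation each state contributes to the triple-alpha rate roughly in proportion to its radiative width times a Boltzmann factor exp(-E_r/kT), where E_r is the resonance energy above the three-alpha threshold at 7.275 MeV. The toy sketch below illustrates how the low-lying Hoyle state dominates at intermediate temperatures while higher resonances only switch on above about 10⁹ K; the widths are illustrative placeholders, and the full stellar rate formula contains further temperature-dependent factors.

```python
# Toy Boltzmann-factor comparison of 12C resonance contributions to the
# triple-alpha rate (narrow-resonance approximation). The radiative
# widths below are illustrative placeholders, not the measured values.
import math

K_BOLTZ = 8.617e-11   # MeV per kelvin
THRESHOLD = 7.275     # three-alpha threshold in 12C, MeV

resonances = {                       # name: (excitation energy MeV, width eV)
    "Hoyle 0+ (7.65 MeV)": (7.654, 3.7e-3),
    "0+ near 11.2 MeV":    (11.23, 1.0),
    "2+ at 13.9 MeV":      (13.9,  1.0),
}

for temp in (1e8, 1e9, 1e10):        # kelvin
    kt = K_BOLTZ * temp              # MeV
    weights = {name: width * math.exp(-(e_x - THRESHOLD) / kt)
               for name, (e_x, width) in resonances.items()}
    total = sum(weights.values())
    shares = ", ".join(f"{name}: {w / total:.1%}"
                       for name, w in weights.items())
    print(f"T = {temp:.0e} K -> {shares}")
```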
The revised rate has several astrophysical implications. One is that in the universe’s first stars, which began with no carbon and burned at relatively low temperatures, the only form of hydrogen fusion would be through the proton-proton chain. Once enough carbon had formed through the triple-alpha process, however, the CNO (carbon-nitrogen-oxygen) cycle could begin. With the higher triple-alpha rate, this evolution might have taken only half as long as was previously thought. Also, in type-II supernovae, the lower rate above 10⁹ K implies that the fraction of ⁵⁶Ni produced would be less than previously thought.
The cosmic microwave background (CMB) radiation provides the most precise probe of the largest structures of the universe. Now, however, a team from Case Western Reserve University in Cleveland, Ohio, and CERN has discovered surprising evidence that the largest-scale features of the microwave sky seem to be correlated with both the motion and the orientation of the solar system (D J Schwarz et al. 2004).
The tiny temperature variations of the CMB were discovered by the Cosmic Background Explorer (COBE) satellite more than a decade ago. Then, in February 2003, the Wilkinson Microwave Anisotropy Probe (WMAP) team published the analysis of their first year of high-resolution observations of the full sky.
In a stunning manner, the results from WMAP confirmed the Standard Model of modern cosmology, with its key elements of a period of cosmological inflation and a composition of 5% baryons, 25% cold dark matter and 70% dark energy. One real surprise, however, was that WMAP showed the optical depth for microwave photons to be high, which implies an unexpectedly early onset of star formation.
A second look at the publicly available WMAP data reveals anomalies at the largest angular scales (> 60°). For example, the angular two-point correlation function vanishes at scales larger than 60° (as already seen by COBE, but largely forgotten). In Fourier space, the vanishing of the two-point correlation function at large scales is reflected by the smallness of the quadrupole and octopole moments. As we observe only one universe, it is possible to attribute these findings to bad luck (cosmic variance), although – taken at face value – the measurement does not agree with the expectation from inflation.
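For readers who want to see the link between the two statements, the correlation function is just a Legendre series over the angular power spectrum, C(θ) = Σ_ℓ (2ℓ+1)/(4π) C_ℓ P_ℓ(cos θ). The short sketch below, with made-up C_ℓ values rather than WMAP data, illustrates how suppressing the quadrupole and octopole damps C(θ) at large angles:

```python
# Sketch: the angular two-point correlation function as a Legendre series
# over the power spectrum, C(theta) = sum_l (2l+1)/(4pi) C_l P_l(cos theta).
# The C_l values below are illustrative placeholders, not WMAP data.
import numpy as np
from scipy.special import eval_legendre

def correlation(theta_deg, cls):
    """cls: dict mapping multipole l -> C_l."""
    x = np.cos(np.radians(theta_deg))
    return sum((2 * l + 1) / (4 * np.pi) * cl * eval_legendre(l, x)
               for l, cl in cls.items())

# A Sachs-Wolfe-like toy spectrum, C_l ~ 1/(l(l+1)), with the quadrupole
# and octopole suppressed by hand to mimic the reported anomaly.
cls = {l: 1.0 / (l * (l + 1)) for l in range(2, 31)}
cls[2] *= 0.2   # suppressed quadrupole (illustrative)
cls[3] *= 0.3   # suppressed octopole (illustrative)

for theta in (20, 60, 90, 120, 180):
    print(f"C({theta:3d} deg) = {correlation(theta, cls):+.4f}")
```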
In fact, the WMAP measurements contain more information. Angélica de Oliveira-Costa and colleagues studied the cosmic quadrupole and octopole and realized that both are very planar and aligned, i.e. all minima and maxima happen to fall on a great circle on the sky – another unexpected feature (de Oliveira-Costa et al. 2004).
Craig Copi, Dragan Huterer and Glenn Starkman of Case Western Reserve University then developed a method to assign ℓ directions to the ℓ-th multipole (multipole vectors). While Starkman was on sabbatical at CERN, the team was joined by Dominik Schwarz, also at CERN at the time, to test the claims of de Oliveira-Costa et al. by means of multipole vectors.
To their surprise, the new method revealed at high statistical significance (99.9% CL) that the observed quadrupole and octopole are inconsistent with a Gaussian random, statistically isotropic sky (the generic prediction of inflation). They also looked for correlations with any known directions on the sky. No significant correlation with the Milky Way was found, but a strong correlation with the orientation of the solar system (ecliptic plane) and with its motion (measured as the CMB dipole) showed up.
A comparison with 100,000 skies generated by Monte Carlo shows that each of those correlations alone is unlikely at more than 99% CL. Therefore, there is strong evidence either of some systematic error in the WMAP pipeline (although in a preliminary analysis, the team is now discovering similar features in COBE maps), or that the largest scales of the microwave sky are dominated by a local foreground.
This finding has vast implications. It casts doubt on the cosmological interpretation of the lowest-ℓ multipoles from the temperature-temperature correlation and from the temperature-polarization correlation, and in turn on the claim that the first stars formed very early in the history of the universe.
The year 1969 saw the publication of the first results indicating that hard scattering centres exist deep inside protons. A collaboration between the Stanford Linear Accelerator Center (SLAC) and the Massachusetts Institute of Technology was using SLAC’s new high-energy electron linac to pioneer a rich new field in the study of the nucleus – deep inelastic scattering. Their measurements revealed that nucleons are made up of point-like particles, which Richard Feynman dubbed “partons”. Thirty-five years on, studies of the partonic nature of the nucleus continue, not only at the traditional high-energy centres, but also at lower-energy laboratories, and in particular at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) in Virginia.
Jefferson Lab is home to the Continuous Electron Beam Accelerator Facility (CEBAF). Its main mission is to explore the atomic nucleus and the fundamental building-blocks of matter. As part of this mission, researchers there study the transition from the picture of the nucleus as a bound state of neutrons and protons to its deeper structure in terms of quarks and gluons – in other words, the transition from the hadronic degrees of freedom of nuclear physics to the quark-gluon degrees of freedom of high-energy physics. In exploring this transition, a wide range of experiments has been performed, from measurements of elastic form factors at large momentum transfers to studies of deep inelastic scattering.
An array of spectrometers together with electron-beam energies of up to 5.7 GeV has allowed the laboratory to make significant contributions to this field. This article describes three experiments, each aimed at improving our understanding of a different aspect of the partonic nature of matter. The first, a classic deep inelastic scattering experiment, seeks to further our understanding of the composition of nucleon spin. The second experiment studies the concept of quark-hadron duality – a link between the deep inelastic region and the resonance region. The third experiment uses the atomic nucleus as a laboratory to improve understanding of the propagation and hadronization of quarks. Jefferson Lab’s ability to perform this range of measurements is illustrated by the plot from the CEBAF Large Acceptance Spectrometer (CLAS) shown on the cover of this magazine, where the hadronic resonance peaks are seen to be washed out as one goes from the delta resonance around 1.2 GeV to higher invariant masses and into the deep inelastic scattering realm of quarks and gluons.
The spin of the nucleon
In the 1980s, experiments at CERN and SLAC showed that only a small fraction of the nucleon spin is carried by the quarks. Since then, many experiments have been done to solve this “spin crisis”. Our current understanding of nucleon spin is that it is the sum of the spins of the valence quarks and of the sea quarks and antiquarks, the orbital angular momenta of the quarks, and the spins of the gluons. While many experiments have been performed in the region of low Bjorken x, where sea quarks and gluons dominate, it has been experimentally challenging to make measurements in regions of higher Bjorken x, where the valence quarks are predicted to dominate.
Using the high-resolution spectrometers in Hall A with a polarized ³He target, Jefferson Lab has recently completed a measurement in the valence-quark region of the neutron spin asymmetry A₁ⁿ. At large momentum-transfer squared (Q²) this asymmetry is approximately the ratio of the polarized and unpolarized structure functions, g₁/F₁. Perturbative quantum chromodynamics (QCD) as well as constituent quark models predict that this ratio should tend towards unity as Bjorken x goes to unity, while simple calculations based on SU(6) symmetry predict that A₁ⁿ should be zero. Previously, the A₁ⁿ data in the range of x from 0.4 to 0.6 were consistent with zero. However, the new high-precision data from experiment E99-117 at Jefferson Lab, shown in figure 1, indicate that the A₁ⁿ asymmetry is becoming positive and hint that it may indeed approach unity (Zheng et al. 2004). This result reveals strong SU(6) symmetry-breaking and agrees best with calculations that include quark orbital angular momenta.
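In the parton model the connection between this asymmetry and the quark distributions is transparent. As a sketch, neglecting strange-quark contributions and using isospin symmetry to write the neutron structure functions in terms of the proton distributions u, d, Δu, Δd:

```latex
% Parton-model form of the neutron spin asymmetry (strange quarks
% neglected; u, d, \Delta u, \Delta d are proton distributions, applied
% to the neutron via isospin symmetry):
\[
  A_1^n(x) \simeq \frac{g_1^n(x)}{F_1^n(x)}
           = \frac{4\,\Delta d(x) + \Delta u(x)}{4\,d(x) + u(x)} .
\]
% SU(6) symmetry (\Delta u/u = 2/3, \Delta d/d = -1/3, u = 2d) makes the
% numerator vanish, giving A_1^n = 0, while hadron helicity conservation
% in perturbative QCD requires \Delta q/q -> 1, and hence A_1^n -> 1,
% as x -> 1.
```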
Quark-hadron duality
In the early 1970s, Elliott Bloom and Fred Gilman proposed that resonances created in electroproduction are a “substantial part of the observed scaling behaviour of inelastic electron-proton scattering”. They showed that by appropriately averaging data in the resonance region, the resonance data could be smoothly linked with deep inelastic data. The authors were careful to state that they had not predicted the scaling behaviour of deep inelastic scattering, but that they had pointed out a phenomenological correlation that hinted at a common origin. This phenomenon has become commonly known as quark-hadron duality.
Recent results from Jefferson Lab’s Hall C have shown that duality works quantifiably better and over a larger Q² range than previously thought (Niculescu et al. 2000, Ent et al. 2000). Using data in the nucleon-resonance region, it has been shown that it is possible to use the idea of duality to extract the proton’s magnetic form factor as well as the ratio of longitudinal to transverse deep inelastic electron-proton scattering cross-sections. Recent structure-function data in the resonance region from Hall C are shown in figure 2. These data show that even for the different structure functions, the resonance data oscillate about curves generated from fits to deep inelastic scattering.
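The quantitative content of “appropriately averaging” is simple: integrate the resonance-region structure function over an interval in x and compare with the same integral of the smooth deep inelastic fit, with a ratio near one signalling duality. A minimal sketch of such a test, with synthetic data standing in for the Hall C measurements:

```python
# Sketch of the quantitative duality test: integrate the resonance-region
# structure function over an x interval and compare with the same integral
# of a smooth deep-inelastic fit. All arrays here are synthetic placeholders.
import numpy as np

def duality_ratio(x, f2_resonance, f2_dis_fit):
    """Ratio of integrated strengths over the same x interval."""
    return np.trapz(f2_resonance, x) / np.trapz(f2_dis_fit, x)

x = np.linspace(0.3, 0.7, 200)
f2_dis = 0.3 * (1 - x) ** 3                      # smooth toy DIS fit
bumps = 0.08 * np.exp(-((x - 0.45) / 0.03) ** 2) # toy resonance peak
dips = -0.05 * np.exp(-((x - 0.55) / 0.04) ** 2) # toy dip between peaks
f2_res = f2_dis + bumps + dips                   # oscillates about the fit

print(f"duality ratio = {duality_ratio(x, f2_res, f2_dis):.3f}")  # ~1
```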
The successful application of duality to extract known quantities suggests that it should also be possible to use it to extract quantities that are otherwise kinematically inaccessible. For example, analysis is now under way on data from experiment E01-012 at Jefferson Lab, which is using duality to determine the A₁ⁿ asymmetry at higher Bjorken x values than are kinematically accessible through deep inelastic scattering. As a check of its validity, this extraction will partially overlap the deep inelastic scattering A₁ⁿ data.
Propagation of quarks
A recent experiment in Hall B at Jefferson Lab using CLAS has made measurements to improve our understanding of quark propagation through cold QCD matter. The data from this experiment, which are now being analysed, should shed light on two topics: quark hadronization and quark energy loss (as shown schematically in the artist’s impression above).
Owing to confinement, a quark struck with sufficient energy will produce multiple hadrons through a process known as hadronization. This process is uniquely described by QCD. While the evolution of quarks into hadrons cannot be seen directly, the attenuation of the leading particles emerging from deep inelastic scattering can be determined and used to gain insight into the hadronization process.
Before the quark hadronizes, it can be considered to be travelling through the colour field of the nucleus. QCD predicts that as the quark traverses the nucleus, it will lose energy by emitting gluons. This is similar to how in quantum electrodynamics a charged particle moving through matter loses energy by emitting photons. The energy loss of the quark in the QCD process can be studied by observing the properties of the leading particles in deep inelastic scattering from different nuclei. Preliminary analysis of these data has shown that a majority of the lightest particles (pions) with the largest relative energy are strongly attenuated as one goes from light nuclei to heavy nuclei. This observation is in sharp contrast to the naive view that the nucleus is a benign spectator. The quantitative results of this experiment will be forthcoming.
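A standard way to quantify such attenuation (used here purely as an illustration; the article does not give the CLAS analysis details) is the hadronic multiplicity ratio, which compares hadron yields per deep inelastic event on a heavy nucleus with those on a light target such as deuterium:

```python
# Sketch of the hadronic multiplicity ratio used in quark-propagation
# studies: R_A = (N_h / N_DIS)_A / (N_h / N_DIS)_D. A ratio below 1 for
# leading hadrons signals attenuation in the nucleus. All counts below
# are invented for illustration.

def multiplicity_ratio(n_h_heavy, n_dis_heavy, n_h_light, n_dis_light):
    """Hadron yield per DIS event on a heavy target, normalized to a
    light one."""
    return (n_h_heavy / n_dis_heavy) / (n_h_light / n_dis_light)

# Hypothetical leading-pion counts (large relative energy) per million
# DIS events on a heavy nucleus and on deuterium:
r_a = multiplicity_ratio(n_h_heavy=5200, n_dis_heavy=1_000_000,
                         n_h_light=8000, n_dis_light=1_000_000)
print(f"R_A ~ {r_a:.2f}")   # < 1: leading pions attenuated in the nucleus
```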
In summary, three experiments from Jefferson Lab, one from each experimental hall, illustrate a different aspect of the partonic nature of matter. These experiments, along with others being performed around the world, are refining our understanding of the fundamental building-blocks of matter. With the proposed upgrade of its accelerator to 12 GeV, the laboratory will be able to cover an even greater kinematic range as it continues to map out the transition from hadronic degrees of freedom to quark-gluon degrees of freedom, providing an increasingly accurate portrait of the nucleus.
I first heard about the J, or J/ψ as it would become, in Jacques Prentki’s office at CERN, immediately after lunch on an unforgettable afternoon in November 1974. At that time I was in the theory group at CERN, and responsible for organizing the phenomenology seminars. That day the speaker was Yair Zarmi from the Weizmann Institute, and his topic was inclusive hadron production in the quark-parton model, on which he had written a much-quoted paper with Michael Gronau and Finn Ravndal. Up to lunchtime that day it seemed extremely interesting.
Walking back to my office after lunch I noticed three people in Jacques Prentki’s office, and Jacques himself was looking agitated. I heard the words: “Hey, Frank, come look at this!” I cannot recall who the third person was, but the central character was a French experimentalist named Jean-Jacques Aubert. He was a member of Sam Ting’s group at the Brookhaven National Laboratory, and there in Prentki’s office, for the very first time, I saw the famous plot of their “J” particle. It was sharp, tall and narrow, and 3 GeV in mass, which in those days was heavy. It was astonishing.
Zarmi gave his seminar, but I don’t remember anything about it – all I can recall is that afterwards the questions weren’t about his talk; they were all about the rumours that a new particle had been found. Then hot news arrived: Aubert would give a special seminar later that afternoon in the Theory Discussion Room.
I arrived early, but it was almost too late – it was already standing room only. News spread fast around CERN, even in the days before Tim Berners-Lee invented the Web. Jack Steinberger opened the proceedings with this statement: “It is good to hear of something that we can really believe in.” He briefly explained the circumstances, then introduced “Professor Aubert”.
Jean-Jacques began by saying, somewhat apologetically, “I am not a professor”, to which Steinberger instantly retorted, “You will be!” Of course, Steinberger was right, and in 1982 Aubert became professor at the Université de la Méditerranée in Marseilles, where he later created the Centre for Particle Physics.
Somewhere in all of this we learned that SLAC had seen it too. And then we heard that Ting was en route from the US to give an official presentation at CERN in the auditorium.
The search for charm
Ting gave his talk and wrote an enormous “J” on the board at CERN. On that occasion, the J was also in honour of J-J Aubert. The auditorium was overflowing and there was prolonged applause.
Soon afterwards, details reached CERN of the discovery of the same particle by Burton Richter and colleagues working on the Mark I detector at SLAC, where they called it the “psi” (ψ). Then, a week later, Sacha Dolgov stopped me in the corridor: “SPEAR has found another one.” This was the ψ′. This too was narrow.
John Bell soon organized a special session where the data would be presented and three theorists would review the ideas. Mary Gaillard, who with Ben Lee and Jonathan Rosner had already drafted a seminal paper, “The search for charm”, naturally reviewed charm; Dolgov reviewed the possibility that this (these!) could be the long-sought Z boson (the discovery of the real one was still a decade in the future), and I was given the task of discussing colour. I was enthusiastic about this because, according to rumour, Richard Feynman had said that a new quantum number must be involved to make it narrow, and Jacques Prentki had said “It’s not charm.” What we were not up to speed with was the phenomenon of asymptotic freedom, which proved to be so vital for understanding the bizarre properties of these particles (and which has this year been recognized with the award of a Nobel prize).
The impact of the news we all heard on that November day 30 years ago was such that Yair Zarmi’s talk, and the whole area into which it fell and to which I had been devoting my effort, lay dormant in my notebook. We dropped everything and turned to the only problem in town. Two years later, after it had become undisputedly clear that the J/ψ was the first example of a new type of quark – charm – bound with its antiquark, Ting and Richter shared the Nobel prize for their discovery, a vital step in building the current Standard Model of particles and interactions.
Efforts to test Lorentz symmetry at ever-increasing sensitivities have opened up new perspectives in theoretical and experimental physics. While high-energy accelerators have traditionally been designed to probe fundamental particles at ever-smaller microscopic scales, it has been known for some time that data from many such experiments could contain information about weak Lorentz-violating background fields that exist in space on scales the size of the solar system and greater. Among the basic signals being sought are sidereal variations arising from the rotation of the Earth relative to the fixed background stars. Measurements of these effects at CERN and similar facilities can potentially also be confirmed in other experiments, such as those involving atomic clocks, masers, torsion pendulums, optical and microwave cavities, astronomical polarization data and Penning traps, to name some examples. The triennial Meeting on Lorentz and CPT Symmetry (CPT ’04), held at Indiana University on 4-7 August 2004, provided a forum for researchers in the field to compare results and new ideas.
Lorentz symmetry, the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space, has survived a century of tests since Albert Einstein introduced the special theory of relativity. CPT, the combination of charge reversal (C), parity inversion (P) and time reversal (T), is a closely related symmetry of nature that also appears to be exact. The general theory of Lorentz and CPT violation, known as the Standard Model Extension (SME), was developed by Alan Kostelecky and collaborators at Indiana University as part of the effort to unify the theories of quantum mechanics and gravity by investigating the full range of possible violations of Lorentz and CPT symmetries. The violations appear in the SME as minuscule coefficients, which are estimated to be observable in experiments that are sensitive at the Planck scale, where quantum mechanics and gravitation are merged. Numerous experiments are able to reach these sensitivity levels and have explored the coefficient space of the SME for almost 10 years.
The opening session of CPT ’04 featured theorist Yoichiro Nambu from the University of Chicago and experimentalist Ron Walsworth from Harvard. Nambu outlined several interesting and counterintuitive features of physics that could be observed in a Lorentz-violating world. In efforts to study such a world, many experiments employ, or plan to employ, rotating turntables and highly controlled laboratory environments. Walsworth spoke about the idea of a dedicated facility to provide these technical services. He also discussed an experiment using co-located helium and xenon masers to place the first bounds on 11 combinations of SME coefficients for the neutron.
Tests with neutrinos and photons
Carlos Peña-Garay of the Institute for Advanced Study in Princeton presented the theoretical arguments for massive neutrinos based on observations from solar neutrinos, atmospheric neutrinos and the KamLAND experiment in Japan. The SME offers a general framework for Lorentz and CPT violation in the neutrino sector, and in particular a general analysis of neutrino oscillations exists. The full range of effects involves dozens of coefficients, but a simple two-coefficient model with no neutrino masses has been devised by Kostelecky and Matt Mewes at Indiana University. Mark Messier of the Super-Kamiokande collaboration showed in his talk that this rudimentary model, dubbed the “bicycle” model, fits the atmospheric neutrino data from Super-Kamiokande just as well as the fit with models involving neutrino mass differences. The MINOS experiment, due to become fully operational in December, may be able to resolve sidereal effects as predicted by the SME. This experiment will take data in the Soudan mine in northern Minnesota from a neutrino beam originating at Fermilab near Chicago.
Data from the Liquid Scintillator Neutrino Detector (LSND) experiment at Los Alamos can be interpreted as showing that there is a 0.26% probability that a muon antineutrino will oscillate into an electron antineutrino over a distance of about 30 m. Rex Tayloe from the LSND collaboration presented a discussion of efforts that are under way to investigate whether the SME coefficients can successfully account for both the LSND data and the atmospheric and solar-neutrino data.
The SME shows that 19 independent components control Lorentz-violating effects on photons. The absence of any observed frequency dependence in the polarization of light from distant quasars places a bound of parts in 10³² on several of these coefficients. Birefringence observations cannot access the remaining ones, but modern Michelson-Morley and Kennedy-Thorndike experiments are doing so using cavity oscillators. Michael Tobar of the University of Western Australia presented the sharpest measurements to date on several combinations of the coefficients, obtained using microwave oscillators. He hopes to improve on them soon with the aid of a rotating platform. Achim Peters, from Humboldt University in Berlin, reported on another experiment using optical resonators created from a single sapphire crystal. He outlined various plans for improving the sensitivity, including better cryogenics and a turntable mount to aid in searching for sidereal variations.
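The generic analysis behind such searches is a harmonic fit: the fractional-frequency record of the cavity is searched for modulation at the sidereal frequency and its first harmonic, whose amplitudes then map onto combinations of SME coefficients (that mapping is experiment-specific and omitted here). A minimal sketch with synthetic data:

```python
# Sketch of a sidereal-variation search: least-squares fit of a frequency
# record for harmonics of the sidereal frequency. Data are synthetic; the
# translation of the fitted amplitudes into SME coefficients depends on
# the particular experiment and is not attempted here.
import numpy as np

OMEGA_SIDEREAL = 2 * np.pi / 86164.1          # rad/s (one sidereal day)

def fit_sidereal(t, y):
    """Fit y(t) = c0 + A1 cos(wt) + B1 sin(wt) + A2 cos(2wt) + B2 sin(2wt)."""
    w = OMEGA_SIDEREAL
    design = np.column_stack([np.ones_like(t),
                              np.cos(w * t), np.sin(w * t),
                              np.cos(2 * w * t), np.sin(2 * w * t)])
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coeffs  # [offset, A1, B1, A2, B2]

t = np.arange(0, 10 * 86400, 600.0)           # ten days, one point/10 min
rng = np.random.default_rng(0)
y = 1e-15 * np.sin(OMEGA_SIDEREAL * t) + 5e-16 * rng.standard_normal(t.size)
print(fit_sidereal(t, y))                     # recovers B1 ~ 1e-15
```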
Claus Lämmerzahl of Bremen discussed theoretical approaches to Lorentz violation in the context of electromagnetism. His approach modifies the field equations without requiring a Lagrangian, and the resulting effects include charge non-conservation. An analysis accounting for the changes in length of an optical cavity as it interacts with the SME background was presented by Holger Müller of Stanford University. The approach highlights the interconnectivity of the photon sector in the SME with the fermion sector involving atoms and molecules.
Lämmerzahl also described the OPTIS project, a mission of the European Space Agency, which plans to place optical resonators and masers on a dedicated satellite. The mission will offer a number of advantages over ground-based tests of Lorentz symmetry. The status of programmes that are funded through NASA for other space tests was outlined by Joel Nissen of Stanford. He pointed out that the Superconducting Microwave Oscillator (SUMO) mission could be adapted to fly independently of the International Space Station.
Tests in atomic physics
Various Lorentz tests have been done or are planned in atomic systems, and include three experiments at CERN that involve antiprotons. Ryugo Hayano of the ASACUSA collaboration reported on efforts to study the spectrum of antiprotonic helium. Another test involves comparison of the antihydrogen spectrum with that of conventional hydrogen. Alban Kellerbauer of the ATHENA collaboration reported on the status of the effort to create, cool and trap antihydrogen. Together with the ATRAP antihydrogen collaboration, ATHENA and ASACUSA utilize CERN’s Antiproton Decelerator facility to supply antiprotons. In terms of the SME, comparison of the hyperfine levels of the atomic spectra could test CPT symmetry for the proton. Whereas searches for sidereal variations using only one species of atom access complicated combinations of SME coefficients, the test involving hydrogen atoms and anti-atoms is clean and would be difficult to duplicate in other experiments.
Penning traps are other devices that allow comparisons of properties of particles and their corresponding antiparticles. Brian Odom reported on the progress of Gerald Gabrielse’s group at Harvard working on a new measurement of g-2 for the electron and positron. It is expected that the researchers will be able to improve significantly on existing resolutions.
Atomic clocks operating on transitions involving a change in the spin state have the potential to test Lorentz coefficients in the SME with great precision. Mike Romalis from Princeton discussed the status of his potassium-helium comagnetometer, involving a mixture of these two atomic species in a single bulb. The experiment, which has the potential to achieve record sensitivities, is currently taking data. Atomic clocks are also planned to fly on the International Space Station or other space platforms, providing increased speeds and rotation rates and hence improved sensitivities for tests of Lorentz symmetry. Kurt Gibble of Penn State reviewed the status of his two-arm rubidium clock, which may eventually be used to perform Lorentz tests in space.
Blayne Heckel and Eric Adelberger of the “Eöt-Wash” torsion pendulum group at the University of Washington in Seattle discussed progress in several experiments. One type of pendulum employs a bob that has a net electronic spin but no net magnetic field, suspended from a thin thread about which it oscillates torsionally. In their most recent version the group is trying to eliminate systematic effects possibly due to swing, bounce and wobble of the thread. Preliminary results show that they already have significant improvements in sensitivity to Lorentz-violation coefficients in the electron sector.
Violations in theory
The SME has enjoyed broad interest because it encompasses Lorentz-violation experiments from all areas of physics. It is a step in the quest to produce a single theory that unites the quantum world with the theory of gravitation. In the absence of a fundamental theory that achieves this goal, the SME provides a full set of possible Lorentz- and CPT-violating effects that could reasonably occur in the low-energy limit of such a theory.
At the fundamental level, theorists are currently considering various ideas, including string theory and several forms of quantum gravity. Daniel Sudarsky of the National University of Mexico (UNAM) gave a review of some approaches, particularly loop quantum gravity. Ralf Lehnert of Vanderbilt University discussed how Lorentz violation can be associated with space-time-varying couplings in a scenario with a cosmological scalar field. Lorentz tests in atomic experiments have a variety of generic features, and Robert Bluhm of Colby College gave a comprehensive overview of these and the related theory. Some of these features are also common in Lorentz tests in other experimental sectors.
The past 10 years have seen a first generation of Lorentz experiments probing SME coefficients in the theoretical context of a flat space-time. A second generation of tests has the potential to probe the SME with gravity. Kostelecky presented some of the myriad features of the SME in a curved space-time. Particles with spin are incorporated into the space-time structure using the “vierbein” formalism, rather like a mathematical four-legged spider placed in a smoothly varying manner at each point of the manifold. The coefficients for Lorentz violation acquire additional dependence on features such as the space-time metric and on associated properties like torsion and curvature.
The plethora of experiments aimed at identifying violations of Lorentz symmetry bears testimony to the continued relevance of Einstein’s achievements. The ever-increasing sensitivity to SME coefficients continues to strengthen this legacy. Remarkably, much of the SME coefficient space is still unexplored. Should Lorentz violation be found soon, it would be a fitting time to build on the foundations laid by Einstein a century ago.
Unification is the dream of high-energy physicists. Attaining this conceptual – and actual – convergence undoubtedly constitutes the fundamental challenge in the search for elementary objects and interactions. There have been many successes in this area. If, however, it sometimes looks as if the main pieces of the puzzle have yet to be put together, it is because the opportunities nature gives us to attain an all-encompassing and global view of the most disparate properties – described by the general theory of relativity and quantum mechanics – are extremely rare.
The early universe is often considered to be the only instance of a situation in which quantum and gravitational processes are equally important. However, black holes with small masses could also provide insight into these unusual conditions and would certainly constitute, if they exist, the only objects in today’s cosmos where such extreme physical processes can occur. Even if it remains speculative, the search for small black holes is certainly warranted, especially since many models predict their existence in fundamentally different frameworks and contexts.
From the astrophysics point of view, it is thought that only massive black holes – with masses several times that of the Sun – are possible, as they are the only ones able to form in the final stages of stellar evolution. Although they have many fascinating properties, these large black holes are not as rich as their smaller cousins could be. The lighter the black hole, the greater its surface gravity – and the more interesting the associated physical effects. This is simply because Newton’s gravitational force is linearly dependent on mass but quadratically dependent on the inverse distance, and the relevant distance – the Schwarzschild radius – is itself proportional to the mass, so the surface gravity scales as the inverse of the mass. In particular, the phenomenon of Hawking evaporation is significant only in the case of small black holes: the tidal effect becomes so great near the surface that the particle pairs produced by quantum vacuum fluctuations may be broken apart, one particle falling into the black hole and the other being projected outwards. This process has yet to be observed, precisely because astrophysical black holes are too massive and therefore too cold, but it is certainly one of the most important predictions of quantum field theory in curved space-time. Contrary to the usual ideas of general relativity, black holes are capable of emitting particles. They can even be very hot and very bright if their mass is sufficiently small. Indeed, the principles of thermodynamics apply to black holes, the essential variables being temperature, entropy and internal energy, corresponding respectively to the surface gravity, area and mass of general relativity.
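The statement that light black holes are hot can be made quantitative with the standard four-dimensional Hawking formula T = ħc³/(8πGMk_B), which the following sketch evaluates for a few masses:

```python
# The inverse mass-temperature relation behind "small black holes are
# hot": Hawking temperature T = hbar c^3 / (8 pi G M k_B).
import math

HBAR, C, G, K_B = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23  # SI units
M_SUN = 1.989e30  # kg

def hawking_temperature(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

for m in (10 * M_SUN, M_SUN, 1e12, 1.0):   # stellar, solar, asteroid, 1 kg
    print(f"M = {m:.3e} kg -> T = {hawking_temperature(m):.3e} K")
```

A solar-mass black hole comes out at about 6 × 10⁻⁸ K, far colder than the microwave background, whereas sufficiently small black holes are hotter than any laboratory plasma.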
Where can we find mini black holes?
The absence of notable excesses of particles in cosmic radiation – especially in the form of antiprotons or gamma rays – compared with the fluxes expected in a “standard” astrophysics context allows strict constraints to be placed on the density of black holes evaporating in today’s universe. In particular, it can be deduced that their contribution to the total mass of the universe is today no higher than one ten millionth. As these small black holes are likely to have been produced in the early cosmos thanks to the fluctuations in density present at that time – and with masses that were arbitrarily low – it is possible to obtain vital information about the universe’s degree of inhomogeneity shortly after the period of inflation. This route of investigation is all the more remarkable in that the relevant scales for the black holes of the early universe are completely beyond the usual observables of cosmology, namely the 3K background radiation and large-scale structure. There is therefore a genuine complementarity between these approaches. Many cosmological scenarios – involving phase transitions, the breaking of scale invariance, blue power spectra, positive running of the spectral index of scale fluctuations, phases of double inflation, topological defects, collisions of bubbles of “real” vacuum in a background of “false” vacuum and softening of the equation of state – may be excluded or severely constrained by the study of small black holes.
In addition to these astrophysical and cosmological aspects, there is another route of investigation that is particularly promising for microscopic black holes, namely at particle accelerators. In response to the persistent problem of hierarchy – why is the Planck scale 16 orders of magnitude higher than the electroweak scale? – a hypothesis put forward a few years ago offers a neat and efficient lead: the existence of large extra dimensions. The novelty of this idea lies in the fact that it is no longer necessary to assume that these dimensions are of sizes close to the Planck length (~10⁻³³ cm). Rather, they can be as large as around a millimetre if we suppose that the fields of matter live in the 3+1-dimensional hypersurface of our 3-brane and that only gravity can benefit from the new dimensions. The constraints (~10⁻¹⁶ cm) usually derived via the interactions of gauge bosons in extra dimensions can therefore be ignored, and only experiments involving the direct measurement of Newtonian gravity put limits on the size of extra dimensions, to a value of less than a few tenths of a millimetre. Using such an approach, the traditional Planck energy, E_Pl ~ 10¹⁹ GeV, is no more than an effective scale, and the real D-dimensional fundamental Planck scale is given by E_D = (E_Pl²/V_{D-4})^{1/(D-2)}, where V_{D-4} is the volume associated with the D-4 extra dimensions. For D = 10 and radii associated with the extra dimensions of the Fermi scale, we obtain E_D ~ TeV. If this model has any meaning, it is effectively a natural choice (and not an arbitrary one based on phenomenological motivations) because it essentially resolves the problem of hierarchy. This approach uses the geometrical properties of space to link completely different energy scales.
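A quick numerical check of this scale relation is straightforward. The sketch below evaluates E_D = (E_Pl²/V_{D-4})^{1/(D-2)} in natural units for D = 10 and a common radius of about a fermi for the extra dimensions; it lands at the 10 TeV scale, consistent with the order-of-magnitude statement E_D ~ TeV.

```python
# Sketch of the scale relation quoted above: the effective 4-d Planck
# scale E_Pl arises from a fundamental D-dimensional scale E_D diluted by
# the volume V of the D-4 extra dimensions,
#   E_D = (E_Pl^2 / V_{D-4})^(1/(D-2))   (natural units, GeV).
E_PL = 1.2e19            # 4-d Planck scale, GeV
HBARC = 0.1973           # GeV * fm, so 1 fm = (1/0.1973) GeV^-1

def fundamental_scale(d_total, radius_fm):
    """E_D in GeV for (d_total - 4) extra dimensions of common radius."""
    n = d_total - 4
    volume = (radius_fm / HBARC) ** n        # GeV^-n
    return (E_PL**2 / volume) ** (1.0 / (d_total - 2))

# D = 10 with Fermi-scale (~1 fm) radii gives E_D of order 10 TeV:
print(f"E_10 ~ {fundamental_scale(10, 1.0) / 1e3:.1f} TeV")
```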
A spectacular consequence of such a model is the possibility of producing black holes with the next generation of particle colliders. If the centre-of-mass energy of two elementary particles is indeed higher than the Planck scale E_D, and their impact parameter b is lower than the Schwarzschild radius R_H, a black hole must be produced. If the Planck scale is thus in the TeV range, the 14 TeV centre-of-mass energy of the Large Hadron Collider (LHC) could allow it to become a black-hole factory with a production rate as high as about one per second. Many studies are under way to make a precise evaluation of the cross-section for the creation of black holes via parton collisions, but it appears that the naive geometric approximation σ ~ πR_H² is quite reasonable for setting the orders of magnitude.
The possible presence of extra dimensions would be doubly beneficial for the production of black holes. The key point is that it allows the Planck scale to be reduced to accessible values, but it also allows the Schwarzschild radius to be significantly increased, thus making the condition b < R_H distinctly easier to satisfy. It is important to note that the resulting mini black holes have radii that are much smaller (of the order of 10⁻⁴ fm in the case of those that can be expected from the LHC) than the size of extra dimensions, and that they can therefore be considered as totally immersed in a D-dimensional space, which has, to a good approximation, a time dimension and D-1 non-compact space dimensions. The black hole thus acts like a quasi-selective source of S waves and sees our brane in the same way as the “bulk” associated with the extra dimensions. As the particles residing in the brane greatly outnumber those living in the bulk (essentially gravitons), the black hole evaporates into particles of the Standard Model. Its lifetime is very short (of the order of 10⁻²⁶ s) and its temperature (typically about 100 GeV here) is much lower than it would be with the same mass in a four-dimensional space. The black hole nevertheless retains its characteristic spectrum in the form of a quasi-thermal law peaked around its temperature. From the point of view of detection, it is not too difficult to find a signature for such events: they have a high multiplicity, a large transverse energy, a “democratic” coupling to all particles and a rapid increase in the production cross-section with energy.
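To put rough numbers on these claims, the sketch below uses one common convention for the higher-dimensional horizon radius (that of Dimopoulos and Landsberg; other normalizations of the fundamental scale shift the values by factors of order one) together with the D-dimensional Hawking temperature T_H = (D-3)/(4πR_H):

```python
# Rough numbers for LHC-scale black holes in D = 4 + n dimensions, using
# the Dimopoulos-Landsberg convention for the horizon radius; other
# Planck-scale normalizations shift these values by O(1) factors.
import math

HBARC_TEV_FM = 1.973e-4   # conversion: 1 TeV^-1 = 1.973e-4 fm

def horizon_radius(m_bh, m_d, n):
    """r_H in TeV^-1 for black-hole mass m_bh and Planck scale m_d (TeV)."""
    k = 8 * math.gamma((n + 3) / 2) / (n + 2)
    return (k * m_bh / m_d) ** (1 / (n + 1)) / (math.sqrt(math.pi) * m_d)

def hawking_temperature(r_h, n):
    """T_H = (n + 1) / (4 pi r_H), in TeV."""
    return (n + 1) / (4 * math.pi * r_h)

for n in (2, 6):   # number of extra dimensions
    r = horizon_radius(m_bh=5.0, m_d=1.0, n=n)   # a 5 TeV black hole
    print(f"n = {n}: r_H ~ {r * HBARC_TEV_FM:.1e} fm, "
          f"sigma ~ {math.pi * r**2:.1f} TeV^-2, "
          f"T_H ~ {hawking_temperature(r, n) * 1e3:.0f} GeV")
```

The radius indeed comes out around 10⁻⁴ fm, and the temperature in the range of roughly 100 GeV to several hundred GeV, depending on the number of extra dimensions.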
At first glance the production of black holes in colliders could be bad news. It could mean the end of particle physics since the presence of a horizon would obscure all the microphysics processes that could occur behind it. However, it would in fact open up very good opportunities.
First of all, the reconstruction of temperature (determined by the energy spectrum of the particles emitted when the black hole evaporates) as a function of mass (determined by the total energy deposited) allows information to be gained about the dimensionality of space-time. In the case of Planck scales close to the TeV mark, the number of extra dimensions could thus be revealed quite easily by the characteristics of the emitted particles. However, one can go further. In particular, quantum-gravity effects could be revealed, as behaviour during evaporation in the Planck region is sensitive to the details of the gravitational theory used.
Approaches of the Gauss-Bonnet type, which include terms quadratic in the scalar curvature in the Lagrangian, are good candidates for a description beyond general relativity, as they can be supported both by theoretical arguments (heterotic strings in particular) and by phenomenological arguments (a Taylor expansion in the curvature). In such a case the coupling constant of the Gauss-Bonnet term – and with it the quantum character of the gravitational theory used, and the link with the underlying string theory – could also be reconstructed, and the LHC would become a very valuable tool for studying speculative models of gravitation.
Other promising avenues are also being investigated for new physics. Firstly, the black holes formed may be excellent intermediate states for highlighting new particles. When the collision energy is higher than the Planck scale ED, the cross-section for the creation of black holes is quite large (~500 pbarn) and has no suppression factor. Moreover, when the temperature of the black hole is higher than the mass of a particle, the particle must be emitted during evaporation in proportion to its number of internal degrees of freedom. There is thus a definite potential for the search for the Higgs or for supersymmetric particles in the evaporation products of black holes, possibly with cross-sections much greater than for the direct processes. Finally, taking account of a D-dimensional cosmological constant also modifies the evaporation law. If the constant is sufficiently high – which is possible without contradicting the low value measured in our brane – the temperature and the coupling coefficients with the entities emitted could be the signature of this particular structure of space-time. It would be quite neat and certainly surprising that a measurement of the cosmological constant in the bulk should come from the LHC!
Microscopic black holes are thus a paradigm for convergence. At the intersection of astrophysics and particle physics, cosmology and field theory, quantum mechanics and general relativity, they open up new fields of investigation and could constitute an invaluable pathway towards the joint study of gravitation and high-energy physics. Their possible absence already provides much information about the early universe; their detection would constitute a major advance. The potential existence of extra dimensions opens up new avenues for the production of black holes in colliders, which would become, de facto, even more fascinating tools for penetrating the mysteries of the fundamental structure of nature.
Postscript
It should be stated, in conclusion, that these black holes are not dangerous and do not threaten to swallow up our already much-abused planet. The theoretical arguments and the obvious harmlessness of any black holes that, according to these models, would have to be formed from the interaction of cosmic rays with celestial bodies, mean that we can regard them with perfect equanimity.
Twenty-five years ago experiments at DESY provided the first evidence for a very special kind of particle collision: an electron-positron annihilation process with three “coplanar jets”, i.e. three collimated bundles of particles heading away in the same plane from the electron-positron collision point, rather like the prongs of the Mercedes symbol. Using the PETRA storage ring, which had been completed only the previous year, the teams working there had found the first direct experimental proof of the existence of the gluon – the particle that transmits the strong force. Analysis of the three-jet events showed that two of the three particle jets were produced by a quark-antiquark pair; the third was generated by a gluon.
This summer DESY commemorated the discovery with a special colloquium, “Gluons and Quantum Chromodynamics”, which was held in the laboratory’s main auditorium on 7 June. In his opening address to the 400 guests, Hermann Schunck, the representative of the German Federal Ministry of Education and Research, emphasized the significance of the finding. “The discovery of the gluon at PETRA here at DESY marks a truly epochal point in the history of physics – mirroring the pivotal role of the gluon in the realm of physics comparable to the other exchange particles, the photon and the W and Z bosons,” he said. He then left the floor to the three scientific speakers of the colloquium – Harald Fritzsch from Munich, Gerard ‘t Hooft from Utrecht and Albrecht Wagner, the director-general of DESY. In turn, they recalled the emergence of the ideas of quarks, gluons and quantum chromodynamics (QCD), described the structure of QCD theory and reviewed the ground-breaking experiments that led from the discovery of the gluon to the establishment of QCD as the accepted theory of the strong interaction.
The emergence of QCD
As Fritzsch emphasized, QCD is a most exceptional theory in that it generates an enormous complexity out of a very simple Lagrangian – describing, for instance, all atomic nuclei with essentially one parameter. It is well established today and has moved on from the era of testing into the realm of precision physics. The route, however, has been long and often paved with misunderstandings and false starts. After the success of quantum electrodynamics in describing the electromagnetic interaction in the 1940s, the general mood turned to one of confusion.
Particle after particle appeared in experiments and the particle-physics garden, which had seemed so tidy in the 1940s, grew into a jungle during the 1960s. A major breakthrough came in 1964, when Murray Gell-Mann and George Zweig proposed that all hadrons are in fact composed of even more elementary constituents, which Gell-Mann called quarks. At the end of the 1960s deep-inelastic scattering experiments at the Stanford Linear Accelerator Center (SLAC) showed for the first time that these quarks were not just hypothetical mathematical entities, but indeed the true building blocks of hadrons.
There was a theoretical objection to the quark model, however; it appeared to violate the Pauli exclusion principle, which states that no two particles with half-integer spin can occupy the same state. Thus the Δ⁺⁺ particle, which was supposed to consist of three identical quarks in the same state, seemed to be inconsistent with the Pauli principle. A solution to the problem was proposed in 1964 by Oscar Greenberg, and later elaborated in its final form by Fritzsch, Gell-Mann and Heinrich Leutwyler. They suggested that quarks actually come in three “colours”, and that the only stable states formed by them are colourless combinations – a hypothesis that sounded at first like sleight of hand, but which in fact proved to be a most fruitful idea, providing the basis for what later became known as QCD, the mathematical description of the strong interaction.
The quark model, and with it the QCD gauge theory of the strong interaction, gained further impetus in the 1970s, as ‘t Hooft described at the colloquium. In seminal work that revolutionized the theoretical background of particle physics and earned him the 1999 Nobel Prize in Physics together with Martinus Veltman, ‘t Hooft showed in 1971 that such “non-abelian” gauge theories are renormalizable. He thus eliminated a fundamental problem that had hampered the development of a mathematical description of the strong interaction for years, and paved the way for the development of the complete gauge theory of QCD by Fritzsch and Gell-Mann in 1972. Two years later the discovery of the J/ψ meson marked what became known as the November revolution – the discovery of a fourth type of quark, and as such an eminent confirmation of the quark model and QCD.
The discovery of the gluon
In the late 1960s and early 1970s experiments had thus provided evidence for the reality of the quarks. The gauge theories describing the various interactions, however, predicted the existence of mediator bosons that transmit the forces between the particles. Of these, only the photon was definitely known at that time. A first hint of gluons, the mediators of the strong force, came from deep-inelastic scattering experiments, which had shown that only half of a proton’s momentum is carried by the quarks. The missing momentum fraction was interpreted as being carried by electrically neutral constituents, presumably the gluons. But how could the actual existence of these gluons be demonstrated experimentally?
As Wagner – himself an experimenter on the JADE experiment at PETRA at the time – recollected in his talk, by the end of the 1970s it was widely accepted that the annihilation of an electron and a positron and the subsequent formation of a quark-antiquark pair proceeded primarily via the exchange of a photon. The generated quark-antiquark pair would then fragment into hadrons, which would appear in the detector as two back-to-back hadron jets with limited transverse momentum and increasing momenta along the jet axis. Such events with two hadron jets had been discovered at the SPEAR storage ring at SLAC in 1975, and were later analysed in detail at DESY’s 5 GeV storage ring, DORIS. Interestingly, just before PETRA appeared on the scene, the PLUTO experiment at DORIS, running on the ϒ(1S) b-bbar resonance, showed event topologies that were distinctly different from those generated in the nearby continuum, suggestive of the conjectured three-gluon decay of the 1S b-bbar state.
The idea of searching for gluon jets had actually been proposed by John Ellis, Mary Gaillard and Graham Ross in a seminal paper that appeared in 1976. Under the apparently imperative title “Search for Gluons in e⁺e⁻ Annihilation”, the authors suggested the existence of “hard-gluon bremsstrahlung”, which should give rise to events with three jets in the final state. According to the laws of field theory, the outgoing quarks can radiate field quanta of the strong interaction, i.e. gluons, which should in turn fragment into hadrons and thus create a third hadron jet forming a plane with the other two (see figure 1). At the particle energies of up to 15 GeV per beam delivered by DESY’s newly built PETRA electron-positron storage ring, the probability for such hard-gluon bremsstrahlung processes to occur might amount to a few percent.
The PETRA machine had been completed by the summer of 1978 after only two years of construction. While still being commissioned, it had delivered the first e⁺e⁻ → hadrons events to the detectors at the end of 1978. Six months later, at the International Neutrino Conference in Bergen, Norway, on 18 June 1979, Bjørn Wiik reported on an event that had been observed in the TASSO detector at PETRA only a few days earlier. It had been analysed in detail by his colleague Sau Lan Wu and her co-worker Georg Zobernig. While the TASSO team was observing abundant events with the expected two-particle jets created by the outgoing quark-antiquark pair, Wu and Zobernig – who had designed a fast algorithm for the analysis of more complicated event topologies, in particular multi-jet structures – had uncovered something new: an event that clearly involved three jets whose momenta lay in a plane. When Paul Söding, who belonged to the same team, travelled to Geneva two weeks later for the European Physical Society (EPS) conference, he was already able to present a few of these three-jet events (see figure 2). Moreover, a whole variety of plots displaying various analyses gave convincing evidence that bremsstrahlung of the gluon, which had been postulated as the boson mediating the strong force, had indeed been discovered.
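For a modern reader, this kind of multi-jet analysis is easiest to picture as sequential clustering. The sketch below uses the invariant-mass-like distance y_ij = 2E_iE_j(1 - cos θ_ij)/E_vis², later formalized by the JADE Collaboration; it is an illustration in that spirit, not Wu and Zobernig’s actual algorithm.

```python
# Sequential jet clustering in the spirit of the PETRA analyses (using
# the distance measure later formalized by the JADE Collaboration).
# Particles are merged until all pair distances exceed y_cut; the number
# of surviving clusters is the jet count.
import itertools
import numpy as np

def jade_jets(momenta, e_vis, y_cut=0.04):
    """Cluster four-momenta (E, px, py, pz) with
    y_ij = 2 E_i E_j (1 - cos theta_ij) / E_vis^2."""
    jets = [np.asarray(p, dtype=float) for p in momenta]
    while len(jets) > 1:
        def y(i, j):
            pi, pj = jets[i], jets[j]
            cos_t = np.dot(pi[1:], pj[1:]) / (
                np.linalg.norm(pi[1:]) * np.linalg.norm(pj[1:]))
            return 2 * pi[0] * pj[0] * (1 - cos_t) / e_vis**2
        i, j = min(itertools.combinations(range(len(jets)), 2),
                   key=lambda ij: y(*ij))
        if y(i, j) > y_cut:
            break                      # all remaining pairs well separated
        jets[i] = jets[i] + jets[j]    # recombine four-momenta
        del jets[j]
    return jets

# Toy event: six particles in three roughly coplanar bundles.
event = [(5, 4.9, 0.5, 0), (4, 3.9, -0.5, 0),
         (6, -3.0, 5.0, 0), (3, -1.4, 2.6, 0),
         (7, -2.5, -6.5, 0), (2, -0.8, -1.8, 0)]
print(f"{len(jade_jets(event, e_vis=27.0))} jets found")  # -> 3 jets found
```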
Shortly afterwards similar three-jet event topologies were announced by JADE, MARK J and PLUTO, the other groups working at PETRA. All four collaborations presented their data at the Lepton-Photon Symposium at Fermilab in Chicago in August 1979. The corresponding publications by TASSO, MARK J and PLUTO followed in autumn 1979, while the JADE Collaboration published an extended analysis, which included a first determination of the strong coupling constant α_s(q²), in spring 1980 (MARK J Collaboration, PLUTO Collaboration and TASSO Collaboration 1979; JADE Collaboration 1980). Sixteen years later, in July 1995, the discovery of the gluon was honoured by the EPS, which awarded its Prize for High Energy and Particle Physics to four physicists representing the TASSO Collaboration: Paul Söding, Bjørn Wiik, Günter Wolf and Sau Lan Wu. A special complementary prize was also awarded to the four collaborations in recognition of their combined work, since, as the EPS statement reads, the “definite existence (of the gluon) emerged gradually from the results of the TASSO Collaboration and the other experiments working at PETRA, JADE, MARK J and PLUTO”.
As Wagner emphasized, the results obtained by the four experiments and the speed with which they were achieved would have been impossible without the initiative of DESY’s director-general at the time, Herwig Schopper, and the outstanding work of the accelerator physicists and engineers under the leadership of accelerator division director Gustav-Adolf Voss, who managed to complete the accelerator on budget and in record time, six months ahead of schedule.
The discovery of the gluon marked the beginning of intensive tests of QCD at PETRA. These included the determination of the spin of the gluon, which proved to be a vector particle; the so-called string effect, i.e. the hadronization of quarks and gluons via the formation of colour strings; various tests of second-order QCD calculations (see figure 3); and precise determinations of the running strong coupling constant α_s. At the end of the 1980s the baton passed to CERN’s Large Electron-Positron (LEP) collider. Although it was primarily built to perform precision tests of the electroweak force through the production and decays of Z and W bosons, LEP became an excellent testing ground for QCD thanks to its very large event samples (CERN Courier May 2004). For example, whereas the experiments at PETRA were unable to distinguish between quark-quark-gluon and gluon-gluon-gluon vertices, these differences were measured at LEP’s OPAL experiment using a sample of 4 × 10⁶ Z⁰ decays.
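The quantity at the end of that chain of measurements is the running coupling itself. As a sketch, the leading-order form α_s(Q²) = 12π/[(33 - 2n_f) ln(Q²/Λ²)] already reproduces the measured trend from PETRA to LEP energies; the Λ used below is an illustrative one-loop value, not a fitted world average.

```python
# One-loop running of the strong coupling constant; Lambda is an
# illustrative one-loop QCD scale, not a fitted world-average value.
import math

def alpha_s(q_gev, n_f=5, lam_gev=0.09):
    """alpha_s(Q^2) = 12 pi / ((33 - 2 n_f) ln(Q^2 / Lambda^2))."""
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(q_gev**2 / lam_gev**2))

print(f"alpha_s at PETRA (35 GeV): {alpha_s(35):.3f}")  # ~0.14
print(f"alpha_s at LEP   (91 GeV): {alpha_s(91):.3f}")  # ~0.12
```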
Today, QCD is put through its most stringent tests at the Tevatron proton-antiproton collider at Fermilab and at DESY’s electron-proton collider, HERA. The advances made have been remarkable, ranging from an extremely precise determination of the proton structure function to the study of the origin of nucleon spin, the exploration of the non-perturbative nature of the strong interaction and the problem of quark confinement. Over the past 25 years QCD has thus emerged as the uniquely successful theory of the strong interaction, and as such it is a full part of the Standard Model of particle physics. As Fritzsch concluded in his talk at the colloquium, “The phenomena of the strong interaction are now ‘in principle’ understood.” With the advent of new theoretical approaches such as QCD lattice calculations and new experiments at future accelerators, the “still impressive list of unsolved QCD problems” is set to shrink fast.
• This article is based on one that is to appear in Europhysics News and is published with permission.
A new approach to simulating quantum geometry suggests that, starting from a random froth, one might expect a world with three dimensions of space and one of time to appear naturally at large scales. J Ambjørn of the Niels Bohr Institute in Copenhagen, J Jurkiewicz of the Jagiellonian University in Krakow, and R Loll of Utrecht University have added one crucial ingredient to the randomness – causality, in effect a built-in speed-of-light limit – and this turns out to be enough to yield a world much like the one we live in. The authors comment that to their knowledge this is the “first example of a theory of quantum gravity that generates a quantum space-time with such properties dynamically.”
ICHEP’04, the 32nd International Conference on High Energy Physics, was successfully held in Beijing from 16 to 22 August, hosted by the Institute of High Energy Physics (IHEP) and the Chinese Academy of Sciences (CAS). As many as 737 physicists attended the meeting, which was opened by Chen Hesheng, director of IHEP, and Bai Chunli, vice-president of CAS.
The programme, as usual for ICHEP conferences, featured plenary talks to review the developments in major topics of interest to the global high-energy physics community, and parallel sessions consisting of talks about recent research results and future plans. There were 25 review talks in the plenary sessions and 296 talks in the 13 parallel sessions. The talks covered a wide range of topics, including electroweak physics, quantum chromodynamics, heavy quark and charm physics, top physics, neutrino physics, particle astrophysics and cosmology, hadron spectroscopy, charge-parity violation, quark matter, the search for new particles, and future accelerators and detectors.
The results from experiments at the Tevatron, the B-factories, the Beijing Electron-Positron Collider (BEPC) and the Relativistic Heavy Ion Collider at Brookhaven, as well as from accelerator-based and non-accelerator-based neutrino experiments, also attracted great interest. In particular, experimental results on the possible pentaquark states and their theoretical interpretation were discussed extensively; the majority opinion seems to be that more experimental and theoretical studies are needed before any conclusions can be drawn. Progress in string theory, extra dimensions, black holes and lattice gauge calculations was also discussed.
The conference was also the occasion for the announcement, on 20 August by Jonathan Dorfan, chairman of the International Committee for Future Accelerators, that the committee had approved the recommendation by the International Technology Recommendation Panel that “cold” technology should be adopted for the future International Linear Collider.