
Fifty years of antiprotons

On 1 November 1955, the Physical Review published the paper “Observation of antiprotons” by Owen Chamberlain, Emilio Segrè, Clyde Wiegand and Tom Ypsilantis, working at what was then known as the Radiation Laboratory of the University of California at Berkeley. This paper, which announced the discovery of the antiproton (for which Chamberlain and Segrè would share the 1959 Nobel Prize for Physics), had been received only eight days earlier. However, the story of the discovery of the antiproton really begins in 1928, when the eccentric and brilliant British physicist Paul Dirac formulated a theory to describe the behaviour of relativistic electrons in electric and magnetic fields.


Dirac’s equation was unique for its time because it took into consideration both Albert Einstein’s special theory of relativity and the effects of quantum physics proposed by Erwin Schrödinger and Werner Heisenberg. While it worked well on paper, Dirac’s rather straightforward equation carried with it a most provocative implication: it permitted negative as well as positive values for the energy E. Initially few physicists seriously considered Dirac’s idea because no-one had ever observed particles of negative energy. From the standpoint of both physics and common sense, the energy of a particle could only be positive.


Attitudes towards Dirac’s equation changed dramatically in 1932, when Carl David Anderson reported the observation of a positively charged electron in a project at the California Institute of Technology that originated with his mentor, Robert Millikan. Anderson named the new particle the “positron”. Both Dirac and Anderson would win Nobel Prizes for Physics for their discoveries. Dirac shared the 1933 Nobel prize with Schrödinger, and Anderson shared the 1936 Nobel prize with Victor Hess. However, the existence of the positron, the antimatter counterpart of the electron, raised the question of an antimatter counterpart to the proton.


As Dirac’s theory continued to explain successfully phenomena associated with electrons and positrons, it followed – from the revised standpoints of both physics and common sense – that it should also successfully explain protons. This would then demand the existence of an antimatter counterpart. The search for the antiproton was under way, but it would get off to a very slow start, as it would be another two decades before a machine capable of producing such a particle became available.


Enter the Bevatron

Anderson discovered the positron with a cloud chamber during investigations of cosmic rays, but it was extremely difficult, if not impossible, to use the same approach for finding the antiproton. If physicists were going to find the antiproton, they were first going to have to make one.

However, even with the invention of the cyclotron in 1931 by Ernest Lawrence, earthbound accelerators were not up to the task. Physicists knew that creating an antiproton would require the simultaneous creation of a proton or a neutron. Since the energy required to produce a particle is proportional to its mass, creating a proton-antiproton pair would require twice the proton rest energy, or about 2 billion eV. Given the fixed-target collision technology of the times, the best approach for making 2 billion eV available would be to strike a stationary target of neutrons with a beam of protons accelerated to an energy of about 6 billion eV.
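The 6-billion-eV figure follows from standard fixed-target kinematics; sketching the threshold for p + p → p + p + p + p̄ (with m_p c² ≈ 0.94 GeV):

```latex
\begin{aligned}
s &= (E_{\mathrm{beam}} + m_p)^2 - |\vec{p}_{\mathrm{beam}}|^2
   = 2m_p^2 + 2m_p E_{\mathrm{beam}},\\
s_{\min} &= (4m_p)^2 \quad\text{(four baryons at rest in the centre of mass)},\\
\Rightarrow\quad E_{\mathrm{beam}} &= 7m_p \approx 6.6\ \mathrm{GeV},\qquad
T_{\mathrm{beam}} = E_{\mathrm{beam}} - m_p = 6m_p \approx 5.6\ \mathrm{GeV}.
\end{aligned}
```

Fermi motion of the nucleons bound inside the target nucleus lowers the effective threshold somewhat, which is why a beam of about 6 GeV was enough.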

In 1954, Lawrence completed the Bevatron, an accelerator built at his Radiation Laboratory in Berkeley to reach energies of several billion electron-volts – then designated as BeV (now universally known as GeV). (Upon Lawrence’s death in 1958, the laboratory was renamed in his honour; it is known today as the Lawrence Berkeley National Laboratory.) This weak-focusing proton synchrotron was designed to accelerate protons up to 6.5 GeV. Though never its officially stated purpose, the Bevatron was built to go after the antiproton. As Chamberlain noted in his Nobel laureate lecture, Lawrence and his close colleague Edwin McMillan, who co-discovered the principle behind synchronized acceleration and coined the term “synchrotron”, were well aware of the roughly 6 GeV needed to produce antiprotons and made certain the Bevatron would be able to get there.

Armed with a machine that had the energetic muscle to make antiprotons, Lawrence and McMillan put together two teams to go after the elusive particle. One team was led by Edward Lofgren, who managed operations of the Bevatron. The other was led by Segrè and Chamberlain. Segrè had been the first student to earn his physics degree at the University of Rome under Enrico Fermi. He had, with the aid of one of Lawrence’s cyclotrons, discovered technetium, the first artificially produced chemical element. He was also one of the scientists who determined that a plutonium-based bomb was feasible, and his experiments on the scattering of neutrons and protons and proton polarization broke new ground in understanding nuclear forces. Chamberlain had also studied under Fermi, and under Segrè as well. He was Segrè’s assistant on the Manhattan Project at Los Alamos while still a graduate student, and later joined Segrè at Berkeley to collaborate on the nuclear-forces studies.

Making an antiproton was only half the task; no less formidable a challenge was to devise a means of identifying the beast once it had been spawned. For every antiproton created, some 40,000 other particles would emerge. The time to cull the antiproton from the surrounding herd would be brief: within about 10⁻⁷ s of appearing, an antiproton comes into contact with a proton and both particles are annihilated.

According to Chamberlain, again from his Nobel lecture, it was understood from the start that at least two independent quantities would have to be measured for the same particle to identify it as an antiproton. After considering several possibilities, it was decided that they should be momentum and velocity.

Measuring momentum

To measure momentum, the research team used a system of magnetic quadrupole lenses, which was suggested to them by Oreste Piccioni, an expert on quadrupole magnets and beam extraction, who was then at Brookhaven National Laboratory. The idea was to set up the system so that only particles of a certain momentum interval could pass through. As the Bevatron’s proton beam struck a target in the form of a copper block, fragments of nuclear collisions would emerge in all directions. While most of these fragments were lost, some would pass through the system. For specifically defined values of momentum, the negative particles among the captured fragments would be deflected by the magnetic lenses into and through collimator apertures.

To measure velocity, which was used to separate antiprotons from negative pions, the researchers deployed a combination of scintillation counters and a pair of Cherenkov detectors. The scintillation counters were used to time the flight of particles between two sheets of scintillator 12 m apart. At the momentum selected by Segrè, Chamberlain and their collaborators, relativistic pions traversed this distance 11 ns faster than the 51 ns it took the more ponderous antiprotons. Signals from the two scintillators were set up to coincide only if they came from an antiproton. However, because it is possible for two pions to arrive with exactly the right spacing to imitate the signal from an antiproton, the researchers also used the Cherenkov detectors.
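The time-of-flight separation can be reproduced in a few lines. Assuming a selected momentum of about 1.19 GeV/c (the value usually quoted for this experiment; treat it here as an illustrative input), the 51 ns and 11 ns figures drop out:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def time_of_flight(mass_gev, p_gev, length_m):
    """Time (ns) for a particle of given mass and momentum to cover length_m."""
    energy = math.hypot(mass_gev, p_gev)   # E = sqrt(m^2 + p^2) in natural units
    beta = p_gev / energy                  # v/c
    return length_m / (beta * C) * 1e9     # seconds -> nanoseconds

P, L = 1.19, 12.0                          # assumed beam momentum (GeV/c), flight path (m)
t_pbar = time_of_flight(0.938, P, L)       # antiproton
t_pion = time_of_flight(0.140, P, L)       # negative pion

print(f"antiproton: {t_pbar:.0f} ns, pion: {t_pion:.0f} ns, gap: {t_pbar - t_pion:.0f} ns")
```

At this momentum the antiproton is still distinctly non-relativistic in β, which is what makes a 12 m flight path and nanosecond timing sufficient to separate the two species.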

One Cherenkov detector was somewhat conventional in that it used a liquid fluorocarbon medium. It was dubbed the “guard counter” because it could measure the velocity of particles moving faster than an antiproton. The second detector, which was designed by Chamberlain and Wiegand, used a quartz medium, and only particles moving at the speed predicted for antiprotons set it off.

In conjunction with the momentum and velocity experiments, Berkeley physicist Gerson Goldhaber and Edoardo Amaldi from Rome led a related experiment using photographic-emulsion stacks. If a suspect particle was truly an antiproton, the Berkeley researchers expected to see the signature star image of an annihilation event. Here the antiproton and a proton or neutron from an ordinary nucleus, presumably that of a silver or bromine atom in the photographic emulsion, would die simultaneously.

Success!

The antiproton experiments of Segrè and Chamberlain and their collaborators began in the first week of August, 1955. Their first run on the Bevatron lasted five consecutive days. Lofgren and his collaborators ran their experiments for the following two weeks. The Segrè and Chamberlain group returned on 29 August and ran experiments until the Bevatron broke down on 5 September. On 21 September, a week after operating crews had revived the Bevatron, Lofgren’s group was to begin a four-day run, but instead it ceded its time to Segrè and Chamberlain. That day, the future Nobel laureates and their team found their first evidence of the antiproton based on momentum and velocity. Subsequent analysis of the emulsion-stack images revealed the signature annihilation star that confirmed the discovery. In all, Segrè, Chamberlain and their group counted a total of 60 antiprotons produced during a run that lasted approximately 7 h.

The public announcement of the antiproton’s discovery received a mixed response. The New York Times enthusiastically proclaimed “New Atom Particle Found; Termed a Negative Proton”, while the particle’s hometown newspaper, the Berkeley Gazette, sombrely announced “Grim new find at UC”. The Berkeley reporter had been told that should an antiproton come in contact with a person, that person would blow up. Today, 50 years on, antiprotons have become a staple of high-energy physics experiments, with trillions being produced at CERN and Fermilab, and no known human fatalities.

Uppsala 2005: leptons, photons and a lot more

Twenty-five years ago at the Rochester meeting held in Madison, Leon Lederman said, “The experimentalists do not have enough money and the theorists are overconfident.” Nobody could have anticipated then that experiments would establish the Standard Model as a gauge theory with a precision of one in 1000, pushing any interference from possible new physics to energy scales beyond 10 TeV. The theorists can modestly claim that they have taken revenge for Lederman’s remark. However, as the Lepton-Photon 2005 meeting underlined, there is no feeling that we are now dotting the i’s and crossing the t’s of a mature theory. All the big questions remain unanswered; worse still, the theory has its own demise built into its radiative corrections.


The electroweak challenge

The most evident of the unanswered questions is: why are the weak interactions weak? In 1934 Enrico Fermi provided an answer with a theory that prescribed a quantitative relation between the fine-structure constant, α, and the weak coupling, G ~ α/M_W², where M_W can be found from the rate of muon decay to be around 100 GeV (once parity violation and neutral currents, which Fermi did not know about, are taken into account). Fermi could certainly not have anticipated that his early phenomenology would develop into a renormalizable gauge theory that allows us to calculate the radiative corrections to his formula. Besides regular higher-order diagrams, loops associated with the top quark and the Higgs boson also contribute, and are consistent with observations.
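Fermi's relation can be run backwards with today's numbers. A minimal sketch: the weak mixing angle below is a modern input Fermi did not have, and the relation used is the tree-level electroweak one, not Fermi's original estimate:

```python
import math

ALPHA = 1 / 137.036        # fine-structure constant
G_F = 1.16637e-5           # Fermi constant, GeV^-2, measured from muon decay
SIN2_THETA_W = 0.2312      # weak mixing angle (a modern input, assumed here)

# Tree-level relation: G_F / sqrt(2) = pi * alpha / (2 * M_W^2 * sin^2(theta_W)),
# so M_W = sqrt(pi * alpha / (sqrt(2) * G_F * sin^2(theta_W)))
m_w = math.sqrt(math.pi * ALPHA / (math.sqrt(2) * G_F * SIN2_THETA_W))
print(f"M_W ~ {m_w:.0f} GeV")  # lands near 80 GeV -- the ~100 GeV scale in the text
```

The point of the exercise is the one the article makes: the sheer weakness of G is an illusion produced by the large mass in the denominator.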


One of my favourite physicists once referred to the Higgs as the “ugly” particle. Indeed, if one calculates the radiative corrections to the mass appearing in the Higgs potential, they grow quadratically with the cutoff scale – in the same gauge theory that withstood the onslaught of precision experiments at CERN’s Large Electron-Positron collider, the SLAC linear collider and Fermilab’s Tevatron. Some new physics is needed to tame the divergent behaviour, at an energy scale Λ of less than a few tera-electron-volts by the most conservative of estimates. There is an optimistic interpretation: just as Fermi anticipated particle physics at 100 GeV in 1934, the electroweak gauge theory requires new physics at 2–3 TeV, to be revealed by the Large Hadron Collider (LHC) at CERN and, possibly, the Tevatron.


Dark clouds have built up on this sunny horizon, however, because some electroweak precision measurements match the Standard Model predictions with too high a precision, pushing Λ to around 10 TeV. Some theorists have panicked and proposed that the factor multiplying the unruly quadratic correction, 2M_W² + M_Z² + M_h² − 4M_t², must vanish exactly. This has been dubbed the Veltman condition. It “solves” the problem because the observations can accommodate scales as large as 10 TeV, possibly even higher, once the dominant contribution is eliminated.


If the Veltman condition does happen to be satisfied, it would leave particle physics with an ugly fine-tuning problem reminiscent of the cosmological constant; but this is very unlikely. The LHC must reveal the “Higgs” physics already observed via radiative corrections, or at least discover the physics that implements the Veltman condition, which must still appear at 2–3 TeV although higher scales can be rationalized for other tests of the theory. Supersymmetry is a textbook example. Even though it elegantly controls the quadratic divergence by the cancellation of boson and fermion contributions, it is already fine-tuned at a scale of 2–3 TeV. There has been an explosion of creativity to resolve the challenge in other ways; the good news is that all involve new physics in the form of scalars, new gauge bosons, non-standard interactions, and so on.


Alternatively, we may be guessing the future while holding too small a deck of cards, and the LHC will open a new world that we did not anticipate. The hope then is that particle physics will return to its early traditions where experiment leads theory, as it should, and where innovative techniques introduce new accelerators and detection methods that allow us to observe with an open mind and without a plan.

CP violation and neutrino mass

Another grand unresolved question concerns baryogenesis: why are we here? At some early time in the evolution of the universe quarks and antiquarks annihilated into light, except for just one quark in 10¹⁰ that failed to find a partner and became us. We are here because baryogenesis managed to accommodate Andrei Sakharov’s three conditions, one of which dictates CP violation. Precision data on CP violation in neutral kaons have been accumulated over 40 years, and the measurements can, without exception, be accommodated by the Standard Model with three families of quarks. History has repeated itself for B-mesons, but in only three years, owing to the magnificent performance of the experiments at the B-factories – Belle at KEK and BaBar at SLAC. Direct CP violation has been established in the decay B_d → Kπ with a significance in excess of 5σ. Unfortunately, this result and a wealth of data contributed by the CLEO collaboration at Cornell, DAΦNE at Frascati and the Beijing Spectrometer (BES) fail to reveal evidence for new physics. Given the rapid progress and the better theoretical understanding of the expectations in the Standard Model relative to the kaon system, the hope is that improved data will pierce the Standard Model’s resistant armour. Where theory is concerned, it is worth noting that the lattice now does calculations that are confirmed by experiment.

A third important question concerns neutrino mass. A string of fundamental experimental measurements has driven progress in neutrino physics. Supporting evidence from reactor and accelerator experiments, including the first data from the reborn Super-Kamiokande detector, has confirmed the discovery of oscillations in solar and atmospheric neutrinos. High-precision data from the pioneering experiments now trickle in more slowly, although evidence for the oscillatory behaviour in L/E of the muon neutrinos in the atmospheric-neutrino beam has become very convincing.

Nevertheless, the future of neutrino physics is undoubtedly bright. Construction at Karlsruhe of the KATRIN spectrometer, which by studying the kinematics of tritium decay will be sensitive to an electron-neutrino mass as low as 0.02 eV, is in progress, and a wealth of ideas on double beta decay and long-baseline experiments is approaching reality. These experiments will have to answer the great “known unknowns” of neutrino physics: their absolute mass and hierarchy, the value of the third small mixing angle and its associated CP-violating phase, and whether neutrinos are really Majorana particles. Discovering neutrinoless double beta decay would settle the last question, yield critical information on the absolute-mass scale and, possibly, resolve the hierarchy problem. In the meantime we will keep wondering whether small neutrino masses are our first glimpse of grand unified theories via the seesaw mechanism, or represent a new Yukawa scale tantalizingly connected to lepton conservation and, possibly, the cosmological constant.

Information on neutrino mass has also emerged from an unexpected direction – cosmology. The structure of the universe is dictated by the physics of cold dark matter and the galaxies we see today are the remnants of relatively small overdensities in its nearly uniform distribution in the very early universe. Overdensity means overpressure that drives an acoustic wave into the other components that make up the universe, i.e. the hot gas of nuclei and photons and the neutrinos. These acoustic waves are seen today in the temperature fluctuations of the microwave background, as well as in the distribution of galaxies in the sky. With a contribution to the universe’s matter similar to that of light, neutrinos play a secondary, but identifiable role. Because of their large mean-free paths, the neutrinos prevent the smaller structures in the cold dark matter from fully developing and this effect is visible in the observed distribution of galaxies.

Simulations of structure formation with varying amounts of matter in the neutrino component, i.e. varying neutrino mass, can be matched to a variety of observations of today’s sky, including measurements of galaxy-galaxy correlations and temperature fluctuations on the surface of last scattering. The results suggest a neutrino mass of no more than 1 eV, summed over the three neutrino flavours – a range compatible with the one deduced from oscillations.

The imprint on the surface of last scattering of the acoustic waves driven into the hot gas of nuclei and photons also reveals a value for the relative abundance of baryons to photons of 6.5 +0.4/−0.3 × 10⁻¹⁰ (from the Wilkinson Microwave Anisotropy Probe). Nearly 60 years ago, George Gamow realized that a universe born as hot plasma must consist mostly of hydrogen and helium, with small amounts of deuterium and lithium added. The detailed balance depends on basic nuclear physics, as well as on the relative abundance of baryons to photons: the state-of-the-art result of this exercise yields 4.7 +1.0/−0.8 × 10⁻¹⁰. The agreement of the two observations is stunning, not just because of their precision, but because of the concordance of two results derived from totally unrelated ways of probing the early universe.

The physics of partons

Physics at the high-energy frontier is the physics of partons, probing the question of what the proton really is. At the LHC, it will be gluons that produce the Higgs boson, and in the highest-energy experiments, neutrinos interact with sea-quarks in the detector. We can master this physics with unforeseen precision because of a decade of steadily improving measurements of the nucleon’s structure at HERA, DESY’s electron-proton collider. These now include experiments using targets of polarized protons and neutrons.

HERA is our nucleon microscope, tunable by the wavelength and the fluctuation time of the virtual photon exchanged in the electron-proton collision. With the wavelengths achievable, the proton has now been probed with a resolution of one thousandth of its 1 fm size. In these interactions, the fluctuations of the virtual photon survive over distances ct ~ 1/x, where x is the momentum fraction of the parton. In this way, HERA now studies the production of chains of gluons as long as 10 fm, an order of magnitude larger than, and probably totally insensitive to, the proton target. These are novel structures, the understanding of which has been challenging for quantum chromodynamics (QCD).
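The quoted resolution corresponds, via the uncertainty principle, to the momentum transfers HERA can reach. A quick estimate (the kinematic numbers in the comment are the standard HERA beam energies, quoted here for orientation):

```python
HBARC = 0.1973                # conversion constant, GeV * fm

resolution_fm = 1e-3          # one thousandth of the proton's ~1 fm size
q = HBARC / resolution_fm     # momentum transfer needed, GeV
print(f"Q ~ {q:.0f} GeV, i.e. Q^2 ~ {q**2:.1e} GeV^2")
# HERA: 27.5 GeV electrons on 920 GeV protons -> sqrt(s) ~ 320 GeV,
# so Q^2 up to ~1e5 GeV^2 is kinematically available
```

A Q of roughly 200 GeV sits comfortably inside HERA's kinematic reach, which is what makes the "nucleon microscope" metaphor quantitative.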

Theorists analyse HERA data with calculations performed to next-to-next-to-leading order in the strong coupling, and at this level of precision must include the photon as a parton inside the proton. The resulting electromagnetic structure functions violate isospin and differentiate a u quark in a proton from a d quark in a neutron because of the different electric charge of the quark. Interestingly, the inclusion of these effects modifies the extraction of the Weinberg angle from data from the NuTeV experiment at Fermilab, bridging roughly half of the discrepancy between NuTeV’s result and the value in the Particle Data Book. Added to already anticipated intrinsic isospin violations associated with sea-quarks, the NuTeV anomaly may be on its way out.

While history has proven that theorists had the right to be confident in 1980 at the time of Lederman’s remark, they have not faded into the background. Despite the dominance of experimental results at the conference, they provided some highlights of their own. Developing QCD calculations to the level at which the photon structure of the proton becomes a factor is a tour de force, and there were other such highlights at this meeting. Progress in higher-order QCD computations of hard processes is mind-boggling and valuable, sometimes essential, for interpreting LHC experiments. Discussions at the conference of strings, supersymmetry and additional dimensions were very much focused on the capability of experiments to confirm or debunk these concepts.

Towards the highest energies

Theory and experiment joined forces in the ongoing attempts to read the information supplied by the rapidly accumulating data from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. Rather than the anticipated quark-gluon plasma, the data suggest the formation of a strongly interacting fluid with a very low ratio of viscosity to entropy. Similar fluids of cold ⁶Li atoms have been created in atomic traps. Interestingly, theorists are exploiting Juan Maldacena’s connection between four-dimensional gauge theory and 10-dimensional string theory to model just such a thermodynamic system. The model is of a 10-dimensional rotating black hole with Bekenstein-Hawking entropy, which accommodates the low viscosities observed. This should give notice that very-high-energy collisions of nuclei may prove more interesting than anticipated from “QCD-inspired” logarithmic extrapolations of accelerator data. Such physics is relevant to analysing cosmic-ray experiments.

A century has passed since cosmic rays were discovered, yet we do not know how and where they are accelerated. Solving this mystery is very challenging, as can be seen by simple dimensional analysis. A magnetic field B extending over a region of size R can accelerate a particle of electric charge q to an energy E < ΓqvBR, with velocity v ≈ c, and no higher (where Γ is a possible boost factor between the frame of the accelerator and ourselves). This is the Hillas formula. Note that it applies to our man-made accelerators, where kilogauss fields over several kilometres yield 1 TeV, because the accelerators reach efficiencies that come close to the dimensional limit.
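The Hillas formula is easy to check numerically. A sketch with illustrative, assumed inputs: Tevatron-like numbers for the man-made case, and a few-microgauss field over a 100 pc structure for the galactic case:

```python
C = 3.0e8  # speed of light, m/s

def hillas_emax_ev(b_tesla, r_metres, z=1, gamma=1.0, beta=1.0):
    """Hillas dimensional limit E < Gamma * Z * e * v * B * R, returned in eV.

    For a charge Z*e, the energy in eV is numerically Z * (v * B * R) in volts.
    """
    return gamma * z * beta * C * b_tesla * r_metres

# Man-made machine (Tevatron-like numbers, assumed): ~4.4 T over ~1 km radius
print(f"collider: {hillas_emax_ev(4.4, 1.0e3):.1e} eV")   # order 1 TeV

# Galactic field (~3 microgauss = 3e-10 T) over an assumed 100 pc structure
PC = 3.086e16  # metres per parsec
print(f"galactic: {hillas_emax_ev(3e-10, 100 * PC):.1e} eV")
```

The galactic estimate comes out orders of magnitude short of the 10²⁰ eV events observed, which is the dimensional-analysis argument for extragalactic sources made in the following paragraph's terms.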

Opportunity for particle acceleration to the highest energies in the cosmos is limited to dense regions where exceptional gravitational forces create relativistic particle flows, such as the dense cores of exploding stars, inflows on supermassive black holes at the centres of active galaxies, and so on. Given the weak magnetic field (microgauss) of our galaxy, no structures seem large or massive enough to yield the energies of the highest-energy cosmic rays, implying instead extragalactic objects. Common speculations include nearby active galactic nuclei powered by black holes of 1 billion solar masses, or the gamma-ray-burst-producing collapse of a supermassive star into a black hole.

The problem for astrophysics is that in order to reach the highest energies observed, the natural accelerators must have efficiencies approaching 10% to operate close to the dimensional limit. This is so daunting a concept that many believe that cosmic rays are not the beams of cosmic accelerators but the decay products of remnants from the early universe, for instance topological defects associated with a grand unified theory phase transition near 1024 eV.

There is a realistic hope that this long-standing puzzle will be resolved soon by ambitious experiments: air-shower arrays of 10,000 km², arrays of air Cherenkov detectors, and kilometre-scale neutrino observatories. While no definitive breakthroughs were reported at the conference, preliminary data forecast rapid progress and imminent results in all three areas.

The air-shower array of the Pierre Auger Observatory is confronting the problem of low statistics at the highest energies by instrumenting a huge collection area covering 3000 km² on an elevated plane in western Argentina. The completed detector will observe several thousand events a year above 10 EeV and tens above 100 EeV, with the exact numbers depending on the detailed shape of the observed spectrum.

The end of the cosmic-ray spectrum is a matter of speculation given the somewhat conflicting results from existing experiments. Above a threshold of 50 EeV cosmic rays interact with cosmic-microwave-background photons and lose energy to pions before reaching our detectors. This is the origin of the Greisen-Zatsepin-Kuzmin cutoff that limits the sources to our supercluster of galaxies. This feature in the spectrum is seen by the High Resolution Fly’s Eye (HiRes) in the US at the 5σ level but is totally absent from the data from the Akeno Giant Air Shower Array (AGASA) in Japan.
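The threshold can be motivated with a back-of-the-envelope estimate of pion photoproduction on the CMB via the Δ(1232) resonance. The estimate below is deliberately crude: it takes a single typical photon energy and a head-on collision, and a full calculation folding in the blackbody spectrum and attenuation lengths brings the effective cutoff down to the tens of EeV quoted:

```python
M_P = 0.938e9       # proton mass, eV
M_DELTA = 1.232e9   # Delta(1232) resonance mass, eV
E_CMB = 6.3e-4      # typical CMB photon energy, eV (~2.7 * kT at T = 2.7 K)

# Head-on p + gamma -> Delta: at threshold s = M_Delta^2, and for E >> m_p
# the invariant s ~ m_p^2 + 4 * E * eps, giving E_th ~ (M_Delta^2 - m_p^2) / (4 * eps)
e_th = (M_DELTA**2 - M_P**2) / (4 * E_CMB)
print(f"E_th ~ {e_th:.1e} eV")  # a few times 1e20 eV for a typical photon
```

Cosmic rays above this scale see the CMB as a dense pion-production target, which is what confines the observable sources to our cosmic neighbourhood.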

At this meeting the Auger collaboration presented the first results from the partially deployed array, with an exposure similar to that of the final AGASA data. The data confirm the existence of events above 100 EeV, but there is no evidence for the anisotropy in arrival directions claimed by the AGASA collaboration. Importantly, the Auger data reveal a systematic discrepancy between the energy measurements made using the independent fluorescent and Cherenkov detector components. Reconciling the measurements requires that very-high-energy showers develop deeper in the atmosphere than anticipated by the particle-physics simulations used to analyse previous experiments. The performance of the detector foreshadows a qualitative improvement of the observations in the near future.

Cosmic accelerators are also cosmic-beam dumps producing secondary beams of photons and neutrinos. The AMANDA neutrino telescope at the South Pole, now in its fifth year of operation, has steadily improved its performance and has increased its sensitivity by more than an order of magnitude since reporting its first results in 2000. It has reached a sensitivity roughly equal to the neutrino flux anticipated to accompany the highest-energy cosmic rays, dubbed the Waxman-Bahcall bound. Expansion into the IceCube kilometre-scale neutrino observatory is in progress. Companion experiments in the deep Mediterranean are moving from R&D to construction with the goal of eventually building a detector the size of IceCube.

However, it is the HESS array of four air Cherenkov gamma-ray telescopes deployed under the southern skies of Namibia that delivered the particle-astrophysics highlights at the conference. This is the first instrument capable of imaging astronomical sources in gamma rays at tera-electron-volt energies, and it has detected sources with no counterparts in other wavelengths. Its images of young galactic supernova remnants show filament structures of high magnetic fields that are capable of accelerating protons to the energies, and with the energy balance, required to explain the galactic cosmic rays. Although the smoking gun for cosmic-ray acceleration is still missing, the evidence is tantalizingly close.

• The next Lepton-Photon conference will take place in Daegu, Korea, in 2007.

Close nucleon encounters


Scientists believe that the crushing forces in the core of neutron stars squeeze nucleons so tightly that they may blur together. Recently, an experiment by Kim Egiyan and colleagues in Hall B at the US Department of Energy’s Jefferson Lab (JLab) caught a glimpse of this extreme environment in ordinary matter here on Earth. Using the CEBAF Large Acceptance Spectrometer (CLAS), the team measured ratios of the cross-sections for electrons scattering with large momentum transfer off medium and light nuclei in the kinematic region that is forbidden for scattering off low momentum nucleons. Steps in the value of this ratio appear to be the first direct observation of the short-range correlations (SRCs) of two and three nucleons in nuclei, with local densities comparable to those in the cores of neutron stars.

SRCs are intimately connected to the fundamental issue of why nuclei are dilute bound systems of nucleons. The long-range attraction between nucleons would lead to the collapse of a heavy nucleus into an object the size of a hadron if there were no short-range repulsion. Including a repulsive interaction at the distances where nucleons come close together, ≤0.7 fm, leads to a reasonable description of the low-energy properties of nuclei, such as the binding energy and the saturation of nuclear densities. The price is the prediction of significant SRCs in nuclei.

For many decades, directly observing SRCs was considered an important, though elusive, task of nuclear physics; the advent of high-energy electron-nucleus scattering appears to have changed all this. The reason is similar to the situation encountered in particle physics: though the quark structure of hadrons was conjectured in the mid-1960s, it took the deep-inelastic-scattering experiments at SLAC and elsewhere in the following decade to prove directly the presence of quarks. Similarly, to resolve SRCs, one needs to transfer to the nucleus energy and momentum ≥ 1 GeV, much larger than the characteristic energies and momenta involved in the short-distance nucleon-nucleon interaction. At these higher momentum transfers, one can test two fundamental features of SRCs: first, that the shape of the high-momentum component (>300 MeV/c) of the wave function is independent of the nuclear environment; and second, that a high-momentum nucleon is balanced predominantly by just one other nucleon, not by the nucleus as a whole.


An extra trick required is to select kinematics where scattering off low-momentum nucleons is strongly suppressed. This is pretty straightforward at high energies. First, one needs to select kinematics sufficiently far from the regions allowed for scattering off a free nucleon, i.e. x = Q²/(2q_0 m_N) < 1, and for scattering off two nucleons with overall small momentum in the nucleus, x < 2. (Here Q² is the square of the four-momentum transferred to the nucleus, and q_0 is the energy transferred to the nucleus.) In addition, one needs to restrict Q² to values of less than a few giga-electron-volts squared; in this case, nucleons can be treated as partons with structure, since the nucleon remains intact in the final state owing to phase-space restrictions.
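In code, the kinematic selection described above is just a cut on Bjorken x. A sketch with illustrative, assumed kinematics (the Q² and q_0 values below are hypothetical inputs, not data from the experiment):

```python
def bjorken_x(q2_gev2, q0_gev, m_n=0.938):
    """Bjorken scaling variable x = Q^2 / (2 * m_N * q0) for a nucleon target."""
    return q2_gev2 / (2 * m_n * q0_gev)

# Hypothetical kinematics for one event
x = bjorken_x(q2_gev2=1.8, q0_gev=0.55)

if x > 2:
    region = "3-nucleon SRC territory (scattering off a deuteron forbidden)"
elif x > 1.5:
    region = "2-nucleon SRC dominance expected"
elif x > 1:
    region = "forbidden for scattering off a free nucleon at rest"
else:
    region = "ordinary quasi-elastic scattering"

print(f"x = {x:.2f}: {region}")
```

The x > 1 region is empty for a free stationary nucleon, so any strength found there must come from nucleons carrying high internal momentum, which is exactly the SRC signal the CLAS measurement exploits.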

If the virtual photon scatters off a two-nucleon SRC at x > 1, the process goes as follows in the target rest frame. First, the photon is absorbed by a nucleon in the SRC with momentum opposite to that of the photon; this nucleon is turned around and two nucleons then fly out of the nucleus in the forward direction. The inclusive nature of the process ensures that the final-state interaction does not modify the ratios of the cross-sections. Accordingly, in the region where scattering off two-nucleon SRCs dominates (which for Q2 ≥ 1.4 GeV2 corresponds to x > 1.5), one predicts that the ratio of the cross-section for scattering off a nucleus to that off a deuteron should exhibit scaling, namely it should be constant independent of x and Q2 (Frankfurt and Strikman 1981). In the 1980s, data were collected at SLAC for x > 1. However, they were in somewhat different kinematic regions for the lightest and heavier nuclei. Only in 1993 did the sustained efforts of Donal Day and collaborators to interpolate these data to the same kinematics lead to the first evidence for scaling, but the accuracy was not very high.

An experiment with the CLAS detector at JLab was the first to take data on 3He and several heavier nuclei, up to iron, with identical kinematics, and the collaboration reported their first findings in 2003 (Egiyan et al. 2003). Using the 4.5 GeV continuous electron beam available at the lab’s Continuous Electron Beam Accelerator Facility (CEBAF), they found the expected scaling behaviour for the cross-section ratios at 1.5 ≤ x ≤ 2 with high precision.

Cross-section ratios

The next step was to look for the even more elusive SRC of three nucleons. It is practically impossible to observe such correlations in intermediate energy processes. However, at high Q2, it is straightforward to suppress scattering off both slow nucleons and two-nucleon SRCs. One needs only to reach the x ≥ 2 region where scattering off a deuteron is kinematically forbidden. Here, the experiment typically probes scattering off a fast nucleon with momentum opposite to the virtual photon, with two nucleons balancing the fast nucleon’s momentum.

Again, a scaling of the ratios was expected. In this case, however, the ratio of the cross-sections for a pair of nuclei of masses A1 and A2, with A1 > A2, was predicted to be higher for 2 ≤ x ≤ 3 than for 1.5 ≤ x ≤ 2. This is because there is a higher probability for a nucleon to have two nearby nucleons in a heavier, denser nucleus. Hence, one expected to find two steps. This is exactly what the CLAS experiment observed in data recently reported for these kinematics and shown in figure 3 (Egiyan et al. 2005). Moreover, the iron:carbon ratios for x ≈ 1.7 and 2.5 are consistent with the expectation that the probabilities of two- and three-nucleon SRCs should increase with A as the square and cube, respectively, of the nuclear density. For iron, the probability of two-nucleon SRCs reaches about 25%.

More data for exploring SRCs have already been taken at JLab, and several more efforts are already planned to study this interesting region of nuclear physics, which has important implications for the dynamics of the cores of neutron stars.

Rewards for optics in theory and practice

The 2005 Nobel prize in physics has been awarded to three physicists working in the field of optics, in recognition of past advances in the understanding of light as well as the present-day potential of laser-based precision spectroscopy. Roy Glauber of Harvard University receives half the prize for “his contribution to the quantum theory of optical coherence”, while John Hall of the University of Colorado and Theodor Hänsch of the Max-Planck-Institut für Quantenoptik in Garching share the other half for “their contributions to the development of laser-based precision spectroscopy, including the optical frequency comb technique”.

CCEnew5_11-05

The recognition of Glauber’s work comes appropriately enough in 2005, the centenary of Albert Einstein’s work on the photoelectric effect, in which he described radiation in terms of quanta, later termed photons. Glauber’s aim in his seminal paper of 1963 was to move from a semi-classical description of the photon field in a light beam towards a full quantum theoretical description, in particular to describe correlation effects. In Glauber’s words, “There is ultimately no substitute for the quantum theory in describing quanta.”

Glauber’s name is also familiar in particle physics, however, where he is widely known for his “Glauber model”, which nowadays has a range of applications in understanding heavy-ion interactions. In August 2005 he gave an opening talk at the Quark Matter 2005 conference in Budapest, 50 years after his original paper using diffraction theory to develop a formalism for calculating cross-sections in nuclear collisions. Glauber himself has regularly spent time as a visiting researcher in CERN’s theory division, from 1967 until the mid-1980s.

The work of Hall and Hänsch is by contrast a tour de force in experimentation. In developing a measurement technique known as the optical frequency comb, they have made it possible to measure light frequencies to within an accuracy of 15 digits. The “comb” exploits the interference of lasers of different frequencies, which produces sharp, femtosecond pulses of light at extremely precise and regular intervals. This allows precise measurements to be made of light of all frequencies and has many applications in both fundamental and applied fields.

In particular, in particle physics the technique is allowing precise measurements of asymmetries between matter and antimatter, and possible drifts in the fundamental constants. Hänsch himself is a member of the ATRAP collaboration, which has successfully made antihydrogen at CERN’s Antiproton Decelerator (AD). Moreover, the frequency comb technique is being used in the ASACUSA experiment at the AD, which studies the spectroscopic properties of antiprotonic helium.

KEDR adds new precision to meson mass measurements

In October 2005 the VEPP-4M collider at the Budker Institute of Nuclear Physics started its latest run with the KEDR detector. This continues a series of experiments that are exploiting the method of resonant depolarization (which was proposed and developed at the Budker Institute) to make precise measurements of masses in the region of the Ψ to Υ mesons (Skrinsky and Shatunov 1989).

Progress in understanding the resonant depolarization technique, as well as a new detection system for Touschek electron pairs (intrabeam scattering), has resulted in a significant improvement in the accuracy of the beam energy determination with KEDR. The error in a single measurement of the beam energy has reached a level of 1 keV, corresponding to a relative accuracy of 0.7 ppm. Figure 1, for example, illustrates a very clear jump in the counting rate of Touschek pairs, allowing a precise measurement of the depolarization frequency directly related to the beam energy. In 2002 this led to a measurement of the mass of the J/Ψ with a relative accuracy of 4 ppm: MJ/Ψ = 3096.917 ± 0.010 ± 0.007 MeV (Aulchenko et al. 2003). Compared with the previous experiment in 1980, this represented a sevenfold decrease in the uncertainty in the mass.
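As a quick sanity check on the quoted precision (an illustrative calculation, not from the article): a 1 keV error on a beam energy of roughly half the J/Ψ mass does correspond to the quoted ~0.7 ppm.

```python
# Sanity check: relative accuracy of a 1 keV beam-energy error at
# VEPP-4M near the J/psi peak, where each beam carries ~M_J/psi / 2.
beam_energy_kev = 1548.0e3   # ~3096.9 MeV / 2, expressed in keV
error_kev = 1.0              # quoted single-measurement error

rel_ppm = error_kev / beam_energy_kev * 1e6
print(round(rel_ppm, 2))  # ~0.65 ppm, consistent with the quoted 0.7 ppm
```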

CCEnew7_11-05

In 2004 the masses of the Ψ’ and Ψ(3770) were measured in a second run in KEDR. The results, which are shown in figure 2, were presented recently at the HEP2005 conference in Lisbon in July. The preliminary values of the masses of the Ψ’ and Ψ(3770) are 3686.117 ± 0.012 ± 0.015 MeV and 3773.5 ± 0.9 ± 0.6 MeV, respectively.

The precise measurement of the masses of the J/Ψ and Ψ’ mesons provides a mass scale in the energy region around 3 GeV, which forms the basis for an accurate determination of the mass for all charmed particles and the τ lepton. Since the width of the τ is proportional to its mass to the fifth power, high-precision tests of the Standard Model are very sensitive to the accuracy of this mass. At present the accuracy of the τ’s mass is dominated by the accuracy of the measurement by the Beijing Spectrometer (Bai et al. 1996).

CCEnew8_11-05

KEDR began the measurement of the mass of the τ in spring 2005. Using the same method as the Beijing Spectrometer, the collaboration plans to determine the mass by measuring the energy dependence of the cross-section near threshold. The aim is also to improve the statistics of τ decays, benefiting from the precise knowledge of the beam energy. In KEDR the energy is measured by two methods: resonant depolarization for a high-accuracy measurement once a day, and Compton backward scattering for monitoring the beam energy drift during data collection. Data processing is currently in progress.

MAGIC and Swift capture GRB

The Major Atmospheric Gamma Imaging Cherenkov telescope (MAGIC) at La Palma, Canary Islands, has observed a gamma-ray burst seconds after its explosion was detected by NASA’s Swift satellite. It is the first time that a gamma-ray burst has been observed simultaneously in the X-ray and very-high-energy gamma-ray bands.

CCEnew1_10-05

MAGIC detects cosmic gamma rays through the showers of charged particles they create in the atmosphere. With a tessellated mirror surface area of nearly 240 sq. m, it is the largest air Cherenkov telescope ever built and has been designed to be more sensitive to lower-energy gamma rays than other ground-based instruments. In this case, it was the ability to track rapidly – and the prompt action of the operators – that allowed the telescope to observe GRB050713A, a long-duration gamma-ray burst, only 40 s after its explosion on 13 July. MAGIC’s lightweight and precise mechanics let it rotate completely in 22 s.

Observations of GRB050713A began only 20 s after an alert from Swift, a member of the Gamma-ray burst Coordinates Network, which distributes the locations of bursts detected by spacecraft. In the case of Swift, this is in real time, so MAGIC was able to move on to the burst while it was still active in the X-ray range.

A first look at the MAGIC data did not reveal strong gamma-ray emissions above 175 GeV, and indeed the flux limit derived at very high energies by MAGIC is extremely low, two to three orders of magnitude lower than the extrapolation from lower energies. The upper limit for the flux of energetic gamma rays is consistent with the expected flux of a gamma-ray burst at high red-shift, strongly attenuated by cosmological pair production. These observations were reported at the 29th International Cosmic Ray Conference held in Pune, India, on 3-10 August; a detailed analysis of the data is in progress.

• MAGIC is managed by 17 institutes from Germany, Italy, Spain, Switzerland, Finland, the US, Poland, Bulgaria and Armenia.

BES collaboration observes possible baryonium state

In a sample of 58 million J/ψ events, the BES collaboration at the Beijing Electron Positron Collider (BEPC) has found a clear signal (7.7σ statistical significance) for a new resonance, the X(1835). The signal appears in the π+π−η′ mass distribution of the process J/ψ → γπ+π−η′, where the η′ meson is detected in two decay modes, η′ → π+π−η (η → γγ) and η′ → γρ (ρ → π+π−). The results were reported at the Lepton-Photon 2005 conference held in Uppsala.

CCEnew6_10-05

The peak in the π+π−η′ mass spectrum is well described by a Breit-Wigner resonance function, with a mass of 1834 MeV/c2 and a width of 68 MeV/c2 (BES Collaboration 2005). This mass and width are not compatible with any known meson resonance. However, the properties are consistent with its being the state responsible for the strong threshold enhancement in the pp̄ mass that BESII observed in J/ψ → γpp̄ two years ago. One possible interpretation of the enhancement is that it is the tail of a “deuteron-like” spin-0 proton-antiproton bound state (baryonium), and its properties match well the predictions for a state with a mass around 1.85 GeV/c2 (Ding and Yan 2005). However, until the spin of the X(1835) is determined and other expected decay modes measured, alternative interpretations cannot be excluded.

Deep inside the proton

DIS 2005 was the 13th in the series of annual workshops on deep inelastic scattering (DIS) and quantum chromodynamics (QCD). Hosted by the Physics Department of the University of Wisconsin-Madison, the workshop was held on 27 April – 1 May at the Monona Terrace Community and Convention Center in Madison.

CCEdis1_10-05

The workshop, which brought together 280 experimentalists and theorists, began with plenary sessions that featured review talks. Parallel working group sessions followed, and the workshop ended with plenary sessions that included reports from the working groups and a conference summary. The topics of the working groups were: structure functions and low-x, diffraction and vector mesons, electroweak physics and beyond the Standard Model, hadronic final states, heavy flavours, spin physics, and the future of DIS. There were 240 talks in total, replete with many exciting new results.

The working group on structure functions focused on the future. Final measurements from the first period of data-taking at the Hadron Electron Ring Accelerator (HERA) at DESY were shown, alongside the first electroweak measurements from the new HERA data. Attention was paid to new extraction techniques for determining the parton distribution functions (PDFs) and to improving the standard methods. The goal is to improve PDF uncertainties, which play a crucial role for measurements not only at the Large Hadron Collider (LHC) at CERN, but also at Fermilab’s Tevatron and in neutrino-oscillation experiments.

New results from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven sparked much discussion of parton evolution and saturation at very low proton momentum fraction x. Strong particle suppression at forward rapidities in deuteron-gold collisions, reported by the BRAHMS, PHENIX and STAR collaborations, hints at the possible mechanism behind parton saturation. At the other end of the x spectrum, new results in the high-x resonance region from Jefferson Lab suggest that future data from there will significantly improve our understanding of proton structure.

The working group on diffraction surveyed the abundance of data over an extended kinematic range from the HERA experiments, which has enabled precise measurements of the diffractive structure functions and extraction of the diffractive parton distribution functions (DPDFs). Several new, independent next-to-leading-order (NLO) QCD fits suggest that the DPDFs are gluon-dominated. Recent results on deeply virtual Compton scattering and exclusive meson production from HERA experiments, and from the Common Muon and Proton Apparatus for Structure and Spectroscopy (COMPASS) experiment at CERN and the CEBAF Large Acceptance Spectrometer (CLAS) at Jefferson Lab, are sensitive to the generalized parton distribution functions (GPDFs). These provide information on correlations between partons, their transverse momentum, and the contribution of the quark angular momentum to the proton spin. A new window on diffractive processes will open at the LHC with the TOTEM detector, integrated with CMS. The FP420 proposal to equip a region 420 m from the ATLAS and/or the CMS interaction point would add to this.

Preparations for searches and precise electroweak measurements at the LHC highlight the machine’s vast discovery potential

The working group on electroweak physics examined the first measurements from HERA of the cross-sections for charged and neutral-current DIS with polarized leptons, confirming the V-A structure of the electroweak interaction. Participants discussed the impact on the Standard Model Higgs mass of the latest high-precision top-quark mass measurement from the Tevatron. High-precision measurements of the W mass and the top mass need a good understanding of the structure of the proton, in particular nonperturbative effects, from HERA data. The discovery of single-top events is expected with the increasing integrated luminosity of Run II at the Tevatron, and measurements of the production cross-section could constrain new physics models that modify the coupling of the top quark to gauge bosons. The excess of events with high-pT isolated leptons reported by the H1 collaboration at HERA could be attributed to the anomalous coupling of top quarks to up quarks. Many recent searches at HERA and the Tevatron have produced inconclusive evidence of new physics, but the substantial increases in luminosity at both colliders make a discovery more likely. Preparations for searches and precise electroweak measurements at the LHC highlight the machine’s vast discovery potential.

The working group on hadronic final states studied perturbative QCD calculations of jet cross-sections, which have been measured with unprecedented accuracy at HERA.

These determine the strong coupling constant with a precision that is comparable to the most accurate value obtained in e+e− interactions. These achievements pave the way for an understanding of jet production at the LHC. Large theoretical uncertainties (of order 100%) remain for the production of hadrons at small (forward) angles to the incoming proton’s momentum, which probes collisions with small momentum fractions x and momentum transfers Q. This is where new dynamical mechanisms associated with scattering at asymptotically high collision energies may turn on.

Recent results from HERA suggest that further theoretical improvements are needed to describe small-x scattering. These may come from developments in higher-order computations, resummation and parton shower models. The latest cross-sections for jet production in Run II at the Tevatron help to constrain the gluon density in the proton, while a comparison of the rates for pion and photon production at RHIC independently confirms the formation of an extended dense quark-gluon medium in the aftermath of gold-gold collisions.

The experimental status of pentaquarks remains ambiguous, but new high-statistics measurements from Jefferson Lab should soon provide a more definite answer. The ZEUS and HERMES experiments at HERA reported observations of a Θ+ state at around 1520 MeV, whereas H1 at HERA and BaBar at SLAC see nothing. On the other hand, H1 remains unique in reporting the observation of a charmed pentaquark. The CLAS experiment at Jefferson Lab has now accumulated a large sample of photoproduction events from dedicated runs, and with only 1% of the data analysed there is no sign of a Θ+.

The heavy-flavour working group heard that the new heavy-quark PDFs from Martin-Roberts-Stirling-Thorne (MRST) and the Coordinated Theoretical-Experimental Project on QCD (CTEQ) now describe the HERA data on charm structure functions quite well. Recent progress on soft resummation for heavy quarks in DIS should allow its inclusion in PDFs and the extraction of resummed parton densities. New calculations describing the production of D-mesons at the Tevatron can be further extended to DIS processes. A new model for heavy quarkonium production agrees with data from RHIC and the Tevatron, in particular with the J/ψ polarization measurements from the Collider Detector at Fermilab (CDF) at the Tevatron, and PHENIX at RHIC. NLO corrections were shown to improve the description of charmonium production in two-photon collisions. New measurements of the charm and beauty contribution to the proton structure function show good agreement with the predictions based on NLO QCD and gluon densities obtained from global PDF analyses.

New heavy-flavour results are moving beyond the production of single heavy mesons to measure fragmentation parameters, heavy-quark correlations, heavy-quark-jet characteristics and unexplored kinematic regions. While NLO QCD describes charm well, the situation for beauty is less clear. Precise measurements of b-quark production at high pT or large Q2 agree with theory, but measurements over the full pT and Q2 range are a factor of two higher. In another puzzle, the final measurement of charm production in neutrino-nucleon scattering by the Neutrinos at the Tevatron (NuTeV) experiment excludes a strange sea asymmetry large enough to explain their anomaly on sin2θW.

The spin physics working group basked in a wealth of new high-precision data from the HERMES, COMPASS and Jefferson Lab experiments on the spin structure functions, which extend the coverage at both low- and high-x and into the transition from the partonic to the hadronic regime. Since the contribution to the proton spin from the longitudinal spin of the quarks is now well established and small, recent measurements and global analyses focused on understanding other spin contributions. New data on transversity distributions were presented by the above-mentioned experiments, and from BELLE at KEK, and STAR and PHENIX at RHIC.

DIS 2005 featured a plenary session devoted to the future of DIS studies. Although HERA is expected to close in two years’ time, much of its integrated luminosity is still in the future. This is particularly true for the measurement of the helicity dependence of the charged-current cross-section. There is interest in running HERA for a while at lower energy to extract the longitudinal structure function FL. There was discussion of the physics potential of continuing HERA beyond 2007 with new injectors, or combining the LHC with a future linear electron collider to produce DIS collisions at the tera-electron-volt scale.

Another proposal, eRHIC, combines an electron accelerator with RHIC to produce an electron-proton and electron-nucleus collider with polarized beams at a centre-of-mass energy in the range 30-100 GeV. There is also a proposal to upgrade the DIS programme at Jefferson Lab from 6 GeV to 12 GeV, featuring DIS at large x and the use of what is effectively a target of free neutrons. Ideas also exist for DIS experiments at fixed targets, particularly at CERN; for neutrino experiments with Minerva at Fermilab; and future neutrino projects based on the Fermilab Proton Driver. These proposals often look at the GPDFs that can be accessed using deeply virtual Compton scattering and that illuminate the structure of hadrons in transverse space.

The workshop attendees emerged with a renewed sense of the importance of DIS and QCD measurements and theory to the future of particle and nuclear physics. They also gained an enhanced appreciation for the range of exciting developments in the field, and a determination to pursue experimental and theoretical opportunities.

• The workshop was sponsored by Argonne, the US Department of Energy, DESY, the US National Science Foundation and the University of Wisconsin-Madison.

Investigating the proton’s strange sea

A simple understanding of the proton is that it is an object composed of three quarks. However, the rich structure predicted by the theory of quantum chromodynamics (QCD) indicates that this picture is incomplete. A sea of gluons and virtual quark/antiquark pairs is also present, and this sea plays an important role, for instance in accounting for the proton’s total spin, and contributes to other properties of the nucleon. Researchers are now probing the sea directly; their specific goal is to determine the exact contributions of the sea’s strange quarks to the proton’s charge distribution and magnetization. Four major experimental collaborations have weighed in, and their results are beginning to paint a cohesive picture of strange quarks in the proton.

CCEpro1_10-05

The contribution of the strange quark to these properties is the easiest to pinpoint, because the strange quark is the most accessible of all the sea’s constituents. Up and down quarks are the most likely quarks to be present in the sea, because they are the lightest. However, they have the same quantum numbers as the valence quarks, so it is nearly impossible to disentangle their contributions. Strange quarks are the second lightest, so are likely to be the second most significant part of the quark-gluon sea.

CCEpro2_10-05

Parity-violating electron scattering offers a promising method of accessing the strange quarks. These experiments study collisions between a beam of polarized electrons and target particles. Specifically, they measure the interference of the electromagnetic interaction, in which a photon is exchanged, and the neutral weak interaction, which involves the exchange of a Z0 boson. The electrons are polarized, meaning that they are spinning either along their direction of travel (right-handed) or opposite to it (left-handed). This allows the class of electroweak interactions to be separated into the electromagnetic and weak components.

CCEpro3_10-05

The electromagnetic force is parity-conserving, or mirror-symmetric, so the electron’s handedness does not affect scattering rates. The weak force, however, is not mirror-symmetric: it is parity-violating. Therefore, owing to the neutral weak force, a different number of scattering events will be observed when the beam of electrons is right-handed compared with left-handed. A comparison of the weak and electromagnetic pieces allows the experimenters to disentangle the contribution of the up, down and strange quarks.

Meeting the challenge

The experimental challenge arises because the electromagnetic force is much stronger than the weak force, so many scattering events must be recorded to measure the tiny difference, or asymmetry, in scattering rates. In addition, careful attention has to be paid to the possibility of false asymmetries masquerading as the true asymmetry due to the weak force. These can arise, for example, if the beam position or angle on the target changes when the polarized beam is changed from right- to left-handed and vice versa. Typical requirements for these experiments are that these changes must be less than a few nanometres and nanoradians, respectively. Ensuring this stability demands close collaboration between the experimenters and the accelerator physicists and operators, and precise monitoring of the electron-beam characteristics from the source, through the accelerator, and to the experimental hall.
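A minimal sketch of the counting-rate comparison described above (the event counts are hypothetical; real analyses also normalize to beam charge and correct for false asymmetries):

```python
# Illustrative sketch: extracting a parity-violating asymmetry from
# right- and left-handed scattering yields. Numbers are hypothetical.


def asymmetry(n_right, n_left):
    """A = (N_R - N_L) / (N_R + N_L).

    For these experiments |A| is only a few parts per million, which is
    why enormous event samples and tight beam control are required.
    """
    return (n_right - n_left) / (n_right + n_left)


# e.g. ~10^12 events per helicity state with a few-ppm rate difference:
a = asymmetry(1_000_002_000_000, 999_998_000_000)
print(f"{a * 1e6:.1f} ppm")  # a few ppm, as in the real measurements
```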

Four research programmes have adopted parity-violating electron scattering to search for the contributions of strange quarks to proton structure. They are the SAMPLE experiment at the MIT-Bates Linear Accelerator Center, the A4 experiment at the Mainz Microtron, and the G0 (G-zero) experiment and Hall A Proton Parity Experiment (HAPPEX) at the US Department of Energy’s Jefferson Lab. The various experiments are sensitive to different combinations of strange-quark contributions to the charge distribution and magnetization. These are represented by GsE and GsM, the strange electric and magnetic form factors, respectively. Experiments using a hydrogen target and a forward scattering angle, including G0, A4 and HAPPEX-H (HAPPEX on hydrogen), all measure a linear combination of GsE and GsM (the exact combinations differ for each experiment). Disentangling the two form factors requires measurements at both forward and backward angles or with a different target (e.g. helium).

The SAMPLE experiment at MIT-Bates, which is now complete, measured backward-angle electron scattering from hydrogen and deuterium targets at 200 MeV. Cherenkov light produced by electrons scattered with a momentum transfer, Q2, near 0.1 GeV2 was focused by an array of mirrors onto a set of 8 inch photomultiplier tubes. The researchers concentrated in particular on obtaining GsM, the strange quark’s contribution to the proton’s magnetic moment (Ito et al. 2004 and Spayde et al. 2004).

Researchers with the A4 experiment at Mainz use a new type of total absorption calorimeter, making use of 1022 very fast individual crystals of lead fluoride to detect scattered electrons, plus sophisticated read-out electronics. They have measured forward-angle (35°) electron scattering from hydrogen at two values of Q2, 0.23 and 0.11 GeV2 (Maas et al. 2004 and 2005).

The G0 and HAPPEX experiments at Jefferson Lab took advantage of the high-quality polarized electron beam from the Continuous Electron Beam Accelerator Facility (CEBAF). They have taken data with a 3 GeV beam with up to 86% polarization.

G0 required a unique beam pulse structure and a custom-built spectrometer package capable of measuring over large solid angles. The G0 spectrometer, based on a toroidal superconducting magnet, measures elastically scattered recoil protons over a wide range of forward-scattering angles (and thus a large range of Q2) simultaneously (Armstrong et al. 2005). A time-of-flight technique for identifying the scattered protons required the use of a pulsed beam, with electron bunches arriving every 32 ns. With this pulse structure, the 40 μA electron beam had a large instantaneous current equivalent to 640 μA, providing challenges for the accelerator.

HAPPEX used a pair of high-resolution, small-acceptance spectrometers to measure precisely forward-angle scattering at a single momentum transfer at a time. Initial measurements were made at Q2 = 0.48 GeV2 with a hydrogen target, and recently data were taken at Q2 = 0.1 GeV2 using both hydrogen and 4He targets (Aniol et al. 2004 and 2005). The hydrogen target allowed the HAPPEX researchers to measure the strange quark’s contribution to a combination of the charge and magnetization distributions in the proton. 4He is a nucleus with no net spin, so the helium target allowed them to isolate the strange electric form factor of the proton.

The results from all of these experiments present a cohesive picture of the strange quark’s contribution to the charge distribution and magnetization. All are consistent with this contribution being non-zero in the proton.

Figure 1 on p30 shows the G0 and HAPPEX results from hydrogen target data as a function of Q2. The measured combination of the strange form factors GsE and η GsM (η is a kinematic factor) appears to be non-zero, and an intriguing and unexpected dependence on momentum transfer is suggested by the data. At the lowest Q2 measured so far (0.1 GeV2), all four experiments provide information about different combinations of GsE and GsM, as depicted in figure 2. The results favour a positive value for GsM, suggesting that the strange quarks reduce the proton’s magnetic moment. A negative value for GsE, while only hinted at by the data, would imply that the strange quarks prefer to be on the outside of the proton, while the anti-strange quarks favour the interior. However, current experiments are not precise enough for us to state definitively that the strange quark contributions are non-zero.

Within QCD-inspired models, predictions of the strangeness form factors vary tremendously. Of course, the models are of less interest than the predictions of nonperturbative QCD itself, for which our most reliable tool is lattice QCD. In this direction, there has been remarkable progress with a very precise recent determination of the strangeness magnetic moment (GsM = −0.046 ± 0.019 μN), yielding an answer with an uncertainty of better than 1% of the proton’s magnetic moment (Leinweber et al. 2005). Testing this prediction will push the upcoming measurements to the limits of their precision.
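A quick arithmetic check of the quoted lattice precision (illustrative only): expressing the 0.019 μN uncertainty as a fraction of the proton’s magnetic moment, μp ≈ 2.793 μN, indeed gives under 1%.

```python
# Arithmetic check: the lattice uncertainty on G_M^s as a fraction of
# the proton's magnetic moment.
MU_P = 2.793   # proton magnetic moment in nuclear magnetons
d_gsm = 0.019  # quoted uncertainty on G_M^s, in nuclear magnetons

frac = d_gsm / MU_P
print(f"{frac * 100:.2f}%")  # ~0.68%, i.e. better than 1% of mu_p
```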

Meanwhile, the experiments continue. HAPPEX is taking data this autumn, and the collaboration expects to reduce the error bars by a factor of three for both targets. Both the G0 and A4 collaborations have turned their detectors around by 180° and will soon measure backward-scattered electrons at various values of Q2, which will be primarily sensitive to GsM. Combining forward- and backward-scattering results will allow both GsM and GsE to be individually determined. These additional measurements will allow experimenters to obtain GsE and GsM over a range of momentum transfers and thus pin down the importance of the contributions of the strange sea to the structure of the proton.

Pomerons return to Blois

Twenty years ago the first “Blois Workshop” was organized by Basarab Nicolescu of the University of Paris VI and Jean Tran Thanh Van of the University of Paris-Sud in the historic Château de Blois. Now the 11th conference in this international biennial series focusing on elastic scattering and diffraction has returned to Blois, on 15-21 May 2005. Organized by the original team plus Maurice Haguenauer of the École Polytechnique, it set the scene for future high-energy studies in this field.

CCEblo1_10-05

Blois Workshops have taken place all over the world, and the meeting is now a scientific forum for researchers trying to unravel the foundations of elastic scattering and diffraction from first principles in quantum chromodynamics (QCD). Originally a rather specialized field, it has moved towards the centre of high-energy QCD studies, particularly because measurements at HERA and Fermilab show that diffractive events, in which a scattered proton remains intact in a high-energy inelastic collision, constitute a surprisingly high proportion of the total rate. Recent measurements from the ZEUS and H1 detectors at HERA show that approximately 10% of the deep inelastic lepton-proton scattering cross-section is diffractive.

High-energy elastic and diffractive scattering is traditionally explained in Regge theory as a result of pomeron exchange. Here the system exchanged between projectile and target carries the quantum numbers of the vacuum. Events with large gaps in the rapidity distribution occur even in hard collisions involving very high momentum transfers. Such “hard diffraction” has now become firmly established, initially at the Intersecting Storage Rings (ISR) and Super Proton Synchrotron (SPS) at CERN in proton and antiproton collisions, and now most clearly at HERA in positron-proton collisions, and in proton-proton collisions at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Tevatron at Fermilab.

With the advent of QCD, hard diffraction came to be attributed to the exchange of two or more gluons with net zero colour, and these processes are now an important observable for understanding fundamental aspects of the strong interaction. At the conference, Peter Landshoff from Cambridge and Sandy Donnachie from Manchester reviewed the apparent dichotomy between the soft and hard aspects of pomeron exchange and its phenomenological manifestations, such as the remarkable growth with energy of the cross-section for hard diffractive electroproduction of vector mesons. Theorists are beginning to develop formalisms that encompass the transition between hadron and quark-gluon degrees of freedom, and the duality between the two descriptions.
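The soft-pomeron phenomenology associated with Donnachie and Landshoff describes total hadronic cross-sections as a sum of Regge-pole powers of the energy; the intercepts quoted below are indicative values from their classic fits:

```latex
% Total cross-section as soft-pomeron plus reggeon exchange
\sigma_{\mathrm{tot}}(s) \;\simeq\; X\, s^{\alpha_P(0)-1} \;+\; Y\, s^{\alpha_R(0)-1},
\qquad
\alpha_P(0) \approx 1.08,
\quad
\alpha_R(0) \approx 0.55,
```

where αP(0) is the soft-pomeron intercept, responsible for the slow rise of the cross-section with energy, and αR(0) that of the leading reggeons, whose contribution falls with energy.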

QCD also predicts the existence of a C-odd three-gluon exchange, the “odderon”, which through interference with pomeron exchange leads to remarkable charge asymmetries in diffractive reactions.

CCEblo2_10-05

One interesting nuclear diffractive phenomenon is the demonstration of QCD “colour transparency” by the E791 fixed-target experiment at Fermilab, which measured the diffractive dissociation of a 500 GeV/c pion into two high-transverse-momentum jets while leaving the target nucleus intact. The experiment confirmed the remarkable prediction, based on the gauge interactions of QCD, that the small quark-antiquark Fock component of the pion projectile interacts coherently with every nucleon in the nucleus, without absorption or energy loss, in dramatic contrast with traditional Glauber theory. The diffractive dijet experiment also provides crucial information on the quark-antiquark wavefunction of the pion. Other diffractive experiments that explore the structure of the photon are now in progress at HERA.

Contrary to parton-model expectations, the rescattering of the quarks on the spectator constituents shortly after the nucleon has been struck by the lepton critically affects the final state in deep inelastic scattering (DIS). This rescattering of the struck quark, mediated by gluon exchange, generates dominantly imaginary diffractive amplitudes and gives rise to an effective “hard pomeron” exchange – a rapidity gap between the target and the diffractive system – while leaving the target intact. The cross-section measured in diffractive deep inelastic scattering can then be interpreted in terms of the quark and gluon constituents of an effective pomeron, as in the model of Gunnar Ingelman and Peter Schlein.

Since the gluon exchange occurs after the interaction of the lepton current, the pomeron cannot be considered a pre-existing constituent of the target proton. The rescattering contributions to the DIS structure functions are not contained in the target proton’s wavefunctions computed in isolation, and so cannot be interpreted as parton probabilities; the resulting gluon exchange matches closely the phenomenology of the soft-colour-interaction model. Gluon exchange in the final state also leads to the Bjorken-scaling Sivers single-spin asymmetry, a T-odd correlation between the spin of the target proton and the production plane of a produced hadron or quark jet.

The connections between diffraction and coherent effects in nuclei such as shadowing and anti-shadowing are also now being understood. Diffractive deep inelastic scattering on a nucleon leads to nuclear shadowing at leading twist as a result of the destructive interference of multistep processes within the nucleus. In addition, multistep processes involving Reggeon exchange lead to anti-shadowing. In fact, since Reggeon couplings are flavour-specific, anti-shadowing is predicted to be non-universal, depending on the type of current and even the polarization of the probes in nuclear DIS.

Saturation under focus

A central focus of the 2005 Blois conference was the physics of “saturation”, a QCD phenomenon that limits particle production when the underlying gluonic scattering subprocesses overlap significantly in space and time. At very high energies the gluon density is so high that two scatterings become as probable as one. The theory of saturation is based on the Balitsky-Kovchegov equation and its extensions, and has analogies with stochastic methods used in statistical physics. The effects of saturation can be observed in the small-x, high-energy domain of deep inelastic lepton scattering at HERA, thus providing a window onto nonlinear aspects of QCD.

The theory of saturation predicts a parameterization of the HERA data (“geometrical scaling”) that gives a remarkably good description of the deep inelastic structure functions at small x in terms of a single scaling variable. The high occupation number of gluons can even lead to the formation of a “colour glass condensate”, which may be the cause of the decrease in particle production at forward rapidities observed in heavy-ion collisions at RHIC.
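Geometrical scaling can be stated compactly: the inclusive γ*p cross-section depends on x and Q2 only through a single variable built from a saturation scale. The exponent λ ≈ 0.3 is an indicative value from fits to HERA data, not a first-principles prediction:

```latex
% The DIS cross-section depends on x and Q^2 only through tau:
\sigma^{\gamma^* p}(x, Q^2) \;=\; \sigma(\tau),
\qquad
\tau = \frac{Q^2}{Q_s^2(x)},
\qquad
Q_s^2(x) = Q_0^2 \left(\frac{x}{x_0}\right)^{-\lambda},
\quad \lambda \approx 0.3,
```

where Qs(x) is the saturation scale, growing as x decreases, and Q0 and x0 are fitted reference values.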

The nonlinear gluon interactions of QCD also underlie the physics of the hard Balitsky-Fadin-Kuraev-Lipatov (BFKL) pomeron, which is postulated to control the energy dependence of hard reactions, as well as the distribution of particle production at very small values of x and extreme values of rapidity.

Remarkably, Juan Maldacena’s anti-de Sitter/conformal field theory (AdS/CFT) duality between conformal gauge field theory and string theory in 10 dimensions has begun to make an impact on QCD studies. The mapping of quark and gluon physics onto the fifth dimension of anti-de Sitter space is providing insight into the gluonium spectrum that controls the pomeron trajectory, as well as into light-quark hadron spectroscopy. Moreover, it explains the success of QCD counting rules for hard elastic scattering reactions, using the methods of Joseph Polchinski and Matt Strassler.

The AdS/CFT correspondence also explains the dominance of the quark interchange mechanism in hard exclusive reactions, and gives a model for the basic light-front wavefunctions of hadrons that incorporates conformal scaling at short distance and colour confinement at large distances. Lattice gauge theory is also making an impact on the QCD physics of high-energy collisions.

Much of the phenomenological work in pomeron physics was pioneered by loyal participants in the Blois conference series. Many of the original participants of the first Blois Workshop attended the 20th anniversary conference and presented their current work. While the discussions at the first Blois Workshop centred on results from the ISR at CERN and the first data from the SPS as a proton-antiproton collider (with predictions for the Tevatron and Superconducting Super Collider projects), the XIth International Conference had speakers reporting the latest experimental results from the Tevatron, from polarized proton-proton (and proton-carbon) experiments using fixed and jet targets at RHIC, and from HERA at DESY. HERA II running has started and a significant and welcome statistical increase in diffraction data is expected before the machine closes down in 2007.

The experimental efforts under way regarding forward physics at the Large Hadron Collider (LHC) at CERN were extensively discussed; the large increase in reach in x and Q2 in proton-proton and nucleus-nucleus collisions will provide significant tests and advances for QCD-inspired diffraction phenomenology and calculations. There was also speculation that the Froissart bound for the total proton-proton cross-section will be saturated at the LHC, with its value controlled not by pion exchange but by the exchange of the lightest glueball, as originally predicted by Nicolescu. An exciting possibility for the future is observing the Higgs boson in doubly tagged diffractive collisions pp → p + H + p at the LHC; the Higgs would be found as a peak in the missing mass spectrum, rather than in a specific decay channel.
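For reference, the Froissart(-Martin) bound mentioned above limits the growth of the total cross-section with energy; in its classic form the coefficient is set by the pion mass, the lightest exchangeable state:

```latex
% Froissart-Martin bound on the total cross-section
\sigma_{\mathrm{tot}}(s) \;\le\; \frac{\pi}{m_\pi^2}\,\ln^2\!\frac{s}{s_0},
```

where s0 is an unspecified scale. The speculation discussed at the conference is that if the exchange of the lightest glueball, rather than the pion, governs the asymptotic behaviour, the coefficient – and hence the saturated cross-section at the LHC – would be set by the glueball mass instead.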

A number of presentations addressed existing and new cosmic-ray detector arrays, where the discrepancy above the Greisen-Zatsepin-Kuzmin cut-off – between the excess seen by the Akeno Giant Air Shower Array (AGASA) and the fall-off seen by the High Resolution Fly’s Eye (HiRes) experiment – continues to generate interest. Some proposals for very forward instrumentation at the LHC are dedicated to providing benchmarks for simulations of cosmic-ray shower initiation: the Centauro and Strange Object Research (CASTOR) detector in TOTEM/CMS for forward electromagnetic showers, and the proposed LHCf detector for the measurement of forward π0s. Zero-degree neutron calorimeters, which are operating successfully in the experiments at RHIC, will have counterparts in the heavy-ion programme at the LHC, but would also be very useful and complementary tools for the measurement of diffraction in proton-proton collisions.

The diffractive production of vector mesons and single hard photons at HERA provides a way to select and determine different generalized parton distributions (GPDs) – the quantum-mechanical parton wavefunctions – of the proton. Early results from HERMES and from the H1 and ZEUS experiments seem to be in close agreement with current GPD models. A number of talks discussed the physics of other exclusive diffractive reactions, such as two-photon collisions, which are sensitive to the vector-meson distribution amplitudes as well as to the exchange mechanism. Double-charm production, an indication of an intrinsic charm component in the proton wavefunction, was demonstrated by the SELEX experiment at Fermilab. The production of pentaquarks and other exotic quark bound states was reviewed, with the conclusion that the situation is still very confused, with seemingly contradictory results.

At this anniversary, a number of overviews were presented surveying the progress in diffractive physics, both experimental and theoretical, made over the past 20 years. Alan Krisch of Michigan told the story of his pioneering measurements of hadronic spin effects, and of the extraordinarily large spin correlations that were discovered in large-angle elastic proton-proton scattering and that are still only partially understood. Konstantin Goulianos of Rockefeller University gave an overview of diffraction, both soft and hard, as measured at the Tevatron, and its connection to results from HERA. Gunnar Ingelman of Uppsala described the enormous evolution in the understanding of hard diffraction since it was first observed by UA8 at CERN’s proton-antiproton collider.

The overall atmosphere of the conference was one of expectation: for the data from the new runs that have started at the Tevatron and at HERA II, and from the LHC which will start running around the time of the next Blois conference. Indeed, the next two meetings may well see significant progress in the understanding of both soft and hard diffraction.
