Beijing was also the meeting place later in August for the Tenth International Symposium on Meson-Nucleon Physics and the Structure of the Nucleon (MENU2004), which was held at IHEP from 30 August to 4 September. This series of meetings covers a wide range of experimental and theoretical developments in meson-nucleon physics, baryon spectroscopy, photo/electro-production of mesons, dibaryons, the structure of the nucleon, chiral-symmetry-based effective field theories, quantum chromodynamics-inspired quark models of hadrons, and so on. Previous symposia have alternated between Europe and North America, but the BES collaboration at BEPC has now become a new member of the “MENU club” through its studies of N* production in J/ψ decays over the past few years, and for the first time MENU went east.
MENU2004 attracted about 150 participants from 23 countries around the world, and there were around 100 talks. The scope of the conference provided ample evidence that the meson-nucleon problem is still as interesting and viable as it was 21 years ago at the first MENU in Karlsruhe in 1983. This time the search for missing baryon resonances and pentaquark states, together with the properties of the baryon resonances, were the main issues. While the LEPS collaboration at the SPring-8 synchrotron radiation facility in Japan reported some new evidence for the Θ+ pentaquark in their γd experiment, decisive results from high-statistics data at Jefferson Lab (JLab) in the US are still awaited. Experiments at BEPC, JLab, the Electron Stretcher Accelerator in Bonn and elsewhere have reported some new evidence for missing N* resonances.
The next few years will see many important developments in this field, with new facilities at J-PARC in Japan and GSI in Germany, major upgrades at BEPC and JLab, and the transfer of the WASA detector from Uppsala to the cooler synchrotron, COSY, at Jülich. To celebrate the tenth anniversary of MENU, the meeting included an impressive concert of Chinese ethnic folk music, and the delighted participants are now looking forward to a bright future and the next MENU at Jülich in 2007.
The ATRAP experiment at CERN has made the first measurement of the velocity of slow antihydrogen atoms. This is an important step towards the goal of producing antihydrogen atoms cold enough – that is, slow enough – for precision spectroscopy.
Both the ATRAP and ATHENA experiments at CERN’s Antiproton Decelerator announced the production of large numbers of “cold” antihydrogen atoms in 2002. While these atoms were certainly much colder than those first observed at CERN in 1995, their actual energy was not known. Now the ATRAP collaboration has demonstrated a technique for determining the velocity of those antihydrogen atoms that pass through an oscillating electric field without ionizing.
In ATRAP, antihydrogen atoms form in a nested Penning trap and then move through an electric-field region prior to detection; only those atoms not ionized in the field are detected. The measurement of the atoms’ velocity depends on observing how the number of atoms detected varies with the oscillation frequency of a time-varying field superimposed on a static field. The slowest atoms will be ionized and never reach the detector, while faster atoms may pass through unaffected, depending on the phase of the field they encounter; as the frequency of the oscillating field is increased, fewer atoms will move fast enough to remain un-ionized. The team found that the most weakly bound atoms that make it through to detection have an energy of 200 meV. This corresponds to a velocity about 20 times higher than the average thermal velocity at a temperature of 4.2 K (Gabrielse et al. 2004). The speed of more tightly bound states, which could have lower velocities, could be measured by the same method with a higher static field, but this would require more time.
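The quoted numbers can be checked with a back-of-envelope calculation. The sketch below is not the ATRAP analysis itself; it simply assumes the non-relativistic relations E = ½mv² and v_rms = √(3kT/m) to convert the 200 meV energy into a velocity and compare it with the thermal velocity of hydrogen at 4.2 K:

```python
import math

M_H = 1.674e-27   # antihydrogen mass in kg (~ hydrogen / proton mass)
K_B = 1.381e-23   # Boltzmann constant, J/K
EV = 1.602e-19    # joules per electronvolt

def velocity_from_energy(e_ev):
    """Velocity (m/s) of a particle of mass M_H with kinetic energy e_ev (eV)."""
    return math.sqrt(2 * e_ev * EV / M_H)

def thermal_velocity(t_kelvin):
    """RMS thermal velocity sqrt(3kT/m) at temperature t_kelvin (K)."""
    return math.sqrt(3 * K_B * t_kelvin / M_H)

v_atoms = velocity_from_energy(0.2)   # the reported 200 meV
v_thermal = thermal_velocity(4.2)     # trap temperature
print(f"{v_atoms:.0f} m/s vs {v_thermal:.0f} m/s, ratio {v_atoms / v_thermal:.1f}")
```

The ratio comes out close to 20, consistent with the factor quoted in the article.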
Further reading
G Gabrielse et al. 2004 Phys. Rev. Lett. 93 073401.
Thirty years have passed since the discovery of weak neutral currents in the Gargamelle bubble chamber at CERN. Today the huge impact of this discovery on CERN, the field of high-energy physics and beyond, is highly visible; then, however, it was received with great scepticism both by CERN and the physics community.
Shortly after the Siena Conference in 1963, André Lagarrigue, André Rousset and Paul Musset worked out a proposal for a neutrino experiment that aimed to increase the event rate by an order of magnitude. This meant building a large heavy-liquid bubble chamber, later named Gargamelle (figure 1), and also forming a large collaboration. The core of the team consisted of members of Orsay, the École Polytechnique and the just-finishing neutrino experiments with the 1 m bubble chamber of the CERN NPA (Nuclear Physics Apparatus) division. In the end, the collaboration consisted of seven European laboratories and also included guests from Japan, Russia and the US.
At the end of the 1950s V-A theory was the “standard model” of weak interactions. Its major drawback was its bad high-energy behaviour, which prompted various ideas to cure the problem of infinities. Guided by quantum electrodynamics, a gauge theory, attempts were made to construct a gauge theory of weak interactions, and in the mid-1960s the hypothesized charged intermediate vector boson (W±) was complemented with a neutral partner to achieve the required cancellations. The invention of the Higgs mechanism solved the problem of having both a gauge theory and massive mediators of weak interactions. The progress made by Sheldon Glashow, Abdus Salam and Steven Weinberg was completed by the work of Martinus Veltman and Gerard ‘t Hooft, which proved the renormalizability of the theory. So, as 1971 turned to 1972, a viable theory of weak interactions that claimed weak neutral currents as a crucial ingredient was proposed, challenging the experimental groups to provide “yes” or “no” as an answer to the question “do neutral currents exist?”.
By that time two neutrino experiments were running, Gargamelle at the CERN Proton Synchrotron and the HPWF (Harvard, Pennsylvania, Wisconsin, Fermilab) counter experiment at what is now Fermilab. Both were confronted with this challenge without preparation. The searches for neutral currents in previous neutrino experiments resulted in discouragingly low limits, and it was somehow commonly concluded that no weak neutral currents existed. In fact, during the two-day meeting in November 1968 in Milan, where the Gargamelle collaboration discussed its future neutrino programme, the words “neutral current” were not even mentioned. On the contrary, the real highlight that attracted the interest of all was the recent observation of the proton substructure at SLAC, provoking the question of what structure would be revealed by the W in the neutrino experiments as opposed to the photon in electron-proton scattering.
Although the quest for neutral currents had been ignored, Gargamelle could meet the challenge once the matter of their discovery became urgent at the beginning of 1972. Scanning and event classification followed the same rules as established in the previous NPA bubble-chamber experiments. There was no muon identification, since weak processes were supposed always to transform an initial-state neutrino into a final-state muon. Consequently, there was an unavoidable background of events in which a charged hadron leaves the visible volume of the chamber without visible interaction, thus faking a muon. Events with a muon candidate were collected in one category, A, while events consisting of secondaries that were all identified as hadrons were collected in a second category, B. These category-B events, the so-called neutron stars (n*), were thought to arise when undetected upstream neutrino interactions emitted a neutron that interacted in the chamber. It was then easy to deduce from these events the fraction that did not interact, thus simulating a muon, and to subtract them from the observed number of events in category A.
So, if weak neutral currents indeed existed, they would have induced events consisting of hadrons only, just as the n*s, and they would be waiting to be discovered as part of category B. Consequently, their investigation could be undertaken without any loss of time. The main task was then to find ways of distinguishing neutrino-induced from neutron-induced events.
Three hot months in 1973
The measurements of the inclusive neutral-current (NC) candidates were carried out between September 1972 and March 1973. The observation of an isolated electron in the anti-neutrino film, interpreted as an elastic weak neutral-current interaction on an electron, generated great excitement and inspired the efforts to check carefully each neutral-current candidate (Hasert et al. 1973a). For comparison, a charged-current (CC) sample was collected, where the same criteria were applied to the hadrons as for the neutral-current candidates. In particular, the total deposited hadron energy had to exceed 1 GeV. This severe cut was intended to keep the number of n* small.
At the collaboration meeting in March 1973 at CERN it looked as though a discovery was at hand. The number of neutral-current candidates was encouragingly large, as seen in table 1 (Hasert et al. 1973b). Their spatial distributions, as shown in figure 2, suggested first that the vertex distribution of the neutral-current candidates is neutrino-like, since it is flat like the charged-current events; and second that there is no indication of an exponentially falling distribution at the beginning of the chamber, as should be expected if the neutral-current candidates were dominantly induced by neutrons. Both arguments were corroborated by a Monte Carlo simulation of the Orsay group based on the simplifying assumption that upstream neutrino-induced neutrons enter the chamber directly along the neutrino direction.
Yet Jack Fry and Dieter Haidt contended that neither argument was cogent, for two strong reasons. First, the neutrino flux has a broad radial extension, which causes neutrino interactions in the coils surrounding the chamber and thus a flux of neutrons that enters the fiducial volume uniformly from the side. Second, high-energy neutrons generate a cascade, implying that the longer, energy-dependent cascade length, rather than the interaction length, defines the relevant measure for the number of background events. Thus, it was unclear whether the neutral-current candidates really contained a novel type of neutrino-induced event or whether they were merely the expected, good-old neutron-induced stars.
In this situation a detailed neutron background calculation was indispensable. The programme had to take into account the geometry and matter distribution of the chamber, the magnet coils and the shielding, the neutrino flux in energy and radial distributions, the dynamics of the final state, and most of all the neutral hadron cascade. The demanding task consisted of describing realistically the complex final hadron state. The breakthrough was achieved when it became clear that only fast final-state nucleons can generate a cascade and eventually lead to an induced neutron background event satisfying the energy requirement and that, furthermore, the cascade is linear. All the ingredients of the programme were backed up by data, so the predictions did not depend upon free parameters. This ambitious programme (Fry and Haidt 1975) was set up, carried through in the following months, and led in July 1973 to the indisputable conclusion that the neutron-induced background explained only a small fraction of the neutral-current candidates, so that a new effect could be claimed and published (Hasert et al. 1973b). In an independent check, Antonino Pullia exploited the spatial distributions of neutral-current and charged-current candidates, providing further evidence that the neutral-current sample was not dominated by neutron stars (Hasert et al. 1974).
Attack and victory
The new results were presented at the Electron-Photon Conference one month later at Bonn, together with the results of the HPWF experiment. At the end of the conference, Chen-Ning Yang announced the existence of weak neutral currents as the highlight of the meeting.
Shortly afterwards, the HPWF collaboration modified their apparatus with the net result that the previously observed signal of neutral currents disappeared. This news quickly reached CERN, where it had a dismaying effect and was a cause for distrust of the Gargamelle result. The opponents focused their criticism on the neutron background calculation and in particular on the treatment of the neutron cascade. Although the members of the Gargamelle collaboration withstood all the critical questions, the willingness to accept the validity of the Gargamelle observation had to wait until the end of the year. In a special exposure of Gargamelle to shots of protons with fixed momentum, the prediction of the cascade programme was verified quantitatively and unambiguously by the direct observation of proton-induced cascades in the chamber (figure 3). The results were presented at the American Physical Society conference in Washington in April 1974 (Haidt 1974).
One year after the discovery, at the time of the conference in London in June 1974, overwhelming confirmation for the existence of weak neutral currents came from Gargamelle itself with twice the original statistics (Hasert et al. 1974). In the meantime the HPWF collaboration had elucidated the reason why they lost the signal and also affirmed weak neutral currents. Further confirmation came from the new counter experiment of the California Institute of Technology and Fermilab (CITF) collaboration and from the observation of neutral-current-induced single pion events in the 12 ft bubble chamber at Argonne.
The impact
The discovery of weak neutral currents crowned the long-range neutrino programme initiated by CERN at the beginning of the 1960s and brought CERN a leading role in the field. The new effect marked the experimental beginning of the Standard Model of electroweak interactions and triggered huge activity at CERN and all over the world, both on the experimental and theoretical sides. The most immediate success was the prediction of the mass value of the elusive intermediate vector boson, W, on the basis of the Glashow-Salam-Weinberg model, combined with the first measurements of the weak mixing angle θW. This led to the idea of building a proton-antiproton collider, which was later realized at CERN and brought about the observation at CERN of the mediators of the weak force, the W and Z. The neutrino experiments at the CERN Super Proton Synchrotron increased their precision to the point that a first test of weak radiative corrections became possible. The continuously increasing amount of knowledge on weak interactions justified building the Large Electron Positron collider, LEP, which with its high intensity reached sufficient precision at the energy range of the Z mass and beyond to test electroweak theory at the quantum level. All the results combined make the search for the Higgs, the last element of the electroweak Standard Model, a central issue for the Large Hadron Collider.
• This article is based on a talk at the symposium held at CERN in September 2003, “1973: neutral currents, 1983: W± and Z bosons. The anniversary of CERN’s discoveries and a look into the future.” The full proceedings have been published as volume 34 issue 1 of The European Physical Journal C, and as a book, Prestigious Discoveries at CERN, by Roger Cashmore, Luciano Maiani and Jean-Pierre Revol (Springer ISBN 3540207503, September 2004).
Aristotle’s report that “men in pits or wells sometimes see the stars”, made in On the Generation of Animals (book V, chapter 1), is a legend that was long believed, persisting until the 20th century without being experimentally tested. Similar stories were reported by Giambattista della Porta in 1560, Christof Schneider in 1626, John Herschel in 1836 and Charles Dickens in 1837, among others. But sunlight scattered from air molecules is generally much brighter than the brightest starlight, so it is impossible to see stars with the naked eye during daylight hours, no matter where one is looking from. Physicists today, however, have found methods for looking into stars from the bowels of a mountain. In addition to the observations of solar and supernova neutrinos, nuclear reactions of astrophysical interest are now being studied underground.
LUNA, the Laboratory for Underground Nuclear Astrophysics at Gran Sasso, has recently measured the cross-section for p + 14N → 15O + γ, the key reaction of the CNO cycle that fuels stars heavier than the Sun (see figure 1) (LUNA 2004). Shielding against cosmic radiation provided by the surrounding mountain has allowed measurements down to a centre-of-mass energy of 130 keV, and lower energies, close to those of stellar burning, are currently being explored. With the LUNA value of the astrophysical S-factor (a measure of the strength of the nuclear interaction) being about half that previously estimated, the predicted flux of solar neutrinos from the CNO cycle has correspondingly been halved and the age of the galaxy, as deduced from the stellar evolution of globular clusters, has been increased by about one billion years (see figure 2) (Degl’Innocenti et al. 2004).
LUNA was conceived during the conference dinner of “Nuclei in the Cosmos ’90”, which was held at Baden bei Wien in Austria. During hors d’oeuvre, Gianni Fiorentini from the University of Ferrara asked Claus Rolfs of the University of Bochum why nuclear reactions could not be measured in the laboratory at the energies at which they occur in stars. The answer was that cosmic radiation provides a formidable background for detecting the extremely slow reaction rates at these energies. By the time of dessert, they had realized that the solution was to install an accelerator in an underground laboratory. The director of the Laboratori Nazionali del Gran Sasso, Enrico Bellotti, was enthusiastic about the idea and the INFN president Nicola Cabibbo immediately endorsed it after receiving an informal letter of intent, which began: “We believe that the Gran Sasso laboratory offers a unique possibility for progress in the measurement of low-energy nuclear cross-sections, which are relevant for nucleosynthesis in stars and in the early universe, as well as for the evaluation of the solar-neutrino flux.”
Within a few months LUNA was born, given a name and approved by INFN as a collaboration involving physicists from Bochum, Cagliari, Ferrara, Frascati, Genoa, Gran Sasso and Turin, and headed by Rolfs. Later, physicists from Debrecen, Lisbon, Milan, Naples, Padua and Teramo joined the group. Carlo Broggini of Padua is currently the spokesperson, succeeding Piero Corvisiero of Genoa.
Nuclear reactions in stars, and at the Big Bang, occur at energies well below the Coulomb barrier Ec = Z1Z2e²/r, where nuclear processes are possible only through quantum tunneling, and their cross-sections are exponentially suppressed with decreasing energy. For the collision of two nuclei with atomic numbers Z1 and Z2 and reduced mass µ, the cross-section at centre-of-mass energy E can be written as in equation 1, where the exponential factor accounts for the barrier penetration, the astrophysical S-factor S(E) expresses the strength of the nuclear interaction and α is the fine-structure constant. As an example, the cross-section for 3He + 3He → 4He + 2p drops from 0.07 b at E = 2 MeV down to 2 × 10⁻¹⁴ b at E = 16.5 keV.
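The dramatic suppression quoted above can be reproduced with the standard Gamow parametrization, σ(E) = S(E)/E · exp(−2πη) with η = Z1Z2α√(µc²/2E), which matches the description of equation 1. The sketch below evaluates it for 3He + 3He with an assumed, round S-factor of 5 MeV b (close to the measured low-energy value); only the orders of magnitude should be trusted:

```python
import math

ALPHA = 1 / 137.036   # fine-structure constant
AMU = 931.494         # MeV per atomic mass unit (c = 1 units)

def gamow_sigma(e_mev, z1, z2, mu_amu, s_mev_b):
    """Cross-section in barns at centre-of-mass energy e_mev (MeV),
    assuming a constant astrophysical S-factor s_mev_b (MeV b)."""
    eta = z1 * z2 * ALPHA * math.sqrt(mu_amu * AMU / (2 * e_mev))
    return s_mev_b / e_mev * math.exp(-2 * math.pi * eta)

# 3He + 3He: Z1 = Z2 = 2, reduced mass ~ 3.016/2 amu
for e in (2.0, 0.0165):
    sigma = gamow_sigma(e, 2, 2, 1.508, 5.0)
    print(f"E = {e * 1000:7.1f} keV   sigma ~ {sigma:.1g} b")
```

The result drops by some twelve orders of magnitude between 2 MeV and 16.5 keV, in line with the figures in the text.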
Before LUNA, all experiments had been performed at energies such that E/Ec > 1/20, whereas stellar burning occurs at E/Ec ≈ 1/100. In order to extract S(E) at energies of astrophysical interest, available data had to be extrapolated over a relatively wide energy range, leading to substantial uncertainties. Although experiments had been optimized using the best available techniques, they were basically limited by the effect of cosmic rays. However, the problem can be overcome by carrying out such experiments in an underground laboratory.
LUNA began working initially with a 50 kV electrostatic accelerator (homemade by students at Bochum) coupled with a windowless gas target system. Now, in a second phase, a commercial 400 kV accelerator has been installed at Gran Sasso (see figure 3). The important features of both accelerators are a very small energy spread and a very high beam current, even at low energy. To avoid any interference with the passive detectors at Gran Sasso, the LUNA accelerators are installed in two small, dedicated rooms, separated from other experiments by about 60 m of rock.
The reaction 3He + 3He → 4He + 2p has been measured in the energy window – the so-called Gamow peak – relevant to the Sun (see figure 4) (LUNA 1999). At the lowest energy (E = 16.5 keV) the event rate was as low as two per month. This means that for the first time an important nuclear fusion reaction has been measured in the laboratory at the energies occurring in the Sun. This has reduced the (partial) uncertainty on the Be and B solar-neutrino fluxes produced from this reaction to 3%.
In the Sun the reaction p + d → 3He + γ must occur after deuterium is formed, so the precise value of its cross-section is unimportant for solar physics, as long as it is much larger than that of the preceding reaction, p + p → d + e+ + ν. On the other hand, during Big Bang nucleosynthesis the rate of p + d → 3He + γ competes with the expansion of the universe, which dilutes the proton density, so the cross-section for this reaction is crucial for establishing the primordial deuterium abundance. LUNA has measured this cross-section with an accuracy of the order of 10%, from 22 keV down to 2.5 keV (LUNA 2002). Combining the LUNA results with other nuclear-physics input and observational data on the deuterium abundance shows that the nucleon-to-photon density ratio in the first few minutes of the universe is η = (5.9 ± 0.5) × 10⁻¹⁰, in excellent agreement with the value of η = (6.3 ± 0.3) × 10⁻¹⁰ from cosmic microwave background observations, which probe the universe when it was about 400,000 years old.
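The quoted agreement is easy to quantify. A minimal sketch, under the simplifying assumption that the two uncertainties are independent and Gaussian and can be combined in quadrature:

```python
import math

# Baryon-to-photon ratios quoted in the text
eta_bbn, err_bbn = 5.9e-10, 0.5e-10   # BBN + deuterium abundance
eta_cmb, err_cmb = 6.3e-10, 0.3e-10   # cosmic microwave background

# Discrepancy in units of the combined uncertainty
pull = abs(eta_cmb - eta_bbn) / math.hypot(err_bbn, err_cmb)
print(f"discrepancy = {pull:.2f} sigma")
```

The two determinations differ by well under one standard deviation, which is what "excellent agreement" means here.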
In the near future, LUNA will measure 3He + 4He → 7Be + γ, which represents the main uncertainty for the prediction of B and Be solar neutrinos and is an important ingredient for estimating the abundance of lithium left from the Big Bang. Measurements in the region of solar energies will be performed at Gran Sasso, whereas those at higher energies will be taken in Bochum.
During physics workshops experts generally get together to present the outcome of recent work, confront and discuss new ideas, gain inspiration for further work, and incidentally start new collaborations. Sometimes, however, the conditions are so favourable that all workshop activities seem to be oriented towards a unique common goal. Each participant feels like a member of one team co-operating to accomplish a well defined goal. This is just what happened during the international workshop on “Transversity: New Developments in Nucleon Spin Structure” in June 2004, which brought together some 40 leading experimental and theoretical physicists in the field of nucleon spin structure at the European Centre for Theoretical Physics (ECT*) in Trento, Italy.
Many interesting talks were presented by renowned experts, supplemented by shorter, but no less inspiring, talks by PhD students and postdocs. The talks illustrated and substantiated the rapid developments in the new field of transverse spin physics. Indeed, the results presented were so encouraging that the idea emerged spontaneously to devote part of the scheduled (and unscheduled) discussion time to the preparation of a document, soon christened The Trento Convention, which would contain all relevant notations and conventions that are crucial for the achievement of further progress in this field. The document, which is now well under way, will soon be submitted to the e-print archives. While it has been set up by a few representatives (A Bacchetta, U D’Alesio, M Diehl and A Miller), it is in a sense co-authored by all the workshop participants. Just like the famous Council of Trent that took place there in the 16th century, the document represents a common frame and a common language for an unambiguous comparison between theory and experiment. It will be an indispensable tool to boost further developments in this field.
Why is such a seemingly technical subject as transverse spin physics so fascinating? From recent cosmological observations, for instance by the WMAP satellite, we know that visible matter represents only 4% of the universe. Of this small percentage only a minute fraction can be attributed to the mass of the quarks, for which – most likely – the Higgs mechanism has to be invoked. The remaining, and by far the largest, part of the mass of the visible universe has a dynamical origin. It is the dynamics of the quarks and gluons in the nucleon, as governed by the theory of strong interactions – quantum chromodynamics (QCD) – that needs to be fully understood to be able to account for the mass of the nucleon and hence that of the visible universe. It is this quest that drives theorists and experimentalists alike to study the transverse spin structure of the nucleon, giving access to subjects such as the orbital motion of quarks – a crucial ingredient of parton dynamics.
In the famous EMC (European Muon Collaboration) experiment at CERN, it became evident in 1988 that only a small fraction of the proton spin is carried by the helicities of the valence quarks. Since then much work has been done in unravelling the origin of the proton’s spin, in particular identifying the carriers of nucleon angular momentum in the framework of QCD. Thanks to a tremendous effort in both experiment and theory, we now know how to encode information on the dynamics of polarized quarks and gluons into a formalism that can be rigorously derived from QCD. This effort has also led to the definition of new observables and the introduction of methods to measure new effects associated with these observables. Transverse spin is an example of such a new observable, and it has generated considerable interest as it enables the study of the spin structure of the nucleon while “switching off the gluon contribution”. Moreover, the first observation of transverse spin effects in experiments – as presented at the workshop – gives indirect evidence for the existence of quarks with non-zero orbital angular momentum in the nucleon.
Only slightly more than a decade ago it was realized by Robert Jaffe of MIT, among others, that there is a third leading-order quark distribution function apart from the well known structure function F2 and the spin-dependent distribution function g1. This distribution of transversely polarized quarks in a transversely polarized nucleon, also known as “transversity”, is nowadays acknowledged as a crucial ingredient of the spin structure of the nucleon. However, until less than a year ago no data on this distribution function existed. On the other hand, measurements of transverse spin distributions would not only enable the study of the issues mentioned above (on spin effects without gluons and orbital motion), but would also make it possible to verify QCD predictions on the deformation of this quark distribution in polarized nucleons (known as the nucleon tensor charge) and the novel QCD evolution properties of this distribution function. Unfortunately, transversity is experimentally very difficult to access because it involves a simultaneous flip of the helicity of both the struck quark and the target; it is, to use some jargon, a “chiral-odd” object. For that reason another chiral-odd object is needed to arrive at a measurable (i.e. chiral-even) cross-section. This can be realized in polarized Drell-Yan (p↑p↑ → l+l– X) or semi-inclusive processes with hadron beams (pp↑ → πX) or lepton beams (lp↑ → l′πX).
In semi-inclusive processes, experiments search for an azimuthal asymmetry in the produced π mesons. Such an azimuthal asymmetry – i.e. with pions preferentially produced in one hemisphere rather than the other with respect to the scattering plane – may arise for two reasons. Assuming a string-breaking mechanism, a quark-antiquark pair with non-zero internal angular momentum is produced (see figure 1) and/or the struck quark already had some intrinsic orbital angular momentum. These mechanisms are known as the Collins and Sivers effects, respectively. For years it was commonly believed that they had to be suppressed because they violate invariance under the time-reversal transformation. However, as was shown by Stanley Brodsky of SLAC and Dae Sung Hwang of Sejong University in Seoul (the latter being present at the workshop), a residual interaction with the jet remnants enters the description of these reaction processes and prevents the time-reversal argument from being applicable. The proper description of this additional interaction was addressed by several speakers at the workshop, including Andreas Metz of Bochum and Dennis Sivers of Portland, indicating that there exists a possible link to chiral-symmetry breaking effects in QCD.
The first data on azimuthal asymmetries observed in deep-inelastic lepton scattering on transversely polarized proton targets were presented at the workshop by Andy Miller for the HERMES experiment at DESY (figure 2) and Rainer Joosten for COMPASS at CERN. These data represent only the beginning of a whole new generation of experiments that enable measurements of single-spin asymmetries as small as a few percent, while they are differential in two to three kinematical variables. Other collaborations, such as STAR and PHENIX at the Relativistic Heavy Ion Collider at Brookhaven and CLAS at the Thomas Jefferson Laboratory, presented other asymmetry measurements that give information on related processes. Moreover, the HERMES collaboration showed first results for the azimuthal asymmetry related to the Sivers mechanism, which – the data being non-zero – provided direct evidence of the existence of quark orbital angular momentum.
On the theoretical side much progress has been obtained in studying the universality of the transverse-momentum-dependent parton distribution and fragmentation functions. Both John Collins and Andreas Metz argued that this universality, or process independence, is now almost completely established for deep-inelastic scattering, e+e– annihilation and the Drell-Yan process, although it is still under debate for proton-proton scattering because of the complicated field-theoretical structure of the diagrams involved. In parallel, an increasing number of groups – including Leonard Gamberg at Pennsylvania State University, Umberto D’Alesio from Cagliari, Aram Kotzinian of CERN and others – are calculating these functions either within models or by means of lattice QCD simulations, as discussed by Philip Haegler of Regensburg, in order to interpret both new and existing data. At the workshop it became clear that, despite the large amount of work already done and in progress, it is still too early to draw definite conclusions. Many speakers insisted that a new global analysis of all direct and indirect measurements is needed to make further progress. For that reason the development of The Trento Convention is very timely.
This is a rapidly changing field, and new experimental and theoretical avenues are currently being explored. The former, presented by Delia Hasch of Frascati, include azimuthal asymmetries with inclusive detection of two pions, upgrades of existing experiments and the use of polarized antiproton beams and targets to extract transversity from Drell-Yan measurements at the future HESR ring at GSI (which was presented by Frank Rathmann). New theoretical avenues include the first exploratory calculations of chiral-odd objects on the lattice, and the study of the relationship and complementarity between transverse-momentum-dependent (chiral-odd) parton distributions and the (chiral-odd) generalized parton distributions, which give a picture of the transverse distribution of partons in three-dimensional space, as described by Markus Diehl of DESY and Matthias Burkardt of New Mexico State/ECT*.
Although only a first small sample of data on azimuthal asymmetries with transversely polarized targets is as yet available, the field is already confronted with rapid experimental and theoretical developments giving new insights into the QCD structure of the nucleon in general, and the role of orbital angular momentum and transverse spin in particular. In Trento this gave rise to outspoken enthusiasm, illustrated by many lively discussions. All of this reflects the enormous activity in this relatively new branch of QCD physics.
The KamLAND collaboration has announced an improved measurement of the oscillation between the first two neutrino families based on a 766.3 tonne-year exposure to reactor antineutrinos. This latest analysis also provides evidence of the distortion in the energy spectrum expected from the effects of electron-antineutrino oscillations.
The KamLAND detector, which is based on 1 kilotonne of ultra-pure liquid scintillator, is located near Toyama in Japan, where it is exposed to electron-antineutrinos from 53 nuclear power reactors in Japan, as well as Japanese research reactors and reactors outside Japan. The new results use data collected between March 2002 and January 2004 – or three times the amount of data used in the original measurement, which provided the first evidence that reactor antineutrinos “disappear”. Moreover, improvements in the analysis have allowed the fiducial volume of the detector to be increased by 33%.
With the new analysis, KamLAND observed 258 events with electron-antineutrino energies above 3.4 MeV, compared with 365.2 events expected if there were no neutrino oscillations. This puts the confidence level for the disappearance of reactor antineutrinos at 99.995%. The collaboration also found that the observed energy spectrum disagrees with the expected spectral shape in the absence of neutrino oscillations at the 99.9% confidence level. It does, however, agree with the distortion expected from electron-antineutrino oscillation effects.
The first analysis from KamLAND, taken together with results from solar neutrino experiments, already restricted the parameter space for two neutrinos, favouring the large mixing angle solution. The latest two-neutrino oscillation analysis of the larger data sample gives a best-fit point at Δm² = 8.3 × 10⁻⁵ eV² and tan²θ = 0.41. This disfavours the larger values of Δm² that KamLAND previously allowed. A two-neutrino global analysis of data from KamLAND and from solar neutrino experiments, together with the assumption of CPT invariance, further restricts the parameter space, as shown in the figure, with a best fit for the combined analysis at Δm² = 8.2 (+0.6, −0.5) × 10⁻⁵ eV² and tan²θ = 0.40 (+0.09, −0.07).
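For orientation, the quoted parameters enter the standard two-flavour survival probability P(ν̄e → ν̄e) = 1 − sin²2θ sin²(1.27 Δm² L/E). A minimal sketch in Python, where the 180 km baseline and 4 MeV energy are illustrative values (roughly KamLAND's flux-weighted reactor distance and a typical reactor-antineutrino energy), not numbers from the analysis:

```python
import math

def survival_probability(delta_m2_ev2, tan2_theta, baseline_km, energy_gev):
    """Two-flavour electron-antineutrino survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    sin2_theta = tan2_theta / (1.0 + tan2_theta)          # sin^2(theta)
    sin2_2theta = 4.0 * sin2_theta * (1.0 - sin2_theta)   # sin^2(2*theta)
    phase = 1.27 * delta_m2_ev2 * baseline_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# KamLAND best-fit parameters; the baseline and energy are illustrative only
p = survival_probability(8.3e-5, 0.41, 180.0, 0.004)
```

At a single baseline and energy the probability oscillates strongly; the quoted event deficit corresponds to this probability averaged over the reactor energy spectrum and the baselines of the contributing reactors.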
DIS 2004 – the XII International Workshop on Deep Inelastic Scattering – took place in Štrbské Pleso in the High Tatras mountains in Slovakia on 14-18 April. The DIS series of workshops provides a forum for bringing together the latest experimental and theoretical results, both to increase the understanding of quantum chromodynamics (QCD) and to unravel the complicated structure of the proton. Organized by the Institute of Experimental Physics at the Slovak Academy of Sciences in Košice, in association with other Slovak high-energy physics institutes, DIS 2004 attracted 260 participants. After the welcome address by Slovak president Rudolf Schuster, the programme followed its usual well-tried format. It began with almost a full day of plenary review talks, before the participants divided up into working groups on the following topics: structure functions and low x; diffraction and vector mesons; hadronic final states; heavy flavours; electroweak and physics beyond the Standard Model; and spin physics. After three days the plenary session reconvened for reports from the working groups.
One of the hot topics was pentaquarks. Here, the experimental situation is puzzling. The narrow Θs(1530) state with B = S = 1, predicted by the chiral soliton model, is seen by several experiments, including ZEUS and HERMES at HERA, but it has not yet been observed by HERA-B. On the other hand H1 reports a narrow B = –C = ±1 state at 3099 MeV, perhaps a little heavy for a Θc pentaquark, which is not seen by ZEUS.
Another exciting area is spin physics, which is about to enter a new era. High-precision data are due soon from the COMPASS experiment at CERN, from the Thomas Jefferson National Accelerator Facility (Jefferson Lab) and from the Relativistic Heavy Ion Collider at Brookhaven. These data will supplement the extensive information coming from HERMES and thereby probe the intricate spin structure of the proton. Jefferson Lab already has the first precise measurements of the structure function for scattering from transversely polarized neutrons, g2n, which allow a study of twist-3 operators.
Heavy quarks and gluons
The presentations of more accurate data on heavy flavours, together with a better implementation of QCD, showed that the discrepancy between the data and predictions on b-bbar production, found both at Fermilab’s Tevatron and at HERA, has largely been resolved. Moreover, the first measurements of the structure function F2b were presented. This directly probes the b-quark content of the proton, which contributes only about 2% of the total proton structure function F2 in the accessed region of high momentum transfer, Q2, and Bjorken x of 0.01 (see figure 1).
Indeed, there was considerable discussion on the partonic structure of the proton, which, in addition to its intrinsic interest, is so important for improving the predictions for searches at the Large Hadron Collider (LHC) at CERN. The experiments at HERA have opened up the domain of small x, and the present precision of the data allowed searching questions to be discussed at the workshop. What are the properties of the gluon? How large is the kinematic domain in which the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution of parton densities is valid? Are ln(1/x) effects evident in the data? Is there any evidence of absorptive effects coming from parton recombination, or even of parton saturation? The experiments at HERA observe diffractive DIS events at about 10% of the rate of inclusive DIS events: what role does this diffractive process play? What is important for HERA to measure now?
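As background to these questions, DGLAP evolution predicts how a parton density changes with the hard scale Q². In a schematic textbook form for a quark density q(x, Q²), with gluon density g and splitting functions P_qq and P_qg (this summary is added for orientation, not taken from the talks):

```latex
\frac{\partial q(x,Q^2)}{\partial \ln Q^2}
  = \frac{\alpha_s(Q^2)}{2\pi} \int_x^1 \frac{dz}{z}
    \left[ P_{qq}(z)\, q\!\left(\tfrac{x}{z},Q^2\right)
         + P_{qg}(z)\, g\!\left(\tfrac{x}{z},Q^2\right) \right]
```

The splitting functions are the perturbatively calculable ingredient, computed order by order in α_s; the question debated at the workshop is where additional ln(1/x) or non-linear terms outside this framework become important.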
One of the discoveries of HERA is the unexpected behaviour of the gluon. Analyses of the data at small x surprisingly reveal a valence-like gluon at low scales, whereas it is the sea quarks that grow with decreasing x. Moreover, the gluon is not well determined. For example at Q2 = 100 GeV2 there is at least a 10% uncertainty arising just from the statistical and systematic errors on the data, which becomes much greater at small and large x. Discussions at the workshop concentrated on the equally, and perhaps more, important theoretical uncertainties on the determinations of the partons.
One major source of uncertainty was considerably reduced when Sven-Olaf Moch, Jos Vermaseren and Andreas Vogt presented, for the first time, the complete next-to-next-to-leading-order (NNLO) splitting functions. Their exact results lie in the middle of the approximate bounds previously determined, so the existing NNLO global analyses based on these bounds will be approximately valid. As of this workshop DIS studies have entered the NNLO era. Of course this is for evolution of the parton density within a pure DGLAP framework, which must break down at sufficiently low x and low Q2. Presentations at the workshop showed that additional ln(1/x) effects are being brought under control; the indications are that they are not large in the kinematic domain explored at HERA.
There was also much discussion of parton saturation. The alternative non-linear equations, which describe saturation, are in fact equivalent; they simply view the parton recombination process from different Lorentz frames. Despite the theoretical activity, the consensus of the workshop was that there was no evidence of saturation in the perturbative domain of the HERA data. This does not mean that absorption corrections – signalling the onset of saturation – are negligible. Indeed the recent diffractive DIS data from HERA allow an estimate of such effects, and hence of their influence on global parton analyses.
Looking at more detailed assumptions used in parton analyses, it became clear that the NuTeV experiment’s anomaly in sin2θW may not exist. If we allow for strange quarks with s ≠ sbar, for isospin-violating effects, and for contributions from quantum electrodynamics, then each is found to reduce the anomaly significantly. Indeed there are several aspects of the partonic structure that are within reach, if much more accurate measurements are made, including the valence d-quark distribution from charged-current positron-proton scattering; s – sbar from dimuon production in neutrino scattering; and the valence u+d distributions at low x ≈ 0.1 from high-statistics electron and positron data from HERA.
The identification of new physics at the LHC, Tevatron or HERA is likely to require precise predictions from the Standard Model, which in turn rely on accurately known partons. However, the gluon distribution, which is at the heart of parton analyses, is poorly known. It is determined by the scaling violations of F2, by F2-charm and by jet data from the Tevatron and HERA. It became clear at DIS 2004 that it is crucial to measure the longitudinal structure function FL; it is a direct “orthogonal” measure of the gluon. Simulations that were presented showed how running HERA at four different proton energies would have a decisive effect on determining the gluon, and, in turn, much improve the determination of the QCD coupling.
Precision diffractive DIS data are now available, and diffractive parton densities were presented. These densities are not universal. Care is required to take them from one diffractive process to another, since we must allow for the probability that the rapidity gaps survive population by secondaries from the underlying event. The probabilities depend on the diffractive process. There was much discussion of the new data for these exclusive diffractive processes, both from HERA and the Tevatron. The analysis of the recent HERA data for the photoproduction of dijets was particularly illuminating in this respect.
A prosperous future?
In summary, DIS continues to flourish, with the presentation of a wealth of new results that produced vigorous debate. Much remains to be learnt and we are only just getting to grips with many basic problems, for which the data are either insufficient or even absent. It is inconceivable that HERA will not measure FL – but it remains to be done. There are a host of processes for which a ten-fold increase in luminosity would be invaluable, even forgetting the possibility of the discovery of surprise exotic phenomena. It became clear at the workshop that probing the proton at high energies is revealing more and more information about QCD, which needs to be theoretically understood, with important implications for all high-energy phenomena and the LHC in particular. It would be a tragedy if the HERA programme ran out of time while the physics potential of the machine is just coming to its prime.
“Tracing the onset of deconfinement in nucleus-nucleus collisions” was the name of a workshop held at the European Centre for Theoretical Studies in Nuclear Physics and Related Areas (ECT*) in Trento, Italy, on 24-29 April. Around 40 theorists and experimentalists from Europe, Japan and the US came together to discuss recent progress in the study of the energy dependence of particle production in nuclear collisions. The workshop focused on a prominent issue in high-energy nuclear physics: whether anomalies measured for central lead-lead collisions at low energies at CERN’s Super Proton Synchrotron (SPS) signal a phase transition from confined to deconfined strongly interacting matter.
Mark Gorenstein of Kiev began the workshop by recalling the basic ideas that motivated the energy scan programme at the SPS and comparing the old predictions with recent data from the SPS and from the Alternating Gradient Synchrotron (AGS) and the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory. (Last year the NA49 collaboration presented numerous results on collisions at the five energies – 20, 30, 40, 80 and 158 AGeV – of the energy scan programme at the SPS.) Gorenstein pointed out that the unusual energy dependence of hadron production expected in the case of the onset of deconfinement is in fact observed.
This introductory talk was followed by experimental reviews of the most recent results from SIS, the heavy-ion synchrotron at GSI, Darmstadt, as well as from the AGS, SPS and RHIC. These presentations focused on the energy dependence of a number of observables. In particular, Marco van Leeuwen of the Lawrence Berkeley National Laboratory discussed data on strange hadron yields and concluded that the relative strangeness production shows a sharp maximum (the “horn”) at about 30 AGeV. Results on the change in the energy dependence of pion multiplicity (the “kink”) and the anomaly in the shape of transverse mass spectra (the “step”) were also reported. Finally, the results were compared with the latest predictions of the hadron gas model and microscopic string-hadronic models (RQMD and URQMD) of the collision process. Neither model is able to reproduce adequately the observed anomalies, as figure 1 indicates.
Several speakers explored new phenomena related to the onset of deconfinement. Fluctuations were the focus of this discussion. Large deviations from purely statistical behaviour are expected when the trajectory on the phase diagram of the expanding and cooling matter passes close to the hypothetical critical point. Models predict that this point may be located in the region accessible in nuclear collisions at SPS energies (see figure 2).
The view of the properties of the quark-gluon plasma (QGP) at T = (1-2)Tc, where Tc is the phase-transition temperature, has recently changed radically. Instead of being regarded as a weakly interacting gas of quasiparticles, the QGP is now viewed as a near-perfect liquid. Lattice QCD results show that charmonium states (of a charm quark and antiquark) remain bound at such temperatures, contrary to previous belief. Edward Shuryak of Stony Brook argued that similar bound states should exist for light quarks and gluons; in particular, correlated coloured pairs of light quarks may be present. Andrei Starinets from Seattle demonstrated that the transition from weakly to strongly coupled QGP can be studied theoretically in an N = 4 supersymmetric Yang-Mills theory using Juan Maldacena’s conjecture of duality between gauge theories and string theories. The strongly coupled QGP explains the validity of a hydrodynamical description of matter flow and predicts a (surprisingly small) value for its viscosity.
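The small viscosity mentioned here is conventionally expressed as the ratio of shear viscosity to entropy density; the duality calculation for the strongly coupled N = 4 theory gives the now well-known value (quoted from the literature for orientation, not from the talk itself):

```latex
\frac{\eta}{s} = \frac{1}{4\pi}\,\frac{\hbar}{k_B} \approx 0.08\,\frac{\hbar}{k_B}
```

This is far below the values typical of ordinary fluids, which is why the plasma behaves as a near-perfect liquid.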
The workshop closed with a review of new experimental projects, which may in future extend the studies begun by the energy scan programme at the SPS at CERN. Christoph Blume from Frankfurt reported on the proposal to extend the SPS programme to study the collisions of light ions and protons in the energy range 10-160 AGeV. The new experiment would be based on the upgraded NA49 detector. This programme, if approved, may start in 2006 and run in parallel with the Large Hadron Collider. It should result in a unique measurement of the two-dimensional (energy-system-size) dependence of hadron production that is necessary for a precise understanding of the onset effects observed in central lead-lead collisions, and for a search for critical phenomena.
Ideas for the study of nuclear collisions in a fixed-target experiment at RHIC (energy 10-100 AGeV) were discussed by David Hofman of Chicago. This experiment could be based on the existing BRAHMS detector and would take data in parallel to the collider experiments at RHIC. Several options for the target are under discussion: a foil or wire target in the beam halo, a gas jet and a low-energy “crossing” ion beam. The unique feature of this project would be an almost continuous measurement of the energy dependence of inclusive hadron production while the RHIC beams are ramped to full energy.
Finally, Volker Friese of GSI presented the current status of the new experimental facility, FAIR, at GSI. The Condensed Baryonic Matter (CBM) experiment is being designed to study lead-lead collisions at energies of 2-35 AGeV with very high intensity beams. The properties of dense hadronic matter will be the focus of the study, and the first data taking is scheduled for 2012.
Overall, the workshop indicated the large theoretical and experimental interest in the study of nuclear collisions in the low SPS energy range, where a number of phenomena have been observed that could be related to the onset of deconfinement. This has motivated the idea of establishing an annual series of workshops on this matter.
A decade after achieving its first beam, the US Department of Energy’s Thomas Jefferson National Accelerator Facility completed data collection on its 100th and 101st experiments. The pair of experiments, named “Quark propagation through cold QCD matter” and “Q2 dependence of nuclear transparency for incoherent ρ0 electroproduction”, ran simultaneously in Jefferson Lab’s Hall B from December 2003 to early March this year.
The 100th experiment probed quantum chromodynamics (QCD), the theory of the strong interaction, with emphasis on two of the fundamental processes of QCD: hadronization and gluon emission from quarks. The experiment made use of the Continuous Electron Beam Accelerator Facility (CEBAF), essentially to knock single quarks out of hadrons. The energy that the struck quark absorbs in the collision not only knocks the quark out of the particle it was bound within but also creates new quarks and gluons. At least one of these new quarks pairs up with the original quark, while the rest join to form other multi-quark particles – the process of hadronization.
Members of the CEBAF Large Acceptance Spectrometer (CLAS) collaboration are studying this process to explore both how long it takes for the created quarks to pop into existence and combine into new particles, and exactly how these new particles are created. To this end, the experiment used five different targets composed of nuclei of various size: deuterium, carbon, iron, tin and lead. Understanding the process of hadronization inside the nucleus through such measurements may provide a clearer understanding of quark confinement.
The 101st experiment was a search for “colour transparency”. According to QCD, pointlike colourless systems, such as a meson with a pointlike configuration produced in an exclusive process, should be able to travel through nuclear matter without interacting with other particles. When this happens the medium the particles are travelling through is said to be colour transparent.
In this experiment the team looked for ρ-mesons that were created when the electrons interacted with target nuclei. Some of these mesons may have acted as pointlike colourless systems; detecting them would provide a long-sought-after clear indication of the onset of colour transparency.
A new narrow charm-strange meson – a charm quark bound with a strange antiquark – has been found by the SELEX experiment at Fermilab. The new particle is a heavier relative of similar states found in other experiments last year, and its puzzling behaviour adds another chapter to the continuing story of this intriguing family of mesons.
In spring 2003 the BaBar experiment at SLAC announced the discovery of a new charm-strange meson, the D+sJ(2317), which was swiftly confirmed by CLEO at Cornell and BELLE at KEK. CLEO also found evidence for the existence of a heavier partner, with a mass slightly more than 40 MeV higher. While these mesons had been predicted theoretically, their masses were lower and their lifetimes longer than expected. Following these announcements, the SELEX collaboration began to re-examine its own data from fixed-target collisions at Fermilab’s Tevatron.
SELEX, which had stopped data-taking in 1997, was designed to make high-statistics studies of the production of charm particles in Fermilab’s charged hyperon beam. In this most recent study the collaboration analysed a sample of nearly 10¹⁰ interactions produced by Σ–. In particular, the team used events containing decays of the charm-strange ground state, Ds± → K+K–π±. To search for new excited states of the Ds+ they selected events in which it was produced together with an eta meson, identifying the eta through the two photons to which it decays, η → γγ. Then, when the team plotted the mass spectrum of Ds+η events, they found a clear peak of some 49 events, with a significance of 7.2σ, at a mass of 2635.9 ± 2.9 MeV/c2 (Evdokimov et al. 2004).
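Identifying the η through its two decay photons, as the SELEX selection does, comes down to computing the invariant mass of photon pairs. A minimal, self-contained sketch (the function and the example four-vectors are illustrative, not SELEX code or data):

```python
import math

def diphoton_mass(e1, n1, e2, n2):
    """Invariant mass of two massless photons from their energies (GeV)
    and unit momentum-direction vectors: m^2 = 2*E1*E2*(1 - cos(theta))."""
    cos_theta = sum(a * b for a, b in zip(n1, n2))
    m2 = 2.0 * e1 * e2 * (1.0 - cos_theta)
    return math.sqrt(max(m2, 0.0))  # clamp tiny negative rounding errors

# Two back-to-back photons of equal energy give m = E1 + E2; energies here
# are tuned so the pair is consistent with an eta (m_eta ~ 0.5479 GeV)
m = diphoton_mass(0.27395, (0.0, 0.0, 1.0), 0.27395, (0.0, 0.0, -1.0))
```

In practice one histograms this mass for all photon pairs and selects candidates near the η peak before forming the Ds+η combinations.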
As a particle of this mass could also decay to D0K+, the team searched for this decay mode in those events in the Σ– sample that contained the decay D0 → K–π+. The events selected in this way clearly showed the already-known state D+sJ(2573), but also revealed a peak with 14 events at the slightly lower mass of 2631.5 ± 1.9 MeV/c2. Combining the results of the two decay modes – Ds+η and D0K+ – indicates the existence of a new state, the D+sJ(2632), with a mass of 2632.6 ± 1.6 MeV/c2 and a very narrow width. Just why the new state is so narrow remains unclear, as it is massive enough to decay easily to D0K+. It is also surprising that the decay is dominated by the Ds+η mode rather than by D0K+. However, as the SELEX team points out, if the new state does belong to charm-strange spectroscopy in the usual way, it should have a closely spaced partner. The challenge is now on to investigate this spectroscopy thoroughly.
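The quoted combined uncertainty is consistent with a standard inverse-variance weighted average of the two decay-mode measurements. A minimal sketch, assuming the textbook combination formula rather than the details of the SELEX fit:

```python
import math

def combined_error(errors):
    """Uncertainty of an inverse-variance weighted average:
    1/sigma^2 = sum over measurements of 1/sigma_i^2."""
    return 1.0 / math.sqrt(sum(1.0 / e ** 2 for e in errors))

# Uncertainties of the two mass peaks quoted in the text (MeV/c^2):
# +-2.9 from the Ds+ eta mode and +-1.9 from the D0 K+ mode
sigma = combined_error([2.9, 1.9])  # close to the quoted +-1.6 MeV/c^2
```

The same weights applied to the two central values reproduce the combined mass, with the more precise D0K+ measurement pulling the average towards itself.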