On 7–10 July this year, around 120 physicists convened at the Physikzentrum in Bad Honnef, Germany, to attend the Hadron Physics at COSY workshop. Experimenters and theorists discussed the key questions that can be addressed with cooled beams of polarized and unpolarized protons and deuterons, in particular at the cooler synchrotron, COSY, at the Forschungszentrum Jülich.
The workshop began with an overview of charge symmetry breaking (CSB) by Gerald Miller of Seattle. If the up and down quarks had the same mass, quantum chromodynamics would be invariant under the exchange of up and down quarks. However, such charge symmetry is broken by electromagnetic effects and by the mass difference of the up and down quarks. This makes the neutron heavier than the proton, which is essential for the stability of the hydrogen atom. Investigating CSB effects is an important way to determine the masses of the up and down quarks, two fundamental parameters of the Standard Model. On the theoretical side, Weinberg’s concept of effective field theories has provided an important tool, enabling precision calculations of CSB to be made.
Alena Opper of Ohio reported on the recent experimental breakthroughs in identifying CSB in the reactions np → dπ0 and dd → απ0. CSB is also being investigated in pion production at COSY. A photon detector would create the unique possibility of studying the mixing of the two lightest scalar mesons, the isoscalar f0(980) and the isovector a0(980), which is induced by CSB. The reactions pn → dπη and dd → απη promise to be particularly clean tools for independently quantifying the a0–f0 mixing, to help resolve the controversy as to whether or not these systems have a quark-antiquark structure.
The hyperon-nucleon interaction is an ideal testing ground for studying the breaking of another symmetry, SU(3) flavour symmetry, in hadronic systems. So far, only a few data on hyperon-nucleon scattering at low energies are available, obtained in the 1960s and recently at KEK, so production experiments must be used instead. Ben Gibson of Los Alamos and Ashot Gasparyan of ITEP reported on theories for K–d → γYN and pp → K+YN. They showed how to isolate final-state interaction effects by studying polarization observables and how to extract independently the spin dependence of the hyperon-nucleon scattering lengths. This is relevant for understanding hypernuclei and the structure of strange matter.
In the near future, COSY will provide precise data on meson production in reactions involving polarized baryons. These data will be both a challenge and an opportunity for effective field theories. New data on η, ω and φ meson production were presented in separate sessions. In addition, the production of scalar mesons is of special interest because of the possible mixing with glueballs, as Eberhard Klempt of Bonn explained.
Baryon resonances are an important part of the research programmes of many accelerator facilities for hadron physics. Maxim Polyakov of Bochum presented his predictions for the pentaquark, Θ+, a baryon that does not fit into the standard three-valence-quark structure (see “New five-quark states found at CERN”). Wolfgang Eyrich of Erlangen reported on hyperon production in proton-proton reactions and pointed out the possibilities for confirming the existence of the Θ+ with hadronic reactions. Another candidate for “exotic” structure, the Roper resonance N*(1440), is also under intense experimental investigation. Polarized electrons will be used at the electron-scattering facility MAMI in Mainz to discriminate the Roper contribution from the background, while the CLAS experiment at JLab uses electroproduction to study isobars as a function of the photon virtuality. The COSY facility offers polarized protons and deuterons as incident particles, as well as the alpha particle in inverse kinematics. Here, the alpha particle can be used as a scalar-isoscalar probe.
Hadronic interactions are also of fundamental interest for the spontaneous breakdown of chiral symmetry, as described in the review by Volker Metag of Giessen. The current masses of the up and down quarks are only a few per cent of the proton mass, indicating that the bulk of hadronic mass is due not to the Higgs mechanism but to the spontaneous breakdown of chiral symmetry. An important question is whether chiral symmetry is restored at high nuclear densities. Studies of deeply bound pionic atoms and two-pion production on nuclei are viable tools for exploring these issues.
The extension of the GSI laboratory at Darmstadt, approved earlier this year, offers the prospect of studying hadron physics with antiprotons at energies up to 15 GeV. Hans Gutbrod of GSI presented the planned new research facilities, while Bernhard Franzke, also from GSI, outlined the design of the High Energy Storage Ring (HESR). Helmut Koch of Bochum then introduced the highlights of the charm-physics programme at HESR, and the role of charm in the nuclear medium was addressed by Jim Ritman of Giessen.
For three days in October the population of Binn, a beautiful village in the Oberwallis in Switzerland, increased by almost 20% when 23 experimental and theoretical particle physicists attended a workshop on cross-section measurements at the Large Hadron Collider (LHC).
The main purpose of this small workshop, organized by Günther Dissertori and Michael Dittmar of the CMS group at ETH Zürich, was to investigate how well the different types of physics reactions and reaction ratios expected at the LHC can be measured and calculated. About half the time at the workshop was devoted to thought-provoking review talks, while the rest remained free for questions, discussion and critical comments.
The workshop began with an introduction to general aspects and problems of cross-section measurements at the LHC. The participants were reminded that in addition to the experimental uncertainties from efficiency, backgrounds and the machine luminosity, there are potentially important theoretical uncertainties in the calculations, such as those arising from uncertainties in the parton distribution functions (PDFs) and from unknown higher-order corrections. It was also pointed out that normalizing various interesting high-Q² reactions to the well-understood and abundant production of W and Z bosons at the LHC could dramatically reduce systematic errors. This might particularly help with errors arising from the absolute-luminosity measurement and from PDF uncertainties. For some reactions the estimated theoretical and experimental uncertainties are of similar size, and an especially large effort will be needed to understand and, if possible, reduce the uncertainties.
A look ahead
The next two sessions were devoted to the theoretical and experimental aspects of cross-section calculations and their potential limitations at the LHC. While today it is obviously not possible to know exactly how the ATLAS and CMS general-purpose detectors will perform once the LHC starts, it is already clear that many important measurements and calculations will be much more difficult than corresponding ones at previous high-energy colliders, such as the Large Electron-Positron Collider (LEP). Several talks therefore addressed the question of systematic limitations at previous high-energy collider experiments, particularly for reactions that are relevant to the LHC. For example, there was general agreement that absolute measurements of αs with a precision comparable to that achieved at LEP will be impossible at the LHC.
Since all the high-Q² reactions of interest at the LHC result from collisions of the quark and gluon constituents of the proton beams, the accuracy of a cross-section calculation relies on a precise knowledge of the parton distribution functions – that is, the quantities that describe how the momentum of a fast-moving proton is shared among its constituents. There were presentations on the latest PDF results from the H1 and ZEUS experiments at HERA, which clearly demonstrated the impressive precision that has already been achieved. The precise information on the quark and gluon PDFs that can be extracted from the HERA structure function data leads in turn to very accurate (±1–2%) predictions for the W and Z production cross-sections, and their ratios, at the LHC.
Even so, it is possible that measurements at the LHC could constrain the PDFs even further. For example, the ratio of W+/W– production at the LHC is very sensitive to the u/d quark PDF ratio in the proton. Using the most up-to-date PDFs yields an uncertainty of about 1.5% on the prediction for the W+/W– ratio. A measurement error smaller than this at the LHC will further constrain the u/d ratio. In principle a precision measurement of the kinematic distribution of charged leptons from Z decay could yield information on the weak mixing angle sin²θW, but it is likely that the residual PDF uncertainties will in practice preclude any improvement in the accuracy of sin²θW achieved so far at LEP and the SLAC Linear Collider.
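The sensitivity of the W charge ratio to the u/d PDF ratio can be illustrated with a toy leading-order estimate. The PDF values below are purely illustrative numbers, not taken from any real fit:

```python
# Toy illustration (not the actual PDF analysis): at leading order,
# W+ production proceeds mainly via u + dbar fusion and W- via d + ubar,
# so the W+/W- ratio tracks the u/d ratio almost linearly.

def w_charge_ratio(u, d, ubar, dbar):
    """Leading-order W+/W- ratio for one symmetric (x1 = x2) configuration."""
    return (u * dbar + dbar * u) / (d * ubar + ubar * d)

# Hypothetical PDF values at some fixed momentum fraction x
u, d, ubar, dbar = 0.60, 0.30, 0.04, 0.05

r0 = w_charge_ratio(u, d, ubar, dbar)
r1 = w_charge_ratio(u * 1.015, d, ubar, dbar)  # shift the u/d ratio by +1.5%

print(f"R = {r0:.3f}; after a +1.5% u/d shift: {r1:.3f} "
      f"({100 * (r1 / r0 - 1):.1f}% change)")
```

In this simple picture a 1.5% shift in u/d moves the predicted ratio by the same 1.5%, which is why a measurement better than the current ~1.5% PDF uncertainty would directly tighten the u/d constraint.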
Another session focused on the status of higher-order perturbative corrections in quantum chromodynamics (QCD). Although there has been steady progress in the calculation of these corrections in recent years, with some cross-sections now known to next-to-next-to-leading order (NNLO) accuracy, there are still some important background processes that are known only to leading order (for example, tt̄ and bb̄ production in QCD). Without at least the full next-to-leading order (NLO) corrections to such processes, it is very difficult to estimate the uncertainty in the prediction, and this can have important implications for the unambiguous identification of new physics signals at the LHC.
An important development in the past year has been the implementation of exact NLO contributions in parton-shower Monte Carlo event generators. In particular, the MC@NLO program, which is based on the HERWIG Monte Carlo event generator, already includes many of the most important processes. However, the experimenters were quick to come up with a long wish-list of additional processes. It was not difficult to agree on a common list of “most wanted” QCD corrections, but the theorists were adamant that without a breakthrough in calculational techniques, the existing theoretical technology will not be able to cope with providing many of the required NLO or NNLO corrections before the LHC starts.
LHC processes in detail
During the second day of the workshop, four sessions were devoted to more detailed presentations and discussions of particular LHC processes. These were the production of Drell-Yan lepton pairs, the production of events with massive boson pairs and subsequent leptonic decays, and the measurement of jet-production processes that arise from quark-quark, quark-gluon and gluon-gluon scattering. During the second half of the afternoon the participants split up into more specialized working groups to focus on PDFs, high-mass Drell-Yan lepton pairs, the physics of jets and the systematics of potential backgrounds for exotic processes. The final Saturday evening session addressed the question of systematic uncertainties in background processes, which could limit the LHC’s potential to make discoveries.
Using a few examples from ATLAS and CMS search simulations for (SUSY) Higgs particles and other exotics, some particularly problematic signatures could be identified and analysed. Unfortunately it became obvious to most participants that the large nominal statistical significance of some of these exotic signals is not the full story, as in reality these signatures could be completely hidden in the uncertainties of the backgrounds.
While it is obviously difficult to quantify fully the limits of each individual signature, it is possible to conclude that to achieve signal-to-background ratios below, for example, 0.5 (0.25) – excluding the few exotic signatures that provide narrow mass peaks – the backgrounds need to be controlled to better than 10% (5%). Only in rare cases can such precision be obtained from Monte Carlo simulations, therefore data-based background estimates will almost certainly be required. The Saturday evening session concluded with short reports from the working groups about outstanding problems that should be addressed in detail during the coming years.
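The rule of thumb quoted above follows from simple arithmetic, sketched here: a systematic uncertainty on the background translates into an uncertainty on any extracted signal in proportion to B/S.

```python
# Back-of-the-envelope check of the workshop's rule of thumb: for a signal
# sitting on a much larger background, a relative background uncertainty
# dB/B corresponds to an absolute shift dB = (dB/B) * B, which expressed
# in units of the signal S is (dB/B) / (S/B).

def background_uncertainty_in_signal_units(s_over_b, db_over_b):
    """Express a relative background uncertainty dB/B as a fraction of S."""
    return db_over_b / s_over_b

for s_over_b, db_over_b in [(0.5, 0.10), (0.25, 0.05)]:
    frac = background_uncertainty_in_signal_units(s_over_b, db_over_b)
    print(f"S/B = {s_over_b}: dB/B = {db_over_b:.0%} -> dB = {frac:.0%} of S")
```

In both quoted cases the background systematic already amounts to 20% of the signal itself, which is why anything worse than 10% (5%) background control would swamp a non-resonant signal at S/B = 0.5 (0.25).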
At the final summary session on Sunday morning, there was unanimous agreement that the meeting had been a great success, particularly because it enabled participants to step outside the routine of daily work and to think in depth about some of the most important issues facing experimenters and theorists in preparing for the start of the LHC. The success of the meeting could also be judged by a comment from one participant during an additional “special session” on Sunday afternoon: “This was the first conference where I not only participated in all the sessions but even listened to all the talks.” Falling asleep would in any case have been difficult during this last “session”: a four-hour hike to the Mässersee and back!
New results from the Sudbury Neutrino Observatory (SNO) are beginning to pin down more precisely the parameters for mixing between different types of neutrino.
Unlike other neutrino detectors, SNO can detect neutrinos in three different ways through its use of heavy water. Only electron neutrinos give rise to charged-current reactions with deuterons in the water, while all types of active neutrino can scatter elastically off the deuterons or induce neutral current (NC) reactions. In the NC reactions the neutrino splits the deuteron into a proton and a neutron, and the gamma rays emitted when the neutron is subsequently captured by another nucleus provide the signature for the reaction.
For the earlier analyses based on events detected with pure heavy water, the SNO collaboration assumed an energy-independent survival probability for the neutrinos. While this allowed the team to say that their data show that electron neutrinos must oscillate to another type, it was not sufficient for calculating the constraints on parameters in MSW mixing. For the new measurements, the team added 2 tonnes of high-purity sodium chloride to the 1000 tonnes of heavy water in the detector. This increased the detection efficiency for NC events three-fold, due to neutron capture on chlorine-35 nuclei. The increased sensitivity allowed them to measure the total active boron-8 solar-neutrino flux, which was found to agree with standard solar-model calculations. A global analysis of solar and reactor neutrino results, including the new measurements, yields Δm² = 7.1 +1.2/−0.6 × 10⁻⁵ eV² and θ = 32.5 +2.4/−2.3 degrees, disfavouring maximal mixing at a confidence level equivalent to 5.4σ.
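As a rough sketch of how these parameters enter, the two-flavour vacuum oscillation formula is shown below. Note this is only illustrative: the solar analysis itself relies on MSW matter effects rather than the vacuum formula.

```python
import math

# Sketch only: the two-flavour *vacuum* survival probability,
# P(nu_e -> nu_e) = 1 - sin^2(2θ) sin^2(1.27 Δm² L / E),
# with Δm² in eV², L in km and E in GeV. The SNO/KamLAND global fit
# quoted above gives Δm² = 7.1e-5 eV² and θ = 32.5°.

def survival_probability(delta_m2_ev2, theta_deg, L_km, E_GeV):
    theta = math.radians(theta_deg)
    phase = 1.27 * delta_m2_ev2 * L_km / E_GeV
    return 1.0 - math.sin(2 * theta) ** 2 * math.sin(phase) ** 2

dm2, theta = 7.1e-5, 32.5
print(f"sin^2(2θ) = {math.sin(math.radians(2 * theta)) ** 2:.2f}")
# Averaged over many oscillation lengths, P -> 1 - 0.5 sin^2(2θ)
print(f"averaged survival probability ≈ "
      f"{1 - 0.5 * math.sin(math.radians(2 * theta)) ** 2:.2f}")
```

The point of the 5.4σ statement is visible here: the best-fit angle gives sin²(2θ) ≈ 0.82, clearly below the maximal-mixing value of 1.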
Aschaffenburg, a medieval town near Frankfurt in Germany, was the attractive setting for the 10th International Conference on Hadron Spectroscopy, held from 31 August to 6 September. The reason the meeting was held in Germany for the first time in its history was the decision of GSI in Darmstadt to include hadronic physics in future research by constructing an antiproton source and a storage ring (HESR/PANDA) for antiprotons, thus extending the programme that was so successfully started with LEAR at CERN.
The conference, jointly organized by GSI and the Institute for Experimental Physics of the Ruhr-Universität Bochum, was attended by around 200 scientists from all over the world. The timing of the conference was very fortunate as many new and surprising results appeared in the months just prior to the meeting.
The highlights of this year’s conference were the discussions about the nature of the recently discovered narrow states, the widths of which are compatible with the experimental mass resolutions, of the order of 10 MeV.
In the mesonic sector, new results came from BaBar, Belle, CLEO and BESII. Two narrow states with masses of 2317 and 2460 MeV with open charm and strangeness have been seen, and may be the missing 0+ and 1+ states. Their masses and widths are, however, difficult to explain in standard quark-model calculations, and so explanations in terms of D-K “molecules”, D-π “atoms” and charmed four-quark states are under intense discussion. The observed pattern fits the predictions of chiral models very well, giving additional weight to these ideas.
Belle has found another very narrow (Γ < 3.5 MeV) state at 3872 MeV decaying into J/ψπ+π–. Its properties do not fit well the values expected for the still-missing 1³D₂ cc̄ state, so an explanation in terms of a D0D̄0* molecule is not excluded. BESII has reported a bump in the pp̄ mass distribution, which might belong to a pp̄ bound state below the pp̄ threshold. The interpretation of this structure, however, is still controversial.
As far as baryons are concerned, there is evidence from four different laboratories for the existence of a narrow state, Θ+, with a mass of 1540 MeV, decaying to nK+. Having positive strangeness, it is a very exotic particle and cannot be constructed in a conventional three-quark picture. The signals each have a significance of around five standard deviations, and efforts to confirm them continue. Such a state was predicted six years ago using a soliton model, giving rise to an antidecuplet with JP = 1/2+. The implications of these findings were discussed in several talks at the conference and in an open panel, resulting in many ideas for future measurements that will clarify the true nature of these states.
Additionally, there were some very interesting contributions concerning the properties of hadrons inside nuclear matter, the discovery of baryons with double charm and its implications, and the role of the sigma/kappa structures in low-energy ππ and πK scattering. Although new results concerning the sigma/kappa problem were presented, it is not yet clear if these structures can be attributed to particles or are effects of dynamical origin.
One outcome of the conference is that it has become very clear that the advent of precision data in the heavy quark sector is of high relevance for future developments in hadron physics, even in the light quark sector. It seems that the quark-antiquark and three-quark descriptions of hadrons have reached their limitations and have to be extended or replaced by new ideas, such as chiral models, soliton pictures, molecular states, and so on, which have to be taken more seriously than in the past.
The proceedings of HADRON 2003 will be dedicated to Lucien Montanet from CERN, who died on 19 June this year. He was one of the pioneers of hadron physics, and his eminent role in this field was highlighted during a special plenary session in his honour. The next conference in the series is scheduled to take place two years from now in Rio de Janeiro, where there should be further exciting results. Undoubtedly, hadron physics has a bright future.
The big international summer conferences provide the venue for high-energy physicists to show their newest results in front of a large, influential audience. While delegates always hope for something new and exciting, the pace of research doesn’t always oblige. This year, 650 physicists attended the XXI International Symposium on Lepton and Photon Interactions at High Energies (Lepton Photon 2003) at Fermilab. While radical revisions to our view of the universe are probably not required, interesting new results were presented, and a series of excellent review talks covered the broad sweep of particle physics.
As one might expect for a conference at Fermilab, the first day was devoted largely to collider physics and electroweak-scale phenomena. The experimental state of play with results from LEP, the Tevatron and HERA was covered by a series of speakers: Patrizia Azzi of Padova on top physics, Terry Wyatt of Manchester on electroweak, Michael Schmitt of Northwestern and Emmanuelle Perez of Saclay on Higgs/SUSY and other searches, respectively, and Bob Hirosky of Virginia on QCD. The overall picture remains one of consistency with the Standard Model, but the good news is that with roughly 200 pb⁻¹ of data now recorded at CDF and DØ, the Tevatron’s Run II has entered previously unexplored territory. The experiments are opening up new areas of parameter space and potential discovery reach for new physics, both with their direct searches and through precise measurements of the properties of the top quark and the W and Z bosons. On the theory front, CERN’s Paolo Gambino talked about the status of electroweak measurements and global fits, including the muon (g-2) measurement and NuTeV’s anomalous value of sin²θW, and this led to a spirited discussion in the question and answer session after the talk. Gian Giudice, also from CERN, described theoretical predictions for new physics at colliders, and Thomas Gehrmann of Zurich covered developments in QCD. Cornell University’s Peter Lepage described recent notable progress in lattice QCD calculations, and the last two speakers, Augusto Ceccucci of CERN and Yuval Grossman of Technion, described rare K and B decays. There are recent results from experiments at Fermilab, CERN and Brookhaven, and new initiatives that will significantly extend our sensitivity to new TeV-scale physics through such windows are underway or planned. A reception and poster session wound up the day, with music by the Chicago Hot Six, led by trombonist Roy Rubinstein – otherwise known as Fermilab’s assistant director.
The second day was devoted to heavy-flavour physics and the CKM matrix. Probably the most talked-about new result of the conference was presented by Tom Browder of Hawaii, who reported on Belle’s determination of sin 2φ1 (sin 2β) from B → φKS decays. In the Standard Model, this should be the same as the value extracted from the familiar B → J/ψKS process, namely 0.74 ± 0.05. In 2002, both BaBar and Belle had found negative values, but the errors were large. With more data, Belle reported a new value at the meeting of −0.96 ± 0.50 (+0.09/−0.11). By itself this measurement is 3.5σ from the Standard Model, but the situation was confused by a new measurement from BaBar of the same quantity, which has moved much closer to the Standard Model and is +0.45 ± 0.43 ± 0.07. This puzzle will take either more data or more study before it is resolved.
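Why the BaBar number "confuses" the picture can be seen from a rough error-weighted combination of the two quoted results. This is only a back-of-the-envelope sketch, symmetrizing the asymmetric errors, and is not the collaborations' own averaging procedure:

```python
import math

def combine(measurements):
    """Inverse-variance weighted mean and its uncertainty."""
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    mean = sum(w * x for w, (x, _) in zip(weights, measurements)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# Quoted results for sin(2β) from B -> φ KS, with stat and syst errors
# added in quadrature (the Belle syst error symmetrized to its larger side)
belle = (-0.96, math.hypot(0.50, 0.11))
babar = (+0.45, math.hypot(0.43, 0.07))

mean, err = combine([belle, babar])
sm = 0.74  # value from B -> J/ψ KS, with its 0.05 uncertainty
pull = (sm - mean) / math.hypot(err, 0.05)
print(f"combined sin(2β) = {mean:+.2f} ± {err:.2f}, "
      f"{pull:.1f}σ below the J/ψ KS value")
```

The naive average lands well below 0.74 but with a much reduced significance compared with Belle alone, which is exactly the "more data or more study" situation described above.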
The Fermilab organizers introduced a number of innovations to the 2003 edition of the conference, one of which was designed to attract the media. Shortly after the new Belle and BaBar results were reported, the first of a series of informal media briefings brought physicists and journalists together over a sandwich lunch to discuss the physics of the previous session.
Hassan Jawahery of Maryland described progress from the B-factories towards constraining the other two angles of the unitarity triangle, and Kevin Pitts of Illinois outlined the complementary capabilities of hadron colliders, which allow access to the Bs meson and to b-baryons. Dresden’s Klaus Schubert reported on significant progress in determining the magnitudes (as opposed to the phases) of the CKM matrix elements, using results from a broad array of experiments. The CKM matrix appears unitary at the 1.8σ level, but some important inputs are still awaited. Gerhard Buchalla of Munich explored tools to understand the QCD aspects of heavy-hadron decays. Liverpool’s John Fry covered measurements of rare hadronic decays, while Mikihiko Nakao of KEK did the same for electroweak and radiative rare decays.
In the session on charm and quarkonium physics, Bruce Yabsley of Virginia Tech discussed the limits on new physics from charm decay, concluding that the results from CLEO-c are eagerly awaited. Tomasz Skwarnicki of Syracuse described a revitalized scene in heavy-quarkonium physics, thanks in part to large data samples at BES-II in Beijing, CLEO-III at Cornell and Fermilab’s E835 experiment. He pointed to solid theoretical progress and to new experimental opportunities at BaBar and Belle, as well as CLEO-c. Jussara de Miranda of the Brazilian Centre for Research in Physics asked the question: “why is charm so charming?”, concluding that it provides a powerful bridge to the parton world.
Rounding off the second day was another innovation, a special open session on the Grid, to which the public was invited. Ian Foster of Argonne and Chicago introduced the concept of Grid computing, and CERN’s Ian Bird described its application to the LHC. Bob Aiken from Cisco, Stephen Perrenod from Sun and David Martin of IBM explained the industry’s perspective, while Dan Reed from the National Center for Supercomputing Applications provided the view from an academic computer centre.
Day three began with a session on astroparticle physics, with Pennsylvania’s Licia Verde reporting the exciting results from the Wilkinson Microwave Anisotropy Probe (WMAP). The detailed cosmic microwave background images provided by WMAP bring new insights into the early universe and into the amount of baryonic matter in the universe. Polarization measurements with WMAP also give a handle on the formation of the first stars. The results point to a flat universe consisting of just 4% baryonic matter, with the first stars forming earlier than previously thought, at around 200 million years after the Big Bang.
Harvard’s Bob Kirshner focused on the universe’s invisible 95%. Supernovae observations indicate an accelerating universe, consisting of about a third matter and two-thirds dark energy. Concluding that there is a bright future for dark energy, he looked forward to a resolution of the questions surrounding the cosmological constant through forthcoming supernovae studies.
Esteban Roulet of Bariloche gave an update on very-high-energy cosmic rays, whose spectrum features a “knee” at around 10¹⁵ eV and an “ankle” at 10¹⁹ eV. Recent work has shown that the chemical composition becomes heavier above the knee, and that above the ankle the extragalactic component dominates. Roulet also raised the question of doing astronomy with ultra-high-energy cosmic rays, saying that although it would be like trying to do optical astronomy with a telescope at the bottom of a swimming pool, the field has promise.
Lyon’s Maryvonne de Jesus gave a comprehensive overview of the status of dark-matter experiments. So far, only the DAMA experiment at Italy’s Gran Sasso underground laboratory has reported a positive signal for WIMPs. A new detector, DAMA/LIBRA, is expected to report its first result towards the end of 2003. A range of experiments planned or in preparation in Europe and the US can exclude much, but not all, of the region allowed by the DAMA measurement. Giorgio Gratta of Stanford discussed tritium and double-beta-decay experiments. The tritium experiments study the endpoint of the spectrum from the decay of tritium to helium-3, an electron and an antineutrino, which is sensitive to neutrino mass. Results from spectrometer-based experiments at Mainz (< 2.2 eV) and Troitsk (< 2.05 eV) are the most sensitive so far. In the future, KATRIN at Karlsruhe should achieve a sensitivity of around 0.25 eV.
In his review of accelerator neutrino experiments, Koichiro Nishikawa of Kyoto focused on the controversial LSND result, which suggests an oscillation scenario incompatible with the accepted picture. Final results from CERN’s NOMAD experiment do not entirely exclude the LSND region, and attention has now passed to MiniBooNE at Fermilab, which has so far recorded some 125,000 events. The first results are expected this autumn. Further ahead are the long-baseline projects NuMI/MINOS at Fermilab-Soudan, and the ICARUS and OPERA detectors, which will observe a neutrino beam from CERN at the Gran Sasso laboratory in Italy.
Reactor-neutrino experiments were covered by Kunio Inoue of Tohoku, who pointed out that all modern experiments are extrapolations of the original 1956 experiment of Reines and Cowan, but with vast improvements in scale and flux, along with a better understanding of reactor neutrinos. The fact that Japan’s Kamioka mine has 70 GW of reactor power at distances of 130–240 km and an existing cavern led to the KamLAND experiment, whose first results provide evidence for neutrino disappearance from a reactor source. Reactor experiments also probe θ13, the last remaining mixing angle in the neutrino sector. The current best limit on θ13, less than about 10 degrees, comes from the French CHOOZ experiment. Turning to solar neutrinos, Carleton’s Alain Bellerive gave a comprehensive historical overview from Homestake to Sudbury, but kept the audience waiting for new results from SNO.
In his wide-ranging theoretical review, Alexei Smirnov pointed out that the Lepton Photon conference devoted a full day and a half to discussion of the unitarity triangle for quarks, but no time to the equivalent for leptons. To redress the balance he constructed a leptonic unitarity triangle, which he used to discuss the possibility of measuring CP violation in the leptonic sector. He also pointed out that the only chance of reconciling LSND’s oscillation result with other experiments is by invoking extra, sterile, neutrinos. Deborah Harris of Fermilab then took the podium to give a detailed analysis of the beamline strategies and detector options for facilities ranging from the near-term, such as the Japanese J-PARC to Kamioka beam, to long-term projects such as the neutrino factory.
Day four was devoted to hadron structure and detector research and development. It began with a review of structure functions from deep-inelastic scattering at HERA given by Paul Newman of Birmingham. HERA shut down in 2000 for an upgrade that was designed to boost the luminosity fivefold. Now running again as HERA-II, a few tens of inverse picobarns of data are expected this year, and this is sufficient to begin studies of polarization dependence. The theoretical perspective was given by Robert Thorne of Cambridge, who discussed the parton distributions that are essential for analyses at high-energy hadron colliders such as Fermilab’s Tevatron and CERN’s LHC. Toshi-Aki Shibata of the Tokyo Institute of Technology discussed measurements with polarized hadrons, both in deep-inelastic scattering and proton-proton collisions, as a probe of the spin structure of hadrons and a tool to develop QCD. After the break, Yuji Yamazaki of KEK continued the QCD theme in his discussion of diffractive processes and vector meson production. Diffractive processes have historically been described in terms of pseudo-particle exchange within Regge theory. Today they can be described at least in part by perturbative QCD, though there remains work to be done.
In his review of heavy-ion collisions, David Hardtke of Berkeley asked the question: “have we seen the quark-gluon plasma at RHIC?” He concluded that although the density and the temperature produced in RHIC gold-gold collisions are at or above the predicted phase transition, no direct evidence has been seen for excess entropy production.
Ties Behnke of DESY and SLAC had the honour of presenting the only detector R&D talk of the conference, and he focused on detectors for a future linear electron-positron collider, for which the requirements are rather different from those for hadron machines.
The final day’s programme looked forward. Veronique Boisvert of CERN reported on a meeting held during the conference by the Young Particle Physicists’ organization, and complimented the conference organizers on the visible role that young physicists had played at Lepton Photon 2003. CERN’s director-general, Luciano Maiani, reported on the status of the LHC and the steps that have been taken over the past couple of years to firm up its financial situation. Dipole magnet production remains the main item that is setting the pace, and first beam is still foreseen for 2007. Vera Luth of SLAC reported on IUPAP’s Commission on Particles and Fields, the umbrella under which the Lepton Photon and ICHEP conferences are held. She observed that the current difficulties experienced by scientists trying to obtain visas to enter the US were of great concern and had a negative impact on the attendance at the conference.
Looking further ahead, Francois Richard of Orsay described the physics motivation for a TeV-scale linear collider, including the constraints that can now be placed on SUSY models from WMAP’s cosmic microwave background measurements, assuming that cosmic dark matter actually consists of neutralinos. SLAC’s Jonathan Dorfan reported for ICFA, and Maury Tigner of Cornell for the International Linear Collider Steering Committee. An international report on the desired parameters of a linear collider is expected this autumn, and this will be followed by the setting up of a committee of “wise persons” from the Americas, Asia and Europe to make a technology recommendation by the end of 2004. In parallel, a task force will recommend how to set up an internationally federated “pre-global design group” intended to evolve into a design group for the accelerator as the project gains approval. Inter-governmental contacts have started with a pre-meeting that was held in London in June.
Ed Witten of Princeton’s Institute for Advanced Study described supersymmetry and other scenarios for new physics. Witten admitted that an anthropic principle could account for the fine tuning and hierarchies that plague the Standard Model, but he hoped that this would not turn out to be the case. The experimenters in the audience enjoyed hearing such a prominent theoretician argue that we need new results that put experiments ahead of theory again – “as is customary in science”. The closing talk, an outlook for the next 20 years, was given by Hitoshi Murayama of Berkeley. He argued that there is a convergence, at the TeV scale, of questions to do with flavours and generations (neutrino masses and mixings, CP violation, B physics), questions to do with unification and hierarchies of forces, and questions to do with the cosmos (dark matter and dark energy). The TeV scale offers the answers to many of these questions, but also forms a cloud that blocks our vision. The next decade holds the exciting promise of dispersing this cloud and giving us the first clear view of what lies ahead.
In pitch-black darkness, four kilometres below the surface of the sea, a cone of blue Cerenkov light suddenly illuminates an array of photomultiplier tubes situated some 80 m above the sea floor – the signature of a relativistic muon passing through the detector. Did it come from a “common” cosmic-ray event in the Earth’s atmosphere, or was it generated by a neutrino, emitted in some high-energy process in a distant galaxy, that has interacted in the water near the detector or in the rock below? Neutrino astronomy is a relatively new and exciting domain that for the first time uses telescopes that do not rely on photons for primary signal transmission.
One such telescope under construction is NESTOR – the Neutrino Extended Submarine Telescope with Oceanographic Research – which will ultimately consist of a “tower” of Cerenkov detectors anchored a few nautical miles off the south-west tip of the Peloponnese in mainland Greece. Figure 1 gives an artist’s impression of part of the tower, showing several hexagonal “floors” or “stars” made of 15 m long rigid titanium arms. The arms are equipped at their extremities with upward and downward-looking 38 cm diameter photomultiplier tubes (PMTs) in glass pressure housings, which help to differentiate between upward and downward-travelling muons. The full tower, with 12 stars of 32 m diameter, will have a total height of 410 m from the sea floor and an effective area of 20,000 m2 for neutrinos with energies of 10 TeV.
The NESTOR collaboration successfully deployed the first floor of the detector tower to a depth of 4000 m (see figure 2) at the end of March 2003. Since then, more than five million events, selected by a fast four-fold coincidence trigger, have been accumulated in an initial “physics” run. Clearly the physics or astronomy possible with a single-floor detector is limited, but it has provided invaluable experience in the operation of the detector, in data handling and in the techniques for signal processing and track reconstruction.
The techniques for deployment and payload exchange at a depth of 4000 m are well tested. Recovery of the cable termination and junction box to the surface enables the use of simpler, dry-mating connectors and avoids expensive operations with manned or autonomous submarine vehicles. To appreciate the scale of the challenge, imagine sitting at the top of Mont Blanc with a fishing line in Lake Geneva, positioning a package at the end of the line on the lake bottom to within a few metres, and then months later going back to retrieve the package in good order. These achievements were recently highlighted in the cover article in the July issue of Sea Technology, a leading marine industry journal (Anassontzis and Koske 2003).
Tracking the muons
The electrical pulses from the PMTs are digitized in a central titanium sphere on each floor and transmitted over a 30 km electro-optical cable to the shore station, where the raw data are recorded. At the heart of the system are novel ASICs, analog transient waveform digitizers (ATWDs), developed at Lawrence Berkeley National Laboratory, which can sample the PMT signals from 200 MHz to 3 GHz. From the arrival times of the signals and their intensity, the cone of Cerenkov light created by a muon has to be reconstructed to determine the direction of the muon and eventually infer the physical parameters of the incident neutrino.
In the control room the parameters of the detector are continuously monitored. These include the floor orientation (compass and tilt meters), the temperatures, humidity and hygrometry within the titanium sphere, and the external water-current velocity, temperature and pressure, as well as data from other environmental instruments mounted on the sea-bottom station (pyramid). In addition, the electrical power-distribution network and the high voltages applied to the PMTs are controlled and monitored. A run-time monitor carries out fast, on-line data processing, in parallel to the data taking, so as to check continuously the detector performance – this consists of monitoring the stability of such crucial parameters as the PMT rates, pulse height distributions, trigger timing, majority logic rates and the overall data acquisition (DAQ) performance (dead time).
A fraction of the data is fully analysed on-line to check the quality and ensure that the trigger is unbiased. Trigger rates, as a function of the signal thresholds and coincidence level settings, as well as the total photoelectron charge inside the trigger window, agree very well with Monte Carlo predictions based on Atsushi Okada’s atmospheric muon flux model (Okada 1994), the natural 40K radioactivity in the sea water and PMT dark currents. Calibration in the sea uses LED “flasher” modules mounted above and below the detector floor. These provide a rigorous, independent check of the trigger and pulse intensity of all PMTs and the full DAQ system.
In the off-line analysis, the raw data from each PMT are first passed through a signal-processing stage, which performs a base-line subtraction and corrects for attenuation. This is based on calibration parameters determined in the laboratory before deployment. As most parameters are frequency dependent, fast Fourier transforms are used. At the end of the processing stage multiple pulses are resolved and the arrival time, pulse height and total charge of each pulse (hit) are determined with precision.
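The processing chain described above can be caricatured in a few lines (a toy sketch with an assumed pre-pulse window and a lab-measured per-frequency attenuation curve; this is not the actual NESTOR code):

```python
import numpy as np

def correct_waveform(raw, atten, n_base=50):
    """Toy off-line PMT processing: subtract the baseline estimated from
    the first pre-pulse samples, then undo the frequency-dependent cable
    attenuation in the Fourier domain (atten: per-bin gain from lab
    calibration, one value per rfft frequency bin)."""
    w = raw - raw[:n_base].mean()      # base-line subtraction
    W = np.fft.rfft(w)
    W /= atten                         # correct each frequency bin
    return np.fft.irfft(W, n=len(w))
```

With a flat (unit) attenuation curve the routine reduces to a plain baseline subtraction; the real calibration curve would boost the high-frequency bins that the 30 km cable suppresses, sharpening multiple pulses before they are resolved.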
In order to reconstruct tracks, events with more than five active PMTs within the trigger window are selected. The estimation of the track parameters is based on a χ2 minimization using the arrival times of the PMT pulses. In most cases the procedure converges to two, or occasionally several, minima, often due to an inherent geometrical degeneracy known as a “mirror solution”. To resolve this ambiguity, a second-level algorithm is used that takes account of the measured number of photoelectrons at each PMT and the number expected from the candidate track, and performs a likelihood hypothesis comparison.
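For a single flat floor the mirror degeneracy has a simple geometrical origin, which a toy fit makes explicit (hypothetical six-PMT geometry and a simplified plane-wave timing model in place of the full Cerenkov cone):

```python
import numpy as np

c = 0.3  # speed of light in water-free units, m/ns (illustrative)

# a toy "floor": six PMTs at the tips of 15 m arms, all in the z = 0 plane
ang = np.arange(6) * np.pi / 3
pmts = np.stack([15 * np.cos(ang), 15 * np.sin(ang), np.zeros(6)], axis=1)

def arrival_times(theta, phi):
    # plane wavefront travelling along n reaches PMT i at t_i = n . r_i / c
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return pmts @ n / c

t_meas = arrival_times(0.4, 1.0)   # "measured" hit times for a test direction

def chi2(theta, phi, sigma=1.0):
    r = arrival_times(theta, phi) - t_meas
    return float(np.sum(r**2) / sigma**2)

# reflecting the direction through the detector plane (theta -> pi - theta)
# leaves every arrival time unchanged, so the fit has two equal minima
best = chi2(0.4, 1.0)              # true direction: chi2 ~ 0
mirror = chi2(np.pi - 0.4, 1.0)    # mirror direction: also chi2 ~ 0
```

Both directions fit the floor’s hit times equally well, which is why the second-level comparison based on pulse intensities is needed to pick the physical solution.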
Figure 3 shows the digitized PMT waveforms, after signal processing of a selected event, whilst figure 4 shows a pictorial representation of the reconstructed track that corresponds to this event. Several tests of the track reconstruction procedures have been carried out using both data and Monte Carlo generated events. The results demonstrate that the estimation of the track parameters is unbiased.
Figure 5 shows the measured zenith angular distribution (solid points) of reconstructed events using a fraction (~30%) of the collected data. The reconstructed tracks used in this measurement have been selected by means of the minimum χ2 fit (χ2 probability > 0.1), the track quality based on the number of photoelectrons per PMT and on the total accumulated photoelectrons per hit per track (> 4.5). The histogram shows the predicted angular distribution of atmospheric muon tracks (for the NESTOR floor geometry and reconstruction efficiency) derived from Monte Carlo calculations using Okada’s phenomenological model.
Further improvements in the reconstruction efficiency are to be expected, but effective neutrino detection will require the deployment of at least four floors, which we hope to achieve in the coming year. However, we already know we have a detector that is well understood and that the data quality is excellent, supporting many of the choices made regarding, for example, the site and detector layout.
Improving signal to noise
As with any experiment, a good signal-to-noise ratio is the key to success. Cosmic-ray muons represent the principal background for a neutrino telescope and this depends on the depth of the water “shielding” – the attenuation between 1000 and 4000 m is more than two orders of magnitude. With NESTOR, further improvement is possible as the deepest point in the Mediterranean, 5200 m, is nearby. Limiting the cosmic-ray background also removes many of the uncertainties in track reconstruction, such as wrongly assigning a downward-going muon as an upward-going track. The NESTOR collaboration’s decision to have upward, as well as downward-looking phototubes seems to be well justified. At 4000 m an unambiguous upward-coming or horizontal muon track must have been generated by a neutrino.
Another source of background is bioluminescence (light emitted by micro-organisms in the water), which reduces exponentially with water depth. The signal bursts, of the order of 1 to 10 seconds duration, are easily distinguishable from those of muon tracks, but they contribute to dead time and ultimately reduce the detector efficiency. At the NESTOR site the average dead time measured at 4000 m is around 1%; values of up to 40% have been reported at other sites. A third background source, independent of depth, comes from the natural radioactive ß-decay of 40K. Together with the thermionic noise from the PMTs, this represents a low-intensity uncorrelated baseline signal level of 50 kHz, which is easily subtracted by the tracking algorithm.
The use of a rigid structure to mount groups of PMTs removes uncertainties in their relative positions and aids reconstruction at the “floor level”. This approach seems preferable to having individual or small clusters of phototubes on independent strings. The orientation of the whole star is monitored so that only relative horizontal displacements between complete stars require external telemetry using acoustic or optical references. Experience of track reconstruction with several floors is needed to demonstrate that these advantages compensate for the additional constraints in deployment operations.
The choice of the NESTOR site for its depth, water clarity, low sedimentation and very low underwater currents is already paying dividends. The proximity to shore is an important safety consideration in case of bad weather at sea and facilitates the staging of deployment and recovery operations. Crews can be exchanged during lengthy procedures, and additional equipment or specialist help can be brought in when required. The NESTOR site is only 7.5 nautical miles from land and 11 nautical miles from the shore station in Methoni – 20 minutes by fast launch.
Deploying this first stage of the detector in 4000 m of water and making it work has required an enormous effort from the small team most directly involved and has only been possible due to the unfailing support of many authorities, organizations, companies and individuals too numerous to mention.
The European astroparticle physics community’s aim is to build a very large volume neutrino telescope (km cube) in the northern hemisphere. It is to be hoped that this “feasibility demonstration” will encourage better co-ordination between the various groups working in the field and will help to attract the necessary funding and manpower for this large project. Such a detector would complement the already approved ICECUBE project at the South Pole.
More than 100 years ago, X-rays opened up a new era in medicine and science, allowing doctors “to see the man inside us”. Since the 1970s we have been using neutrinos in a similar way to monitor the “physiology” of the deep solar interior, and in 1987 neutrinos revealed the “pathological” state of a collapsing star, in supernova 1987a, heralding a new era in astronomy. Indeed, “If there are more things in heaven and Earth than are dreamt of in our natural philosophy, it is partly because electromagnetic detection alone is inadequate,” as Lawrence Krauss, Sheldon Glashow and David Schramm wrote in 1984 when they proposed a programme of antineutrino astronomy and geophysics, which would open vast new windows for exploration both above us and below.
However, unlike with X-rays, the potential of neutrino observations could not initially be fully exploited because the neutrino survival probability was not known, as testified by the 30-year-long solar neutrino puzzle. As the physics of the emission process was mixed with uncertainties in the evolution of neutrinos, it was difficult to learn much from neutrinos. But this situation changed dramatically with the results from the Sudbury Neutrino Observatory, which clearly proved the oscillation of electron neutrinos. Now we know the fate of neutrinos, so we can really learn from them. It is therefore time to tackle the kind of programme that was proposed by Krauss and colleagues, which includes a detailed study of the Sun, the cosmic abundance of the relic neutrinos from past supernovae, and last but not least the interior of the Earth.
The KamLAND experiment in Japan has already opened up a new field of research that exploits the special ability of neutrinos to reveal what is hidden to other probes of the Earth’s interior. The experiment, which confirmed neutrino oscillations by detecting antineutrinos emitted from nuclear reactors, can also discriminate events from antineutrinos of terrestrial origin, the so-called geoneutrinos (KamLAND collaboration 2003). Nine such events have been reported from the first exposure of six months, providing us with a first glimpse of the interior of the Earth.
Neutrinos and the Earth’s heat
One hundred and forty years after Jules Verne’s voyage, the deep interior of the Earth remains de facto an unexplored frontier to mankind and, despite recent progress in geological and planetary research, the number of open problems possibly exceeds the number of known facts. A central issue concerns the source of terrestrial heat. The Earth re-emits in space the radiation that comes from the Sun (1.4 kW/m2), adding to it a tiny flux of heat produced from its interior (about 80 mW/m2) to give a total of 40 TW, the equivalent of some 10,000 power plants. The origins of terrestrial heat are not understood in quantitative terms: such a heat flow can be sustained over geological times by any energy source, be it nuclear, gravitational or chemical. In the words of John Verhoogen: “Radioactivity itself could possibly account for at least 60%, if not 100%, of the Earth’s heat output…If one adds the greater rate of radiogenic heat production in the past, possible release of gravitational energy (original heat, separation of the core…), tidal friction…and possible meteoritic impact …the total supply of energy may seem embarrassingly large.” The relevant questions are: how large is the radiogenic contribution to heat flow? Which nuclei are relevant? Where are they?
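As a quick back-of-the-envelope check (an illustrative sketch, not part of the original report), the quoted internal heat flux of about 80 mW/m2 integrated over the Earth’s surface does indeed give the 40 TW figure:

```python
import math

R_earth = 6.371e6                   # mean Earth radius in metres
surface = 4 * math.pi * R_earth**2  # ~5.1e14 m^2

heat_flux = 0.080                   # internal heat flow, W/m^2
total_heat = heat_flux * surface    # ~4.1e13 W, i.e. about 40 TW
```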
The answer to which nuclei are relevant is relatively simple. The main sources of natural radioactivity are currently uranium, thorium and potassium, through the decay chains: 238U → 206Pb + 8 4He + 6e + 6νbar + 51.7 MeV; 232Th → 208Pb + 6 4He + 4e + 4νbar + 42.8 MeV; 40K + e → 40Ar + ν + 1.513 MeV (11%); 40K → 40Ca + e + νbar + 1.321 MeV (89%). Specifically they release, for natural isotopic abundances, 0.95 (uranium), 0.27 (thorium) and 3.6 x 10-5 (potassium) erg per second per gramme of the corresponding chemical element.
To answer the question about how large the radiogenic contribution is to the heat flow, we need to know the abundances of the radiogenic elements in the Earth’s different layers (figure 1). The radiogenic heat flow depends on three basic pieces of data: the total masses of uranium, thorium and potassium, which are related to the total radiogenic heat flow H by the equation H = 9.5 MU + 2.7 MTh + 3.6 x 10-4 MK, where H is in TW and the masses are in units of 1017 kg. However, observational data on the amounts of uranium, thorium and potassium in the Earth’s interior are rather limited, as only the crust and the upper part of the mantle are accessible to geochemical analysis. As uranium, thorium and potassium are lithophile elements, they accumulate in the continental crust.
Estimates for the uranium mass in the crust are in the range (0.2-0.4) x 1017 kg, and while concentrations in the mantle are much smaller, the total amounts are comparable due to the much greater volume of the mantle. Estimates for the mantle are in the range (0.4-0.8) x 1017 kg. Note, however, that these estimates are much more uncertain than for the crust as they are obtained by analysing samples emerging from the upper mantle (at depths of a few hundred kilometres) and extrapolating the results to the completely unexplored lower mantle (down to approximately 3000 km). Based on geochemical arguments, uranium should be negligible in the core, which is completely inaccessible to observation.
As for the abundance ratios, one estimates Th/U ~ 4, which is consistent with the meteoritic value, whereas for potassium one generally finds on Earth that K/U ~ 10,000, a puzzling value as it is a factor of seven below that of the oldest meteorites. The abundance of potassium in the Earth’s interior, the possibility that some is buried in the Earth’s core, and its contribution to terrestrial heat, are issues that are still debated among geochemists (Rama et al. 2003).
Geoneutrinos allow a direct and global measurement of the actual abundances of uranium, thorium and potassium, which can provide important information for discriminating among different models for heat production and, more generally, for the formation and evolution of the Earth. In fact, for each element there is a well-fixed ratio of heat to neutrinos (antineutrinos): Lνbar = 7.4 MU + 1.6 MTh + 27 x 10-4 MK; Lν = 3.3 x 10-4 MK, where the luminosities L are in units of 1024 particles per second.
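To make the heat and luminosity formulas concrete, here is a small sketch; the input masses are hypothetical mid-range values chosen from the ranges and ratios quoted above, not measurements:

```python
def radiogenic_heat_TW(m_U, m_Th, m_K):
    # H = 9.5 M_U + 2.7 M_Th + 3.6 x 10^-4 M_K  (masses in units of 1e17 kg)
    return 9.5 * m_U + 2.7 * m_Th + 3.6e-4 * m_K

def antineutrino_luminosity(m_U, m_Th, m_K):
    # L = 7.4 M_U + 1.6 M_Th + 27 x 10^-4 M_K  (units of 1e24 per second)
    return 7.4 * m_U + 1.6 * m_Th + 27e-4 * m_K

# assumed totals (crust + mantle), in units of 1e17 kg
m_U = 0.8               # uranium: mid-range of the crust + mantle estimates
m_Th = 4.0 * m_U        # Th/U ~ 4
m_K = 1.0e4 * m_U       # K/U ~ 10,000

H = radiogenic_heat_TW(m_U, m_Th, m_K)        # ~19 TW
L = antineutrino_luminosity(m_U, m_Th, m_K)   # ~33 x 1e24 antineutrinos/s
```

With these inputs the radiogenic contribution comes out at roughly half of the 40 TW total, which is exactly why a direct geoneutrino measurement of the element masses is so interesting.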
The neutrinos from the Sun completely swamp those emitted from the Earth, but not so with antineutrinos. These can be detected with a distinctive signature via the inverse beta-decay reaction: νbar + p → n + e+ -1.804 MeV, which is possible with antineutrinos from the uranium and thorium chains, but not with antineutrinos from potassium. A liquid scintillator detector could record some 20-50 events from uranium and thorium geoneutrinos per kilotonne per year, depending on the assumed abundances and on the location. Geoneutrinos from uranium and thorium can be further distinguished through the different energy spectra.
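The 1.804 MeV appearing in the reaction is simply the mass deficit of the final state; with standard particle masses (numbers added here for orientation, not from the original text):

```latex
\Delta = (m_n + m_e) - m_p \simeq (939.565 + 0.511) - 938.272~\mathrm{MeV}
       = 1.804~\mathrm{MeV}
```

An antineutrino must therefore bring in at least this much energy, which the uranium and thorium decay chains can supply but the softer potassium spectrum cannot.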
The theoretical discussion of geoneutrinos was introduced in the 1960s by Gernot Eder, and extensively reviewed 20 years later by Krauss, Glashow and Schramm. Now these ideas have become more relevant. With KamLAND a handful of geoneutrino events has been extracted from the data after the subtraction of reactor and background events. The KamLAND results thus provide a first look at the amount of radiogenic material inside the Earth. In this context, a group of physicists from the universities of Cagliari and Ferrara, together with Earth scientists from Siena, has recently built a reference model for estimating neutrino fluxes according to the best geological and geochemical information (see for instance Fiorentini et al. 2003). The team has studied the possibility of detecting geoneutrinos at various underground laboratories (figure 2).
A look forward
While the KamLAND results, obtained from a short exposure, are an important first step, they are not sufficient for a determination of the geoneutrino flux and for discriminating between different models of heat production in the interior of the Earth. However, continuing observation will allow for a significant statistical increase, which will be particularly important if some nearby reactors are temporarily switched off. The comparison with measurements from Borexino at the Gran Sasso Underground Laboratories, where the reactor background is much smaller, will provide a significant addition to the data. Detectors at other underground laboratories could also make important contributions to a full map of antineutrinos from the Earth. Moreover, a detector far away from the continental crust would provide direct information on radioactivity from the mantle, which is the most uncertain issue. In principle, one could fit a detector of a few kilotonnes in a (conventionally propelled) submarine and move it around the world, at depths of a few hundred metres in an experiment lasting several years (figure 3).
Other proposals have been put forward for studying the Earth with neutrinos. For example, Ara Ioannisian and Alexei Smirnov have considered solar neutrinos for oil prospecting. Detection would consist of measuring modulations of the 7Be flux in a large deep underwater detector-submarine that could change its location. In the 1980s, Alvaro De Rujula, Sheldon Glashow, Robert Wilson and Georges Charpak proposed using neutrinos produced by a multi-TeV proton synchrotron as a tool for geological research. Few-TeV neutrinos are suitable for “tomography” of the Earth because they have a range comparable to the Earth’s diameter. Related ideas are now being revived in the context of neutrino factories.
In 1942 Bruno Pontecorvo, one of the founding fathers of neutrino physics, published an important paper, little-known among particle physicists, in the Oil and Gas Journal, entitled “Neutron well logging – A new geological method based on nuclear physics”. It described the “neutron log”, an instrument sensitive to water and hydrocarbons that is now widely used by geologists. It clearly stemmed as an application from the celebrated studies at Rome on slow neutrons, and it testifies to Pontecorvo’s promptness in transforming basic physics into a tool that could be used in other disciplines.
Likewise, neutrinos have now reached a phase where they can be exploited in different fields of science. In this respect, the determination of the radiogenic contribution to terrestrial heat, an important and so far unanswered question, is probably the first fruit we can expect to obtain.
Since the discovery of strangeness by Murray Gell-Mann and Kazuhiko Nishijima almost five decades ago, interest in this degree of freedom has remained alive, with investigations now spanning the range from quarks to nuclei. For nuclei in particular, strangeness has given experimenters a new tool for probing all nuclear levels while avoiding “Pauli blocking”, which is an inherent problem for conventional nuclei where low levels are already filled with neutrons and protons. Hypernuclear systems are therefore very promising subjects of study in nuclear physics, and there have been considerable efforts in theory and experiment alike to uncover the behaviour of nuclei “doped” with one or several hyperons.
Hypernuclei can also be produced using electromagnetic processes, such as kaon photoproduction off a nucleus. Compared with the conventional hadronic mechanism, electromagnetic production of hypernuclei has a clear advantage: since the photon and kaon interact weakly with the nucleus, the process can occur deep inside the nuclear interior. Furthermore, electromagnetic processes are fully controllable in comparison with hadronic ones, so the photoproduction of a kaon plus a hypernucleus provides a clean way to study hypernuclear spectra. In addition, the associated production off a deuteron can be used to study the hyperon-nucleon interaction in the final state, while the quasi-free kaon photoproduction off nuclei serves as an important tool for investigating kaon-nucleus and hyperon-nucleus optical potentials.
While the history of kaon photoproduction off the nucleon goes back to the mid-1950s, a more serious phenomenological study was begun in 1966 by H Thom at Cornell (Thom 1966). Extensive investigations continued until the early 1970s, but then interest temporarily declined, mainly due to the lack of experimental facilities. The field was revived in the late 1980s after the construction of a new generation of high duty-factor electron accelerators providing continuous, high-current and polarized beams in the energy region of a few GeV. Now, the operation of these accelerators, such as ELSA in Bonn, MAMI in Mainz, CEBAF at the Jefferson Laboratory in Virginia, GRAAL in Grenoble and Spring-8 in Osaka, has boosted efforts to advance our knowledge in the strangeness sector through the electromagnetic production of kaons. Recently, there has been a great deal of excitement concerning the detection of a five-quark state – the “pentaquark” – in kaon photoproduction at ELSA, CEBAF and Spring-8, and while it is clear that kaon electroproduction contains a lot of rich information, this article will concentrate on several other aspects that make kaon photoproduction an up-to-date research topic.
Kaon photoproduction processes cannot be properly understood without an elementary production operator that describes the production mechanism off the nuclear constituents (proton or neutron). At the elementary level, the investigation is most effectively performed in the framework of an isobar approach based on nucleons and hyperons – that is, the corresponding operator is constructed from the Feynman diagrams shown in figure 1. (Quark models can also be used but they are beyond the scope of this article.) Furthermore, the isobar approach has the advantage that it has several simple phenomenological applications, as discussed later.
As can be seen from figure 1, several “ingredients” enter the elementary photoproduction operator via intermediate particles. Among them, the most crucial are: the number of meson and baryon resonances participating in the reaction, the properties of those resonances, hadronic and electromagnetic vertex factors, and methods to maintain fundamental symmetries of the operator.
Understanding these ingredients from first principles is currently beyond our capabilities, and this leads to some intrinsic problems on the theoretical side. The best we can do in the meantime is to restrict the number of resonances in the model to as few as possible by following the recommendation from the Particle Data Book, that is, including only the resonances that have significant decay widths to kaon channels, and fitting all unknown factors or constants (Hagiwara et al. 2002). Although this procedure may seem too pragmatic, the result shows good agreement with experimental data and has a wide range of applications in processes involving kaons and hyperons.
The search for ‘missing’ resonances
The physics of nucleon-resonance excitation continues to provide a major challenge to hadronic physics due to the nonperturbative nature of quantum chromodynamics (QCD) at these energies. While methods such as chiral perturbation theory cannot readily be applied to N* physics, lattice QCD has only recently begun to contribute to this field. Thus, most of the theoretical work on the nucleon excitation spectrum has been performed in the realm of quark models. Interestingly, these models predict a much richer resonance spectrum than has been observed in πN → πN scattering experiments (see figure 2), the main source of the Particle Data Book. The obvious question is: where are the other resonances? One may argue that perhaps they are hiding behind some prominent resonances and we need better resolution to single them out. On the other hand, quark models themselves have suggested that those “missing” resonances may couple strongly to other channels, such as the Λ and Σ channels.
Recently, much improved data have become available in the γp → K+Λ, γp → K+Σ0 and γp → K0Σ+ channels, from total cross-sections to polarization observables. The new SAPHIR total cross-section data for the γp → K+Λ channel, shown in figure 3, indicate for the first time a structure around a centre-of-mass energy of W = 1.9 GeV. This structure could not be resolved before due to the low quality of the old data. Indeed, these new data can guide us to select the most important resonances in this process. Cornelius Bennhold and I have interpreted this structure as the evidence for the missing resonance D13(1895) (Mart and Bennhold 2000). This was previously predicted by a constituent quark model to have a mass of 2080 MeV (Capstick and Roberts 1994). Although we found a remarkable agreement between the predicted and the extracted photocouplings, firmer evidence awaits rigorous calculations using a coupled-channels approach.
Searching for “missing” resonances is not only the business of studies in kaon-hyperon production – several other studies have tried to find these resonances in vector meson production. A new organization, BRAG (Baryon Resonance Analysis Group), has been set up to form a network between researchers working in this field and to optimize the available resources. More than 100 physicists from 18 countries have now joined this network. Clearly, this field will attract more attention from the hadronic physics community around the world in the coming years.
From deuterons to hypernuclei
The electromagnetic production of kaons can also be performed off a deuteron target, where one of the nucleons inside the deuteron absorbs the photon and transforms into a kaon and a Λ hyperon, as shown in figure 4a. Since a Λ cannot bind to a nucleon, photoproduction leads to a break-up process. However, before the hyperon leaves the nucleon, both particles interact strongly. Thus, a phenomenological description of this process requires information on the hyperon-nucleon (YN) potential; or in other words, this process paves the way for investigating the YN potential via electromagnetic processes.
Contrary to the case of the nucleon-nucleon (NN) potential, the properties of the hyperon-nucleon interaction are still somewhat uncertain. In the case of NN forces, one has the rich set of NN scattering data at one’s disposal to adjust NN force parameters. This set is essentially absent in the case of the YN system, since performing YN scattering experiments is very difficult. Kaon photoproduction off the deuteron is therefore well suited to tackle this problem by testing various available models for the YN potential. Such a study has been performed by Hisahiko Yamamura and colleagues (Yamamura et al. 2000). By using the modern Nijmegen soft-core YN potential, they found sizeable final-state effects caused by this potential near K+ΛN and K+ΣN thresholds.
If kaon photoproduction is performed off a helium-3 nucleus (a helion), then a hypertriton is formed in the final state (see figure 4b). The electromagnetic production of the hypertriton is a clean mechanism for studying this lightest hypernucleus, and is expected to act as a complementary tool for investigating hypernuclear spectra. In a study using realistic 3He and hypertriton wave functions, obtained as solutions of Faddeev equations, we found that Fermi motion in the nucleus plays a significant role in this process and that the cross-section is of the order of several nanobarns (Mart et al. 1998). By extending these investigations to heavier nuclei, we could eventually cover the hypernuclear chart.
Inside the nucleon
The internal structure of the nucleon is responsible for its ground-state properties, such as hadronic and electromagnetic form factors and the anomalous magnetic moment, while at higher energies this finite internal structure creates a series of resonances in the mass region of 1-2 GeV, as shown in figure 2. The ground-state properties and the resonance spectra are not all independent phenomena, however; they are related by a number of sum rules. One of these is the Gerasimov-Drell-Hearn (GDH) sum rule (see equation 1), which connects the magnetic moments of the nucleons with their helicity structures in the resonance region. The derivation of this sum rule is based on general principles: Lorentz and gauge invariance, crossing symmetry, causality and unitarity. The only assumption in deriving equation 1 is that the scattering amplitude vanishes in the limit of photon energy ν → ∞, so that no subtraction hypothesis is needed.
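Equation 1 itself is not reproduced in this excerpt; for orientation, the GDH sum rule in its standard textbook form (stated here from general knowledge, not from the original figure) reads:

```latex
\int_{\nu_0}^{\infty} \left[ \sigma_{3/2}(\nu) - \sigma_{1/2}(\nu) \right] \frac{\mathrm{d}\nu}{\nu}
  = \frac{2\pi^2 \alpha}{m^2}\, \kappa^2 ,
```

where σ3/2 and σ1/2 are the total photoabsorption cross-sections for photon and nucleon spins parallel and antiparallel, ν0 is the pion-production threshold, m is the nucleon mass, κ its anomalous magnetic moment and α the fine-structure constant.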
Although the GDH sum rule was proposed more than 30 years ago, no direct experiment has been performed to investigate whether or not the sum rule converges. However, with the advent of the new high-intensity and continuous-electron-beam accelerators, accurate measurements of the right-hand side of equation 1, as well as contributions from individual final states, are now possible.
The contribution from kaon-hyperon final states is of particular interest because strange quarks are explicitly present in the final states. Using the elementary operator obtained from figure 1, calculations of the contribution for photon energies up to 2.2 GeV show that about 3.5% of the nucleon magnetic moment comes from the strange-quark contribution (Sumowidagdo and Mart 1999). Very recently, a refined calculation up to about 16 GeV (see figure 5) has yielded a somewhat smaller value of 1.25%, owing to the oscillating behaviour of the cross-section (Mart and Wijaya 2003). Despite these small values, the calculations indicate that the contribution of strangeness to the magnetic moment of the proton is clearly significant. It is a challenge for experimenters to confront the prediction shown in figure 5 with their future data. There is still much of interest to study in kaon photoproduction.
Four international collaborations have recently announced strong experimental evidence for a five-quark exotic baryon named the theta-plus (Θ+), composed of two up quarks, two down quarks and a strange antiquark (uudds̄). Originally called the Z+ by the Particle Data Group, the particle was renamed the Θ+ in April this year, as suggested by Russian theorist Dmitri Diakonov; the exact nature of the so-called “pentaquark” is not yet clear. Further experiments should reveal whether it is a tightly bound five-quark object or a molecular meson-baryon state, and will provide measurements of its spin, and of the angular distribution and energy dependence of its production.
Physicists have searched for a five-quark state for more than 35 years. Recent experimental efforts were largely motivated by Diakonov and colleagues Victor Petrov and Maxim Polyakov. In 1997 they predicted an exotic isoscalar baryon having spin-parity (1/2)+ and strangeness S = +1 (D Diakonov et al. 1997). In the antidecuplet of five-quark resonances that they predicted, the Θ+ is the lowest mass member at about 1530 MeV, having a width of less than 15 MeV. The theorists suggested that the particle would be seen as a sharp peak in the nK+ or pK0 mass spectrum.
The first publicly announced experimental evidence emerged from the Laser Electron Photon Facility at SPring-8 (LEPS) collaboration in Osaka, Japan (T Nakano et al. 2003). The LEPS involvement began at a conference in 2000, when Diakonov convinced collaboration members to search for the exotic Θ+ state. Using data from an unrelated experiment on φ-meson production, the SPring-8 team studied the inclusive reaction γn→K+K–n on 12C by measuring both K+ and K– at forward angles. They realized that they had a signal in August 2002, but kept their result quiet until the Particles and Nuclei International Conference (PANIC) in October 2002. After months of independent analyses to confirm the result, and after correcting for Fermi momentum, they reported a 4.6 σ nK+ peak at 1540 MeV, less than 25 MeV wide and consistent with the exotic baryon predicted by Diakonov et al.
Meanwhile, the DIANA collaboration from the Institute of Theoretical and Experimental Physics (ITEP) in Moscow, Russia, was examining a 1986 data set from low-energy K+Xe collisions in a xenon bubble chamber. They analysed the effective mass of the pK0 system in the charge-exchange reaction K+Xe→K0pXe’, finding a baryon resonance with a mass of 1539 MeV and a width less than 9 MeV at an estimated statistical significance of 4.4 σ. Their findings will appear in Physics of Atomic Nuclei (V V Barmin et al. 2003).
The CLAS collaboration at the US Department of Energy’s Thomas Jefferson National Accelerator Facility (Jefferson Lab) revealed the most statistically significant result to date at the Conference on the Intersections of Particle and Nuclear Physics (CIPANP) in May. Their results have been submitted for publication in Physical Review Letters (S Stepanyan et al. 2003). Using data from August 1999, the CLAS team studied an exclusive measurement of the reaction γd→K+K–pn. Energy-tagged photons struck a liquid-deuterium target and the particles generated were detected in the CEBAF large-acceptance spectrometer. In the final state, the reaction produced a K– meson and a proton, along with the five-quark object, which then decayed into a neutron (identified by missing mass) and a K+ meson. The CLAS collaboration reports a 5.3 σ Θ+ peak in the nK+ invariant mass spectrum at around 1542 MeV, with a measured width of 21 MeV. They have received approval for 30 days of beam time from the Program Advisory Committee, so as to characterize fully the exotic Θ+ baryon, and the experiment could be conducted as soon as early 2004.
The most recent experimental evidence for the pentaquark comes from the SAPHIR collaboration at the Electron Stretcher Accelerator (ELSA) in Bonn, Germany. Again using older data, taken in 1997 and 1998, they measured the reaction γp→nK0sK+ with the decay K0s→π+π– in the SAPHIR detector at ELSA. In an upcoming issue of Physics Letters B they report evidence for the Θ+ in the invariant mass spectrum of the nK+ system (J Barth et al. 2003). They observe a 4.8 σ peak with a mass of 1540 MeV and a width of less than 25 MeV. After searching for a signal in the pK+ invariant mass distribution in γp→pK+K–, they conclude that the Θ+ must be isoscalar.
The details of the theory proposed by Diakonov et al. are hotly debated, as a brief scan of the pre-print servers will confirm. However, it is undeniable that the four collaborations that have announced convincing evidence of the Θ+ baryon report consistent experimental results. This would seem to confirm the existence of the particle. If the groups’ analyses are correct, this new exotic baryon could have profound implications for baryon spectroscopy and hadronic physics in general.
Polarized solid targets have been used in nuclear and particle-physics experiments since the early 1960s, and with the development of superconducting magnets and 3He/4He dilution refrigerators in the early 1970s, proton-polarization values of 80-100% have been routinely achieved in various target materials at two standard magnetic field and temperature conditions (2.5 T; < 0.3 K and 5 T; 1 K). Due to the much lower magnetic moment of the deuteron compared with that of the proton, deuteron polarization values have been considerably lower, typically 30-50%. Now, however, research at the University of Bochum is yielding materials with deuteron polarizations as high as 80%.
During the past 10 years, polarized solid targets have been successfully used at CERN and SLAC to investigate the spin structure of the nucleon. For this purpose hydrogen- and deuterium-rich compounds such as butanol, deuterated butanol (D-butanol), ammonia (NH3 and ND3) and lithium deuteride (6LiD) have served as proton and neutron target materials, respectively. The basic technique used to obtain a high polarization of the nuclear spins in the solid targets is to transfer the almost complete polarization of electrons at high magnetic fields and low temperatures to the nuclei via a microwave field with a frequency close to the electron Larmor frequency. This process – called dynamic nuclear polarization (DNP) – works for any nucleus with spin. Because all the target materials used are diamagnetic compounds, a certain concentration of paramagnetic impurities (radicals or crystalline defects with unpaired electron spins) has to be implanted into the host material (doping). However, the efficiency of DNP in achieving high polarizations depends strongly on the nature of the paramagnetic centre and on the interactions to which the respective unpaired electron is exposed.
According to spin temperature theory, a narrow electron paramagnetic resonance (EPR) line enables the creation of high inverse spin temperatures – and thus of high nuclear polarizations. Guided by this prediction, a team at the University of Bochum has therefore been performing a systematic study of target-material doping for several years, using EPR spectroscopy to study the characteristics of the different paramagnetic dopants.
In the search for radicals with a small EPR line width, any effects that tend to broaden the line have to be minimized. The anisotropic g-factor of the unpaired electron is a particular danger, because its influence on the line width increases with increasing magnetic field. Hyperfine interactions of the electron also cause broadening. Radiation doping of deuterated materials therefore appears useful because the resulting paramagnetic centres combine a small g-factor anisotropy with weak deuteron hyperfine splitting.
A good example of the useful paramagnetic centres discovered so far is the so-called F-centre in 6LiD, in which the EPR line width is almost entirely given by the magnetic interaction of the F-centre electron with its six neighbouring 6Li nuclei. A maximum deuteron polarization of 56% at 2.5 T has already been achieved with the target for the COMPASS experiment (NA58) at CERN, which was developed and produced at Bochum. Measurements at Saclay also show that even higher polarization values can be obtained in this material at higher magnetic fields.
The research at Bochum has also concentrated on the improvement of deuterated alcohol targets. These play an important role in experiments at intermediate beam energies because all the nuclei in these materials (apart from deuterium) are spinless – unlike 6Li, which has spin 1. The first breakthrough in this field came with the application of the radiation doping method to D-butanol. In this way, paramagnetic defects with characteristics very similar to those of the F-centres in 6LiD could be created. Although a systematic investigation of the optimum irradiation dose has not yet been performed for this material, deuteron polarizations of 54% at 2.5 T and 71% at 5 T have already been reached.
A further substantial increase of the deuteron polarization has been achieved in both D-butanol and D-propanediol, chemically doped with radicals of the trityl family (which were developed and delivered by Amersham Health, Malmö). Basically, the paramagnetic part of these molecules consists of a methyl-type radical with the three H-atoms replaced by larger but spinless complexes. They possess a very small EPR line width compared with those of the commonly used nitroxide radicals, porphyrexide and TEMPO. Both D-butanol and D-propanediol doped with radicals of the trityl family could be polarized up to around 80% at a magnetic field of 2.5 T. The nuclear magnetic resonance (NMR) signals for positively polarized butanol-d10 and negatively polarized 1,2-propanediol-d8 are shown in figure 1. They consist of two lines corresponding to the two transitions (m = +1→m = 0) and (m = 0→m = -1) of a spin-1 system, respectively. The resonances are separated by the so-called quadrupole splitting, which is a consequence of the interaction of the deuteron quadrupole moment with the electric field gradient of the lattice at the site of the deuteron. The measurement of the intensity ratio of the two transitions provides a very accurate method for determining the polarization.
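The intensity-ratio method can be sketched numerically. Assuming the spin-1 sublevel populations follow a single spin-temperature (Boltzmann) distribution – the standard assumption in the polarized-target literature, not a detail given in this article – the vector polarization follows directly from the ratio of the two NMR line intensities (the function name below is ours):

```python
def deuteron_vector_polarization(r):
    """Vector polarization P of a spin-1 system from the intensity ratio
    r = I(m=+1 -> 0) / I(m=0 -> -1) of the two NMR transitions, assuming
    spin-temperature populations p_m proportional to exp(m*beta).

    With x = exp(beta), the line intensities are proportional to the
    population differences, which gives r = x and
    P = (p_+1 - p_-1) / (p_+1 + p_0 + p_-1) = (r**2 - 1) / (r**2 + r + 1).
    """
    return (r**2 - 1.0) / (r**2 + r + 1.0)

# Equal line intensities (a symmetric doublet) mean zero polarization:
print(deuteron_vector_polarization(1.0))  # -> 0.0
```

For example, an intensity ratio r = 2 corresponds to P = 3/7 ≈ 43%, while a very asymmetric doublet (r → ∞) corresponds to P → 1.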
These new developments will allow polarization experiments to be performed on the deuteron and neutron with a much higher precision than was previously possible. Doubling the maximum polarization means doubling the statistical accuracy for the same measuring time. Alternatively, for a given accuracy, the measuring time required is reduced by a factor of four. For these reasons, trityl-doped D-butanol was recently successfully used for the neutron part of an experiment to study the Gerasimov-Drell-Hearn sum rule at the Mainz microtron, MAMI, using a polarized tagged photon beam. Figure 2 shows the horizontal dilution refrigerator developed by the Bonn Polarized Target Group for this particular experiment. For the first time, after more than 40 years of polarized solid targets, it is now possible to perform these kinds of experiments with low-intensity beams to the same precision and over the same time span, no matter whether the proton or the neutron is the subject of investigation.
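The factor-of-four argument is simple error propagation: the statistical error of a polarization-asymmetry measurement scales as 1/(P√t), so the time needed to reach a fixed error scales as 1/P². A minimal sketch (the function name is ours):

```python
def time_ratio_for_equal_error(p_old, p_new):
    """Ratio t_new / t_old of measuring times giving the same statistical
    error, given that the error scales as 1 / (P * sqrt(t))."""
    return (p_old / p_new) ** 2

# Doubling the polarization (e.g. 40% -> 80%) cuts the measuring time by 4:
print(time_ratio_for_equal_error(0.40, 0.80))  # -> 0.25
```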