REX-ISOLDE accelerates the first isomeric beams

Since 2001, the combination of the Isotope Separator On-Line (ISOLDE) facility and the Radioactive Beam Experiment (REX) has provided accelerated beams of radioactive ions. Now, with the aid of a laser-separation technique, specific metastable excited states – isomers – can be selected and “post-accelerated” in REX. This allows not only nuclear-decay experiments but also the production and study of short-lived excited states.

Purity is an important parameter of any radioactive beam. At ISOLDE, the Resonant Ionisation Laser Ion Source (RILIS) allows the selection of a single chemical element. Combined with the mass selection of the ISOLDE separators, this results in a high-purity beam composed of essentially a single isotope. In a further step, narrow-bandwidth lasers can select different long-lived isomers from the same isotope (Köster et al. 2000a and 2000b). This has already allowed the separation of two different beta-decaying states in 68Cu as well as the unambiguous identification of three isomeric states in 70Cu (Van Roosbroeck et al. 2004). (These are both neutron-rich radioactive isotopes of copper, which occurs naturally as the stable isotopes 63Cu and 65Cu.)

Now, in experiment IS435, isomeric beams of 68Cu and 70Cu have been post-accelerated in REX-ISOLDE to 2.8 MeV per nucleon. The beams were then directed onto a target in the centre of the Miniball set-up, which was used to detect emitted gamma rays, and hence the existence of excited states in the different nuclei. The experiments are showing that the technique can produce isomeric beams of sufficient purity to study individual excited states in a radioactive nucleus, as the preliminary results for 68Cu indicate.

The radioactive nucleus 68Cu has two beta-decaying states: the ground state with spin 1 and positive parity (Iπ = 1+) and a metastable (isomeric) one with Iπ = 6−. Both states are well known and decay to the stable 68Zn nucleus. In nuclei, protons and neutrons tend to fill energy levels in pairs, with the angular momentum of the pairs coupling to zero. So in 68Cu, with an odd number of protons (29) and an odd number of neutrons (39), the multiplet structure of the low-lying energy states is largely determined by the coupling of the two odd nucleons, which occupy different orbitals outside the full core of pairs. The structures containing the ground state and the beta-decaying isomeric state are expected to be significantly different: although the odd proton occupies the same orbital (2p3/2) in both states, the odd neutron occupies very different orbitals (2p1/2 and 1g9/2 respectively). Previous investigations using transfer reactions (Sherman et al. 1977) and beta-decay and lifetime measurements (Hou et al. 2003) have indicated the existence of different multiplet structures (figure 1), but not much is known about the composition of the states.

The aim of experiment IS435 was to study these two multiplet structures. In one case, an almost pure (∼90%) beam of the ground state of 68Cu (1+) was accelerated and underwent Coulomb excitation in order to investigate the coupling between the proton p3/2 and neutron p1/2 orbitals. Figure 2 shows the gamma-ray energy spectrum from the Miniball detector. It clearly reveals a gamma transition of 84 keV indicating that the Iπ = 2+ state of the ground-state multiplet is excited with quite a high probability. This hints at a significant electric quadrupole (E2) component in the transition connecting these two states, contradicting the conclusions of some previous studies (Hou et al. 2003).

Figure 3 shows the gamma-ray energy spectrum from the excitation of the isomeric 6− state of 68Cu. Here a Doppler-broadened line is clearly visible at 178 keV, together with lines at 84 keV and 693 keV that are not Doppler-broadened. Applying the Doppler correction to the spectrum (the red line in the figure) narrows the 178 keV line, while broadening the 84 keV line and washing out the 693 keV line completely. This indicates that the gamma rays of these two transitions were emitted not in flight but from nuclei at rest – in other words from states with half-lives that are long compared with those of the other states excited, so that the nuclei come to rest before decaying. Taking into account the energy of the beam leads to an estimate of the half-lives of the corresponding states of the order of a few nanoseconds. Comparing the results from the transfer reaction and beta decay indicates positions for the two states as given by the dashed lines in figure 1.
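The kinematics behind this Doppler argument are simple to reproduce. The sketch below – an illustration of the principle, not the Miniball analysis code, with function names and detector angles of our own choosing – derives the beam velocity at REX's 2.8 MeV per nucleon and applies the correction that shifts a gamma ray measured at angle θ in the laboratory back to the emitter's rest frame:

```python
import math

AMU_MEV = 931.494  # atomic mass unit in MeV/c^2

def beta_from_kinetic(t_per_nucleon_mev):
    """Beam velocity (v/c) for a given kinetic energy per nucleon."""
    gamma = 1.0 + t_per_nucleon_mev / AMU_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

def doppler_correct(e_lab_kev, theta_deg, beta):
    """Shift a gamma ray measured in the lab at angle theta_deg
    (relative to the beam axis) back to the emitter's rest frame."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return e_lab_kev * gamma * (1.0 - beta * math.cos(math.radians(theta_deg)))

beta = beta_from_kinetic(2.8)   # REX beam energy: 2.8 MeV per nucleon
print(f"beta = {beta:.3f}")     # ~0.08, i.e. the emitter moves at ~8% of c

# A 178 keV gamma ray emitted in flight is measured at different energies
# in detectors at different angles; the correction brings them together:
gamma_f = 1.0 / math.sqrt(1.0 - beta**2)
for theta in (45.0, 90.0, 135.0):
    e_lab = 178.0 / (gamma_f * (1.0 - beta * math.cos(math.radians(theta))))
    print(f"theta = {theta:5.1f} deg: lab {e_lab:6.1f} keV -> "
          f"rest frame {doppler_correct(e_lab, theta, beta):6.1f} keV")
```

Gamma rays from long-lived states, emitted after the nucleus has come to rest, need no such shift – which is why the 84 keV and 693 keV lines degrade rather than sharpen under the correction.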

The distinctively different patterns of the spectra in figures 2 and 3 prove that two different isomeric beams with sufficient purity have been post-accelerated at REX-ISOLDE and that completely different structures in the 68Cu nucleus have been populated and studied. This is the first instance of such studies being carried out with the help of post-accelerated isomeric beams.

A number of techniques for nuclear studies, including Coulomb excitation and nuclear transfer reactions, will clearly benefit from the use of isomeric beams. The very high selectivity of RILIS, combined with the very good beam spot and precise energy definition after the REX linac, makes REX-ISOLDE a unique place for this type of measurement in the coming years.

• IS435 was performed by CERN; IKS KU Leuven; INRNE, Bulgarian Academy of Sciences, University of Sofia; Universita di Camerino; LMU, Munich; MPI, Heidelberg; University of Köln; TU Darmstadt; TU Munich; Warsaw University; IPN Orsay; Lund University; INP, NCSR “Demokritos”; University of Gent; Miniball and the REX collaboration.

Further reading

L Hou et al. 2003 Phys. Rev. C68 054306.

U Köster et al. 2000a Nucl. Instr. and Meth. B160 528.

U Köster et al. 2000b Hyperfine Interactions 127 417.

J D Sherman et al. 1977 Phys. Lett. B67 275.

J Van Roosbroeck et al. 2004 Phys. Rev. Lett. 92 112501.

High pT Physics at Hadron Colliders

by Dan Green, Cambridge University Press. Hardback ISBN 0521835097, £70 ($110).

Over the past several years, Fermilab physicist Dan Green has developed an excellent course on “High pT Hadron Collider Physics”. This is now published as a Cambridge monograph that successfully traces the important past and future roles of hadron colliders in testing and probing the limits of the Standard Model for electroweak and strong interactions. In so doing, it provides an accessible and pedagogic introduction to key features of parton-parton collisions in pp or p̄p interactions. It is not, however, an up-to-date survey of the field. Rather, the centrepiece of Green’s book concerns the motivation and experimental strategy for detecting, and subsequently studying, the Higgs scalar particle (the last undetected element of the minimal Standard Model).

Written by an experimentalist, the book is qualitative in nature and can even be enjoyed by final-year undergraduates, although to profit from it formal introductions to particle-physics phenomenology and quantum field theory are essential. (Such courses are fortunately part of most relevant Master’s programmes, and the reader is directed to excellent texts on the subject.) A key feature is the use of dimensional (heuristic) arguments to estimate key production and decay processes in hadron collisions. In addition, and uniquely, the COMPHEP freeware program has been extensively used to back up the dimensional arguments with lowest order computations. Despite some incompatibilities of nomenclature, this innovation is (to the reviewer) extremely successful.

The first chapter presents a concise summary of the Standard Model particles and their couplings, as well as a description of the Higgs mechanism for mass generation of the W and Z bosons. It lays out the key properties of the Higgs boson, and concludes with a list of issues that are not answered by the Standard Model. These issues are (rather superficially) discussed in chapter six, with chapters two to five directed towards experimental and phenomenological issues relevant to the Higgs search.

Chapter two describes, in an extremely accessible way, the detector requirements for identifying key high-pT parton-parton collision processes and the associated instrumental or irreducible physics-related backgrounds. The treatment of jet energy reconstruction and di-jet mass reconstruction is excellent. Inevitably, given the author’s background in the D0 experiment at Fermilab and the CMS experiment at the LHC, the book leans towards examples from these experiments. In a few cases, some important instrumental innovations are not given adequate space (e.g. the real-time selection of heavy quarks as in the CDF experiment). Students could also have benefited from a description of the relative merits of the CDF and D0 experiments, and of course of the future ATLAS and CMS detectors at CERN’s Large Hadron Collider (LHC).

The third chapter is good reading for any new graduate student. Green introduces key features of collider physics: the central rapidity plateau and its energy dependence; the basic parton-parton collision processes and their kinematics; the main gauge boson and gauge-boson pair production processes; and jet fragmentation. In all cases experimental data (usually not the latest) are used to justify heuristic arguments and COMPHEP calculations. A series of exercises complements the chapter.

Chapters four and five concentrate respectively on the more important results from Fermilab’s Tevatron and on the Higgs search strategy at the LHC experiments (for which chapter four’s material is invaluable as a guide to the experimental backgrounds to be expected from any Higgs signal at the LHC). As a reviewer, I enjoyed the experimental approach of these two chapters and their highly readable nature. However, the extremely important sections on heavy-quark (b and t quark) production were rather incomplete, given the unique measurements at the Tevatron and the important implications for the LHC. While the somewhat arbitrary choice of figures in chapter four (taken in most cases from the experiments) is adequate for lecture notes, it detracts from the book’s quality that an effort was not made to include the latest available data, and to combine data from the CDF and D0 experiments. Chapter five concerns the experimental strategy for detecting and studying the Standard Model Higgs particle at the LHC, and relies heavily on relevant preparatory studies from the ATLAS and CMS experiments.

Finally, the concluding sixth chapter discusses extensions to the Standard Model such as supersymmetry, as well as some of the open questions alluded to in chapter one. While extensions relevant to the LHC physics programme must be discussed, this felt like a hurried addition. Judy Garland’s line from The Wizard of Oz – “Toto, I’ve a feeling we’re not in Kansas anymore” – is rather appropriate.

Published at a time when the CDF and D0 experiments are increasing their data samples by more than an order of magnitude, and in advance of the LHC, Green’s book has limited shelf life in its present edition. However, despite some shortcomings, its core is an excellent introduction for any graduate student starting out in experimental hadron-collider physics and can be strongly recommended. Dan Green should be congratulated on the overall quality of his text. Presumably, any new edition beyond 2007 will provide some interesting updates.

From Fields to Strings: Circumnavigating Theoretical Physics (Ian Kogan Memorial Collection)

by Misha Shifman, Arkady Vainshtein and John Wheater (eds), World Scientific. Hardback ISBN 9812389555 (three-volume set), £146 ($240).

On the morning of 6 June 2003, Ian Kogan’s heart stopped beating. It was the untimely departure of an outstanding physicist and a warm human being. Ian had an eclectic knowledge of theoretical physics, as one can readily appreciate by perusing the list of his publications at the end of the third volume of this memorial collection.

The editors of these three volumes had an excellent idea: the best tribute that could be offered to Ian’s memory was a snapshot of theoretical physics as he left it. The response of the community was overwhelming. The submitted articles and reviews provide a thorough overview of the subjects of current interest in theoretical high-energy physics and all its neighbouring subjects, including mathematics, condensed-matter physics, astrophysics and cosmology. Other subjects of Ian’s interest, not related to physics, will have to be left to a separate collection.

The series starts with some personal recollections from Ian’s family and close friends. It then develops into a closely knit tapestry of subjects including, among many other things, quantum chromodynamics, general field theory, condensed-matter physics, the quantum Hall effect, the state of unification of the fundamental forces, extra dimensions, string theory, black holes, cosmology and plenty of “unorthodox physics” of the kind Ian liked.

These books provide a good place to become acquainted with many of the new ideas and methods used recently in theoretical physics. It is also a great document for future historians to understand, first hand, what physicists thought of their subject at the turn of the 21st century. There is much to learn and profit from this trilogy. Circumnavigating theoretical physics is indeed fun. It is unfortunate, however, that it had to be gathered in such sad circumstances.

50 Years of Yang-Mills Theory

by Gerardus ‘t Hooft (ed), World Scientific. Hardback ISBN 9812389342, £51 ($84). Paperback ISBN 9812560076, £21 ($34).

Anniversary volumes usually mark a significant birthday of an individual, or perhaps an institution. But this fascinating compilation celebrates the golden jubilee of a theory – namely, the type of non-Abelian quantum gauge field theory first published by Chen Ning Yang and Robert L Mills in 1954, and now established as a central concept in the Standard Model of particle physics. It was a brilliant idea (by the editor, Gerardus ‘t Hooft, I assume) to signal the 50th birthday of Yang-Mills theory by gathering together a wide range of articles by leading experts on many aspects of the subject. The result is a most handsome tribute of both historical and current interest, and a substantial addition to the existing literature.

There are 19 contributions, only two of which have been published elsewhere. They are grouped into 16 sections (“Quantizing Gauge Fields”, “Ghosts for Physicists”, “Renormalization” and so on), each accompanied by brief but illuminating comments from the editor. The style of the contributions ranges from an equation-free essay by Frank Wilczek, to a paper by Raymond Stora on gauge-fixing and Koszul complexes. Somewhere in between lie, for example, François Englert’s review of “Breaking the Symmetry”, and Stephen Adler’s exemplary account of “Anomalies to All Orders”.

One recurrent theme is how unfashionable quantum field theory was in the 1950s and 1960s. As ‘t Hooft puts it: “In 1954, most of those investigators who did still adhere to quantum field theory were either stubborn, or ignorant, or both. In 1967 Faddeev and Popov not only had difficulties getting their work published in Western journals; they found it equally difficult to get their work published in the USSR, because of Landau’s ban on quantized field theories in the leading Soviet journals.” One of the most interesting papers in the book is the 1972 English translation of their 1967 “Kiev Report”, produced via an initiative of Martinus Veltman and Benjamin Lee. It is more detailed than their famous 1967 paper in Physics Letters, and includes a discussion of the gravitational field.

Alvaro De Rújula inimitably brings to life the strong interactions between theorists and experimentalists in the heady days of 1973-1978. He includes a candid snap of Howard Georgi and Sheldon Glashow, circa 1975, which made me wish there were more such shots of the leading players from that era. De Rújula’s is the only contribution to address the experimental situation, despite the editor’s admission that the lasting impact of Yang-Mills theory depended on “numerous meticulous experimental tests and searches”. But, after all, this is a volume celebrating the birthday of a theory.

Many contributors look to the future, as well as the past. These include Alexander Polyakov on “Confinement and Liberation”, Peter Hasenfratz on “Chiral Symmetry and the Lattice”, and Edward Witten on “Gauge/String Duality for Weak Coupling”.

I have only had space enough to (I hope) whet the reader’s appetite. This unusual and elegant festschrift is a treat for theorists – and, as a bonus, you get a full-colour representation on the cover of a 17-instanton solution of the Yang-Mills field equations (designed by the editor).

Experiments finally unveil a precise portrait of the Z

From 1989 to 1995, the Large Electron-Positron collider (LEP) at CERN provided collisions at centre-of-mass energies of 88-94 GeV. This range includes the mass of the Z boson, which is thus produced as a resonance, the Z pole (figure 1). In this first phase of LEP running (LEP-1), the four large state-of-the-art detectors ALEPH, DELPHI, L3 and OPAL recorded 17 million Z decays. Over a similar period, from 1992 to 1998, the SLD experiment at SLAC in the US collected 600,000 Z events at the world’s first high-energy linear collider, the SLAC Linear Collider (SLC), with the added advantage of a longitudinally polarized electron beam.

Now, the five big experimental collaborations have submitted a joint paper for publication in Physics Reports. Signed by 2500 authors, “Precision electroweak measurements on the Z resonance” summarizes and combines thousands of cross-section and asymmetry measurements. The data sample consists of the entire world set of electron-positron interactions at the Z pole. The Z boson decays to all kinematically accessible fermion-antifermion pairs, i.e. all leptons and quarks except the top quark. Hence the collected data allow very detailed investigations of the properties of the Z boson and of its couplings to fermions.

Combining the wealth of measurements has been a long and painstaking task. The large data sample has demanded advanced analysis techniques to reduce systematic measurement uncertainties in the sophisticated detectors to below the statistical precision. This is one of the main reasons for the long delay between the end of data-taking at Z-pole energies and the publication of this report. Any measurement used in the combined review had to have been published in a journal beforehand. Furthermore, to exploit the power of the combined data sets of the experiments, it was necessary to investigate how each measurement could be meaningfully and properly combined with other measurements, while accounting for correlated systematic effects.
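The standard tool for such averages is the best linear unbiased estimate (BLUE), which weights the measurements by the inverse of their full covariance matrix. The sketch below is a minimal illustration of the idea with invented numbers – the actual LEP combinations involve many observables and carefully evaluated inter-experiment correlations:

```python
import numpy as np

def blue_combine(values, cov):
    """Best linear unbiased estimate of one quantity from several
    measurements with covariance matrix cov (stat. + syst. parts)."""
    values = np.asarray(values, dtype=float)
    cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    ones = np.ones(len(values))
    norm = ones @ cinv @ ones
    weights = cinv @ ones / norm      # weights sum to one
    return weights @ values, 1.0 / np.sqrt(norm), weights

# Two hypothetical mass measurements (MeV) with independent statistical
# errors plus a 1.5 MeV systematic that is fully correlated between them
# (e.g. a shared beam-energy calibration):
vals = [91188.0, 91186.5]
stat = np.diag([2.5**2, 3.0**2])
syst = 1.5**2 * np.ones((2, 2))   # common systematic -> full correlation
mean, err, w = blue_combine(vals, stat + syst)
print(f"combined: {mean:.1f} +/- {err:.1f} MeV, weights = {np.round(w, 2)}")
```

Note how the fully correlated systematic cannot be averaged away: it survives in the combined error through the off-diagonal terms of the covariance matrix, which is precisely why evaluating correlations is so central to the combination.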

Early in the LEP programme, the high-precision measurements resulting from complex analyses made it clear that a dedicated effort by experts was required to tackle such inter-collaborational aspects of the scientific work. This led to the formation of the LEP Electroweak Working Group (LEP-EWWG), the first of several LEP-wide working groups. The LEP-EWWG consists of members from the experimental collaborations and is responsible for properly combining both published and preliminary results of the LEP experiments. It makes use of the expertise of its members in scrutinizing the measurements for combination purposes, in particular in evaluating correlations between measurements. The group also maintains close contact with many theorists, who are advancing calculations of the many observables and their radiative corrections, thus reducing the theoretical uncertainties to the level required by the precision of the data. The great success of the LEP-EWWG has spawned similar efforts at other accelerators, for example, between experiments at B-factories and between experiments at Fermilab’s Tevatron.

One of the first and foremost combined measurements of the Z resonance at LEP concerns the mass and total decay width of the Z boson and the number of light neutrino species (figure 1). The determination of these quantities is based on total cross-sections measured accurately at precisely known centre-of-mass energies; here the LEP beam-energy calibration is crucial. In 1986, during the preparation of the LEP physics programme, it was estimated that the Z-boson mass and width could possibly be measured to an accuracy of about 50 MeV. Today, the Z-pole report shows that an accuracy nearly 25 times better has finally been achieved. The mass of the Z is now known with a relative precision of 2.3 × 10⁻⁵, MZ = 91187.5 ± 2.1 MeV – approaching that of the Fermi constant – and the Z width is known to better than 1‰, ΓZ = 2495.2 ± 2.3 MeV. Precision luminosity measurements for normalizing the total cross-section measurements were indispensable in determining, to better than 3‰ accuracy, the number of light neutrino species, and thus the number of fermion families, to be the three known: Nν = 2.9840 ± 0.0082.
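To put the headline numbers in perspective (a back-of-envelope restatement, with partial widths quoted from memory of the Z-pole analyses rather than from the report itself):

$$
\frac{\Delta M_Z}{M_Z} = \frac{2.1\ \mathrm{MeV}}{91\,187.5\ \mathrm{MeV}} \approx 2.3\times10^{-5},
\qquad
N_\nu \simeq \frac{\Gamma_{\mathrm{inv}}}{\Gamma_{\nu\bar\nu}^{\mathrm{SM}}} \approx \frac{499\ \mathrm{MeV}}{167\ \mathrm{MeV}} \approx 3,
$$

where Γinv is the decay width not accounted for by visible final states and Γνν̄ (SM) is the Standard Model width for a single neutrino species; the report's actual procedure takes ratios of invisible to leptonic widths so that common uncertainties cancel.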

In measurements of Z decays to heavy quarks, beauty and charm, the SLD experiment, despite smaller Z statistics, has made competitive measurements by virtue of the small beam spot and beam-pipe size of the SLC. This allowed the vertex detector to be positioned very close to the interaction point, in turn leading to precision tagging of b and c quarks produced in Z decays. The LEP-EWWG is therefore collaborating intensively and successfully with colleagues from SLD in the area of heavy-quark production at the Z pole.

By measuring production cross-sections and forward-backward asymmetries both for the inclusive hadronic final state and for identified charged lepton and quark flavours, the experiments scrutinized the couplings between fermions and the Z boson in great detail. While the LEP experiments provided high-statistics measurements, SLD with beam polarization made a unique contribution in measuring both the left-right asymmetry and left-right forward-backward asymmetries. With both sets of measurements, the effective vector and axial-vector coupling constants for leptons and quarks have now been determined with a precision several orders of magnitude better than before (figure 2). The comparison in terms of the effective electroweak mixing angle is shown in figure 3. The two most precise determinations of this quantity, based on the left-right asymmetry measured by SLD and the bb̄ forward-backward asymmetry measured at LEP, differ by 3.2σ. Both measurements are still statistics-dominated, but is this the first hint of new physics or just a fluctuation?
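The standard relations behind these statements connect the measured asymmetries to the effective couplings; for a fermion species f,

$$
\mathcal{A}_f = \frac{2\,g_{Vf}\,g_{Af}}{g_{Vf}^2+g_{Af}^2},\qquad
A_{\mathrm{FB}}^{0,f} = \tfrac{3}{4}\,\mathcal{A}_e\,\mathcal{A}_f,\qquad
A_{\mathrm{LR}} = \mathcal{A}_e,\qquad
\sin^2\!\theta_{\mathrm{eff}}^{\mathrm{lept}} = \frac{1}{4}\Bigl(1-\frac{g_{V\ell}}{g_{A\ell}}\Bigr),
$$

so the SLD left-right asymmetry measures the electron coupling directly, while the LEP bb̄ forward-backward asymmetry measures the product of electron and b-quark couplings – which is why the two routes to the mixing angle can be compared at all, and why their 3.2σ difference is intriguing.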

The precision of the results is such that small changes with respect to the Born-term expectation are measured quantitatively. These electroweak radiative corrections are sensitive to all kinds of virtual particles, notably the top quark and the Higgs boson, neither of which is directly produced at Z-pole energies. Analysing the precision measurements within the framework of the Standard Model, particularly once LEP started up, allowed good predictions of the mass of the top quark a few years before the quark itself was discovered and its mass measured by the Tevatron experiments CDF and D0 in 1995 (figure 4). The close agreement between prediction and direct measurement is one of the greatest triumphs of particle physics. Similar agreement is found in the case of the W-boson mass.

Based on this success in predicting the masses of heavy particles, the precision electroweak measurements are now also used to predict the mass of the as yet unobserved Higgs boson, in the framework of the Standard Model, in conjunction with measurements of the mass and width of the W boson at LEP-2 and the Tevatron and the mass of the top quark measured at the Tevatron. These analyses predict the Higgs boson to weigh at most a few hundred giga-electron-volts (figure 5), but we must wait for the Large Hadron Collider to show if this prediction is correct.

The Z-pole report has been in the works for the past six years, pushed forward by a team of editors: Richard Kellogg, Klaus Moenig, Günter Quast, Mike Roney, Peter Rowson, Pippa Wells and Martin Grünewald (chair of the LEP-EWWG and lead editor), and in the early stages Robert Clare and Roger Jones. Meetings on reviewing the status, discussing the draft, and planning the next steps were held every few months at CERN, with participants attending in person, by videoconference or by telephone. In fact, some of the editors have yet to meet each other in person – an event is foreseen later this year. This work proceeded in parallel with the regular LEP-EWWG work, involving many more physicists, which provides updated combinations of both published and preliminary results twice a year, for winter and summer conferences.

The effort of the LEP-EWWG will now focus on electron-positron collisions at centre-of-mass energies above the Z-pole – the LEP-2 running. These measurements test fermion-antifermion and boson-pair production at the highest possible energies, thereby investigating the properties of the charged W bosons – the mass, width and decay properties, as well as gauge couplings between the electroweak gauge bosons – in similar detail to that achieved for the Z boson. With the analyses using the available Z-pole data now concluded, the combined Z-pole results will stand for a long time, to be improved only if a future linear collider takes physics data at the Z resonance.

Fifty years of antiprotons

On 1 November 1955, the Physical Review published the paper “Observation of antiprotons” by Owen Chamberlain, Emilio Segrè, Clyde Wiegand and Tom Ypsilantis, at what was then known as the Radiation Laboratory of the University of California at Berkeley. This paper, which announced the discovery of the antiproton (for which Chamberlain and Segrè would share the 1959 Nobel Prize for Physics), had been received only eight days earlier. However, the story of the discovery of the antiproton really begins in 1928, when the eccentric and brilliant British physicist Paul Dirac formulated a theory to describe the behaviour of relativistic electrons in electric and magnetic fields.

Dirac’s equation was unique for its time because it took into consideration both Albert Einstein’s special theory of relativity and the effects of quantum physics proposed by Erwin Schrödinger and Werner Heisenberg. While it worked well on paper, Dirac’s rather straightforward equation carried with it a most provocative implication: it permitted negative as well as positive values for the energy E. Initially few physicists seriously considered Dirac’s idea because no-one had ever observed particles of negative energy. From the standpoint of both physics and common sense, the energy of a particle could only be positive.

Attitudes towards Dirac’s equation changed dramatically in 1932, when Carl David Anderson reported the observation of a positively charged electron in a project at the California Institute of Technology that originated with his mentor, Robert Millikan. Anderson named the new particle the “positron”. Both Dirac and Anderson would win Nobel Prizes for Physics for their discoveries: Dirac shared the 1933 prize with Schrödinger, and Anderson shared the 1936 prize with Victor Hess. However, the existence of the positron, the antimatter counterpart of the electron, raised the question of an antimatter counterpart to the proton.

As Dirac’s theory continued to explain successfully the phenomena associated with electrons and positrons, it followed – from the revised standpoints of both physics and common sense – that it should also apply to protons, and this in turn demanded the existence of an antimatter counterpart. The search for the antiproton was under way, but it would get off to a very slow start, as it would be another two decades before a machine capable of producing such a particle became available.

Enter the Bevatron

Anderson discovered the positron with a cloud chamber during investigations of cosmic rays, but it was extremely difficult, if not impossible, to use the same approach for finding the antiproton. If physicists were going to find the antiproton, they were first going to have to make one.

However, even with the invention of the cyclotron in 1931 by Ernest Lawrence, earthbound accelerators were not up to the task. Physicists knew that creating an antiproton would require the simultaneous creation of a proton or a neutron. Since the energy required to produce a particle is proportional to its mass, creating a proton-antiproton pair would require twice the proton rest energy, or about 2 billion eV. Given the fixed-target collision technology of the times, the best approach for making 2 billion eV available would be to strike a stationary target of neutrons with a beam of protons accelerated to an energy of about 6 billion eV.
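The quoted numbers follow from relativistic kinematics for the reaction p + p → p + p + p + p̄ on a nucleon at rest (for nucleons bound in a nucleus, Fermi motion lowers the threshold somewhat, which is what made a machine of about 6 GeV sufficient):

$$
s = 2m_p E_{\mathrm{beam}} + 2m_p^2 \;\ge\; (4m_p)^2
\;\;\Rightarrow\;\;
E_{\mathrm{beam}} \ge 7m_p \approx 6.6\ \mathrm{GeV},
$$

i.e. a beam kinetic energy of 6mp ≈ 5.6 GeV on a free proton – far beyond any accelerator that existed before the Bevatron.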

In 1954, the Bevatron came into operation at Lawrence’s Radiation Laboratory in Berkeley, designed to reach energies of several billion electron-volts – then designated BeV (now universally known as GeV). (Upon Lawrence’s death in 1958, the laboratory was renamed in his honour; it is today the Lawrence Berkeley National Laboratory.) This weak-focusing proton synchrotron could accelerate protons up to 6.5 GeV. Though never its officially stated purpose, the Bevatron was built to go after the antiproton. As Chamberlain noted in his Nobel lecture, Lawrence and his close colleague Edwin McMillan, who co-discovered the principle behind synchronized acceleration and coined the term “synchrotron”, were well aware of the roughly 6 GeV needed to produce antiprotons and made certain the Bevatron would be able to get there.

Armed with a machine that had the energetic muscle to make antiprotons, Lawrence and McMillan put together two teams to go after the elusive particle. One team was led by Edward Lofgren, who managed operations of the Bevatron. The other was led by Segrè and Chamberlain. Segrè had been the first student to earn his physics degree at the University of Rome under Enrico Fermi. He had, with the aid of one of Lawrence’s cyclotrons, discovered technetium, the first artificially produced chemical element. He was also one of the scientists who determined that a plutonium-based bomb was feasible, and his experiments on the scattering of neutrons and protons and proton polarization broke new ground in understanding nuclear forces. Chamberlain had also studied under Fermi, and under Segrè as well. He was Segrè’s assistant on the Manhattan Project at Los Alamos while still a graduate student, and later joined Segrè at Berkeley to collaborate on the nuclear-forces studies.

Making an antiproton was only half the task; no less formidable a challenge was to devise a means of identifying the beast once it had been spawned. For every antiproton created, some 40,000 other particles would be created. The time to cull the antiproton from the surrounding herd would be brief: within about 10⁻⁷ s of appearing, an antiproton comes into contact with a proton and the two annihilate.

According to Chamberlain, again from his Nobel lecture, it was understood from the start that at least two independent quantities would have to be measured for the same particle to identify it as an antiproton. After considering several possibilities, it was decided that they should be momentum and velocity.

Measuring momentum

To measure momentum, the research team used a system of magnetic quadrupole lenses, which was suggested to them by Oreste Piccioni, an expert on quadrupole magnets and beam extraction, who was then at Brookhaven National Laboratory. The idea was to set up the system so that only particles of a certain momentum interval could pass through. As the Bevatron’s proton beam struck a target in the form of a copper block, fragments of nuclear collisions would emerge in all directions. While most of these fragments were lost, some would pass through the system. For specifically defined values of momentum, the negative particles among the captured fragments would be deflected by the magnetic lenses into and through collimator apertures.

To measure velocity, which was used to separate antiprotons from negative pions, the researchers deployed a combination of scintillation counters and a pair of Cherenkov detectors. The scintillation counters were used to time the flight of particles between two sheets of scintillator 12 m apart. At the specific momentum selected by Segrè, Chamberlain and their collaborators, relativistic pions traversed this distance 11 ns faster than the 51 ns taken by the more ponderous antiprotons. Signals from the two scintillators were set up to coincide only if they came from an antiproton. However, because two pions can arrive with exactly the right spacing to imitate the signal from an antiproton, the researchers also used the Cherenkov detectors.
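Those flight times are easy to check from relativistic kinematics. The sketch below assumes the often-quoted spectrometer momentum of about 1.19 GeV/c – a historical detail we take on trust – and reproduces the 51 ns and 11 ns figures:

```python
import math

C = 0.299792458      # speed of light in m/ns
M_PBAR = 938.272     # antiproton mass in MeV/c^2
M_PION = 139.570     # charged-pion mass in MeV/c^2

def time_of_flight(mass_mev, p_mev, path_m):
    """Flight time (ns) over path_m for the given mass and momentum."""
    energy = math.hypot(mass_mev, p_mev)   # E^2 = m^2 + p^2 (c = 1 units)
    beta = p_mev / energy                  # velocity as a fraction of c
    return path_m / (beta * C)

P = 1190.0   # MeV/c, roughly the momentum selected by the spectrometer
L = 12.0     # metres between the two scintillator sheets
t_pbar = time_of_flight(M_PBAR, P, L)
t_pion = time_of_flight(M_PION, P, L)
print(f"antiproton: {t_pbar:.1f} ns, pion: {t_pion:.1f} ns, "
      f"difference: {t_pbar - t_pion:.1f} ns")
# -> roughly 51 ns vs 40 ns: the 11 ns separation quoted in the text
```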

One Cherenkov detector was somewhat conventional in that it used a liquid fluorocarbon medium. It was dubbed the “guard counter” because it could measure the velocity of particles moving faster than an antiproton. The second detector, which was designed by Chamberlain and Wiegand, used a quartz medium, and only particles moving at the speed predicted for antiprotons set it off.

In conjunction with the momentum and velocity experiments, Berkeley physicist Gerson Goldhaber and Edoardo Amaldi from Rome led a related experiment using photographic-emulsion stacks. If a suspect particle was truly an antiproton, the Berkeley researchers expected to see the signature star image of an annihilation event. Here the antiproton and a proton or neutron from an ordinary nucleus, presumably that of a silver or bromine atom in the photographic emulsion, would die simultaneously.

Success!

The antiproton experiments of Segrè and Chamberlain and their collaborators began in the first week of August, 1955. Their first run on the Bevatron lasted five consecutive days. Lofgren and his collaborators ran their experiments for the following two weeks. The Segrè and Chamberlain group returned on 29 August and ran experiments until the Bevatron broke down on 5 September. On 21 September, a week after operating crews had revived the Bevatron, Lofgren’s group was to begin a four-day run, but instead it ceded its time to Segrè and Chamberlain. That day, the future Nobel laureates and their team found their first evidence of the antiproton based on momentum and velocity. Subsequent analysis of the emulsion-stack images revealed the signature annihilation star that confirmed the discovery. In all, Segrè, Chamberlain and their group counted a total of 60 antiprotons produced during a run that lasted approximately 7 h.

The public announcement of the antiproton’s discovery received a mixed response. The New York Times enthusiastically proclaimed “New Atom Particle Found; Termed a Negative Proton”, while the particle’s hometown newspaper, the Berkeley Gazette, sombrely announced “Grim new find at UC”. The Berkeley reporter had been told that should an antiproton come in contact with a person, that person would blow up. Today, 50 years on, antiprotons have become a staple of high-energy physics experiments, with trillions being produced at CERN and Fermilab, and no known human fatalities.

New exhibition unites people and ideas

The 1st European Research and Innovation Exhibition – the Salon Européen de la Recherche et de l’Innovation – took place in Paris on 3-5 June 2005 under the patronage of Jacques Chirac, president of France. The aim of the exhibition, which is to become an annual event, is to provide a place for players from a broad sector of activities to come together, creating a crossroads where people and ideas from both the public sector and the corporate world can meet. This year, the 130 exhibitors included CERN, the Institut National de Physique Nucléaire et de Physique des Particules (IN2P3) of the Centre National de la Recherche Scientifique (CNRS), and the Dapnia laboratory of the Commissariat à l’Energie Atomique, who together presented a stand showing examples of technology transfer.

Jean Audouze, senior CNRS researcher, is the founder and chairman of the exhibition’s Scientific Committee. Consisting of scientific leaders in the world of research and innovation, this committee is responsible for the programme of events, in particular conferences and round-table discussions. Audouze himself has had a great deal of experience in communicating physics on the highest and broadest levels, as scientific adviser to the president of France (1989-1993) and as director of Paris’s well known science museum, the Palais de la Découverte (1998-2004).

How would you describe the role of research today, in the World Year of Physics?

Research is the driving force behind economic, cultural and social progress. The French government, much like other European governments, has set a goal of devoting 3% of gross domestic product to research and development by 2010. Together, France and Europe are actively preparing for the future to meet the dynamic momentum of countries like the US, China and Japan, with whom competition is already very fierce. According to the OECD [Organization for Economic Co-operation and Development], gross domestic expenditure on research and development by member countries amounted to over $650 billion in 2001. The countries of the European Union contributed about $185 billion of this amount. France spent about $31 billion on research and development, which places it in second position in Europe and fourth worldwide, behind the US, Japan and Germany. Many researchers have started their own companies since 1999. Business incubators are playing a crucial role in the development of new companies, assisted by organizations that provide financing specifically for the creation of innovative companies. The biotechnology and nanotechnology sectors are at present leading in terms of the creation of new businesses.

Can you explain the event’s objectives?

The exhibition combined information from fundamental research with its applications. It provided an opportunity for researchers, public and private institutions, universities and the top engineering and business schools in France (les grandes écoles), industrial and commercial companies, R&D departments, incubators, financing organizations, laboratory suppliers, local governments, technology parks (technopoles), research associations and foundations to meet. They could present their activities, develop contacts to encourage professional development, discuss the establishment of new projects, start new partnerships, and negotiate financing for new businesses or research programmes.

What was the outcome of the three days?

The overall outcome is very positive. A total of around 24,000 people attended the event. In addition, the large number of visitors at the conferences, at the round tables concerning the European research programme and the diffusion of scientific culture in Europe, and at the events attended by the Nobel laureates in physics, has shown the strong interest the public has in scientific topics.

How have politicians reacted to the measures required to maximize the value of scientific research?

The politicians have responded well to the scientists’ needs; indeed, a few programmes have received specific financing allocations. They appreciated the creative way the technological developments were presented to the public, and the debates on social impact, which raised awareness of the importance of science in everyday life.

What is the outlook for continuing the dialogue in research, education and industrial promotion?

The plan for the future is to make this event an annual rendezvous, with the participation of other European institutions and national stands.

The World Year of Physics 2005 is an international celebration of physics. Events throughout the year have been highlighting the vitality of physics and its importance in the coming millennium, and have commemorated Einstein’s pioneering contributions in 1905. How can the World Year of Physics bring the excitement and impact of physics, science and research to the public?

I am convinced that the World Year of Physics has been a success in terms of popularizing physics and in conveying enthusiasm for the subject among a large public. In each country, and especially in France, many very exciting events were set up with that goal and have attracted quite big audiences. We astrophysicists have a project to make 2009, the 400th anniversary of Galileo’s first astronomical use of the telescope, the World Year of Astronomy and Astrophysics.

How can worldwide collaborations and fundamental research laboratories such as CERN, CNRS and Dapnia inspire future generations of scientists?

This inspiration is induced by at least two factors: first, CERN, CNRS and Dapnia are involved in the most exciting aspects of fundamental research, e.g. the very nature of matter and the universe; second, their research programmes are planned for the coming decades: the forthcoming operation of the Large Hadron Collider at CERN and projects like VIRGO (which aims to detect gravitational waves) for CNRS and Dapnia should be very enticing for European newcomers to science.

• The CERN, IN2P3 and Dapnia stand showed examples of technology transfer and was prepared by CERN’s Technology Transfer and Communication groups. In addition, CERN’s Daniel Treille gave a talk “Miroirs brisés, antimatière disparue, matière cachée: le CERN mène l’enquête”.

Uppsala 2005: leptons, photons and a lot more

Twenty-five years ago at the Rochester meeting held in Madison, Leon Lederman said, “The experimentalists do not have enough money and the theorists are overconfident.” Nobody could have anticipated then that experiments would establish the Standard Model as a gauge theory with a precision of one in 1000, pushing any interference from possible new physics to energy scales beyond 10 TeV. The theorists can modestly claim that they have taken revenge for Lederman’s remark. However, as the Lepton-Photon 2005 meeting underlined, there is no feeling that we are now dotting the i’s and crossing the t’s of a mature theory. All the big questions remain unanswered; worse still, the theory has its own demise built into its radiative corrections.

The electroweak challenge

The most evident of the unanswered questions is: why are the weak interactions weak? In 1934 Enrico Fermi provided an answer with a theory that prescribed a quantitative relation between the fine-structure constant, α, and the weak coupling, G ~ α/MW², where MW can be found from the rate of muon decay to be around 100 GeV (once parity violation and neutral currents, which Fermi did not know about, are taken into account). Fermi could certainly not have anticipated that his early phenomenology would develop into a renormalizable gauge theory that allows us to calculate the radiative corrections to his formula. Besides regular higher-order diagrams, loops associated with the top quark and the Higgs boson also contribute, and are consistent with observations.
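As a rough worked version of that dimensional argument (our own back-of-envelope numbers, with GF the Fermi constant):

$$
M_W \sim \sqrt{\frac{\alpha}{G_F}} = \sqrt{\frac{7.3\times 10^{-3}}{1.17\times 10^{-5}\ \mathrm{GeV}^{-2}}} \approx 25\ \mathrm{GeV},
$$

and restoring the factors of the full electroweak theory, MW² = πα/(√2 GF sin²θW) ≈ (80 GeV)² – the “around 100 GeV” scale quoted above.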

One of my favourite physicists once referred to the Higgs as the “ugly” particle. Indeed, if one calculates the radiative corrections to the mass appearing in the Higgs potential, within the same gauge theory that withstood the onslaught of precision experiments at CERN’s Large Electron-Positron collider, the SLAC linear collider and Fermilab’s Tevatron, they grow quadratically with the cut-off scale. Some new physics is needed to tame this divergent behaviour at an energy scale, Λ, of less than a few tera-electron-volts by the most conservative of estimates. There is an optimistic interpretation: just as Fermi anticipated particle physics at 100 GeV in 1934, the electroweak gauge theory requires new physics at 2-3 TeV, to be revealed by the Large Hadron Collider (LHC) at CERN and, possibly, the Tevatron.

Dark clouds have built up on this sunny horizon, however, because some electroweak precision measurements match the Standard Model predictions with too high a precision, pushing Λ to around 10 TeV. Some theorists have panicked and proposed that the factor multiplying the unruly quadratic correction, 2MW² + MZ² + Mh² − 4Mt², must vanish exactly. This has been dubbed the Veltman condition. It “solves” the problem because the observations can accommodate scales as large as 10 TeV, possibly even higher, once the dominant contribution is eliminated.
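Taken literally, the Veltman condition would fix the Higgs mass in terms of the known masses. An illustrative evaluation (our own numbers, using Mt ≈ 173 GeV, MW ≈ 80.4 GeV, MZ ≈ 91.2 GeV) gives

$$
M_h = \sqrt{4M_t^2 - 2M_W^2 - M_Z^2} \approx \sqrt{4(173)^2 - 2(80.4)^2 - (91.2)^2}\ \mathrm{GeV} \approx 314\ \mathrm{GeV}.
$$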

If the Veltman condition does happen to be satisfied, it would leave particle physics with an ugly fine-tuning problem reminiscent of the cosmological constant; but this is very unlikely. The LHC must reveal the “Higgs” physics already observed via radiative corrections, or at least discover the physics that implements the Veltman condition, which must still appear at 2-3 TeV even though higher scales can be rationalized for other tests of the theory. Supersymmetry is a textbook example: even though it elegantly controls the quadratic divergence by the cancellation of boson and fermion contributions, it is already fine-tuned at a scale of 2-3 TeV. There has been an explosion of creativity to resolve the challenge in other ways; the good news is that all involve new physics in the form of scalars, new gauge bosons, non-standard interactions, and so on.

Alternatively, we may be guessing the future while holding too small a deck of cards, and the LHC will open a new world that we did not anticipate. The hope then is that particle physics will return to its early traditions where experiment leads theory, as it should, and where innovative techniques introduce new accelerators and detection methods that allow us to observe with an open mind and without a plan.

CP violation and neutrino mass

Another grand unresolved question concerns baryogenesis: why are we here? At some early time in the evolution of the universe quarks and antiquarks annihilated into light, except for just one quark in 10¹⁰ that failed to find a partner and became us. We are here because baryogenesis managed to accommodate Andrei Sakharov’s three conditions, one of which dictates CP violation. Precision data on CP violation in neutral kaons have been accumulated over 40 years, and the measurements can, without exception, be accommodated by the Standard Model with three families of quarks. History has repeated itself for B-mesons, but in only three years, owing to the magnificent performance of the experiments at the B-factories – Belle at KEK and BaBar at SLAC. Direct CP violation has been established in the decay Bd → Kπ with a significance in excess of 5σ. Unfortunately, this result and a wealth of data contributed by the CLEO collaboration at Cornell, DAFNE at Frascati and the Beijing Spectrometer (BES) fail to reveal evidence for new physics. Given the rapid progress and the better theoretical understanding of the expectations in the Standard Model relative to the kaon system, the hope is that improved data will pierce the Standard Model’s resistant armour. Where theory is concerned, it is worth noting that the lattice now does calculations that are confirmed by experiment.

A third important question concerns neutrino mass. A string of fundamental experimental measurements has driven progress in neutrino physics. Supporting evidence from reactor and accelerator experiments, including first data from the reborn Super-Kamiokande detector, has confirmed the discovery of oscillations in solar and atmospheric neutrinos. High-precision data from the pioneering experiments now trickle in more slowly, although evidence for the oscillatory behaviour in L/E of the muon neutrinos in the atmospheric-neutrino beam has become very convincing.

Nevertheless, the future of neutrino physics is undoubtedly bright. Construction is in progress at Karlsruhe of the KATRIN spectrometer, which by studying the kinematics of tritium decay will be sensitive to an electron-neutrino mass as low as 0.2 eV, and a wealth of ideas on double beta decay and long-baseline experiments is approaching reality. These experiments will have to answer the great “known unknowns” of neutrino physics: the absolute mass and hierarchy, the value of the third small mixing angle and its associated CP-violating phase, and whether neutrinos are really Majorana particles. Discovering neutrinoless double beta decay would settle the last question, yield critical information on the absolute-mass scale and, possibly, resolve the hierarchy problem. In the meantime we will keep wondering whether small neutrino masses are our first glimpse of grand unified theories via the seesaw mechanism, or represent a new Yukawa scale tantalizingly connected to lepton conservation and, possibly, the cosmological constant.

Information on neutrino mass has also emerged from an unexpected direction – cosmology. The structure of the universe is dictated by the physics of cold dark matter and the galaxies we see today are the remnants of relatively small overdensities in its nearly uniform distribution in the very early universe. Overdensity means overpressure that drives an acoustic wave into the other components that make up the universe, i.e. the hot gas of nuclei and photons and the neutrinos. These acoustic waves are seen today in the temperature fluctuations of the microwave background, as well as in the distribution of galaxies in the sky. With a contribution to the universe’s matter similar to that of light, neutrinos play a secondary, but identifiable role. Because of their large mean-free paths, the neutrinos prevent the smaller structures in the cold dark matter from fully developing and this effect is visible in the observed distribution of galaxies.

Simulations of structure formation with varying amounts of matter in the neutrino component, i.e. varying neutrino mass, can be matched to a variety of observations of today’s sky, including measurements of galaxy-galaxy correlations and temperature fluctuations on the surface of last scattering. The results suggest a neutrino mass of no more than 1 eV, summed over the three neutrino flavours – a range compatible with the one deduced from oscillations.
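The translation between a summed neutrino mass and its cosmological weight is the standard relic-density relation

$$
\Omega_\nu h^2 = \frac{\sum m_\nu}{93\ \mathrm{eV}},
$$

so a 1 eV sum corresponds to Ωνh² ≈ 0.011 – a few per cent of the total matter density, which is the level at which the suppression of small-scale structure becomes detectable in the galaxy surveys mentioned above.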

The imprint on the surface of last scattering of the acoustic waves driven into the hot gas of nuclei and photons also reveals a value for the relative abundance of baryons to photons of 6.5+0.4−0.3 × 10⁻¹⁰ (from the Wilkinson Microwave Anisotropy Probe). Nearly 60 years ago, George Gamow realized that a universe born as hot plasma must consist mostly of hydrogen and helium, with small amounts of deuterium and lithium added. The detailed balance depends on basic nuclear physics, as well as on the relative abundance of baryons to photons: the state-of-the-art result of this exercise yields 4.7+1.0−0.8 × 10⁻¹⁰. The agreement of the two observations is stunning, not just because of their precision, but because of the concordance of two results derived from totally unrelated ways of probing the early universe.

The physics of partons

Physics at the high-energy frontier is the physics of partons, probing the question of what the proton really is. At the LHC, it will be gluons that produce the Higgs boson, and in the highest-energy experiments, neutrinos interact with sea-quarks in the detector. We can master this physics with unforeseen precision because of a decade of steadily improving measurements of the nucleon’s structure at HERA, DESY’s electron-proton collider. These now include experiments using targets of polarized protons and neutrons.

HERA is our nucleon microscope, tunable by the wavelength and the fluctuation time of the virtual photon exchanged in the electron-proton collision. With the wavelengths achievable, the proton has now been probed with a resolution of one thousandth of its 1 fm size. In these interactions, the fluctuations of the virtual photon survive over distances ct ~ 1/x, where x is the momentum fraction carried by the parton. In this way, HERA now studies the production of chains of gluons as long as 10 fm, an order of magnitude larger than, and probably totally insensitive to, the proton target. These are novel structures, the understanding of which has been challenging for quantum chromodynamics (QCD).
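Both length scales follow from the uncertainty principle with ħc ≈ 0.197 GeV fm (a schematic estimate of ours, not a statement of HERA's exact kinematic reach):

$$
\lambda \sim \frac{\hbar c}{Q} \approx \frac{0.197\ \mathrm{GeV\,fm}}{200\ \mathrm{GeV}} \approx 10^{-3}\ \mathrm{fm},
\qquad
ct \sim \frac{\hbar c}{m_p x} \approx \frac{0.2\ \mathrm{fm}}{x},
$$

so momentum transfers of order 200 GeV resolve a thousandth of the proton’s size, while partons with x of order 10⁻² correspond to fluctuations some 10 fm long.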

Theorists analyse HERA data with calculations performed to next-to-next-to-leading order in the strong coupling, and at this level of precision must include the photon as a parton inside the proton. The resulting electromagnetic structure functions violate isospin and differentiate a u quark in a proton from a d quark in a neutron because of the different electric charge of the quark. Interestingly, the inclusion of these effects modifies the extraction of the Weinberg angle from data from the NuTeV experiment at Fermilab, bridging roughly half of the discrepancy between NuTeV’s result and the value in the Particle Data Book. Added to already anticipated intrinsic isospin violations associated with sea-quarks, the NuTeV anomaly may be on its way out.

While history has proven that theorists had the right to be confident in 1980 at the time of Lederman’s remark, they have not faded into the background. Despite the dominance of experimental results at the conference, they provided some highlights of their own. Developing QCD calculations to the level at which the photon structure of the proton becomes a factor is a tour de force, and there were other such highlights at this meeting. Progress in higher-order QCD computations of hard processes is mind-boggling and valuable, sometimes essential, for interpreting LHC experiments. Discussions at the conference of strings, supersymmetry and additional dimensions were very much focused on the capability of experiments to confirm or debunk these concepts.

Towards the highest energies

Theory and experiment joined forces in the ongoing attempts to read the information supplied by the rapidly accumulating data from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. Rather than the anticipated quark-gluon plasma, the data suggest the formation of a strongly interacting fluid with a very low viscosity relative to its entropy. Similar fluids of cold 6Li atoms have been created in atomic traps. Interestingly, theorists are exploiting Juan Maldacena’s connection between four-dimensional gauge theory and 10-dimensional string theory to model just such a thermodynamic system. The model is of a 10D rotating black hole with Hawking-Bekenstein entropy, which accommodates the low viscosities observed. This should give notice that very-high-energy collisions of nuclei may prove more interesting than anticipated from “QCD-inspired” logarithmic extrapolations of accelerator data. Such physics is relevant to analysing cosmic-ray experiments.

A century has passed since cosmic rays were discovered, yet we do not know how and where they are accelerated. Solving this mystery is very challenging, as can be seen by simple dimensional analysis. A magnetic field B extending over a region of size R can accelerate a particle of electric charge q to an energy E ≲ ΓqvBR, with velocity v ≈ c, and no higher (where Γ is a possible boost factor between the frame of the accelerator and ourselves). This is the Hillas formula. Note that it applies to our man-made accelerators, where kilogauss fields over several kilometres yield 1 TeV, because the accelerators reach efficiencies that come close to the dimensional limit.
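In practical units, for a particle of unit charge moving at v ≈ c, the Hillas limit reads (our own rewriting of the formula above):

$$
E_{\max} \approx 0.3\,\Gamma \left(\frac{B}{1\ \mathrm{T}}\right)\left(\frac{R}{1\ \mathrm{km}}\right)\ \mathrm{TeV},
$$

so a few-tesla field bent around a kilometre-radius ring gives the Tevatron’s 1 TeV, while reaching 10²⁰ eV in the microgauss (10⁻¹⁰ T) field of our galaxy would require a coherent region of order 100 kpc – far larger than the galaxy itself, which is the thrust of the argument that follows.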

Opportunity for particle acceleration to the highest energies in the cosmos is limited to dense regions where exceptional gravitational forces create relativistic particle flows, such as the dense cores of exploding stars, inflows onto supermassive black holes at the centres of active galaxies, and so on. Given the weak magnetic field (microgauss) of our galaxy, no galactic structures seem large or massive enough to yield the energies of the highest-energy cosmic rays, implying extragalactic sources instead. Common speculations include nearby active galactic nuclei powered by black holes of a billion solar masses, or the gamma-ray-burst-producing collapse of a supermassive star into a black hole.

The problem for astrophysics is that in order to reach the highest energies observed, the natural accelerators must have efficiencies approaching 10% to operate close to the dimensional limit. This is so daunting a concept that many believe that cosmic rays are not the beams of cosmic accelerators but the decay products of remnants from the early universe, for instance topological defects associated with a grand-unified-theory phase transition near 10²⁴ eV.

There is a realistic hope that this long-standing puzzle will be resolved soon by ambitious experiments: air-shower arrays covering 10,000 km², arrays of air Cherenkov detectors, and kilometre-scale neutrino observatories. While no definitive breakthroughs were reported at the conference, preliminary data forecast rapid progress and imminent results in all three areas.

The air-shower array of the Pierre Auger Observatory is confronting the problem of low statistics at the highest energies by instrumenting a huge collection area covering 3000 km² on an elevated plain in western Argentina. The completed detector will observe several thousand events a year above 10 EeV and tens above 100 EeV, with the exact numbers depending on the detailed shape of the observed spectrum.

The end of the cosmic-ray spectrum is a matter of speculation, given the somewhat conflicting results from existing experiments. Above a threshold of about 50 EeV, cosmic rays interact with cosmic-microwave-background photons and lose energy to pions before reaching our detectors. This is the origin of the Greisen-Zatsepin-Kuzmin cutoff, which limits the sources to our supercluster of galaxies. This feature in the spectrum is seen by the High Resolution Fly’s Eye (HiRes) in the US at the 5σ level, but is totally absent from the data from the Akeno Giant Air Shower Array (AGASA) in Japan.
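
(The quoted threshold can be checked with a back-of-envelope calculation of the photopion-production threshold on a CMB photon; the photon energy used below is a typical value, an assumption rather than a proper spectral average.)

# Head-on threshold for p + gamma_CMB -> N + pi, from s = (m_p + m_pi)^2:
# E_th = m_pi * (2*m_p + m_pi) / (4 * E_gamma). All values are approximate.

m_p   = 0.938e9   # proton mass in eV
m_pi  = 0.135e9   # neutral-pion mass in eV
E_cmb = 6.3e-4    # typical CMB photon energy in eV at T = 2.7 K (assumption)

E_th = m_pi * (2.0 * m_p + m_pi) / (4.0 * E_cmb)
print(f"{E_th:.1e} eV")   # ~1e20 eV; averaging over angles and the photon
                          # spectrum lowers the effective cutoff to ~50 EeV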

At this meeting the Auger collaboration presented the first results from the partially deployed array, with an exposure similar to that of the final AGASA data set. The data confirm the existence of events above 100 EeV, but show no evidence for the anisotropy in arrival directions claimed by the AGASA collaboration. Importantly, the Auger data reveal a systematic discrepancy between the energy measurements made using the independent fluorescence and Cherenkov detector components. Reconciling the measurements requires that very-high-energy showers develop deeper in the atmosphere than anticipated by the particle-physics simulations used to analyse previous experiments. The performance of the detector foreshadows a qualitative improvement in the observations in the near future.

Cosmic accelerators are also cosmic-beam dumps producing secondary beams of photons and neutrinos. The AMANDA neutrino telescope at the South Pole, now in its fifth year of operation, has steadily improved its performance and has increased its sensitivity by more than an order of magnitude since reporting its first results in 2000. It has reached a sensitivity roughly equal to the neutrino flux anticipated to accompany the highest-energy cosmic rays, dubbed the Waxman-Bahcall bound. Expansion into the IceCube kilometre-scale neutrino observatory is in progress. Companion experiments in the deep Mediterranean are moving from R&D to construction with the goal of eventually building a detector the size of IceCube.

However, it was the HESS array of four air Cherenkov gamma-ray telescopes, deployed under the southern skies of Namibia, that delivered the particle-astrophysics highlights of the conference. This is the first instrument capable of imaging astronomical sources in gamma rays at tera-electron-volt energies, and it has detected sources with no counterparts at other wavelengths. Its images of young galactic supernova remnants show filamentary structures of high magnetic field that are capable of accelerating protons to the energies, and with the energy balance, required to explain the galactic cosmic rays. Although the smoking gun for cosmic-ray acceleration is still missing, the evidence is tantalizingly close.

• The next Lepton-Photon conference will take place in Daegu, Korea, in 2007.

Close nucleon encounters

cross-section

Scientists believe that the crushing forces in the core of neutron stars squeeze nucleons so tightly that they may blur together. Recently, an experiment by Kim Egiyan and colleagues in Hall B at the US Department of Energy’s Jefferson Lab (JLab) caught a glimpse of this extreme environment in ordinary matter here on Earth. Using the CEBAF Large Acceptance Spectrometer (CLAS), the team measured ratios of the cross-sections for electrons scattering with large momentum transfer off light and medium-mass nuclei, in the kinematic region that is forbidden for scattering off low-momentum nucleons. Steps in the value of this ratio appear to be the first direct observation of the short-range correlations (SRCs) of two and three nucleons in nuclei, with local densities comparable to those in the cores of neutron stars.

SRCs are intimately connected to the fundamental issue of why nuclei are dilute bound systems of nucleons. The long-range attraction between nucleons would cause a heavy nucleus to collapse into an object the size of a hadron if there were no short-range repulsion. Including a repulsive interaction at the distances where nucleons come close together, ≤ 0.7 fm, leads to a reasonable description of the low-energy properties of nuclei, such as binding energies and the saturation of nuclear densities. The price is the prediction of significant SRCs in nuclei.

For many decades, directly observing SRCs was considered an important, though elusive, task of nuclear physics; the advent of high-energy electron-nucleus scattering appears to have changed all this. The reason is similar to the situation encountered in particle physics: though the quark structure of hadrons was conjectured in the mid-1960s, it took the deep-inelastic-scattering experiments at SLAC and elsewhere in the early 1970s to prove directly the presence of quarks. Similarly, to resolve SRCs, one needs to transfer to the nucleus an energy and momentum ≥ 1 GeV, much larger than the characteristic energies and momenta involved in the short-distance nucleon-nucleon interaction. At these higher momentum transfers, one can test two fundamental features of SRCs: first, that the shape of the high-momentum component (> 300 MeV/c) of the wave function is independent of the nuclear environment; and second, that a high-momentum nucleon is balanced predominantly by just one other nucleon, and not by the nucleus as a whole.
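
(A one-line dimensional estimate makes the resolution argument concrete; the sample momentum transfers below are illustrative.)

# Distance scale resolved by a momentum transfer q is roughly hbar*c / q.
hbar_c = 0.1973   # GeV * fm

for q in (0.3, 1.0, 2.0):   # momentum transfer in GeV (illustrative values)
    print(f"q = {q:.1f} GeV  ->  scale ~ {hbar_c / q:.2f} fm")
# A 1 GeV transfer probes ~0.2 fm, comfortably below the <= 0.7 fm distances
# at which the repulsive core acts and the SRCs live.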


An extra trick required is to select kinematics where scattering off low-momentum nucleons is strongly suppressed. This is fairly straightforward at high energies. First, one needs to select kinematics sufficiently far from the regions allowed for scattering off a free nucleon, i.e. x = Q²/(2q₀mN) < 1, and for scattering off two nucleons with overall small momentum in the nucleus, x < 2. (Here Q² is the square of the four-momentum transferred to the nucleus, mN is the nucleon mass, and q₀ is the energy transferred to the nucleus.) In addition, one needs to restrict Q² to values below a few GeV²; in this case nucleons can be treated as partons with structure, since the nucleon remains intact in the final state owing to final-state phase-volume restrictions.
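
(A minimal sketch of these kinematic selections; the sample values of Q² and q₀ are illustrative assumptions, not the experiment's actual settings.)

# x = Q^2 / (2 * mN * q0) and the region it selects, as described above.
m_N = 0.938   # nucleon mass in GeV

def bjorken_x(Q2, q0):
    """x for squared four-momentum transfer Q2 (GeV^2), energy transfer q0 (GeV)."""
    return Q2 / (2.0 * m_N * q0)

def region(x):
    if x <= 1.0:
        return "allowed for scattering off a free nucleon"
    if x <= 2.0:
        return "forbidden for a free nucleon: needs a two-nucleon SRC"
    return "forbidden even for a deuteron: needs a three-nucleon SRC"

for Q2, q0 in [(1.4, 1.00), (1.4, 0.45), (1.4, 0.28)]:   # GeV^2, GeV
    x = bjorken_x(Q2, q0)
    print(f"Q2 = {Q2} GeV^2, q0 = {q0} GeV -> x = {x:.2f} ({region(x)})")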

If the virtual photon scatters off a two-nucleon SRC at x > 1, the process goes as follows in the target rest frame. First, the photon is absorbed by a nucleon in the SRC whose momentum is opposite to that of the photon; this nucleon is turned around, and the two nucleons then fly out of the nucleus in the forward direction. The inclusive nature of the process ensures that the final-state interaction does not modify the ratios of the cross-sections. Accordingly, in the region where scattering off two-nucleon SRCs dominates (which for Q² ≥ 1.4 GeV² corresponds to x > 1.5), one predicts that the ratio of the cross-section for scattering off a nucleus to that off a deuteron should exhibit scaling, namely it should be constant, independent of x and Q² (Frankfurt and Strikman 1981). In the 1980s, data were collected at SLAC for x > 1, but in somewhat different kinematic regions for the lightest and the heavier nuclei. Only in 1993 did the sustained efforts of Donal Day and collaborators to interpolate these data to the same kinematics yield the first evidence for scaling, though the accuracy was not high.

An experiment with the CLAS detector at JLab was the first to take data on 3He and several heavier nuclei, up to iron, with identical kinematics, and the collaboration reported their first findings in 2003 (Egiyan et al. 2003). Using the 4.5 GeV continuous electron beam available at the lab’s Continuous Electron Beam Accelerator Facility (CEBAF), they found the expected scaling behaviour for the cross-section ratios at 1.5 ≤ x ≤ 2 with high precision.

Cross-section ratios

The next step was to look for the even more elusive SRCs of three nucleons. It is practically impossible to observe such correlations in intermediate-energy processes. However, at high Q² it is straightforward to suppress scattering off both slow nucleons and two-nucleon SRCs: one needs only to reach the region x ≥ 2, where scattering off a deuteron is kinematically forbidden. Here the experiment typically probes scattering off a fast nucleon whose momentum is opposite to the virtual photon, with two nucleons balancing the fast nucleon’s momentum.

Again, a scaling of the ratios was expected. In this case, however, the ratio of the cross-sections for a pair of nuclei of masses A₁ and A₂, with A₁ > A₂, was predicted to be higher for 2 ≤ x ≤ 3 than for 1.5 ≤ x ≤ 2, because the probability for a nucleon to have two nearby nucleons is higher in a heavier, denser nucleus. Hence one expected to find two steps. This is exactly what the CLAS experiment observed in data recently reported for these kinematics, shown in figure 3 (Egiyan et al. 2005). Moreover, the iron:carbon ratios at x ≈ 1.7 and x ≈ 2.5 are consistent with the expectation that the probabilities of two- and three-nucleon SRCs should increase with A as the square and the cube, respectively, of the nuclear density. For iron, the probability of two-nucleon SRCs reaches about 25%.
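
(A heavily simplified sketch of why two plateaus appear: if the per-nucleon probabilities of two- and three-nucleon SRCs grow as the square and the cube of the local density, the heavier-to-lighter cross-section ratio steps up between the two x regions. The density ratio below is a hypothetical number for illustration, not a measured value.)

# Toy model of the two-step pattern in per-nucleon cross-section ratios.
rho_ratio = 1.15   # hypothetical Fe-to-C effective density ratio (assumption)

plateau_2N = rho_ratio ** 2   # expected ratio plateau for 1.5 < x < 2
plateau_3N = rho_ratio ** 3   # expected (higher) plateau for 2 < x < 3
print(f"2N plateau ~ {plateau_2N:.2f}, 3N plateau ~ {plateau_3N:.2f}")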

More data for exploring SRCs have already been taken at JLab, and further studies are planned of this interesting region of nuclear physics, which has important implications for the dynamics of the cores of neutron stars.

Wisdom generation in the Alps: a student’s tale

“Seventy per cent of today’s successful particle physicists have attended this school – which means you have a high chance of being one of them in the future,” says a joyful Egil Lillestøl as he welcomes us to the 2005 European School of High-Energy Physics. Instantly more than 100 glasses rise, accompanied by cheerful applause. We all feel lucky to be here.

CCEsum1_11-05

We are in Kitzbühel, a peaceful town amid green and beautiful surroundings in western Austria, to witness a curious learning experience and to contribute to its spirit as much as we can. The first evening’s dinner sweeps away any clouds of anxiety we might have, and observations of the first encounters provide more than 5σ evidence of a great event.

On the first morning, just as rain is refreshing the beauty of the mountains outside, the overhead projector starts to light up the first fields and interactions on the screen. Wilfried Buchmüller from DESY provides us with the most fundamental piece of knowledge we will ever need – the Standard Model itself. The school’s academic programme is like a perfect PhD Student’s Guide to High-Energy Physics, as if to advise: “don’t panic in the wide and diverse realm of this exciting subject; we will show you the route”.

CCEsum3_11-05

Our appetite for learning grows as cosmology slowly makes peace with precision in the lectures by Rocky Kolb from Fermilab. He calmly strides through the whole universe, from its brilliant but furious past to its settled and gloomy present, from its simply overwhelming dark side to its modest but comforting light side.

CCEsum4_11-05

Then enters Larry McLerran from Brookhaven, who introduces us to the colour glass condensate and the quark-gluon plasma, which happen to be two rather unusual forms of strongly interacting matter. He tells us the ancient tales of the good old days when quarks and gluons used to enjoy their freedom, and how the Relativistic Heavy Ion Collider came along at Brookhaven with the aim of capturing a few memories of such eras. On the other hand, Gerhard Ecker from Vienna draws a somewhat more familiar portrait of strong interactions as he systematically goes through quantum chromodynamics, explaining the usual quarks and gluons, and showing the remarkable detail hidden behind even the simplest approaches in this theory.

The evenings call for our creativity in the discussion sessions (which might also be considered gentle warnings for us to stay awake during the lectures). Having received our daily lecture notes, we are divided into six discussion groups, where we are supposed to take stock of the day’s learning and clear up any obscurities in the lectures. Encouraged by the friendly attitudes of our discussion leaders, who are all young and willing theorists, and of the visiting lecturers, any shyness disappears, and the first hints of inspiration begin to appear as ideas, questions and comments bravely make their way into the discussions.

It is now Thursday night and the poster session begins, transforming modest students into proud physicists who share the outcomes of their current research with great skill and enthusiasm. As well as discovering new ideas, we also see some different approaches to familiar subjects. For example, as someone who wrote an MSc thesis on the analysis of miniature black holes in the CMS experiment at CERN, I am delighted to come across a poster on a similar study for ATLAS. I discover that our friends from Oxford suffered the same problems we did, and over our discussions we decide to support each other in any future studies of these ruthless objects. Best of all, though, is the vision that through all of these diverse contributions the goals of physics today can indeed be fulfilled.

The sound of music

But it’s not all work. We also have enough time to answer the irresistible call of the great Alps or to relax in the pleasant atmosphere of the historic town of Kitzbühel. On Saturday we visit Salzburg, the town enchanted by the graceful hand of Mozart.

The second week brings new lectures and new lecturers. After convincing us that Buchmüller’s Standard Model is fine but definitely insufficient, John Ellis from CERN goes on to reveal the vast worlds beyond, which are ruled by brilliant scientific imagination, with of course some rightful emphasis placed on the unavoidable elegance of supersymmetry. His presence is an invaluable gift, especially for me, as my current research happens to be on supersymmetric dark matter. Inspired by his lectures, as devoted experimentalists, we even go on a dangerous quest for dark matter on the nearby Schwarzsee at night.

Later, Robert Fleischer, also from CERN, explains how a nasty complex phase destroyed the beautiful CP symmetry and introduced some excitement into our universe, which would otherwise be less interesting; and how it also caused a few headaches for the physicists trying to explore the rich phenomenology of the Cabibbo-Kobayashi-Maskawa matrix and its unitarity triangles. We then discover some “CP-violating terms” in the local organizing committee, as two of its members from Vienna, Manfred Jeitler and Laurenz Widhalm, on top of their efforts to offer us an outstanding experience, present lectures on the experimental aspects of B- and K-physics, respectively. Then Manfred Lindner from Munich describes the ghostly neutrinos and the many consequences of their mischievous behaviour, and gives a long list of the global endeavours to discover their nature experimentally.

There are even some lectures not on particle physics. Wolfram Müller from Graz explains the physics of ski jumping, which seems quite appropriate in Kitzbühel, and Herbert Pietschmann from Vienna shows us our fate on the way to knowledge in his delightful lecture on physics and philosophy.

Meanwhile, the interactions increase, just as predicted by the famous “Summer Student Group Theory”. Although we have grown up under the strict hand of scientific work, the children within us still seek fun and adventure. We make the most of a colourful international community formed without prejudices or borders. The coffee breaks, which seemed a little long at the beginning, now fly swiftly by with cheerful conversations. I feel a significant improvement in my debating skills, especially after all the “SUSY and beyond” discussions with several expert theorist friends.

Grand finale

However, the inescapable end is close. In order to avoid becoming too melancholy and to create a glorious finale, we amalgamate all our creativity in preparing an unforgettable farewell night. This time we are on stage, giving so-called lectures on “serious subjects” (that cannot be mentioned here!), singing, acting and doing all sorts of things to entertain our audience. But finally we have to say difficult goodbyes to all of our friends (yes, we are friends now), and leave the cosy Hotel Kitzhof, where our hosts, through their patience and goodness, have somehow managed to survive our two-week occupation.

I know that all of us share the same feeling of gratitude towards everyone who made this school possible. I am especially indebted as a student coming from an observer state, who had the privilege of being supported through the generosity of CERN. We are deeply grateful for the endless support and kindness we received from Egil Lillestøl (CERN schools director), Danielle Métral (CERN schools secretary), Tatyana Donskova (JINR schools secretary), all the local organizers, and all the other representatives of CERN and JINR who were with us during the school. We have been thoroughly enriched by their sincere efforts. This worthy tradition must continue, as long as physics has new puzzles to offer and as long as we can respond with willing fresh minds.
