The ultimate fate of the universe depends on exactly how much matter it contains. It could expand forever, with galaxies drifting further and further apart. However, if there is enough matter, gravity’s pull will slow the expansion down, or even reverse it, ultimately leading to a Big Crunch.
New results from a test flight of the Boomerang balloon experiment imply that there is just enough matter to stop the expansion, but not to reverse it. This scenario is known as a “flat” universe. Boomerang measures the Cosmic Microwave Background radiation. Fluctuations in this background are evidence for the first clumping of matter – the seeds of galaxies we see today.
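In the standard Friedmann picture, "flat" means that the total density equals the critical density set by today's expansion rate. As a rough guide (a textbook relation, assuming a Hubble constant of about 70 km s⁻¹ Mpc⁻¹, a value not quoted here):

\[ \rho_{\rm crit} = \frac{3H_0^2}{8\pi G} \approx 9\times10^{-27}\ \mathrm{kg\,m^{-3}}, \qquad \Omega \equiv \frac{\rho}{\rho_{\rm crit}} = 1 \ \text{for a flat universe}, \]

that is, the equivalent of only a few hydrogen atoms per cubic metre averaged over the whole universe.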
ESO has strong ties with CERN. Before the observatory moved to Garching in Germany, it was based at the CERN site. Shown here are CERN’s LHC division leader Philippe Lebrun (right), Catherine Cesarsky and CERN physicist Daniel Treille, visiting the LHC magnet test hall.
The latest round of the traditional annual theory workshop at DESY focused on the interplay between particle physics and cosmology, a theme of increasing interest to specialists in both domains. Two experiments and observations highlighted topical interest: Y Totsuka reported the strong case for the observation of neutrino (ν) oscillations at the Superkamiokande underground experiment in Japan, while B Leibundgut overviewed the recent results on the “Hubble diagram” for type Ia supernovae, which point towards a cosmological constant (Λ).
Both topics are very immediate and fundamental – they affect the question of what the universe is made of, the possible need for modifications of standard cosmology, the generation of masses and scales in grand unified field theories or even deeper theories, and the issue of hidden generation symmetries that may explain the observed patterns of elementary particle masses. And neither issue is yet understood.
Results that began to emerge in 1998 make it clear that not all muon-neutrinos produced in the atmosphere arrive in underground detectors – in transit they appear to switch to another type of neutrino. Superkamiokande has reported a dependence of this neutrino “extinction” on the path length between production and detection. Such a dependence is characteristic of neutrino oscillations and implies that such neutrinos must have mass.
At the DESY meeting, theorists tried to understand the resulting neutrino mass and mixing implications. The debate centred on a consistent determination of the neutrino masses and mixing angles from the various experimental observations. New effects and experiments were proposed to resolve the issue.
On the more theoretical side, most specialists agreed that we understand why the neutrino masses are so much smaller than the electron or quark masses. In the standard model, renormalizability forbids neutrino masses – having no mass is consistent with the gauge symmetries. To introduce masses means going beyond the standard model. After years of searching for “physics beyond the standard model”, it has now arrived.
Grand unification
Neutrino masses are most likely due to the violation of lepton number conservation in an extension of the standard model, which could happen in the vicinity of the remote grand unification scale. The result is effective non-renormalizable couplings between neutrinos and Higgs scalars. The Higgs mechanism then relates a typical neutrino mass to the Fermi scale – a mere fraction of an electronvolt.
A concrete manifestation of these general aspects is the “see-saw mechanism”, in which the non-renormalizable interaction is generated by the exchange of superheavy singlet neutrinos. Another is the induced vacuum expectation value of a superheavy scalar triplet.
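The orders of magnitude can be made concrete with the generic see-saw estimate (an illustration with assumed scales, not numbers reported at the workshop): a Dirac mass of the order of the Fermi scale, suppressed by a superheavy singlet mass near the grand unification scale, gives

\[ m_\nu \sim \frac{m_D^2}{M} \sim \frac{(100\ \mathrm{GeV})^2}{10^{15}\ \mathrm{GeV}} \sim 10^{-2}\ \mathrm{eV}, \]

which is indeed a mere fraction of an electronvolt, and happens to be close to the mass scale suggested by the atmospheric neutrino data.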
The need for neutrino masses thus gives direct experimental evidence that the standard model needs to be extended, hinting towards grand unification or similar ideas. Even though less spectacular, the theoretical implications of the neutrino oscillations may turn out to be of comparable importance to proton decay – a process long thought to be inevitable but yet to be observed.
Particularly intriguing are the possible consequences of these effects for the generation of matter asymmetry in the early universe. Our very existence demands processes that produce more matter than antimatter.
The precise mass and mixing pattern for the neutrinos is not well understood. It may be as rich as for the quarks, but with a completely different generation structure. Why is the mixing angle for the muon neutrino maximal? Generation symmetries and their spontaneous breaking are the prominent candidates for possible explanations of the mass patterns.
Hubble diagrams
The other basic question debated at the workshop concerned cosmology itself. A “Hubble diagram” of brightness versus redshift (related to velocity of recession) of very distant type Ia supernovae suggests that the expansion of the universe is accelerating. This could be the effect of a cosmological constant (Λ), proposed long ago by Einstein. Some doubts remain in the interpretation of the data. The main uncertainty is the lack of understanding of how the average brightness of supernovae has evolved. These are explosions that happened at quite early stages of the evolution of the universe.
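The logic behind the supernova Hubble diagram can be seen in a short numerical sketch (a toy comparison with assumed parameters, Ωm = 0.3 and ΩΛ = 0.7 against a matter-only universe, and H0 = 70 km s⁻¹ Mpc⁻¹; the real analyses involve far more than this). In a flat universe the luminosity distance at a given redshift is larger when a cosmological constant is present, so the supernovae look fainter:

    # Toy comparison of supernova distance moduli in two flat cosmologies (illustrative only).
    import numpy as np

    C_KM_S = 299792.458   # speed of light, km/s
    H0 = 70.0             # assumed Hubble constant, km/s/Mpc

    def luminosity_distance_mpc(z, omega_m, omega_lambda, steps=2000):
        """Luminosity distance in Mpc for a flat universe, by simple numerical integration."""
        zs = np.linspace(0.0, z, steps)
        e = np.sqrt(omega_m * (1.0 + zs) ** 3 + omega_lambda)
        comoving = (C_KM_S / H0) * np.trapz(1.0 / e, zs)
        return (1.0 + z) * comoving

    def distance_modulus(z, omega_m, omega_lambda):
        d_pc = luminosity_distance_mpc(z, omega_m, omega_lambda) * 1.0e6   # Mpc -> pc
        return 5.0 * np.log10(d_pc / 10.0)

    for z in (0.3, 0.5, 1.0):
        faint = distance_modulus(z, 0.3, 0.7) - distance_modulus(z, 1.0, 0.0)
        print(f"z = {z}: supernovae fainter by {faint:.2f} mag with a cosmological constant")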
On the other hand, K Gorski and J Silk compared the anisotropy of the cosmic microwave background radiation with structure formation in the universe. This indicates that the background energy density seems to participate in the formation of the structure. There seems to be some homogeneous component, and the cosmological constant would be a candidate.
At DESY, specialists admitted to being quite puzzled by these findings. Theorists still have no good explanation of why the cosmological constant should vanish, and even less why it should have a tiny non-zero value. The energy density in radiation or matter decreases with the second inverse power of time, so a true constant Λ that influences today’s evolution of the universe would have been completely negligible at early stages of the universe.
A significant role for Λ “today” – and neither earlier nor later in the history of the universe – seems to require an unacceptable matching or “fine-tuning” of numbers. Some say that the “natural guess” for the value of Λ is off by 120 orders of magnitude – probably the worst failure of an educated guess ever made. Einstein himself was worried about his constant, and we still are.
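The fine-tuning can be phrased with a standard back-of-the-envelope argument (not a result presented at the workshop): matter dilutes as the universe expands while a true cosmological constant does not, so in terms of the scale factor a(t)

\[ \rho_\Lambda = \text{const}, \qquad \rho_m \propto a^{-3}, \qquad \frac{\rho_\Lambda}{\rho_m} \propto a^{3}. \]

A Λ term comparable to the matter density today was therefore already a billion times smaller than the matter density when the microwave background was released (a has grown by roughly a factor of 1000 since then), and utterly negligible at nucleosynthesis; arranging the two contributions to coincide precisely in the present epoch is what demands the matching of numbers. The “120 orders of magnitude” comes from comparing the observed vacuum energy density with the naive quantum field theory guess of a vacuum energy of the order of the Planck scale to the fourth power.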
Popular alternatives are models with a cosmological evolution of a scalar field, often named “quintessence” today. In these models the homogeneous part of the energy density varies with time in such a way that it is relevant today.
In a class of these models, “cosmic attractor” solutions avoid the fine-tuning problem. And some of them mimic the effects of a cosmological constant on the supernovae Hubble diagram. One of these models could finally lead to a cosmology consistent with observation. Nevertheless, a satisfactory explanation from fundamental particle physics or string theories is still missing. Much remains to be done and understood.
On 23 February 1987 an explosion that was a billion billion billion times as powerful as a hydrogen bomb was detected on Earth. It was Supernova 1987A, the first exploding star visible to the naked eye since the one that was detected by Kepler in 1604. The star, 170 000 light-years away in the Large Magellanic Cloud, ran out of nuclear fuel, collapsed under the influence of its own strong gravity and, in a few seconds, released a hundred times as much energy as our Sun has poured out in its entire lifetime.
However, even before a Canadian astronomer on a mountain in Chile first noticed the light of Supernova 1987A, ghostly messengers called neutrinos were registered in two huge underground particle detectors in the US and Japan. These detectors, consisting of a few thousand tonnes of very pure water, equipped with photomultipliers and electronics, had been built for a quite different purpose. They were designed to check whether protons were stable or whether they might undergo a very slow radioactive decay. No proton decays have yet been seen, but the detection of supernova neutrinos gave information both about these particles and about stellar collapse, and it was a dramatic illustration of the interplay between astronomy and particle physics.
Of course, the biggest explosion of all was the Big Bang – the creation of the universe about 12 billion years ago. The early universe was incredible – a dense primordial soup of elementary particles, colliding repeatedly at tremendous energies – a brilliant fireworks display. Indeed, the present universe with all its beauty and complexity is merely the wisp of smoke remaining after the fireworks show.
Today’s particle physics allows us, in a way, to recreate some conditions of the early universe. Readers who have spent many hours learning history, spanning perhaps a mere few thousand years, may be pleased to see the history of the universe displayed on a rather simple graph. The temperature of the universe in Kelvin is plotted on the right y-axis against time in seconds on the bottom x-axis. Both axes are logarithmic. On the left y-axis is plotted the average energy per particle, which is proportional to the temperature. The energy-mass density of the universe is plotted on the top x-axis, in units of equivalent mass density relative to the density of terrestrial water.
The CERN LEP and Fermilab Tevatron colliders have energies of around 100 GeV per elementary constituent (quark or lepton), and such energies were normal when the universe had a temperature of 10¹⁵ K, around 10⁻¹¹ s after the Big Bang. The constituent particles – even neutrinos – were in almost perfect equilibrium with each other. Annihilation and creation were in balance. The universe then, as now, contained vastly more photons than quarks, and the energies per quark or lepton were then much greater than the rest masses, so the universe was accurately described as “radiation dominated”. As the universe expanded, it stretched the wavelength of radiation so that the photons had lower energies. The concentration of elementary objects was also reduced, thus the universe cooled. As we follow this thermal history, a number of remarkable events occur, leading to our present world.
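The numbers quoted here follow from two standard rules of thumb (a rough sketch using textbook relations; the precise time depends on the assumed number of relativistic species, taken to be about 100 below): the typical particle energy is of order the Boltzmann constant times the temperature, and in the radiation era the age of the universe scales as the inverse square of the temperature.

    # Rough check of the quoted early-universe numbers (textbook relations, illustrative only).
    K_B = 8.617e-5     # Boltzmann constant, eV per kelvin

    def temperature_kelvin(energy_ev):
        """Temperature at which the typical particle energy is ~ energy_ev (E ~ kT)."""
        return energy_ev / K_B

    def age_seconds(temperature_k, g_star=100.0):
        """Radiation-era age, t ~ 2.4 / sqrt(g*) * (T / 1 MeV)^-2 seconds."""
        t_mev = temperature_k * K_B / 1.0e6
        return 2.4 / g_star ** 0.5 / t_mev ** 2

    T = temperature_kelvin(100.0e9)    # 100 GeV per constituent, as at LEP and the Tevatron
    print(f"T ~ {T:.1e} K, t ~ {age_seconds(T):.1e} s")
    # prints roughly 1e15 K and 2e-11 s, matching the figures in the text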
Annihilation
At around 10⁻⁶ s the average energy had dropped to a few giga electron-volts, and quarks could combine into hadrons, and a bit later into the stable protons and (relatively stable) neutrons. At around 1 s, although the density was still several hundred thousand times that of water, collisions of neutrinos became rare – they could no longer be in thermal equilibrium with other particles and effectively decoupled for ever from the rest of matter and radiation. After a few more seconds, when the energy dropped below the mega electron-volt level, electrons and positrons could no longer be created, so they annihilated, leaving just sufficient electrons to balance the charge of the protons.
Some of the protons and neutrons could combine into deuterons, and then alpha particles, before the density and collision rate became too low. Then any remaining unbound neutrons decayed in the following hours. The amounts of deuterium and helium, which can be measured today, are a sensitive test of the conditions at that time, and hence of the Big Bang model. Tiny traces of isotopes of lithium could have been formed, but the absence of stable nuclei of mass 5 and mass 8 prevented the creation of further nuclei. This nucleosynthesis occurred at around 3 min.
After some 300 000 years the temperature had dropped to around 10⁴ K and the average energy to around 1 eV, below the ionization potential of atoms. Neutral atoms of hydrogen and helium were formed. Photons were no longer impeded by frequent interactions with matter (they couple to charged particles), and the universe, which had until then been opaque, became transparent. Photons thus decoupled from matter. The dominant energy density after this was in the form of matter (including dark matter, the nature of which has not yet been determined), having previously been in the radiation. However, the temperatures and energies shown in the diagram, even in the matter-dominated era, represent those of the radiation. With the expansion of the universe, this radiation has now cooled to 2.7 K, the cosmic microwave background (CMB).
Star material
Gravity, acting on density ripples that have now been detected as tiny anisotropies in the CMB, caused matter to form clumps, which later became galaxies and stars. Thus the objects of astronomy make their appearance in the bottom right corner of the diagram. The first stars were composed of hydrogen and helium only. Fusion processes and other nuclear reactions in the cores of stars created the remaining elements.
The more massive stars had shorter lifetimes, and some exploded as supernovae, thereby polluting the local cosmos with these chemicals, and contributing to the mixture of elements out of which later stars, including our Sun and its solar system, could be formed. Every carbon and heavier nucleus in the Earth and in our bodies was formed at the centre of some now exploded star. We are all made of star material!
Until recently, cosmological measurements were consistent with the Big Bang expansion, opposed by the attraction of gravity. Depending on the mean mass-energy density of the universe, this could lead to continuous expansion or to ultimate contraction: the Big Crunch. Most cosmological measurements were rough. A few years ago, measurements indicated that the age of the universe was a bit less than the age of some stars, but, given the observational uncertainties, a discrepancy of less than a factor of two did not cause undue concern.
However, observations are getting much better. Results in the last year from two collaborations indicate that the expansion may actually be accelerating: very distant supernovae appear fainter than expected, indicating that they may be further away than implied from their redshifts. Careful checks on the supernova analyses are in progress, and additional distant supernovae are within the range of existing telescopes.
The observations can be explained by invoking Einstein’s cosmological constant – a kind of cosmic repulsion or negative vacuum pressure, which Einstein later regarded as his “biggest mistake”. Variations such as “quintessence”, a fifth force that changes with time, are also receiving attention. The energy density associated with a cosmological constant would affect the early structures in the universe and hence the angular anisotropy of the CMB – the ripples in the universe.
The CMB fluctuations are being measured with good precision at smaller angular scales, and their analyses show better consistency with supernovae and other observations if an additional cosmic repulsion is allowed. The energy density associated with the cosmological constant appears to be greater than that of matter. New CMB anisotropy results from the balloon-borne Boomerang project are about to be released, and these will be followed in the next few years by the NASA MAP and ESA Planck missions. The era of precision cosmology is about to begin, and its symbiosis with particle physics will result in more exciting science in the new century.
The traditional annual UK theory meeting at the Rutherford Laboratory is a good showcase for new theory trends. One highlight was Robbert Dijkgraaf’s review of the promising area of string theory and quantum gravity. He was effectively “selling” dictionaries to translate between any two out of three “languages” for describing the modern view of particle theory. D-branes, the large-N expansion and non-commutative geometry provided the connection between the string theory and gauge theory descriptions, while renormalization group flow, holography and effective geometry provided the links between gravity and gauge theory. Sigma models allowed gravity to be related to the string theory language.
Martin Lüscher presented a beautiful résumé of Weyl fermions and chiral theory on the lattice. Jonathan Flynn gave the latest on the CP violation situation, both theoretically and experimentally. CP violation is poised on the threshold of a new era.
Subir Sarkar emphasized the excitement that exists in ultrahigh-energy cosmic rays and how this is a clear indication of new physics just around the corner. The hot topic of extra dimensions and how to see a signal of their existence was authoritatively reviewed by Joe Lykken. Marcela Carena showed what we might expect at the Fermilab Tevatron and at CERN’s LHC in Higgs phenomenology, while Martin Beneke described the impressive progress made during the last year in computing higher-order corrections to many quark-gluon processes and the impact that these will have on phenomenology.
Two talks took the audience to more distant horizons. Pierre Sikivie raised the exciting prospect of the possible existence of caustic rings of high-density dark matter in the galactic halo. Meanwhile, Chas Beichman, chief scientist of the origins programme at JPL, presented dramatic evidence for planets around distant stars and illustrated how we may be able to probe the conditions on these planets and determine whether such conditions would be favourable for supporting life.
Given the uncertain future of the theory group at Rutherford, it was heartening to see the traditions of this meeting, which has gone from strength to strength over the last 35 years, being kept alive.
In 1975 Martin Perl found a new exotic lepton in electron-positron collisions at the SPEAR ring at SLAC, Stanford. The electrically charged tau turned out to be a heavy brother of the muon and the electron. The tau is 17 times as heavy as the muon and 3500 times as heavy as the electron, and has roughly the properties to be expected for such a particle. Owing to its very short lifetime (2.9 × 10⁻¹³ s) and the presence of unseen particles (neutrinos) in its decays, the detailed investigation of the tau has been an experimental challenge ever since its discovery.
In the past few years, the four experiments at CERN’s LEP electron-positron collider have each produced a very clean sample of tau pairs (some 0.2 million) with low backgrounds. The very good particle identification of the LEP detectors and the use of modern silicon microvertex technologies have created a wonderful environment in which to investigate the tau.
At the same time, the CLEO II detector at Cornell’s CESR electron-positron ring has collected more than 10 million tau pairs, making it possible to study the rare tau decays. As a result, tau physics has reached a level where precise tests can be performed.
Lepton universality
The existence of different families is one of the most important open questions in particle physics. The basic matter structure of the Standard Electroweak Theory – the up and down quarks, together with the electron and the electron neutrino – appears to have two heavier replicas with identical interactions: the charm and strange quarks with the muon and the muon neutrino; and the top and bottom quarks with the tau lepton and its neutrino.
We do not understand what causes this triplicity, nor do we know what generates the different masses. However, we expect the heavier families to be more sensitive to whatever dynamics are related to the generation of mass. This makes the tau an ideal particle with which to investigate these gaps in our understanding. Is the tau really identical to the electron and the muon?
In the Standard Model, the tau decays in the same way as the muon: through emission of a W boson (shown in figures 1 and 2). However, the tau’s heaviness makes several extra decay modes kinematically accessible. The tau can either decay leptonically into its lighter electron and muon brothers, accompanied by appropriate neutrinos, or it can decay into quarks. Because quarks can appear in three different “colours”, the probability of a hadronic decay is three times that of each leptonic mode. The detailed analysis of tau decays shows excellent agreement between the measured branching fractions and the Standard Model predictions.
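Ignoring masses and QCD corrections, simple counting of the channels open to the virtual W already gives the right picture (a naive estimate, not the measured values): one electron channel, one muon channel and one quark channel counted three times over for colour, so

\[ B(\tau\to e\bar\nu_e\nu_\tau) \approx B(\tau\to \mu\bar\nu_\mu\nu_\tau) \approx \frac{1}{1+1+3} = 20\%, \qquad B(\tau\to \nu_\tau + \mathrm{hadrons}) \approx \frac{3}{5} = 60\%, \]

close to the measured leptonic fractions of roughly 18% each, the remaining difference coming mainly from the QCD corrections discussed below.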
Comparing the different tau decays with the weak decays of the muon and the charged pion, we can test whether the different leptons couple to the W with the same strength. Within the present (and impressive) experimental accuracy of 0.2%, the electron, the muon and the tau appear to have exactly the same W interactions. The same observation can be made directly from the analysis of W decays at LEP II and the proton-antiproton colliders, although the present experimental sensitivity is not as good in this case.
The leptonic couplings to the neutral Z particle have been accurately measured at LEP and SLC (SLAC, Stanford), through the study of lepton-antilepton production in electron-positron collisions. Again, the experimental data show that the three known leptons have identical interactions with the Z boson, at the present level of experimental sensitivity.
Because the tau decays within the detector – a tau produced at LEP travels 2.2 mm before decaying (a tau produced at CLEO travels 0.24 mm) – one can measure its spin orientation (polarization) from the distribution of the final decay products. The present data show that only left-handed taus decay. This is in good agreement with the Standard Model. An upper limit of 3% has been set on the probability of a (disallowed) decay from a right-handed tau.
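The quoted flight distances follow directly from relativistic time dilation (a quick check, assuming a tau energy of 45.6 GeV at LEP, half the Z mass, and about 5.3 GeV at CESR):

    # Mean tau flight distance, L = gamma * beta * c * tau (illustrative check).
    M_TAU = 1.777        # tau mass, GeV
    C_TAU = 87.0e-6      # c times the tau lifetime, metres (2.9e-13 s x 3e8 m/s)

    def flight_distance_mm(energy_gev):
        gamma = energy_gev / M_TAU
        beta = (1.0 - 1.0 / gamma ** 2) ** 0.5
        return gamma * beta * C_TAU * 1000.0    # metres -> millimetres

    print(f"LEP  (E ~ 45.6 GeV): {flight_distance_mm(45.6):.2f} mm")   # about 2.2 mm
    print(f"CESR (E ~  5.3 GeV): {flight_distance_mm(5.3):.2f} mm")    # about 0.24 mm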
Leptons do not couple to the gluonic carriers of the strong interaction. However, an electroweak boson emitted by a lepton can produce quarks, which are strong interacting particles. Electrons and muons only feel this effect indirectly, through tiny quantum corrections. The heavier tau can decay hadronically, which makes the tau a unique tool for studying strong interaction dynamics in a clean way.
Between 1988 and 1992, a series of papers by Eric Braaten, Stephan Narison and the author showed that the hadronic decay of the tau can be theoretically predicted from first principles, as a function of the quantum chromodynamics (QCD) coupling αs. Summing over all possible hadrons produced in the decay avoids the problems related to the messy rearrangement of quarks into hadrons. The decay probability can then be computed at a more fundamental level in terms of quarks and gluons. The result is known up to the third order in a perturbative expansion in powers of αs. Comparison of the theoretical predictions with the experimental measurements gives a precise determination of αs in the tau mass region.
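The quantity computed in those papers is essentially the ratio of the hadronic to the electronic decay rate, which in the massless-quark approximation takes the schematic form (small electroweak, quark-mass and non-perturbative corrections are omitted here):

\[ R_\tau \equiv \frac{\Gamma(\tau\to\nu_\tau + \mathrm{hadrons})}{\Gamma(\tau\to\nu_\tau e\bar\nu_e)} \approx 3\left[ 1 + \frac{\alpha_s}{\pi} + 5.2\left(\frac{\alpha_s}{\pi}\right)^2 + 26.4\left(\frac{\alpha_s}{\pi}\right)^3 + \dots \right], \]

so a measurement of R_τ translates directly into a value of αs at the tau mass.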
An extensive experimental effort was initiated in 1992 by an ALEPH group at LEP, which was led by Michel Davier at Orsay. This was soon followed by similar work from other experiments. The four LEP collaborations and CLEO have all performed their own measurements of αs. Moreover, ALEPH and OPAL, through a careful analysis of the distribution of the final decay hadrons, have been able to measure, separately, the tiny non-perturbative corrections and obtain values in good agreement with theoretical expectations.
The resulting determination, αs(mτ) = 0.345 ± 0.020, shows that the coupling, measured at the tau mass scale, is very different from the values obtained at higher energies. The value extracted from the hadronic decays of the Z boson, 0.119 ± 0.003, differs from the tau decay measurement by eleven standard deviations.
The comparison of these two measurements is of fundamental importance within our present understanding of quantum field theory. Quantum corrections, mainly generated through the virtual production of particle-antiparticle pairs, modify the values of the bare couplings in a way that depends on the energy scale. This is a very important effect, which, in the context of non-abelian gauge field theories (like the electroweak theory or QCD), is deeply related to the 1999 Nobel prizewinning work by ‘t Hooft and Veltman.
Gross, Politzer and Wilczek showed that in non-abelian theories quantum effects give rise to “asymptotic freedom”, in which the coupling decreases as the energy increases. Asymptotic freedom explains why high-energy experiments feel quarks as nearly free particles, while at low energies they are strongly confined within hadrons. The tau provides the lowest-energy scale where a very clean measurement of the strong coupling can be performed, which gives an opportunity to test asymptotic freedom in a quantitative way. Using the theoretically predicted dependence of αs on energy, the measurement of αs at the tau mass can be translated into a prediction of αs at the Z mass scale: 0.1208 ± 0.0025. This value is in close agreement with the direct measurement from hadronic Z decays, and has a similar accuracy.
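The evolution between the two scales can be illustrated with the leading-order renormalization group equation (a one-loop sketch with assumed quark-threshold scales; the published number uses higher-order running, which brings the result down to about 0.121):

    # One-loop running of the strong coupling from the tau mass to the Z mass (illustrative).
    from math import log, pi

    def run_alpha_s(alpha, mu_from, mu_to, n_flavours):
        """Evolve alpha_s between two scales with a fixed number of active quark flavours."""
        b0 = 11.0 - 2.0 * n_flavours / 3.0
        return 1.0 / (1.0 / alpha + b0 / (2.0 * pi) * log(mu_to / mu_from))

    alpha = 0.345                                 # measured at the tau mass, 1.777 GeV
    alpha = run_alpha_s(alpha, 1.777, 4.2, 4)     # four active flavours up to the b-quark threshold
    alpha = run_alpha_s(alpha, 4.2, 91.2, 5)      # five active flavours up to the Z mass
    print(f"alpha_s(M_Z) ~ {alpha:.3f}")          # roughly 0.13 at one loop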
Tau decays that result in an even number of pions have also been used to measure the hadronic vacuum polarization effects associated with the photon. It is possible, therefore, to estimate how the electromagnetic fine structure constant is modified at LEP energies. The uncertainty of this parameter is one of the main limitations on the extraction of the Higgs mass from LEP/SLD data. From the ALEPH data, the Orsay group is able to reduce the error of the fitted log(MH) value by 30%.
The same tau data can pin down the hadronic contribution to the anomalous magnetic moment of the muon. Recent ALEPH and CLEO analyses have improved the theoretical prediction by setting a reference value to be compared with the forthcoming measurement of the E821 experiment, which is running at Brookhaven.
About 3% of tau decays produce a strange quark. The four LEP experiments have investigated these decays. In particular, ALEPH has analysed kaon production in tau decay and the associated distribution of the final hadrons. The difference between the dominant decay producing a down quark and that producing a strange quark is sensitive to the mass difference between the down and strange quarks. Because the former is much lighter, the ALEPH measurement can be translated into a good determination of the strange quark mass at the tau mass scale: 119 ± 24 MeV.
Quark masses are also dependent on energy; quarks weigh less at higher energies (and weigh more at lower energies). At 1 GeV, for instance, the strange quark mass becomes 164 ± 33 MeV. These measurements have important implications for the theoretical prediction of CP violation in kaon physics. Future tau analyses at the BaBar and BELLE detectors should provide a more accurate determination of the strange quark mass.
Tau decay data has been probed extensively for signatures of new physics beyond the Standard Model framework. Using its huge data sample, CLEO has looked for 40 forbidden tau decay modes. No positive signal has been found, which puts stringent upper limits (of a few parts per million) on the probability of many decays into final states without neutrinos. Anomalous electric and magnetic electroweak dipole couplings of the tau and possible CP-violating decay amplitudes have also been searched for, with negative results. Within the present experimental accuracy, the tau appears to be a standard lepton.
Tau decays are accompanied by neutrinos, so kinematical analysis of hadronic tau decays gives an upper limit on the tau neutrino mass: 18.2 MeV. However, nobody has been able to detect a tau neutrino so far. The DONUT experiment at Fermilab is expected soon to provide the first experimental evidence of the tau neutrino through the detection of its interaction with a nucleon via the produced tau.
This is an important goal in view of the recent neutrino results, which suggest tau-muon neutrino oscillations and neutrino mass squared differences of around 0.003 eV². These results could be checked by the new-generation long baseline neutrino experiments.
In 25 years we have seen remarkable progress in our knowledge of the tau and its neutrino. However, there is still much room for improvement, and, no doubt, the tau will continue to play an important role in the continuing search for new physics.
While we claim to understand more and more about how elementary particles interact, modern particle physics is increasingly being characterized by bigger and bigger experiments, which are searching for smaller and smaller effects. Most of what happens in high-energy scattering processes is put to one side and is deemed unfashionable. However, what is thought to be unfashionable by some is not necessarily uninteresting to everyone.
The largest single such process is elastic scattering – where the incoming particles bounce off one another. However, even this straightforward process is difficult to understand quantitatively. Central to this physics is the concept of the pomeron mechanism. What exactly is the pomeron? Our current understanding of these processes was summarized in an article by Sandy Donnachie in the April 1999 issue of CERN Courier.
Every two years, dedicated enthusiasts of this physics meet for the International Conference on Elastic and Diffractive Scattering. Initiated in 1985 by B Nicolescu and J Tran Thanh Van, the first meeting was held in the Chateau de Blois, France – hence the series has earned the name Blois Workshops. The latest Blois meeting was held at Protvino, near Moscow, this summer.
Fundamental dilemma
There appears to be a dilemma that involves our understanding of fundamental processes. The general features of the bulk of high-energy scattering processes are difficult to reconcile with quantum chromodynamics (QCD) – the field theory of quarks and gluons. Reports at major physics conferences around the world say that quantum chromodynamics “works perfectly well at high energies”. This is true. However, it has to be qualified by insisting on high-momentum transfers.
On the other hand, diffractive and elastic scattering are characterized by small momentum transfers where traditional QCD approaches are difficult. Nevertheless, theoreticians have tried bravely to understand the pomeron through QCD. After lengthy and cumbersome calculations, Russian theorist L Lipatov and his collaborators managed to obtain some insight into the “hard” pomeron.
Fortunately this arrived in time for the first results from the HERA electron-proton collider at DESY. The results indicated that reaction rates for processes involving the absorption of a virtual photon by a proton grow with energy much faster than, say, proton-proton reaction rates.
Many physicists believe that the hard pomeron is responsible for this rapid growth. With the situation becoming more complicated with more detailed QCD analysis, the elastic-diffractive community came to the Blois workshop in Protvino.
Results from H1 and ZEUS
Taking centre stage at the workshop were recent results from the two major collaborations at HERA: H1 and ZEUS. Complemented by hard-diffraction studies at the Tevatron, these results confirmed general trends that had been observed previously and posed a number of problems:
Why do reaction rates grow faster with energy when the virtual photon becomes even more virtual?
Is there a new, hard pomeron that is dependent on the particular scattering process?
Are there distinct pomerons (hard and soft) or is there one single pomeron that manifests itself in different ways depending on the kinematics?
Theoreticians presented a range of possible answers. N Tyurin (IHEP, Protvino), in the framework of a specific approach (U-matrix), argued that unitarity allows new possibilities (antishadowing) and that preasymptotic behaviour may mimic fast energy growth. A Kaidalov (ITEP), using the ITEP-Orsay quasi-eikonal approach, developed an interesting scenario of “undressing” the pomeron with a highly virtual photon. E Predazzi (INFN, Torino) gave an original view on ways in which one can detect unitarity effects in elastic and diffractive scattering. A young theoretician from IHEP, A Prokudin, insisted that all of the data from HERA do not rule out the “good old soft pomeron” and that there is no need for anything else.
This also found support in the excellent talk by M Kienzle (CERN), which was devoted to photon-photon interactions at LEP. The first experimental results on the total cross-sections were summarized.
However, the hard pomeron was not discredited at the conference. The discrepancies that were uncovered initiated a useful impromptu discussion session. One of the subjects that was covered was “What is the pomeron?” In the course of the discussion session it was realized that, very often, the same term is used to designate different contents.
The status of the perturbative pomeron was summarized in the mini-review on BFKL by one of the authors of this mechanism, E Kuraev from JINR (Dubna). In many cases the results are too preliminary for immediate use (or misuse) in phenomenological models. The work on this subject is, without doubt, extremely important.
Another feature of this physics is the odderon, which supplements the main pomeron mechanism and accounts for differences between, for example, proton-proton and proton-antiproton scattering. A special session devoted to the still elusive odderon was led by one of its most enthusiastic advocates, B Nicolescu.
The theoretical legitimacy of the odderon comes from recent QCD calculations. There are good prospects for its detection in exclusive processes at HERA. Drawing odderon conclusions from the latest CERN proton-antiproton diffractive scattering experiments seems very difficult and model dependent.
Further studies
High-energy elastic and diffractive studies are needed to resolve the picture. Included in these studies are the TOTEM project for CERN’s LHC collider, presented by S Weisz (CERN), and Brookhaven’s RHIC heavy-ion collider, presented by S Nurushev (IHEP, Protvino). Some specialized projects related to polarization phenomena were reviewed by A Krisch (Michigan).
The need for further experimentation at the LHC was also advocated by A Martin (CERN), to test the saturation of the fundamental Froissart bound on high-energy scattering behaviour and to see if dispersion relations continue to hold true or a breakdown of locality occurs.
The overall impression gained from the Protvino workshop is that elastic and diffractive scattering, despite being unfashionable, is very interesting and has a direct bearing on the most fundamental problems of this physics. A great variety of differing opinions were expressed (often mutually contradictory) and even disputes took place. This suggests a healthy future for research into elastic and diffractive scattering.
An international collaboration of radiochemists has carried out the first chemical study of the transuranic element bohrium (atomic number 107) using the Philips cyclotron at the Swiss Paul Scherrer Institute.
While the recent discoveries of elements 114, 116 and 118 have grabbed the scientific headlines, these experiments do not yield any information about chemical properties. Such information is a prerequisite for the classification of an element in the Periodic Table. From a chemist’s point of view, the Periodic Table ended with seaborgium, which has an atomic number of 106.
The bohrium experiment aimed to close the information gap and investigate whether the element belongs to Group VII (which includes the elements rhenium and technetium). However, the chemistry of such heavy elements is by no means straightforward. Relativistic effects can strongly distort the electronic structure of elements and, in turn, lead to unexpected deviations in chemical properties compared with lighter homologues in the Periodic Table.
In the first experiment, performed in spring 1999 at the 88-inch cyclotron at the Lawrence Berkeley National Laboratory (LBNL), a new, long-lived isotope of bohrium, with a mass number of 267, was produced in the reaction between neon-22 ions and a berkelium-249 target. This new isotope was found to have a half-life of about 20 s, which is long enough for chemical investigation.
Previous investigations of heavy-element compounds – those that gave sufficiently high reaction rates and unambiguously identified the element as a member of a given group – showed that the most promising way of confirming bohrium as a Group VII element was to study its oxychloride. In the case of Group VII elements, these molecules become volatile at much lower temperatures than those of the actinides (Group III) and the neighbouring transactinides (Groups IV-VI).
During a one-month period of beam time at PSI in September 1999, a 600 µg/cm² target of berkelium-249 was bombarded with 2 × 10¹² neon-22 ions/s. The target material was provided by the US Department of Energy and prepared on thin beryllium foils by LBNL.
Using a gas transport system, the products were continuously injected into an on-line gas-chromatography apparatus (OLGA), which was capable of measuring the volatility of the pre-formed oxychlorides. Confirmation of the presence of bohrium, with single-atom sensitivity, was achieved using a rotating wheel multidetector analyser (ROMA). The analyser was equipped with solid-state detectors to register both the alpha-particle emission and spontaneous fission events, which are characteristic of the decay of such heavy nuclei.
Using a total of just six detected atoms, it was shown that bohrium indeed forms volatile oxychlorides at a temperature of 180 ºC, close to the roughly 200 ºC expected for Group VII elements. This, together with the registered alpha-decay chains, starting at bohrium-267 and passing through dubnium-263 (atomic number 105) and lawrencium-259 (103) to the long-lived mendelevium-255 (101), shows that bohrium is an ordinary member of Group VII.
In a move that significantly extends the scope of European collaboration in particle physics, CERN is collaborating with the Italian National Institute of Nuclear Physics in a new project. A beam of high-energy neutrinos will be sent from CERN to detectors that will be built at the Italian Gran Sasso Laboratory, 730 km away from CERN, and 120 km from Rome.
The first historic Alpine crossing was General Hannibal’s march on Turin, in about 200 BC. The advent of modern communications brought a need for transalpine railway links. The first tunnel to breach the Alps was the 14 km Fréjus/Mont Cenis tunnel, the construction of which began in 1851. It was soon followed by the 14 km St Gotthard tunnel in Switzerland. Now physics crosses the Alps too.
However, neutrinos need no tunnel to cross a mountain range – most of them can pass through rock. Contemptuous of matter, a neutrino beam can even pass through the 13,000 km of the Earth and emerge on the other side. There is, however, a slight neutrino casualty rate that makes experiments with neutrinos possible. If there are plenty of neutrinos, it is probable that enough of them will interact to produce a detectable signal. Of the 10¹⁸ neutrinos to be sent to Gran Sasso annually from CERN, about 2500 will interact, en route, with each 1000 tons of target material.
It took about half a century to discover that the neutrino (nature’s most unpredictable particle) comes in three different kinds – electron, muon, or tau – according to the type of weakly interacting particle (lepton) they escort. Physicists are now convinced that these three varieties of neutrino are not immutable, as was first thought, but they subtly rearrange their lepton allegiance in flight. In physics language, the neutrinos oscillate from one kind to another as they travel.
Positive evidence for neutrino oscillations so far comes, overwhelmingly, from extra-terrestrial neutrinos: from the Sun or from the interactions of high-energy cosmic rays in the atmosphere. To probe these oscillations under controlled conditions requires synthetic neutrinos. These neutrinos are produced via the decay of high-energy particles, which are generated by beams in an accelerator. The oscillations depend on the distance between the neutrino source and the detectors – the baseline.
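In the simplest two-flavour picture the dependence on the baseline L and the neutrino energy E is explicit, P = sin²2θ · sin²(1.27 Δm²[eV²] L[km]/E[GeV]). A sketch of what this gives over the CERN to Gran Sasso distance, assuming maximal mixing and the mass-squared difference of about 3 × 10⁻³ eV² suggested by the atmospheric data (the real beam covers a spectrum of energies):

    # Two-flavour muon-to-tau oscillation probability over the CERN to Gran Sasso baseline.
    from math import sin

    L_KM = 730.0        # baseline, km
    DM2 = 3.0e-3        # assumed mass-squared difference, eV^2
    SIN2_2THETA = 1.0   # assumed maximal mixing

    def prob_mu_to_tau(energy_gev):
        return SIN2_2THETA * sin(1.27 * DM2 * L_KM / energy_gev) ** 2

    for e_gev in (5.0, 10.0, 20.0, 30.0):
        print(f"E = {e_gev:4.0f} GeV: P(nu_mu -> nu_tau) ~ {prob_mu_to_tau(e_gev):.3f}")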
For the new project, protons from CERN’s SPS synchrotron, with an energy of up to 450 GeV, will be focused on a target to produce pions and kaons. These particles will then be magnetically focused to point towards the Gran Sasso Laboratory. After about 1000 m, most of these pions and kaons will have decayed, producing electron- and muon-neutrinos. The remaining strongly interacting particles will be removed by a beam stop (which the neutrinos can hardly “see”). CERN will construct the neutrino source, while Gran Sasso will host the detectors and provide the infrastructure at the far end.
After an initial round of proposals for experiments to detect the neutrinos, two – OPERA and ICANOE – are well defined. OPERA will use the emulsion target techniques developed for the CHORUS neutrino experiment at CERN, and further refined for the DONUT study at Fermilab. While CHORUS used a mere 700 kg of emulsion, OPERA will use 200 tons of emulsion, which will be interspersed with thin lead plates. Vital to this work are Japanese emulsion technology and sophisticated automated emulsion scanning techniques, developed in Japan and in Europe.
ICANOE is based on the liquid argon technique for tracking and identifying particles that was developed for the ICARUS neutrino detector. It will be supplemented by the NOE magnetized spectrometer. ICANOE will use 9.3 kilotons of liquid argon in four modules, which will be separated by NOE spectrometers.
The main goal is to see the appearance of tau neutrinos in Gran Sasso. These particles will not be present in the beam when it leaves CERN, but may be produced en route.
Data taking for these new experiments is planned to begin in May 2005. In addition to existing equipment at CERN, two-thirds of the 71 million Swiss francs needed for the project is being provided by the Italian National Institute of Nuclear Physics (INFN). So far voluntary contributions from Belgium, France, Germany and Spain have been announced.
Long baseline neutrino studies are under way in Japan, where the KEK Lab sends particles to the 250 km distant Superkamiokande detector. In the US, the MINOS project is sending particles from Fermilab to detectors in the Soudan mine, 730 km away. The main thrust of these studies is to chart the disappearance of neutrinos that were initially present in the beam.
With construction work for CERN’s LHC and its big detectors under way (and en route to the scheduled commencement of the programme in 2005) a workshop, Standard Model Physics (and more) at the LHC, was held at CERN. The first plenary meeting took place on 25-26 May 1999. The second, and final, meeting was held on 14-15 October 1999.
The goal of the workshop, not evident from its subdued title, was to promote physics studies at the LHC beyond the main focus of the LHC physics programme. The physics community is very much aware of the need to ensure that physics remains lively during the long years of LHC machine and detector construction. Exploring additional possibilities – beyond the spearhead search for the long-awaited Higgs particle and other new objects, especially supersymmetric ones – was, therefore, very much to the fore.
To attack specific physics objectives, working groups were set up on QCD, electroweak interactions, top quark physics, and beauty physics. Subgroups were formed on the production of B-particles and on the decays of B-particles. In each group, theorists, together with experimentalists from ATLAS, CMS and LHCb, acted as convenors. Heavy-ion physics held its own workshop.
The meetings attracted many participants, including a substantial number of theorists from outside CERN. The organizers were happy to see distinguished visitors from the US, some of whom took an active role as convenors and many of whom presented talks. The participation of experimentalists, on the other hand, reflected the fact that most of them were either busy building LHC detectors or still working on current experiments. Overall, the experience was positive, and similar meetings will take place in the future. The plenary meeting in October presented the preliminary results of the workshop; a more complete and final version will soon be published.
In one year the LHC should produce between 10 and 100 million top quarks and antiquarks, so the top quark mass will be measured with an unmatched accuracy of 1-2 GeV. In the rare decays of this quark, stringent limits can be set on flavour-changing couplings. Single top quark production through weakly charged currents was studied in detail, along with the new possibility of measuring top to bottom quark transitions, and of studying the top quark polarization.
The electroweak working group presented some remarkable theoretical calculations. These included radiative corrections to single W or Z boson production (with effects of the same order as the expected experimental errors), and the QCD and electroweak corrections to the associated production of two bosons. On the experimental side, it was shown that the W mass can be measured to an accuracy of about 15 MeV, better than in any previous experiment. However, the precision on the electroweak mixing angle, approximately 0.00025, is not competitive with what has been achieved at electron-positron colliders. On the search for new physics, the capabilities of the LHC for contact interactions, new vector bosons, anomalous gauge couplings, strongly interacting WW scattering and so on were reviewed.
For beauty production, benchmark calculations, b-tagging, measurement accuracies and efficiencies, quarkonia, and small-x structure (relevant for b-production predictions) were among the topics addressed. The goal of the B-decay working group was to provide a more complete picture of the B-physics performance and to search for new strategies in the quest for CP violation in B-decays. Studies of promising new channels, and/or methods, included: B-decays into charmed D-meson pairs; the J/psi plus Ks; pion and kaon pairs; and three pions. The DD channel here looks very promising.
The QCD working group presented a thorough report on quark-gluon structure, a discussion on jet definitions and algorithms, strategies for systematic computations of radiative corrections, and related results on some specific processes, for example, those that involve photons.
In normal beta decay, a nuclear neutron transforms into a proton by emitting an electron and an antineutrino. A far more exotic possibility is two simultaneous beta decays in which the antineutrino emitted by one is absorbed by the other. The resulting isotope would have two more protons, two electrons would be emitted, and no neutrinos would emerge. This can only happen if the neutrino and its antiparticle are indistinguishable from one another (a Majorana particle), as opposed to a conventional Dirac neutrino, whose particle and antiparticle are distinct.
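For the germanium isotope used in the experiment described below, the two double beta decay modes can be written as (standard notation; the two-neutrino mode is allowed in the Standard Model, the neutrinoless mode is not):

\[ ^{76}\mathrm{Ge} \to {}^{76}\mathrm{Se} + 2e^- + 2\bar\nu_e \quad (2\nu\beta\beta), \qquad ^{76}\mathrm{Ge} \to {}^{76}\mathrm{Se} + 2e^- \quad (0\nu\beta\beta), \]

and it is the absence of the two antineutrinos in the second mode, visible as a sharp line at the end point of the summed electron energy spectrum, that would signal a Majorana neutrino.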
Using an 11.5 kg sample of germanium-76, in the Gran Sasso Laboratory, the Heidelberg-Moscow search has now established that neutrinoless double beta decay, if it happens at all, does so with a lifetime of at least 10²⁵ years: a world record. The corresponding effective Majorana neutrino mass (a superposition of the different neutrino mass eigenstates) has to be less than 0.2 eV.
This complements information obtained from solar and atmospheric neutrino oscillation experiments, which determine differences in neutrino mass eigenstates. The new mass limit has implications for the neutrino mass matrix, and for cosmology in the Majorana neutrino scenario. In addition to limits on the neutrino mass, the experiment places limits on other new physics effects.
A full report will appear in a forthcoming issue of CERN Courier.