Looking at physics in a rarefied atmosphere


Rare processes and the violation of CP symmetry provided the focus for the recent Theory Workshop at the DESY Laboratory in Hamburg. As emphasized by Chris Quigg (Fermilab) in the opening lecture, discrete symmetries (box 1) and their violation, in particular CP symmetry, play an important role in a deeper understanding of nature at both very small and large distances. Studying these violations in rare processes may give hints to what lies beyond the Standard Model of particle physics.

In the Standard Model, CP violation is attributed to quark transitions, which are described by the three-by-three Cabibbo-Kobayashi-Maskawa (CKM) matrix. The classic effect in the decays of neutral kaons into two pions, first seen in 1964, is attributed to "indirect" CP violation through the mixing of the neutral kaon and its antiparticle (box 2). This type of violation is characterized by a small parameter, ε, which is measured to be roughly 2.3 × 10⁻³. However, the Standard Model also allows "direct" CP violation, governed by quark decay mechanisms involving the exchange of the sixth, "top", quark.

The theoretical status of these effects was summarized by Matthias Jamin (Heidelberg). Refined gluon corrections and a more accurate top quark mass (174 ± 5 GeV) from the CDF and D0 collaborations at Fermilab have both considerably improved the evaluation of CP violation parameters. While indirect CP violation in the Standard Model is consistent with experimental data, large uncertainties currently preclude a precise comparison of direct and indirect CP violation.

Despite intensive efforts by theorists, estimates of this ratio (known in the trade as ε′/ε) by various groups range between 5 × 10⁻⁴ and 30 × 10⁻⁴.

As discussed by Guido Martinelli (Rome), Laurent Lellouch (Marseille) and Amarjit Soni (Brookhaven), advanced numerical lattice calculations could considerably improve the estimates in the coming years. However, as stressed by Jamin, it is important to develop further the existing analytical tools in order to confront the lattice results.

Difficult measurements

The experimental situation for ε′/ε, described by Martin Holder (Siegen), has improved considerably in the past two years thanks to measurements by the KTeV collaboration at Fermilab, (28 ± 4) × 10⁻⁴, and the NA48 collaboration at CERN, (14 ± 4) × 10⁻⁴. These measurements confirm the earlier result of the NA31 collaboration at CERN, (23 ± 7) × 10⁻⁴, that ε′/ε is not zero, confidently ruling out "superweak" models in which all CP violation arises through mixing.

Also taking into account the older, inconclusive measurement by the E731 collaboration, (7 ± 5) × 10⁻⁴, one arrives at a world average of (19 ± 3) × 10⁻⁴. In view of the spread in the values obtained by the various experimental groups, the resulting small error should be treated with caution.
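As a rough check, a simple inverse-variance weighted combination of the four quoted results (the standard textbook recipe, not the full procedure used for published averages, which typically inflate the error to account for the spread) gives a value of this size:

```python
# Inverse-variance weighted average of the quoted epsilon'/epsilon
# measurements, in units of 1e-4: KTeV, NA48, NA31 and E731.
values = [28.0, 14.0, 23.0, 7.0]
errors = [4.0, 4.0, 7.0, 5.0]

weights = [1.0 / e ** 2 for e in errors]
mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
sigma = sum(weights) ** -0.5
print(f"({mean:.0f} +/- {sigma:.0f}) x 10^-4")  # -> (18 +/- 2) x 10^-4
```

The naive combination lands slightly below the quoted (19 ± 3) × 10⁻⁴, illustrating how the treatment of the spread between experiments matters.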

Within the next few years the experimental situation should improve considerably through the new data from KTeV and NA48, and in particular from the KLOE experiment at DAFNE in Frascati.

New channels

While the theoretical estimates of ε′/ε in the Standard Model are compatible with the experimental data within the theoretical and experimental uncertainties, there is still a lot of room for new physics. As discussed by Luca Silvestrini (Rome), important new contributions are still possible within supersymmetric (box 3) extensions of the Standard Model.

The present bounds on CP violation in various processes already give very important limits for the masses and weak couplings of supersymmetric particles. In particular, the pattern of the masses of squarks – the supersymmetric partners of quarks – is severely restricted. However, in spite of these constraints, large supersymmetric effects in CP-violating processes are possible.

It is important to study CP-violating decays and CP-conserving rare decays, which are theoretically far cleaner than those traditionally studied. As stressed by Gino Isidori (Frascati), a "gold-plated" decay in this respect is that of the long-lived kaon into a neutral pion, neutrino and antineutrino, proceeding almost exclusively through direct CP violation. The small branching ratio predicted within the Standard Model (3 × 10⁻¹¹) and the elusive final state of neutrinos and a neutral pion make the measurement of this decay a formidable challenge.

On the other hand, in certain supersymmetric models the branching ratio could be an order of magnitude higher. Most importantly, the newly approved KOPIO experiment at Brookhaven should be able to measure this decay in the first half of this decade even if the branching ratio is only at the predicted level. There are also plans to measure this decay at Fermilab and at KEK in Japan. Optimism about measuring this branching ratio is strengthened by the observation in 1997 of one event of the CP-conserving decay of the charged kaon into a charged pion, neutrino and antineutrino by the E787 experiment at Brookhaven.

The branching ratio for this rare decay, as of the end of 2000, is around 1.5 × 10⁻¹⁰ – slightly higher than, but fully compatible with, expectations. As this decay is also theoretically very clean, the improved measurements of its branching ratio expected in the coming years at Brookhaven, and later at Fermilab, will provide powerful constraints on the elements of the CKM quark transition matrix and on the parameters of new physics.

Beauty quark

In the coming years, some of the most promising tests of the Standard Model and its extensions will come from studying the decays of B-mesons (containing the fifth – "beauty" – quark) into strange hadrons and either a photon or a lepton-antilepton pair. As reviewed by Christoph Greub (Bern), refined calculations in recent years of gluon corrections and of new physics contributions, in particular in supersymmetric models, will allow stringent tests of the theory once the experimental branching ratios become precise.

The channel with the final photon was observed in 1993 by the CLEO collaboration at Cornell, and these data have since been improved considerably by CLEO and by the ALEPH collaboration at CERN. Recently the efforts to measure this channel precisely have been joined by the new B-factories at SLAC (Stanford) and KEK (Japan), so that a rather precise branching ratio should become available within a few years.

However, as emphasized by Greub, the available data, while consistent with current expectations, already put powerful constraints on supersymmetric extensions. The second channel, with a muon-antimuon pair in the final state, should be observed this year at the B-factories and at Fermilab. For new physics, this is even more interesting than the photon-yielding decay.

Larger effects


While CP violation has so far been observed only in kaon decays, where the effects are rather small, much larger effects are predicted for B-mesons. As stressed by Roy Aleksan (Saclay), Yosef Nir (Weizmann) and Robert Fleischer (DESY), there are several decays where measurements should fix CP violation with almost no hadronic uncertainties. Here the central role is played by the "gold-plated" decay of the Bd-meson (a bound state of a b antiquark and a down quark) into a short-lived kaon and a J/psi (a charm quark-antiquark bound state). The corresponding CP-violating asymmetry is parametrized by an angle, β, in the so-called unitarity triangle, which is related to the CKM matrix.

The most recent data – reviewed by Aleksan – from the BaBar and Belle experiments at SLAC and KEK respectively give values of sin 2β somewhat lower than expected, and lower than earlier measurements by the CDF collaboration at Fermilab. The three experiments taken together give sin 2β = 0.42 ± 0.24, compared with the Standard Model expectation of sin 2β = 0.7 ± 0.15 discussed by Nir and Ali (DESY). Within the experimental uncertainties the measured value is thus consistent with the expectation, which is itself subject to theoretical uncertainties, being derived using inputs that include the observed CP violation in kaon decays.

On the other hand, as stressed by Ali, Nir and Silvestrini, improved measurements of sin 2β significantly below 0.5 would signal the presence of new physics contributions – in particular new CP-violating phases.

The large variety of CP-violating asymmetries in B-decays should allow for decisive tests of the Standard Model and its extensions. Other CP-violation parameters could be measured, initially by BaBar and Belle in the coming years, as reviewed by Aleksan. The decays of the heavier Bs meson, containing a strange quark, will open up more possibilities. These measurements can only be done by the dedicated experiments LHCb at CERN and BTeV at Fermilab around the year 2005, and several strategies for them were reviewed by Fleischer. However, he also emphasized that the two-body decays of Bd mesons into pions and kaons, first measured by CLEO at Cornell and now studied by BaBar and Belle, are likely to provide very valuable constraints despite some hadronic uncertainties.

Mapping chaos in particle revolutions


At first glance, any close association between the planets of the solar system – huge masses of rock, liquid and gas gently guided by gravity through the vast emptiness of space – and the mad traffic of tightly bunched particles in a circular accelerator, crushed together by fierce radiofrequency and magnetic fields, could hardly seem less likely.

Nonetheless, the dynamics of planets moving through our solar system and particles moving in accelerators do share many similar features. Both demand an analysis of the evolution of a dynamic system over a very long time – up to 1 billion revolutions for both the solar system and the Large Hadron Collider (LHC) at CERN. In addition, these systems can be studied to a first approximation as though they were non-dissipative (although radiation damping is not negligible when synchrotron radiation becomes significant, as in electron storage rings).

Over the last 10 years the relatively new technique of frequency map analysis has turned out to be very effective when applied to the analysis of numerical simulations in physical systems – particularly those with three or more degrees of freedom – which may be as large as the solar system or even an entire galaxy, or as small as the particles in an accelerator.

The frequency mapping technique was recently applied for the first time to measured rather than simulated electron trajectories in a storage ring, at the Advanced Light Source (ALS) at the Lawrence Berkeley National Laboratory. The aim was to reveal the dynamics of an actual particle beam.


Chaotic motion

The story of frequency map analysis began in 1989 when Jacques Laskar (Bureau des Longitudes, Paris) demonstrated that the motion of the solar system is chaotic (Laskar 1989). He showed that the separation between two orbits with similar initial conditions will diverge exponentially over time (e.g. the distance between the orbits will increase by a factor of 10 every 10 million years).

In practice, this means that, although it is possible to make a useful prediction for the evolution of the solar system over 10 million years, it is essentially impossible to ascertain what the planetary positions will be after 100 million years have passed.
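The arithmetic behind this loss of predictability is simple; the sketch below assumes the quoted tenfold error growth per 10 million years, and a purely illustrative 15 m initial uncertainty in the Earth's position:

```python
def amplification(years, tenfold_every_years=10e6):
    """Growth factor for an uncertainty that increases tenfold
    every 10 million years, as quoted for the solar system."""
    return 10.0 ** (years / tenfold_every_years)

# After 10 Myr an initial uncertainty has grown only tenfold...
print(amplification(10e6))
# ...but after 100 Myr it has grown by ten orders of magnitude,
print(amplification(100e6))
# turning an illustrative 15 m position error into roughly the
# Earth-Sun distance:
print(15 * amplification(100e6) / 1e9, "million km")
```

The 15 m figure is only a stand-in for "any realistic measurement error"; the point is that no conceivable precision survives ten orders of magnitude of amplification.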

So will the Earth collide with Venus or Mars within the next few billion years? Thankfully, this possibility seems to have been ruled out (Laskar 1990), but it is difficult to understand more precisely the behaviour of this chaotic system with a large number of degrees of freedom. For this reason, Laskar began developing the frequency map analysis technique.

Frequency map analysis involves looking at the dynamics in frequency rather than configuration space. Any regular and quasi-periodic motion appears as a fixed point in frequency space, where it will be characterized by the values of its fundamental frequencies (one per degree of freedom).

By contrast, irregular trajectories will be subject to some diffusion in frequency space (the frequencies will change with time). The map from initial conditions to points in frequency space is regular in regions where the trajectories are regular, and irregular where the trajectories are chaotic.

The full dynamics of the system can thus be analysed by varying the initial conditions (in position or momentum) of the system and computing the fundamental frequencies for each set of initial conditions. To accomplish this, a numerical integration of the equations of motion and a fast-converging modified Fourier technique can be used to obtain a quasi-periodic approximation of the calculated trajectories.
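As a rough illustration of the idea (a plain windowed FFT rather than Laskar's refined technique, which sharpens the estimate iteratively and converges much faster), the fundamental frequency of a quasi-periodic signal can be read off the peak of its Fourier spectrum:

```python
import numpy as np

def fundamental_frequency(signal):
    """Estimate the dominant frequency (in cycles per sample) of a
    quasi-periodic signal from the peak of its windowed spectrum."""
    n = len(signal)
    windowed = (signal - signal.mean()) * np.hanning(n)
    spectrum = np.abs(np.fft.rfft(windowed))
    return np.argmax(spectrum) / n

# A trajectory with two fundamental frequencies, one per degree of
# freedom (the frequency values are arbitrary, for illustration only):
t = np.arange(4096)
x = np.cos(2 * np.pi * 0.2137 * t) + 0.1 * np.cos(2 * np.pi * 0.31 * t)
print(fundamental_frequency(x))  # close to 0.2137
```

Repeating this for many sets of initial conditions, and watching whether the extracted frequencies stay fixed or drift, is what builds up the frequency map.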

Adapting to the accelerator


Frequency map analysis was applied to particle accelerators for the first time in 1992, when Scott Dumas, a mathematician from Cincinnati, visited the Bureau des Longitudes to discuss some of the difficulties that arise in accelerator dynamics. Laskar realized that his new technique could very well be adapted to accelerators, and the following year Laskar and Dumas published a letter in Physical Review Letters applying the technique to a simple accelerator model that had previously been studied extensively by other means (Dumas and Laskar 1993).


In a circular accelerator, focusing magnetic fields cause particles to oscillate transversely about the closed, central trajectory. The number of oscillations in one turn around the ring is called the betatron tune and can be different in the horizontal and vertical directions. Additionally, the oscillations are nonlinear and the oscillation frequencies change with the transverse amplitude of the particles.

In this context the fundamental frequencies extracted from the frequency map analysis correspond to the tunes for each trajectory. The amplitude of the transverse particle motion is mapped into frequency space by associating a pair of fundamental frequencies with the horizontal and vertical transverse amplitudes. This frequency map is displayed in a coordinate system with the horizontal and vertical tunes as the axes.

From the nominal working point corresponding to small transverse amplitude oscillations, the frequencies shift over a wide area as the amplitudes of the betatron oscillations increase. The motion of electrons with large transverse amplitudes may be influenced by resonances. Damaging resonances show as distortions in the map.
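These amplitude-dependent tunes can be explored with a minimal numerical sketch. It uses the 2D Hénon map, a standard toy accelerator model (a linear rotation plus one sextupole-like kick) rather than a real lattice, and the linear tune of 0.205 is an arbitrary choice:

```python
import numpy as np

def track_henon(x0, mu, nturns=4096):
    """Track a particle through a toy one-turn map: a linear rotation
    by phase advance mu plus a single sextupole-like kick (the 2D
    Henon map, a simplified accelerator model, not a real lattice)."""
    x, p = x0, 0.0
    xs = np.empty(nturns)
    c, s = np.cos(mu), np.sin(mu)
    for n in range(nturns):
        xs[n] = x
        kicked = p + x * x          # nonlinear kick on the momentum
        x, p = c * x + s * kicked, -s * x + c * kicked
    return xs

def tune(xs):
    """Betatron tune = dominant frequency of turn-by-turn positions."""
    spec = np.abs(np.fft.rfft((xs - xs.mean()) * np.hanning(len(xs))))
    return np.argmax(spec) / len(xs)

mu = 2 * np.pi * 0.205              # linear tune of 0.205 (arbitrary)
for x0 in (0.05, 0.1, 0.15):
    # The extracted tune drifts away from 0.205 as amplitude grows.
    print(x0, tune(track_henon(x0, mu)))
```

Doing this in both transverse planes, over a grid of initial amplitudes, yields a frequency map of the kind described above.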

This publication led Laskar to a 1994 conference on Nonlinear Dynamics in Particle Accelerators held in the Tuscan town of Arcidosso, where he presented a model of Berkeley’s ALS, a 1.9 GeV electron storage ring designed to generate synchrotron radiation with the highest brightness in the soft X-ray region.

David Robin, now ALS Accelerator Physics Group leader, was in the audience and immediately invited Laskar to a working-group meeting later in the conference. This resulted in an ongoing collaboration between the Astronomie et Systèmes Dynamiques group at the Bureau des Longitudes and ALS accelerator physicists, with the goal of investigating this new application of frequency map analysis to the study of ALS dynamics.

At first the collaborators applied frequency map analysis to numerically generated data from a model. In these simulations the frequency maps turned out to be very sensitive to the distribution of magnetic-field errors in the model.

Even for a machine with very small field errors, there was a striking difference in the frequency map compared with the frequency map of an ideal machine. Smaller stable areas and larger chaotic regions resulting from the errors emphasized the importance of accurate machine models.

During this period, accelerator physicists were also applying the frequency map analysis technique to several other synchrotron radiation storage rings, including those at the Stanford Synchrotron Radiation Laboratory (Stanford Linear Accelerator Center), the European Synchrotron Radiation Facility (Grenoble) and the Laboratoire pour l’Utilisation du Rayonnement Electromagnétique (Orsay), as well as the recently approved French synchrotron source, SOLEIL.

In the high-energy physics field, the frequency map analysis technique is finding a role in the evolution of the design of the LHC (Papaphilippou 1999). To take one example, when making changes in the magnet lattice, accelerator physicists want to know with considerable confidence that it will be possible to accumulate particles in the storage ring during the rather lengthy injection process (10 million turns round the ring). Frequency map analysis provides a global view of the resonance structure and other features of the phase space, thereby enabling accelerator physicists to avoid areas which might be dangerous.

Constructing experimental frequency maps based on measured beam oscillations and using them to optimize performance takes frequency map analysis beyond simulations to operating accelerators. A step in this direction has now been taken at the ALS (Robin et al. 2000), using two tools to provide the required data.

Charmed particles in Beijing


The world’s largest data sample of J/psi particles produced directly from electron-positron annihilation has been accumulated with the BESII spectrometer at the BEPC collider in Beijing. The discovery of this particle in 1974 heralded a revolution in particle physics. In the remarkable progress made since then, the J/psi, composed of a charmed quark bound to a charmed antiquark, continues to provide a useful benchmark.

The BES collaboration successfully completed a scan of R (the ratio of hadron to muon pair production) over 85 scan points in the important collision energy region of 2-5 GeV in 1999. Precision R values in this energy region are crucial for the accurate determination of the running electromagnetic coupling constant alpha at the Z mass and for the interpretation of the muon (g-2) measurement at Brookhaven, which are essential for precision tests of the Standard Model and for narrowing the mass window in Higgs particle searches.

After finishing the R scan, the collaboration turned its attention to other charmonium (charmed quark-antiquark bound states) physics with the goal of accumulating 5 × 10⁷ J/psi events, about six times larger than the world's largest existing J/psi sample.

The detector was turned on in mid-November 1999. Data-taking started in December 1999 and ended in mid-May with about 2.2 × 10⁷ J/psi events accumulated, as planned.

In addition, special data runs were taken at 3.0 GeV and at the J/psi resonance peak for the study of quantum electrodynamics background and the trigger efficiency. Data runs were also taken at the peak of the psi(2S) resonance to help in the determination of the total number of J/psi events.

The J/psi run shows that with BESII and the upgraded BEPC, both the hadronic rate and the integrated luminosity accumulated per day have been increased by a factor of two to three compared with BESI. Also, the upgraded barrel time-of-flight system with a time resolution of about 180 ps significantly improves particle identification.

All the accumulated data have been reconstructed. Preliminary physics analysis shows that the data quality is excellent and that the detection efficiency is higher than for the J/psi data collected with the BESI detector.

BES continued accumulating J/psi events in the autumn and hopes to reach a total of about 5 × 10⁷ events before next summer.

With this world's largest J/psi event sample, the BES collaboration can systematically study light hadron spectroscopy and excited baryonic states such as the N*, Λ* and Σ*; search for glueballs, chiral partners and exotic states; and probe lepton flavour violation and CP violation using J/psi decays. The collaboration is very excited about the physics that can be done with this unique huge sample.

Physics as you lake it


At the end of August nearly 100 physicists met in Ambleside in England’s beautiful Lake District for the Photon 2000 conference organized by Lancaster University. Held roughly every two years, this conference concentrates on theoretical and experimental advances in the understanding of the high-energy behaviour of the photon, particularly the way it interacts with quarks – the production of matter from light.

In the first talk, Maria Krawczyk pointed out that this year is the 100th anniversary of Planck's quantization of electromagnetic phenomena, which led to the concept of the photon.

The large volume of data coming from CERN’s LEP electron-positron collider in the last few years provides a unique tool to study the collisions of high-energy photons, and all four LEP experiments presented exciting new results in this field.

These results are complemented by the study of photons in their collisions with protons in data coming from the two experiments at the HERA electron-proton collider at DESY, and the very high volumes of data from the CLEO detector at Cornell’s CESR electron-positron ring which provide precise measurements at lower energies.

At the end of the conference delegates looked forward to their “dream machine”, a dedicated high-energy photon collider, which is one option available for the new generation of linear electron-positron colliders now being planned.

CAPP off at music and physics festival


In July 1994 Swedish musical personality Martin Engstrom launched the Verbier Festival and Academy in Valais, Switzerland, which has gone on to become a regular feature of the late-July arts calendar. This festival has attracted prominent figures from the musical and theatrical world, such as Zubin Mehta, Isaac Stern and Isabelle Huppert. It is now a valuable step on the ladder for aspiring young artists.

CERN physicist André Martin and his wife Schu knew Aspen, in the Rocky Mountains, where there is a very successful annual summer symbiosis of music, mountains and physics, with the famous Music Festival on one hand and the Aspen Center for Physics on the other. It was tempting to propose scientific activities in conjunction with the Verbier Music Festival.

In the summer of 2000 this was realized for the first time through a conference entitled CAPP (Cosmology and Astroparticle Physics) 2000, organized by Ruth Durrer, Juan Garcia-Bellido, André Martin and Misha Shaposhnikov. About 100 participants came from as far afield as Australia and Korea, to Verbier’s “Centre Culturel du Hameau”.

Prestigious lecturers also came from all over the world, and the programme covered both theoretical and experimental physics. One focus was the extremely accurate measurements of the structure of the cosmic microwave background radiation by the balloon experiments Boomerang and Maxima. This, combined with new measurements of the Hubble expansion parameter, leads to a picture of a universe that is spatially flat (Ω = 1), with an accelerating expansion, a non-vanishing cosmological constant and an age of between 14 and 18 billion years, fitting most inflationary models.

Ω = 1 is made up of Ω = 0.3 for matter and 0.7 for the vacuum. The former retains a need for invisible "dark matter", which is also needed to explain the observed rotation of galaxies. Although definite cases of gravitational lensing have been seen (see Not enough stellar mass objects to fill the galactic halo?), their interpretation does not seem to fit with the Massive Compact Halo Object (MACHO) picture.

On the other hand the Weakly Interacting Massive Particle (WIMP) interpretation of dark matter is still possible, which would also be an indication in favour of supersymmetry.

Among the projects for the future, more refined detectors of the cosmic microwave background such as the Planck mission and the Virgo project for detecting gravitational waves were described. Tremendous progress has been made in recent years thanks to the new instruments, and this looks set to continue. In particular the continued detailed analysis of fluctuations in the cosmic microwave background radiation will lead to a further confirmation of the inflationary models.

Returning to the music festival, a public lecture, "L'Univers, passé, présent et futur", given in the "Café Schubert" (where musicians attending the festival are habitually interviewed), was well received by an audience that included Swiss Federal Councillor Pascal Couchepin.

Supersymmetry physics on (and off) the brane


For all its spectacular experimental successes, the Standard Model (SM) fails to give us solutions to such basic problems as why there are three copies (generations) of quarks and leptons, why there are three different gauge forces (the strong, weak and electromagnetic, with differing strengths), and how gravity should be included in a consistent quantum theory along with the gauge forces.

Supersymmetry (SUSY) is the leading contender for physics beyond the SM. Although SUSY has been around for some time and has so far had no direct experimental support, indirect experimental hints and progress in understanding the theoretical possibilities allowed for in a SUSY world have led to a new feeling of excitement. With these new ideas on the market, the Supersymmetry 2000 (SUSY2K) conference, held recently at CERN, attracted a large crowd and showed how the new SUSY ideas can help.

SUSY makes precise predictions for the quantum numbers and selection rules of many new particles. What is much more difficult is predicting the masses of these additional supersymmetric particles. The reason is that SUSY must be a so-called "broken", or hidden, symmetry, and the mechanism that communicates SUSY breaking to the SM particles and their superpartners is inevitably indirect, poorly constrained and poorly understood.

As a comparison, the unification of weak and electromagnetic gauge forces in the electroweak sector is also “broken” or hidden – with the Higgs mechanism leading to very different masses for the electromagnetic photon and the W and Z carriers of the weak force.

For SUSY, such a direct coupling to the sector that breaks SUSY (analogous to the direct coupling of the electroweak force to the Higgs) is not possible, because such a coupling leads to sum rules for the masses of the unobserved superpartners (see box) that are definitively excluded. Thus an indirect communication of SUSY breaking must be employed.


Mass communication

Many attractive new communication mechanisms for SUSY breaking were reviewed at the SUSY2K conference. In “archetypal” SUSY breaking, gravity takes on the role of communicating between the SUSY breaking sector and the conventional world, and, until recently, this gravity-mediated SUSY breaking was considered as the most plausible possibility.

However, during the last few years many innovative new mechanisms have been proposed – “gauge mediation” (with heavy messenger particles communicating the breaking), “anomaly mediation” (via symmetries that are broken at the quantum but not at the classical level), and “gaugino mediation” (when the SUSY partners of the SM gauge bosons take on the mediating role).

These different mechanisms have characteristic mass spectra and experimental signatures. Supersymmetry might not manifest itself as neutrino-like invisible events detectable only through “missing” energy, but in several other ways, for example in events producing additional photons or stable charged particles, or models with supersymmetric particles that are nearly degenerate in mass. Experiments at LEP and elsewhere have been looking for these various possibilities, but without any luck so far (see “Particles and sparticles” below).

Particles and sparticles

Standard Model (SM) particles come off the shelf in two kinds – fermions (matter particles), such as quarks, electrons and muons, and bosons (force carriers), such as photons, gluons, Ws and Zs. A feature of SUSY is that every matter particle (quark, electron…) has a boson counterpart (squark, selectron…) and every force carrier (photon, gluon…) has a fermion counterpart (photino, gluino, chargino, neutralino…).

This doubling of the spectrum is due to the fact that SUSY is a quantum-mechanical enhancement of the properties and symmetries of the space-time of our everyday experience – such as translations, rotations and Lorentz boosts.

SUSY introduces a new form of dimension – one that is only defined quantum mechanically, and does not possess the classical properties we associate with a new dimension, such as continuous “extent”.

The doubling of the particle population can fix several of the problems afflicting today’s SM, for instance why the different forces – gravity, electromagnetism, weak and strong – appear to operate at such vastly different and apparently arbitrary scales (the “hierarchy problem”). The extra particles provided by SUSY are also natural candidates for exotica such as the missing “dark matter” of the universe.


Problem solving

One of the theoretical motivations for these new models is the “flavour problem”, namely that of understanding the relations between the different generations of particles. Experiments observe many approximate flavour symmetries in today’s non-SUSY SM; however, these symmetries are usually violated in typical gravity-mediated SUSY breaking schemes.

Another motivation for some of the new communication ideas (anomaly and gaugino mediation) has been provided by new ideas for physics beyond the SM, such as extra dimensions beyond those accessible to us and multidimensional “branes” (see Superstrings, black holes and gauge theories).

Many new ideas have also been stimulated by the exact non-perturbative results that have allowed theorists to construct explicit models of SUSY breaking, and motivated attempts to merge SUSY breaking with the visible sector. One approach involves composite (sub-quark) models, in which some of the SM states are composites of a strongly interacting sector.

Extra dimensions – are we the scum of the universe?

A natural focus of the workshop was extra-dimensional models, in which the world we experience is complemented by extra (but to us invisible) spatial dimensions. These models have the common feature that our SM world is realized as localized degrees of freedom living on a generalized membrane with three spatial dimensions (a "3-brane") embedded in a universe possessing a larger number of dimensions.

In this approach, it is possible that the fundamental scale of gravity might be the TeV scale, rather than the embarrassingly distant Planck scale (10¹⁹ GeV), potentially eliminating the hierarchy problem (see "Particles and sparticles").

This requires a fundamental rethinking of cosmology and the high-energy behaviour of SM physics. Many questions are being reformulated in terms of the geometry of the extra dimensions – their sizes and shapes, and the fields localized on them. In the same way that general relativity introduced geometry as the natural explanation of gravity, so concepts of geometry and locality replace the ideas of symmetry usually used in field theory.

Superstring theory naturally incorporates such branes and gives, at least in toy models, explicit realizations of the brane-world idea. One major question is the radiative stability of such models – whether their predictions survive the accompanying virtual quantum corrections.

Without SUSY, the apparently haphazard hierarchy of the different forces of nature, with each force having very different associated mass scales, is not stable (or rather requires fine tuning). SUSY can take care of this problem, and new light may be cast by brane physics.

At the moment there are two main approaches to the construction of extra-dimensional models. Originally, it was thought that the geometry of the extra coordinates should be distinct from our space – the universe at large could be viewed as the product of two spaces. In this case, a solution to the hierarchy problem requires large extra dimensions and quantum gravity physics at the TeV scale.

In a more recent approach, highly-curved geometries have been proposed, which tightly constrain the brane in which we live. In this very different geometry, gravity is concentrated away from our world, explaining its observed weakness for us. Both schemes have very specific signatures for experiments at high-energy colliders.

Seeing SUSY

All current major high-energy collider experiments are desperately seeking SUSY and/or extra dimensions. One of the crucial searches is for a Higgs boson: SUSY suggests that one might well be visible at CERN’s LEP electron-positron collider.

Future collider experiments are also gearing up to look for new particles. The Fermilab Tevatron will resume the sparticle and Higgs searches after LEP is retired, and has quite good prospects. In the longer run, the LHC is expected to produce Higgs bosons and any supersymmetric particles. It will also be able to probe for extra dimensions at shorter scales than any previous experiments. There is optimism that the next generation of collider experiments will break out of the SM straitjacket.

The issue of the cosmological constant – the energy density of free space – has been the most striking problem in quantum field theory for many years. Experimentally, it has long been known that it is very close to zero. According to the latest observations a (very) small non-zero value is now preferred, and this is further supported by cosmic microwave background observations by the BOOMERANG and MAXIMA collaborations.

However, the result of theoretical calculations in quantum field theory is naturally a number at least 60 orders of magnitude bigger. SUSY has long held out the promise of a resolution to this dilemma, but so far has not been able to claim a solution. However, many new ideas of how to approach this problem are also suggested by brane theories and were discussed at SUSY2K.

Dark matter

If SUSY is correct then it would have played an important role in the Big Bang. For example, SUSY might have played a role in the generation of the observed matter in the universe. However, one of the most important issues is that of possible SUSY remnants of the Big Bang, which could play the role of the invisible “dark matter” known to pervade our universe. One of the most attractive features of SUSY is that it provides quite naturally a candidate, the “neutralino”. Experimental searches for such particle dark matter are just beginning to reach the range suggested by theory. However, SUSY must also contend with the strong upper limits on various unwanted supersymmetric particles such as gravitinos.

SUSY2K showed that supersymmetry is assured of an exciting future.

No smoking guns under the Sun


The Sun is a typical main sequence star that generates its energy via the fusion of hydrogen into helium in two chains of nuclear reactions: the so-called pp chain and the CNO chain. If nucleon number, electric charge, lepton flavour and energy are conserved and the Sun is in a steady state, then the total solar neutrino flux is fixed, to a good approximation, by the solar luminosity (approximately 65 billion neutrinos/cm^2/s at Earth), independent of the specific nuclear reactions that power the Sun and produce neutrinos through beta decay or electron capture of the reaction products.
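The quoted flux follows from a back-of-envelope calculation: fusing four protons into helium-4 releases about 26.7 MeV and two neutrinos, so the luminosity fixes the neutrino emission rate. A minimal sketch, assuming standard values for the solar luminosity and Earth-Sun distance:

```python
import math

# Each 4p -> He-4 conversion releases ~26.7 MeV and two neutrinos,
# so the solar luminosity fixes the total neutrino flux.
L_SUN = 3.846e33        # solar luminosity, erg/s
Q_MEV = 26.7            # energy released per helium-4 produced, MeV
MEV_TO_ERG = 1.602e-6
AU_CM = 1.496e13        # Earth-Sun distance, cm

nu_per_sec = 2.0 * L_SUN / (Q_MEV * MEV_TO_ERG)   # neutrinos emitted per second
flux = nu_per_sec / (4.0 * math.pi * AU_CM**2)    # neutrinos/cm^2/s at Earth

print(f"{flux:.2e}")    # about 6.4e10, i.e. ~65 billion/cm^2/s
```

The result is insensitive to which reactions dominate, which is exactly the point made in the text.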

The neutrinos from the dominant pp chain are produced by the beta decay of fusing proton pairs (pp), of boron-8 and of the helium-3–proton (hep) reaction, and by electron capture on pp pairs (pep) and on beryllium-7. Their spectra can be measured directly in the laboratory or calculated from the standard theory of electroweak interactions.

To a very good approximation, they are independent of the conditions in the Sun. Only their relative contributions depend on the detailed chemical composition, temperature and density distributions in the Sun. Solar neutrino experiments can therefore test both the standard theory of stellar evolution and neutrino properties over a long distance, much larger than the diameter of Earth.

By the turn of the century, solar neutrinos had been detected by radiochemical methods in three underground solar neutrino experiments in the US (Homestake) and Europe (SAGE and GALLEX), and in real time by the water Cherenkov technique in two experiments in Japan (Kamiokande and Superkamiokande). These studies have confirmed that the Sun is powered by the fusion of hydrogen into helium – a milestone achievement in physics.

However, the combined results also suggested that the solar neutrino fluxes differ significantly from those expected from the standard solar models. This discrepancy has become known as the solar neutrino problem (SNP).

Neutrino oscillations

Many scientists have argued that this discrepancy is due to neutrino properties beyond the minimal standard electroweak model. In 1968, Gribov and Pontecorvo suggested that “oscillations” of electron neutrinos to other neutrino flavours may reduce the solar electron neutrino flux at the Earth. Later, Mikheyev and Smirnov elaborated on work by Wolfenstein on the propagation of neutrinos in matter and found that matter amplification of these oscillations in the Sun can provide an elegant solution (the so-called MSW solution) to the SNP.

The widespread belief in this solution of the SNP was strengthened by the accumulating data from the deep underground experiments on the atmospheric neutrino anomaly (fewer muons than expected) and, most recently, by the first terrestrial long-distance neutrino experiment, K2K. These results were reported by Kenzo Nakamura from the KEK laboratory at Neutrino 2000 – the 19th international conference on neutrino physics and astrophysics, held this summer in Sudbury, Canada.

Both the atmospheric neutrino anomaly and the K2K results can be explained by the hypothesis of nearly maximal-strength oscillations of muon neutrinos to tau neutrinos if their squared masses differ by some 3 × 10^-3 eV^2. However, conclusive solar neutrino evidence for electron neutrino properties beyond the standard electroweak model can be provided only by detecting at least one of the following signals:

* neutrinos other than electron-type visible by neutral current interactions;

* spectral distortion of the fundamental beta-decay spectra;

* a neutrino flux different from that expected from the solar luminosity;

* modulations of the solar neutrino flux, such as a day-night or summer-winter difference, other than that expected from the seasonal variation in the Earth’s distance from the Sun.
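The oscillation hypothesis behind these signatures is usually quantified in the two-flavour approximation, where the survival probability is P = 1 − sin² 2θ · sin²(1.27 Δm² L/E), with Δm² in eV², the baseline L in km and the energy E in GeV. A minimal sketch, assuming maximal mixing, the atmospheric-scale Δm² of 3 × 10^-3 eV² quoted above and illustrative K2K-like numbers:

```python
import math

def survival_prob(dm2_ev2, L_km, E_gev, sin2_2theta=1.0):
    """Two-flavour survival probability P(nu -> nu).

    dm2_ev2: squared-mass difference in eV^2
    L_km:    baseline in km
    E_gev:   neutrino energy in GeV
    """
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative numbers only: 250 km baseline, ~1.3 GeV neutrinos
p = survival_prob(3e-3, 250.0, 1.3)
print(p)   # well below 1: a sizeable muon-neutrino deficit
```

Whether a given experiment can see the effect depends on its L/E, which is why solar, atmospheric and reactor experiments probe very different regions of the (Δm², mixing) plane.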

Looking for smoking guns

It was hoped that these “smoking gun” signals would be found before Neutrino 2000 with the two currently operating solar neutrino telescopes. The 50 kt Superkamiokande underground light-water Cherenkov detector has been collecting data on the boron-8 solar neutrino flux, its spectrum, and possible seasonal and day-night variations, with a lower energy threshold and lower background. The 1 kt Sudbury Neutrino Observatory (SNO) heavy-water detector, in a 2 km deep Canadian mine, started taking data half a year ago and is expected to detect the conversion of solar electron neutrinos to mu or tau neutrinos through their dissociation of deuterium into a proton and a neutron in the heavy water.

However, no such signals have been detected. At Neutrino 2000, Yoichiro Suzuki from the Kamioka Observatory presented data from 1117 days of Superkamiokande running, which show no day-night effect, no spectral distortion of the boron-8 solar neutrino spectrum, and only the expected variation due to the annual change in the distance between the Earth and the Sun.

In fact, the use of a new and more precise laboratory measurement of the neutrino spectrum from boron-8 beta decay and a new estimate of the cross-section for proton capture on helium-3 yield an excellent agreement between the expected and observed Superkamiokande solar neutrino spectra as seen in figure 1.

When combined with the other solar neutrino experiments, the Superkamiokande data rule out, with 95% confidence, the small mixing angle MSW solution and a “sterile” neutrino oscillation solution to the SNP. This leaves only a small region of the mass-mixing exclusion plot, with a large mixing angle, as a possible simple oscillation solution to the SNP, as can be seen from figure 2. Fortunately, this solution will be tested in the near future by a terrestrial experiment – KamLAND, a long-baseline neutrino oscillation experiment in Kamioka using nuclear reactor neutrinos – and by new solar neutrino experiments such as BOREXINO.

What if?

But what if the large mixing angle oscillation solution to the SNP is also ruled out by KamLAND and BOREXINO, and SNO does not detect the conversion of solar electron neutrinos into mu or tau neutrinos? At Neutrino 2000, E Bellotti, spokesman of the Gallium Neutrino Observatory (GNO), and V Gavrin, spokesman of SAGE, reported updated results for the solar neutrino capture rate in gallium. Their measured rates, some 78 ± 7 standard solar neutrino units (SNU), are above the minimal signal expected from the observed solar luminosity if solar neutrinos do not oscillate.

These results seem to leave only a little room for solar neutrinos from electron capture by beryllium-7 in the Sun. This is also suggested by the results from the pioneering chlorine experiment of Ray Davis at Homestake, the counting rate of which, 2.56 ± 0.23 SNU, is consistent with the boron-8 solar neutrino flux (2.37 × 10^6/cm^2/s) measured by Superkamiokande, but leaves very little room for beryllium-7 neutrinos.

However, this conclusion heavily relies on the accuracy of the theoretically-deduced cross-sections for neutrino capture in gallium and chlorine. If the results of GALLEX and SAGE are calibrated with their chromium source experiments, they leave more space for beryllium-7 solar neutrinos, perhaps sufficient to accommodate a solar electron-capture rate in beryllium-7 consistent with the solar proton-capture rate in beryllium-7 that produces the observed boron-8 solar neutrino flux in Superkamiokande.

A direct calibration experiment for the chlorine detector was described by Ken Lande at Neutrino 2000. Improved calibration experiments are also under consideration by the GNO and SAGE collaborations. Altogether, it will still require long, challenging and innovative experiments to give a complete spectroscopy of the elusive solar neutrinos and pin down the origin of the SNP.

How long until the next supernova?


It is difficult to explain what the biennial neutrino conferences mean for the neutrino physics community. Every two years, hundreds of scientists from all over the world meet to update and compare results, conclusions, opinions, claims and contentions.

Neutrino physics is different from any other field due to the variety of experiments, techniques, measurements, approaches, theories and prejudices associated with it.

Neutrino physics today involves many different fields – the detection of remnant neutrinos from the Big Bang; galactic neutrinos at extreme energies; neutrinos from supernovae, the Sun, the Earth’s atmosphere and emitted from the Earth; artificial neutrinos produced from accelerators and nuclear reactors; and laboratory neutrinos from tabletop sources.

These experiments, spanning the 0.1 eV–1 PeV (1000 TeV or 10^15 eV) energy region and providing information on the same particle, give a good idea of the spirit of the conference – a scientific bazaar where a cosmologist’s opinion is confronted with that of a solid-state physicist, and a 50 000 t Cherenkov detector’s results have to be understood in the light of the claims from a 10 kg calorimeter. Neutrino 2000, the 19th iteration of the biennial neutrino forum, took place in Sudbury, Ontario.

According to the results presented, the only possible conclusion is that neutrinos have mass. The solar neutrino deficit, the atmospheric neutrino oscillation pattern and the Liquid Scintillator Neutrino Detector (LSND) claim in the region of cosmological interest have been confirmed and reinforced.

However, the interpretation of the results is ambiguous, and the emergence of a unique picture strongly depends on experimental confirmation and improvements. A wide ongoing programme aims to improve our current experimental knowledge – new solar neutrino experiments such as SNO and Borexino, long-baseline programmes at Fermilab and CERN addressing the atmospheric neutrino signal, and MiniBooNE for the verification of the LSND claim.

A worldwide programme of study for a neutrino factory is also pursuing the unprecedented possibility of precision measurements on neutrinos, and is advancing rapidly to overcome the practical difficulties of its exploitation.


The Sudbury observatory

In addition to the beautiful Canadian environment and the conference itself, interest was focused on the Sudbury Neutrino Observatory (SNO), a 1 kt heavy-water detector that has been in operation since September 1999. SNO has the unique ability to be able to say if the solar neutrino problem – a neutrino deficit with respect to the theoretical predictions and now observed by a variety of experiments – is indeed due to neutrino oscillations. (Classically, the different neutrino types – electron, muon and tau – lead separate lives. However, with a mass these species can “oscillate” or transform into each other.)

To achieve this goal, SNO had to solve formidable technological problems. It is the deepest observatory in the world, shielded by 2000 m of rock in an active nickel mine, with a cosmic ray reaching the detector only once every 20 min. The environmental conditions are particularly critical, and the entire laboratory was constructed by transporting single components down a vertical elevator and along a 1.3 km horizontal tunnel. SNO is a “ship in a bottle”.

SNO has an unprecedented purity for a large-scale detector: each day the purification system extracts just seven atoms of radon per ton of heavy water. To achieve this, strict precautions are taken. The laboratory is completely shielded from the still-active mine environment (blasting occurs in neighbouring tunnels), and graded levels of cleanliness allow the inner core to be a class 10 000 clean room (fewer than 10 000 dust particles per cubic foot).

Since everything reaching the detector goes through the mine, the major source of radon contamination comes from the transfer of material. For example, even Neutrino 2000 visitors in special clothing meant a significant effort to restore normal purity conditions.

SNO uses heavy water – a very precious liquid – but fortunately Canada is the major world producer. Heavy water (produced for nuclear power plants) is usually extracted from freshwater lakes.

Simultaneous detection

The SNO experiment has demonstrated that the design goals have been achieved, and solar neutrino interactions above 2 MeV are indeed measured – less than one per hour – with the expected backgrounds. However, no quantitative statement on the solar neutrino problem was made by SNO at the conference, and scientists are waiting for additional data and improvements that will make SNO the only experiment capable of also detecting the interactions from muon or tau neutrinos coming from the Sun. Since only electron neutrinos are produced in solar nuclear reactions, simultaneous detection of an electron neutrino deficit and a muon or tau neutrino excess would be the final proof that electron neutrinos oscillate.

The conference, as usual, provided new and exciting results. For the first time in such a meeting, long-baseline data were presented. In the K2K project, artificial neutrinos from the Japanese KEK laboratory are detected 250 km away in the Superkamiokande 50 000 t underground detector.

The observed rate is compared with predictions extrapolated from the interaction rate registered by detectors near the source to cross-check, in an independent way, the result claimed by Superkamiokande based on “natural” neutrinos produced by cosmic-ray interactions in the atmosphere.

As with the solar neutrino effect, a neutrino deficit would imply neutrino disappearance, and thus an indication of neutrino oscillation. Still, the meagre data so far (17 events observed while 29 were expected; September p8) did not allow K2K to claim a deficit that would confirm the atmospheric neutrino oscillation signal, but the data sample will grow in the coming years.

Superkamiokande is also able to detect solar neutrinos, and on this subject the data were significant, excluding some hypotheses for the solar neutrino problem. In particular, the experiment collected 15 000 solar neutrino interactions with an improved signal-to-noise ratio and a lower energy threshold (5 MeV).

These data could disfavour the simple hypothesis of neutrino oscillation in vacuum (space), and instead point more directly to additional oscillations as the neutrinos traverse the Sun, significantly refining the known oscillation parameters.

Limits on neutrino mass?

The neutrino oscillation phenomenon cannot occur if neutrinos are massless. Conversely, if neutrinos are massive, it is possible that they oscillate. The oscillation is such that not every neutrino oscillation experiment could actually observe it: observability depends on a few parameters, such as the neutrino flavour (electron, muon or tau) the experiment can detect, the flavour emitted from the source, the distance from the neutrino source and the energy of the neutrinos.

Many experiments are therefore currently looking for neutrino masses without observing a positive signal – which does not contradict the fact that neutrinos can have mass and be seen to oscillate under other conditions. Among these are the CHORUS and NOMAD experiments at CERN, which are currently exploring their region of oscillation parameters with a sensitivity higher than any other experiment (their new results are about 1000 times as sensitive as Superkamiokande).

All of these experiments are nevertheless contributing to our understanding of neutrino properties. CHOOZ and Palo Verde, for example, have measured the neutrino emission from nuclear power plants but did not observe any oscillation phenomenon. However, their results are crucial, showing that the atmospheric neutrino oscillation claimed by Superkamiokande does not take place between electron neutrinos and muon neutrinos, but between muon neutrinos and something else.

Another group of experiments is pursuing a different strategy – to detect the neutrino masses directly, that is without assuming the oscillation hypothesis but directly “weighing” their mass, as has been done for all other known particles.

Unfortunately, neutrino masses are extremely small, and the current experimental sensitivity of the Mainz and Troitsk experiments allows us to say that the electron neutrino mass is less than a few electronvolts. This is again compatible with all neutrino masses currently claimed (even with the massless neutrino hypothesis) but does not rely at all on the oscillation hypothesis.

Neutrino astrophysics and cosmology

No less important were the neutrino-related conclusions from astrophysics and cosmology. For the first time, supernova modelling was able to describe the stellar explosion mechanism, which is crucial to understand the time distribution of the emitted neutrinos.

At the same time, cosmological measurements are constraining more and more the way neutrinos can be distributed in the universe, pointing to a significant dark matter contribution by neutrinos of a mass of about 1 eV.

Neutrino scientists are eagerly awaiting the next nearby supernova explosion. The 1987 event, were it to happen today, would yield far more data on neutrino properties due to detector improvements. The supernova rate in our galaxy is about three per century, so one of the next neutrino meetings will be more interesting than ever.

CP asymmetry moves to new setting


The year 2000 sees the debut of a new kind of precision physics. It formally began with the opening plenary talks at the International Conference on High Energy Physics, the biennial particle physics jamboree, held this year in Osaka.


On 31 July the initial plenary speakers at Osaka – David Hitlin of Caltech and Hiroaki Aihara of Tokyo – presented the first physics results from the BaBar and BELLE detectors respectively. A small but paradoxically important effect called CP violation has now been seen and measured outside its traditional hunting ground, which had been limited to studies of neutral kaons.

BaBar and BELLE operate at new electron-positron colliders – BaBar at the PEP-II machine at SLAC, Stanford, and BELLE at KEKB at the Japanese KEK laboratory. These colliders started operating for physics in 1999 with the aim of achieving high luminosities (collision rates) to mass-produce B-mesons – particles containing the fifth “beauty” quark, hence their name “B factories”. The commissioning of both machines has been impressive, and they are routinely delivering luminosities in excess of 10^33/cm^2/s – figures previously unheard of.

Physicists think that the delicate charge-parity (CP) violation effect, or matter-antimatter asymmetry, played a major role in shaping the particle scenario that emerged from the Big Bang. The initial explosion that created the universe presumably produced equal amounts of matter and antimatter. The contention is that CP violation then moulded the universe so that it eventually emerged with antimatter apparently erased from the map.

Under CP symmetry, the physics of left-handed particles is the same as that of right-handed antiparticles. In 1964, physicists discovered that, in the decays of one of the neutral kaons (the long-lived variety), CP symmetry is respected only 99.997% of the time. This tiny violation is enough to define which electric charge is positive and which negative – the assignment is not just convention.

Subsequent precision experiments have probed CP violation in depth, revealing even smaller effects that operate at the quark level. At a few parts per million, these effects are difficult to see and even harder to measure, and it has taken some 20 years for experiments to approach a consensus.

Quarks transform into each other under the action of the weak force, and the various possible quark transformations have been studied extensively and documented in the Cabibbo-Kobayashi-Maskawa (CKM) matrix of quark coupling strengths. According to these numbers, CP violation in the decays of B mesons should be easier to see than in the neutral kaon case.

The results

At Osaka it was announced that PEP-II has delivered an impressive 16 inverse femtobarns of accumulated luminosity since last May, of which the BaBar team has 14 on tape. The collider is running well and is already exceeding the design goal of 135 inverse picobarns per day. For the measurements presented at Osaka, BaBar uses only this year’s data, corresponding to about 10 inverse femtobarns.
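These figures are related by simple unit conversion: 1 pb = 10^-36 cm², so 1 pb^-1 = 10^36 cm^-2, and the daily integrated luminosity follows from the instantaneous rate. A sketch:

```python
# Relating instantaneous luminosity (cm^-2 s^-1) to integrated
# luminosity per day in inverse picobarns.
SECONDS_PER_DAY = 86400
CM2_PER_INV_PB = 1e36   # 1 pb^-1 = 1e36 cm^-2

def daily_integrated_pb(lumi_cm2_s):
    """Integrated luminosity per day, in pb^-1, at constant luminosity."""
    return lumi_cm2_s * SECONDS_PER_DAY / CM2_PER_INV_PB

# A steady 1e33 cm^-2 s^-1 would deliver ~86 pb^-1 per day, so the
# 135 pb^-1/day figure implies average luminosity above 1.5e33.
print(daily_integrated_pb(1e33))
```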

The CP violation measurement uses B decays into J/psi and a short-lived kaon, with several different kaon decay modes, and into psi prime and a short-lived kaon. A total of 120 reconstructed events are used to determine the sin 2β CP-violating parameter. The BaBar result is 0.12 ± 0.37 ± 0.09. As a check, CP asymmetries of channels that should not have any CP violation, for instance J/psi and a positive kaon, are consistent with zero.

The PEP-II plan is to extend this year’s run until October, collecting 25 inverse femtobarns (BaBar aimed for 30 inverse femtobarns, so they are right there from the start). BELLE at KEKB uses the full data sample since start-up, corresponding to 6.8 inverse femtobarns. The luminosity reached so far is 2.04 × 10^33/cm^2/s. The decay channels used for the determination of the sin 2β CP-violating parameter include, in addition to those used at PEP-II, B decays into chi-c1 and decays giving long-lived kaons. (BaBar has about 89 of these but doesn’t include them yet.) A total of 98 events enter the final BELLE CP fit. For J/psi Ks events only, the BELLE result is sin 2β = 0.49 +0.53/−0.57. Combining this with the result from other events yields sin 2β = 0.45 +0.43/−0.44 (stat) +0.07/−0.09 (syst). Statistically, this is within one standard deviation of previous determinations/expectations. BELLE checked that decay channels not expected to show CP asymmetry give a null result. Plans are for BELLE to resume running with higher currents in October until summer 2001.
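For illustration only, the two sin 2β results can be combined with a naive inverse-variance weighted average, symmetrizing the quoted asymmetric errors and adding components in quadrature (an assumption; the collaborations’ own combinations treat the errors more carefully):

```python
import math

def combine(measurements):
    """Inverse-variance weighted average.

    measurements: list of (value, [error components]) tuples;
    the components are combined in quadrature into one sigma.
    """
    pairs = []
    for value, errs in measurements:
        sigma = math.sqrt(sum(e * e for e in errs))
        pairs.append((value, 1.0 / sigma**2))
    wsum = sum(w for _, w in pairs)
    mean = sum(v * w for v, w in pairs) / wsum
    return mean, 1.0 / math.sqrt(wsum)

babar = (0.12, [0.37, 0.09])
belle = (0.45, [0.435, 0.08])   # +0.43/-0.44 and +0.07/-0.09 symmetrized
mean, err = combine([babar, belle])
print(f"sin2beta ~ {mean:.2f} +/- {err:.2f}")   # roughly 0.26 +/- 0.29
```

Even this crude average shows why no strong claim could yet be made: the combined uncertainty is still larger than the central value.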

The upshot

The first results from BaBar and BELLE show that CP violation happens in B decays. Forget briefly about comparing the results with predictions. The fact that the effect can be measured for B-particles so soon after the debut of two innovative machines and two challenging new experiments is a major achievement and bodes well for the continued vigour of this new branch of particle physics. It took a long time before all of the CP violation parameters in neutral kaon decay were known with confidence. The B sector will probably consolidate much faster.

Fermilab’s Tevatron proton-antiproton collider is also a prolific source of B particles, and the CDF experiment there made an earlier brave attempt to measure CP violation in B decays. This environment is cluttered and the signal difficult to measure, but an effect was almost certainly there. With the advent of the PEP-II and KEKB B-factories, the study of matter-antimatter asymmetry enters a new era. For the longer-term future, other experiments are setting their sights on B physics – HERA-B at DESY, Hamburg, BTeV at Fermilab and LHCb at CERN’s LHC collider.

With reporting from Ariane Frey, CERN.

DONUT comes to neutrino town


A neutrino experiment at Fermilab has seen the first direct evidence for the tau neutrino, the most elusive of the 12 particles that make up the Standard Model picture of the fundamental structure of matter.

Using the intense neutrino beam from Fermilab’s Tevatron, the DONUT (Direct Observation of the Nu Tau) experiment has seen four examples of neutrinos producing slightly kinked tracks, the tell-tale sign that an unstable tau particle has been produced.

According to the Standard Model, all of the matter we know in nature can be explained in terms of six quarks – the ultimate constituents of nuclear matter – and six other particles (leptons). The quarks are arranged in three pairs: up and down, the heavier strange and charm, and the still heavier beauty and top. The six leptons are also arranged in three pairs: three electron-like particles (the electron, the muon and the tau) and three ghostly neutrinos, each associated with one of the electron-like particles.

The Standard Model

Quarks and leptons can thus be arranged in three “families” of four: the first contains the up and down quarks, the electron and the electron neutrino; the second contains the strange and charm quarks, the muon and the muon neutrino; and the third contains the beauty and top quarks and the tau and tau neutrino.

It has been known for a long time that the Standard Model contains these 12 particles, but initially not all of them had been seen. In 1995, experiments at Fermilab’s Tevatron collider saw evidence for particles containing the long-awaited sixth “top” quark. Now, with the evidence for the tau neutrino, all of the direct evidence for the 12 particles is finally in place.

For the DONUT experiment, Fermilab’s 800 GeV proton beam (effectively the highest energy in the world) is slammed into a huge target or “beam dump”, which produces a dense fog of highly unstable secondary particles. One of these is a D meson containing both strange and charmed quarks (the Ds particle), which can decay to produce tau neutrinos. (Conventionally, neutrinos are produced by the decay of secondary pions and kaons. However, with a beam dump, many of these are absorbed by the surrounding material before they have a chance to decay and produce neutrinos. The fraction of neutrinos produced via other decays, such as those of the Ds, is therefore increased.)

After the beam dump, an obstacle course of magnets sweeps away charged particles, while thick shielding absorbs many of the rest. However, the ethereal neutrinos continue almost unaffected.

Downstream of the magnets and shielding is the DONUT detector, a sandwich of iron plates and photographic emulsion. In this target, one in a trillion tau neutrinos hits an iron plate, releasing an unstable tau lepton.

The tau leptons (which like the electron carry electric charge) leave a sub-millimetre track in the emulsion before decaying. The DONUT experiment set out to look for these tiny track stubs. Of the 100 or so tau neutrino collisions, just four track stubs have been unearthed so far. Isolating these signals from the mass of accumulated data is a triumph of painstaking analysis. Emulsion technology developed at Nagoya plays a major role in this work, and the Nagoya team handles DONUT’s crucial emulsion analysis.


When CERN’s LEP electron-positron collider came into operation in 1989, one of its first results was to show that particle decays allow for three, and only three, kinds of neutrino. The first of these had been seen by Clyde Cowan and Fred Reines in a reactor experiment in 1955, and for this the latter received the Nobel Prize for Physics in 1995 (Cowan died in 1974). In the 1950s, seeing the neutrino (in this case the electron-type particle) was considered a major accomplishment.

Soon the decay patterns of the muon suggested that the neutrino had to come in two different kinds, one preferring to associate with electrons, the other with muons. In 1962 an experimental team led by Leon Lederman, Mel Schwartz and Jack Steinberger at Brookhaven revealed muon tracks emerging from neutrino interactions. For this discovery the trio received the 1988 Nobel Prize.

In 1975 Martin Perl at the SPEAR electron-positron collider at SLAC, Stanford, discovered the third lepton, the tau. Before this discovery only two families of fundamental particles had been known. Perl’s breakthrough suggested that there are three. For the tau discovery he was awarded the 1995 Nobel Prize, sharing it with neutrino pioneer Reines.

For the tau to fit into the picture it also had to be accompanied by its own neutrino. Physicists learned to live with the elusiveness of this particle, and could infer its existence only indirectly. For example, in 1987 the UA1 experiment at CERN’s proton-antiproton collider studied decays of the W particle, the electrically charged carrier of weak interactions, which had been discovered at CERN four years previously. Setting aside the W decays producing electrons and muons, they found 29 decays that could be designated as candidates for decays producing a tau (and a tau neutrino). Although the neutrino could not be seen, energy-momentum accounting revealed “missing energy”, showing that an invisible particle – the tau neutrino – had escaped in the W decays.


Tau physics, with the tau neutrino playing an essential but invisible role, went on to become a precision science in the hands of experiments at electron-positron colliders – LEP at CERN and CESR at Cornell.

The recent Chorus neutrino experiment at CERN also used Nagoya emulsion technology. This study (and the companion Nomad experiment) explicitly set out to look for the transformation of muon neutrinos into tau neutrinos (neutrino oscillations). These experiments used a conventional neutrino beam rather than a beam dump. At the lower proton energies available at CERN, few Ds particles containing heavy quarks are produced directly. The experiments did not see any tau neutrinos, either through oscillations or via direct production.

DONUT is a collaboration between the US, Greece, Japan and Korea.
