
In hot pursuit of CP violation

Long, long ago, in a far, far different universe, there were equal amounts of matter and antimatter. At least, this is the most popular conception. Why only matter remains has been a nagging question for decades.

We normally think of antimatter as a sort of inverted matter behaving the same way as matter does but with reversed properties, such as electric charge. How nature could choose matter over antimatter is puzzling. A seemingly obscure violation of a symmetry principle, called CP, may hold part of the key. As we approach the close of the millennium, laboratories around the world are poised to enter a new era by studying this phenomenon in a new sector: B-mesons, particles containing the fifth quark, variously denoted as “beauty”, “bottom” or simply “b”.

Symmetry principles

A major theme in particle physics for the last half-century has been symmetry relations. These came to the fore in the mid-1950s when “parity” violation was discovered. Parity conservation is the apparently innocuous proposition that the laws of physics are the same, or symmetric, when spatially inverted (the parity operation, P), as in a mirror-image world.

Prompted by the realization of T D Lee and C N Yang that there was no experimental evidence that weak interactions conserved parity, C S Wu and collaborators discovered in 1957 that weak interactions do not conserve parity in the radioactive decay of cobalt-60. A stunning development was that weak interactions depend on the specific “handedness” of particles. In modern terms, this is because the charged W carrier particle only couples left-handedly.

The realization soon followed that another symmetry – charge-conjugation (C) – was violated too. This is the operation of switching particles to their antiparticles, and vice versa. However, C violation occurred in such a way that the combined operation of charge-conjugation and parity (CP) restored the symmetry. Thus the decays of mirror-inverted cobalt-60 antinuclei, for example, should behave the same way as those of cobalt-60.

Although P and C are not always good symmetries, the combined CP operation appeared to be respected by nature. CP was a consolation prize for physicists. At least it seemed so until 1964. Less than a decade after the fall of parity symmetry, physicists were jolted again when CP invariance also fell by the wayside. A landmark experiment, led by James Cronin and Val Fitch, saw a rare neutral K-meson decay that should be prohibited if CP were a perfect symmetry. The effect is small: 1 in 500 decays.

Parity violation could be attributed to an intrinsic feature of weak interactions, but CP violation was a mystery. The effect was very small and hard to study. Was it a feature of weak interactions alone, a sign of a new type of interaction or something completely different? While the origin of CP violation remained a mystery, within a few years the renowned Soviet physicist Andrei Sakharov realized that CP violation was a necessary ingredient for an eventual explanation of how an initially matter–antimatter symmetric universe could evolve into a matter-dominated one.

It took some time for Sakharov’s suggestion to be appreciated, but, in the end, CP violation went from being an unpleasant wart on the face of weak interactions to a critical component of an explanation of why we exist.

Quark mixing?

Of the many ideas offered to explain CP violation, one remarkably bold proposition was based on quark mixing. In this hypothesis, which was proposed by N Cabibbo in 1963, the quantum states of quarks with definite mass are mixtures of the states that the weak interaction “sees”.

With only four quarks, the rotation matrix that transforms one set of quark states into the other is restricted to real numbers and cannot accommodate CP violation. In 1972, eight years after the discovery of CP violation, M Kobayashi and T Maskawa proposed that quark mixing be generalized to cover three generations of quark pairs. With six quarks, the rotation matrix, now known as the Cabibbo–Kobayashi–Maskawa (CKM) matrix, can contain an irreducible complex phase, and this could account for the CP violation observed in neutral K-mesons.
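The role of the complex phase can be made concrete in a few lines of code. The sketch below (the angles are rough, illustrative values, not fitted ones) builds the standard three-generation parameterization of the CKM matrix and evaluates the Jarlskog invariant, a rephasing-invariant measure of CP violation that vanishes exactly when the phase δ is zero.

```python
import cmath
import math

def ckm(th12, th23, th13, delta):
    # Standard parameterization of the 3x3 CKM matrix. The single
    # phase delta is the ingredient that makes CP violation possible;
    # with only two generations the matrix would be a real rotation.
    s12, c12 = math.sin(th12), math.cos(th12)
    s23, c23 = math.sin(th23), math.cos(th23)
    s13, c13 = math.sin(th13), math.cos(th13)
    eid = cmath.exp(1j * delta)
    return [
        [c12 * c13,                          s12 * c13,                          s13 / eid],
        [-s12 * c23 - c12 * s23 * s13 * eid,  c12 * c23 - s12 * s23 * s13 * eid,  s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * eid, -c12 * s23 - s12 * c23 * s13 * eid,  c23 * c13],
    ]

def jarlskog(V):
    # Jarlskog invariant J = Im(V_us V_cb V_ub* V_cs*): zero if and
    # only if the matrix can be made entirely real (no CP violation).
    return (V[0][1] * V[1][2] * V[0][2].conjugate() * V[1][1].conjugate()).imag

# With delta = 0 the matrix is real and J vanishes; any nonzero
# phase gives J != 0.
J_zero = jarlskog(ckm(0.227, 0.042, 0.0035, 0.0))
J_cp = jarlskog(ckm(0.227, 0.042, 0.0035, 1.2))
```

The matrix stays exactly unitary for any choice of angles, so the phase cannot be rotated away by redefining the quark fields; its only observable effect is the CP-violating invariant.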

The bold proposal did not attract much attention. After all, only three quarks were known at the time. There was speculation about a fourth quark, but even the quark model itself was regarded with some lingering suspicion. Kobayashi and Maskawa were advocating not one new quark but three.

The picture began to change quickly in 1974 when the J/psi was discovered and the second quark generation completed. In a surprisingly short time, Kobayashi and Maskawa’s third generation was also exposed: the tau lepton appeared in 1975, and then the b quark surfaced with the upsilon discovery in 1977. The wait was long, however, before its partner, the top quark, definitively showed itself in 1995.

Quark mixing became an integral part of the Standard Model of particle physics, and the hypothesis of Kobayashi and Maskawa became a leading candidate to describe CP violation in the only place it has so far been observed, neutral K-mesons.

Neutrinos with a swing


Physicists used to believe that the three types of neutrinos – electron, muon and tau – were distinct. Then came the claim that neutrinos could “oscillate” – one type can change into another, and then back again – and that, as a consequence, at least one type has a non-zero mass.

Cosmic rays hitting the Earth’s atmosphere produce secondary particles, and the new claim was based on observations of the effects of neutrinos produced by cosmic rays. Measured with large underground detectors that record neutrino interactions, the ratio of muon-neutrinos to electron-neutrinos is less than expected.

This is interpreted as being the result of muon-neutrinos changing into a different neutrino type between their production high in the atmosphere and their detection underground. A similar effect is used to explain the fact, known for some 30 years, that the number of neutrinos from the Sun detected on Earth is also smaller than expected from the known solar power output. A third experiment that may have seen oscillation effects was performed at Los Alamos, where some muon-neutrinos produced by an accelerator beam appeared to behave like electron-neutrinos by the time they interacted several metres away.

Oscillations depend on the difference in the mass squared of the two types of neutrinos involved in the oscillation, Δm², and a mixing parameter. The physical significance of both of these parameters can be understood in terms of a simple analogy with coupled pendula. With two independent pendula of different lengths, the shorter one swings faster than the longer. The situation changes if the two pendula are joined together, for example by an elastic band, when the elasticity of the band governs the coupling between the pendula. This coupling is usually expressed as a “mixing angle”. If the angle is zero, there is no coupling.

For the coupled case, there are special “normal modes” in which the two pendula swing at the same frequency and with constant amplitude. For this to happen, the ratio of the amplitudes must be just right. In fact there are two such modes. In the lower-frequency mode, the pendula swing in phase (if one pendulum is on the left of its equilibrium position at any time, then so is the other), with the longer one having the larger displacement. For the other mode they are out of phase (one swinging to the left when the other goes to the right) and the shorter pendulum has the larger displacement.

Stronger coupling

The relative amplitudes of the pendula in these two normal modes are displayed by the vectors v′2 and v′1 in figure 1. x is the amplitude of the shorter pendulum and y is the amplitude of the longer one. For given lengths, as the coupling between the pendula becomes stronger, the two vectors rotate together until the mixing angle, θ, eventually reaches 45°. As the coupling becomes progressively weaker and θ tends to zero, the normal modes correspond to either the shorter pendulum swinging and the longer one being stationary or vice versa.

If we displace just the first pendulum and release it (while the second is in its equilibrium position), it starts swinging with a given amplitude A. However, because of the coupling, its amplitude gradually decreases while that of the second increases. The transfer of energy between the pendula is a beat phenomenon, the frequency of which is just the difference in frequencies of the normal modes of the system. For pendula of unequal length, the transfer of energy is not complete. The amplitude of the first pendulum goes down not to zero, but only to A cos 2θ, where θ is the rotation angle of figure 1, while that of the second reaches A sin 2θ.
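The energy-transfer beat described above can be written down directly as a superposition of the two normal modes. The short sketch below uses arbitrary illustrative frequencies and mixing angle; it reproduces the quoted envelopes, with the released pendulum shrinking by a factor cos 2θ at the midpoint of the beat while its partner grows to sin 2θ.

```python
import math

# Illustrative parameters, not tied to any particular pendulum pair.
A = 1.0                  # initial displacement of the first pendulum
theta = 0.3              # mixing angle (radians); 45 deg = maximal coupling
w1, w2 = 1.0, 1.15       # the two normal-mode frequencies

def x(t):
    # The pendulum released with amplitude A (its partner at rest):
    # a cos^2(theta) / sin^2(theta) weighted sum of the normal modes.
    return A * (math.cos(theta) ** 2 * math.cos(w1 * t)
                + math.sin(theta) ** 2 * math.cos(w2 * t))

def y(t):
    # The partner pendulum, initially at rest, driven by the coupling.
    return A * math.sin(theta) * math.cos(theta) * (math.cos(w1 * t)
                                                    - math.cos(w2 * t))

# Halfway through the beat the two modes are exactly out of step, so
# the first pendulum's envelope is A*cos(2*theta) and the second's is
# A*sin(2*theta), as stated in the text.
t_mid = math.pi / (w2 - w1)
```

At t = 0 the functions reproduce the initial conditions (x = A, y = 0), and at t_mid the two cosines differ by a phase of exactly π, which is where the quoted envelope factors come from.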

Beat frequency

Now for the neutrino analogy. The first pendulum corresponds to, say, the electron-neutrinos as produced in the Sun, while that of the second pendulum could correspond to the muon-neutrino. As the electron-neutrinos travel towards the Earth, the natural states for describing the way in which they propagate correspond to the normal modes of the coupled pendula. These are “rotated” by the angle θ with respect to the electron- and muon-neutrinos (figure 1).

The net effect is that the amplitude for the electron-neutrino changes with time, much as in the pendulum case, with the result that, when they reach the Earth, it is reduced by a factor of as much as cos 2θ. Meanwhile the muon-neutrino component, which was initially absent, has grown by up to sin 2θ times the initial electron-neutrino amplitude. Therefore, by the time the solar electron-neutrinos are detected on Earth, their flux is smaller than would be expected simply from the rate of electron-neutrino production in the Sun. The exact flux of electron-neutrinos arriving at the Earth will depend on the time it takes for the neutrinos to arrive, expressed as a fraction of the oscillation period. It also depends on cos 2θ.

The possible oscillation of one type of neutrino into another is thus completely analogous to the beat-type phenomenon of the transfer of energy between two coupled pendula. θ is the angle between the neutrino quantum states that participate in the weak interaction (electron- and muon-neutrinos) and the neutrino states that propagate through space. It also appears in the factor cos 2θ, which sets the minimum amplitude of the electron-neutrino component as it oscillates backwards and forwards into the muon-neutrino.

The beat frequency of the actual oscillations depends on the difference in frequencies of the two neutrino states as they propagate through free space. Because neutrinos are highly relativistic, it turns out that this depends on the difference of the squares of their masses. If both neutrinos are massless, there will be no oscillations. (In the coupled pendula example, if the normal mode frequencies were the same there would be no beats.)

Thus the parameters describing the neutrino oscillations have direct analogies to the case of the two coupled pendula. The coupled pendulum problem provides a useful insight into the mysterious world of neutrinos.

What’s the quark matter?


Careful analysis of data collected by the NA50 experiment studying high-energy heavy-ion collisions at CERN shows clear signs of new behaviour, suggesting that under these conditions the colliding nuclear particles briefly fuse together to form a new kind of matter.

In ordinary matter, quarks and gluons are confined inside nucleons, the component particles of nuclei. However, this has not always been the case. In the first split second after the Big Bang, when the temperature exceeded 10^13 degrees, quarks and gluons roamed around in a uniform “soup”. When the temperature dipped, the free-ranging quarks and gluons suddenly “froze” into strongly interacting particles (hadrons), where they have remained ever since. The only known way for them to leave this confinement is via high-energy nuclear collisions – “Little Bangs” – when small pockets of hot and dense nuclear matter simulate post-Big Bang conditions.

Over the past 20 years, laboratory experiments have gradually increased the energy of their nuclear beams in the search for this “quark–gluon plasma”. As well as providing sufficient input energy to create Little Bangs, experimenters face another challenge: recognizing the deconfined state clearly once it has been recreated.

One suggestion, which was made in 1986 by Tetsuo Matsui and Helmut Satz, was to look among the emerging particles for states like the J/psi – a meson composed of a charmed quark and antiquark bound together.

Approaching plasma conditions, the attractive force between the quark and the antiquark will be screened by gluons and lighter quarks, and fewer charmed quark–antiquark pairs will bind into J/psi states.

However, an absorption effect also results from interactions of the produced J/psis with nucleons as they traverse the surrounding nuclear matter. Fortunately, this conventional absorption mechanism can be understood from the study of lighter collision systems, as has been done at CERN’s SPS with proton, oxygen and sulphur beams.

A sudden drop in the rate of J/psi formation, after accounting for the normal nuclear absorption, is considered to be a clear signature of quark­gluon plasma formation.

In 1996, colliding 158 GeV/nucleon lead beams on a solid lead target and using an improved experimental set-up, NA50 saw 190 000 J/psis via their decay into muon pairs – four times the data collected in 1995. For peripheral lead–lead collisions, where the density of nuclear matter is least, NA50 sees the expected nuclear absorption effects, extrapolated from studies with lighter nuclei.

However, in more violent lead–lead collisions, more energy is transferred and the hot nuclear matter reaches its maximum density. Under these conditions, quarks and antiquarks find it more difficult to stick together and the J/psi production rate drops dramatically.

Under these conditions the quarks and gluons in the colliding lead nuclei briefly “forget” about their 15-billion-year nuclear heritage and revert to their primeval state.

As well as the clear signs of J/psi suppression seen by NA50, other encouraging signs that collective quark–gluon behaviour is not far away come from other heavy-ion experiments at CERN: the excess of light electron–positron pairs seen by NA45; the increased yield of multiply strange particles seen by WA97/NA57; and several intriguing observations from the big NA49 study.

This bodes well for the experiments that are preparing to take their first data at the end of the year at the higher energies of Brookhaven’s RHIC heavy ion collider. Their measurements should confirm beyond reasonable doubt the current indications that high-energy nuclear collisions lead to a transition from confined to deconfined matter, where quarks and gluons are no longer bound inside hadrons.

Later this year, Brookhaven’s RHIC collider will start exploring a higher-energy frontier for heavy-ion physics, with gold nuclei at 200 GeV per nucleon­nucleon collision.

Meanwhile, CERN’s SPS experiments – NA45, NA49 and NA57 – convinced by the results found at 158 GeV per nucleon, will devote their 1999 beam time to a low-energy run with lead ions at 40 GeV per nucleon. The aim is to study the onset of the anomalous phenomena seen at the full SPS energy and to fill in the energy gap between existing SPS results and lower-energy data from the CERN and Brookhaven synchrotrons.

The return of antimatter


This year should see the start of physics with CERN’s new Antiproton Decelerator ring, marking the return of antiparticle physics to the CERN research stage three years after the closure of the LEAR low-energy antiproton ring in 1996.

The Antiproton Decelerator (AD) was built from CERN’s former Antiproton Collector ring, which was commissioned in 1987 to supplement the original Antiproton Accumulator (AA); elements of the AA have meanwhile been sent to the Japanese KEK laboratory.

The task of the AD will be to take the antiprotons – produced by 26 GeV/c protons hitting a target and selected at the optimum momentum of 3.57 GeV/c – and, as its name implies, decelerate them to much lower energies, using electron and stochastic cooling to control the beams.

Late last year the AD had a foretaste of particles – the much more readily available protons, in this case. The antiproton debut is scheduled to take place soon after the restart of the CERN machines this spring, with the physics programme following in September.

On the menu are the ATHENA and ATRAP experiments, which will use magnetic trapping to manufacture atoms of antihydrogen. Following the first synthesis of chemical antimatter at LEAR in 1995, physicists have been eagerly awaiting a chance to revisit atomic antimatter country to see whether there is any difference between the behaviour of matter and antimatter.

Also on the menu is the ASACUSA experiment by a Japanese-European collaboration, which aims to continue the exploration of antiprotonic atoms – atoms in which an orbital electron has been replaced by an antiproton.

Telegrams from the antiworld


Antiprotonic atoms, in which an antiparticle is bound to an ordinary nucleus, carry important messages about the antiworld and are much easier to make than anti-atoms. Among antiprotonic atoms, protonium (a “nuclear” proton and an “orbital” antiproton) is particularly interesting because it is the simplest two-body system consisting of a strongly interacting particle–antiparticle pair.

An isolated protonium atom will not be destroyed by collisions with atoms of the medium in which it was produced and can only de-excite by giving off radiation. The lifetime can then easily exceed microseconds. The difficulty will be to produce the atoms in isolation.

Antiprotonic helium is a special case. An experiment at CERN discovered that this exotic atom can survive a very large number of collisions, living long enough to be studied by laser spectroscopy.

Isolated antiprotonic lithium would also be of great interest because its antiproton orbit should be far outside the residual pair of electrons. It should then be able to descend a ladder of slow electromagnetic transitions, which ends only when the antiproton approaches the electrons.

In studying the interactions of antiprotons with matter, it is important to understand their ionization effects ­ how antiprotons strip electrons from ordinary atoms.

An experiment at LEAR by a collaboration involving Aarhus, PSI Villigen, University College London and St Patrick’s Maynooth measured the ionization of hydrogen by antiprotons within the 30-1000 keV energy range, where the antiprotons can be considered to be “fast and heavy” (see next article). The experimentally observed effects concur with theoretical calculations.

However, at lower energies, where there are as yet no data, theoretical analysis becomes more difficult and different calculations disagree, although they suggest at most a weak energy dependence.

The study of the ionization of helium by antiprotons, with removal of one or both electrons, was pioneered at LEAR and is also ripe for further investigation.

These additional physics objectives form an integral part of the ASACUSA experimental programme, which involves some 50 researchers from 19 research institutes and in which Japanese physicists play a prominent role.

Antiprotons, unlike slow protons, cannot repeatedly capture and lose electrons, but when their energy drops still further (below a few tens of electron volts) they will readily be captured by the nucleus (see previous article) and form antiprotonic atoms.

These effects showed up clearly in the very-low-energy domain of antiproton physics opened up at CERN’s LEAR low-energy antiproton ring, and groups from Aarhus and Tokyo carried out many atomic interaction experiments as a guide to a better theoretical understanding of these many-body collisions (see previous article).

In the LEAR era, such experiments injected high-energy antiprotons into metallic foils or high-density gases, which degraded the antiprotons to electron volt energies and (in some experiments) provided the target atoms in which they were finally captured.

If the target density or thickness could be made so small that only one collision occurred, much more precise and better-controlled experiments on the atomic interactions of antiprotons would be possible, and the dynamics of antiprotonic atom formation could be studied in detail. At such low target densities the absence of collisions after the capture process should also ensure that all antiprotonic atoms are stable enough to be brought under the penetrating eye of laser spectroscopy (see previous article).

The thin-target condition, where a beam particle enters a target and makes a single interaction, is, in a sense, “business as usual” for high-energy particle experiments, yet it constitutes one of ASACUSA’s more difficult longer-term goals. The solution is to separate the deceleration of the antiprotons from the atomic interaction (or antiprotonic atom formation) to be studied.

However, the electron volt antiprotons required for these experiments have a millionth of the energy that even the AD can provide. This energy gap will be crossed in two stages. First, the AD will be supplemented by a decelerating Radio Frequency Quadrupole (under construction in CERN PS division) to reduce the energy to tens of kiloelectronvolts. The antiprotons will then be confined in a Penning trap that is being constructed at Tokyo University, cooled to cryogenic temperatures, and reaccelerated to a chosen energy on the electron volt scale.

Finally, the reaccelerated antiprotons will be introduced into low pressure gas targets or jets or ultrathin foils. These experiments should start in 2000, after the first round of experiments (on antiprotonic helium) is complete.

Per Ardua ad ASACUSA

At CERN’s Antiproton Decelerator (AD), the ASACUSA collaboration is already preparing to greet the first AD antiprotons with a barrage of laser and microwave beams. ASACUSA stands for Atomic Spectroscopy And Collisions Using Slow Antiprotons and, as this name implies, the experimenters’ joblist will include studies of the interaction of antiprotons with atoms at super-low energies, both as a means of understanding the formation of antiprotonic atoms and as a subject in its own right.

Most physicists learn early in their career that it is impossible to find exact solutions for problems with more than two interacting bodies. Unfortunately, nature’s arrangements do not include making life easy for physicists – most of the phenomena that they find interesting (including those mentioned above) turn out to involve three bodies or more. Often physicists can avoid this handicap, sometimes by taking advantage of the fact that the masses and/or energies of some bodies may be much larger or smaller than those of other bodies; sometimes by using approximation methods; and sometimes by employing both approaches.

The many-body problem of the interaction of charged particle projectiles, such as protons and antiprotons, with atoms has repeatedly engaged many of the most agile minds of 20th-century physics. If, in such collisions, the incident particle is much heavier than the electrons in the target atom and its encounter with the atom is short-lived enough to be treated as a small perturbation, it will follow a straight, charge-independent, constant-velocity path through the atom and will not be deflected by electric fields.

This approximation, together with a few additional assumptions (for example, that the nucleus is too small a target to play a significant role), leads to the familiar Bethe–Bloch formula for the cumulative energy loss from multiple atomic encounters of charged particles passing through matter – of everyday importance in every particle physics experiment.

The “fast and heavy” approximation can at best hold down to projectile velocities about equal to that of the target atom’s electrons: about 25 keV for nucleons approaching hydrogen atoms. At lower energies the charge independence assumption will also be lost, because the projectile stays in the atom long enough to feel the nucleus. Among the more dramatic ultralow-energy effects is that of projectile protons repeatedly capturing and losing electrons.
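The 25 keV figure quoted above can be checked with one line of arithmetic: setting the proton’s speed equal to the Bohr orbital speed of hydrogen’s electron, v = αc, gives a kinetic energy of ½ m_p c² α². The sketch below uses standard constant values; the non-relativistic formula is adequate at these speeds.

```python
# Kinetic energy at which a proton moves as fast as the electron in a
# hydrogen atom (Bohr velocity v = alpha * c): E = (1/2) m_p v^2.
alpha = 1.0 / 137.036      # fine-structure constant
mp_c2_keV = 938272.0       # proton rest energy, in keV
E_keV = 0.5 * mp_c2_keV * alpha ** 2
# E_keV comes out close to the ~25 keV threshold quoted in the text.
```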

Is spacetime symmetric?


The synthesis of antihydrogen (a lone positron orbiting a nuclear antiproton) at CERN in 1995 showed that antimatter is not merely a theoretical dream. Later this year, experiments at CERN’s new Antiproton Decelerator (AD) will begin investigating the properties of antihydrogen, their objective being to search for tiny differences in behaviour between matter and antimatter. Any such disparity would have deep implications for our understanding of space and time, as was highlighted at a recent meeting on spacetime symmetries held at Indiana University, Bloomington.

At the microscopic level the universe seems invariant both under CPT (the combination of charge conjugation, C, parity inversion, P, and time reversal, T) and relativistic Lorentz transformations (rotations and boosts). However, these symmetries could be violated by effects at the Planck scale, at distances so small (10^-33 cm) and energies so high (10^19 GeV) that the gravitational force between two particles becomes comparable to the other forces of physics. Although such effects would be very small, they might be detected in sensitive experiments.
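The quoted Planck-scale numbers follow directly from ħ, c and G via the standard definitions of the Planck length and energy. A quick check, using standard SI constant values:

```python
import math

# Planck length and energy: the scales at which gravity becomes as
# strong as the other forces (standard SI constant values).
hbar = 1.054571817e-34       # J s
c = 2.99792458e8             # m / s
G = 6.67430e-11              # m^3 kg^-1 s^-2
J_per_GeV = 1.602176634e-10  # joules in one GeV

l_planck_cm = math.sqrt(hbar * G / c ** 3) * 100.0   # metres -> cm
E_planck_GeV = math.sqrt(hbar * c ** 5 / G) / J_per_GeV
# l_planck_cm is about 1.6e-33 cm and E_planck_GeV about 1.2e19 GeV,
# matching the orders of magnitude quoted in the text.
```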

If nature is CPT invariant, the masses of a particle and its antiparticle should be exactly equal. Recent experiments at Fermilab and CERN have established mass equality for the neutral kaon and antikaon to about one part in 10^19. This astonishing precision can be compared to measuring the distance between the Earth and the nearest stars (a few light years) to an accuracy of about 1 cm.
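The distance analogy is easy to verify: one centimetre over the roughly four light years to the nearest stars is indeed a few parts in 10^19 (taking 4 light years as an illustrative figure for “a few”).

```python
# Fractional precision of a 1 cm uncertainty over ~4 light years,
# the approximate distance to the nearest stars.
light_year_m = 9.4607e15            # metres in one light year
frac = 0.01 / (4.0 * light_year_m)  # 1 cm over ~4 light years
# frac is a few parts in 1e19, comparable to the kaon mass-equality bound.
```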

Opening the meeting, Bruce Winstein, spokesman for Fermilab’s KTeV experiment, summarized the status of these experiments and the KLOE experiment at Frascati’s DAPHNE collider. An ambitious proposal to improve the current bound by more than an order of magnitude in a dedicated CPT kaon experiment was presented by Gordon Thomson of Rutgers. Measurements constraining CPT violation in the B-meson system to about one part in 10^16, recently performed by the OPAL and DELPHI collaborations at CERN using data from LEP, were reviewed by Martin Jimack of CERN.

A general extension of the standard model and quantum electrodynamics that includes CPT and Lorentz violation was presented by meeting organizer Alan Kostelecky of Indiana. This can be employed to identify promising observable signals that arise from a broad class of theories with CPT and Lorentz violation, including those in which Lorentz symmetry is spontaneously broken in an underlying unified theory at the Planck scale. Malcolm Perry of the University of Cambridge reviewed the status of string and M (membrane) theory and described a new mechanism for CPT violation that involves the dilaton field.

One crucial test of spacetime symmetries is to compare the properties of stable particles with those of their antiparticles. This is possible with high-precision measurements made in electromagnetic traps. New results were presented by experimentalist Richard Mittleman from Hans Dehmelt’s group at Washington. An analysis of several months of data from an experiment with single trapped electrons placed a bound of six parts in 10^21 on a combination of Lorentz- and CPT-violating quantities. Another new bound was reported by Gerald Gabrielse of Harvard, who constrained certain Lorentz-violating quantities to four parts in 10^26 by comparing the cyclotron frequencies of an antiproton and a hydrogen ion in an electromagnetic trap. A bold plan for testing spacetime symmetries is to perform spectroscopic measurements on antihydrogen and compare them with those of hydrogen. This requires the production of trapped antihydrogen, soon to begin at CERN’s Antiproton Decelerator. Talks at the meeting outlined the goals of the AD’s two key trapped antihydrogen collaborations, ATRAP and ATHENA.

Comparisons between specialized atomic clocks can provide sharp tests of spacetime symmetries. These experiments are, in principle, capable of discerning Lorentz violation at the remarkable level of about one part in 10^31. Astrophysical observations are interesting too, because small effects could be amplified as light travels over astronomical distances. One possibility is to look for radiowave birefringence on cosmological scales. Roman Jackiw of MIT presented a theoretical study of such effects, while other talks described possible experiments along these lines.

Organized by particle theorist Alan Kostelecky and attended by about 70 physicists from about half a dozen countries, the meeting was the first conference specifically focusing on this topic.

Don’t be afraid of the dark


The invisible dark matter of the universe weighs heavily on cosmology. However, whatever and wherever this invisible material is, it must be made of something, and the most plausible candidates are relic particles from the early phase of the universe. The search for dark matter, mostly using non-accelerator experiments, has become an established part of particle physics.

These questions were examined when physicists from all over the world met in Heidelberg for the Second International Conference on Dark Matter in Astro- and Particle Physics (DARK98). The goal was to shed light on theoretical backgrounds from particle physics and cosmology, to discuss the results of dark matter detection experiments and to examine future projects.

The most compelling evidence for both baryonic (nuclear) and non-baryonic dark matter comes from observations of the rotation curves of galaxies. In particular, the rotation curves of dwarf spirals are completely dark matter dominated, pointed out Andreas Burkert (Heidelberg). The rotation curve of one of the best measured dwarf spirals can only be fitted to theoretical predictions if both an outer cold dark matter halo and an inner spherical distribution of massive compact baryonic objects (MACHOs) are assumed.

The search for MACHOs in the halo of our own galaxy – in the form of planets, white and brown dwarfs or primordial black holes – exploits the gravitational microlensing effect: the temporary brightening of a background star as an unseen object passes close to the line of sight. For several years a number of groups have been monitoring the brightness of millions of stars in the Magellanic clouds, as Kim Griest (San Diego) and Marc Moniez (Orsay) explained.

MACHOs or WIMPs?

Several candidates have already been detected and, if interpreted as dark matter, they would make up half of the amount needed in the galactic halo. However, no stellar candidate seems to be able to explain the observations. MACHOs could be an exotic form of baryonic matter, like primordial black holes, or they could be located outside the halo of our galaxy.

The leading non-baryonic dark matter candidates are the so-called weakly interacting massive particles (WIMPs). If WIMPs populate the halo of our galaxy, they could be detected directly in laboratory experiments, or indirectly through their annihilation products in the halo or in the centre of the Sun or the Earth.

Blas Cabrera (Stanford) gave an overview of the direct detection experiments. The goal is to look for the elastic scattering of WIMPs off nuclei in a low-background target detector. The Stanford Cold Dark Matter Search (CDMS) experiment, he explained, uses detectors of ultrapure germanium and silicon operated at a temperature of 20 mK. The simultaneous measurement of both ionization and phonon signals allows nuclear recoil events to be differentiated from electron interactions ­ a very effective background suppression method. For the moment, the experiment is located at the Stanford Underground Facility, 10.6 m below ground, but the goal is to operate the detector in the deep Soudan mine in Minnesota.

The DAMA experiment, presented by Rita Bernabei (Rome), is running 115.5 kg of sodium iodide detectors in the Gran Sasso underground laboratory near Rome. Its high statistics open the possibility of looking for WIMPs via a variation in the event rate owing to the movement of the Sun in the galactic halo and the Earth's rotation around the Sun. The analysis of an exposure of about 13 kg·yr reveals a positive WIMP annual modulation signal, which has since been confirmed with higher statistics from 54 kg·yr. However, further confirmation by DAMA and by other experiments must be awaited.
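The signature being sought can be sketched as a small cosine modulation of the event rate, peaking around the beginning of June when the Earth's orbital velocity adds maximally to the Sun's motion through the halo; the amplitude and rate used here are purely illustrative numbers, not DAMA's measured values:

```python
import math

def wimp_rate(t_days, S0=1.0, Sm=0.02, t0=152.5, period=365.25):
    """Expected event rate versus time (t_days: days since 1 January):
    a constant part S0 plus a small annual modulation of amplitude Sm,
    peaking near day 152 (about 2 June).  S0 and Sm are illustrative."""
    return S0 + Sm * math.cos(2.0 * math.pi * (t_days - t0) / period)

# The rate is maximal near the start of June and minimal six months later:
print(wimp_rate(152.5), wimp_rate(152.5 + 365.25 / 2))  # -> 1.02 0.98
```

The experimental task is to distinguish this few-per-cent seasonal variation from mundane seasonal effects (temperature, radon levels), which is why confirmation by independent experiments matters.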

The Heidelberg group reported on the two most sensitive germanium experiments, the Heidelberg-Moscow experiment and the Heidelberg Dark Matter Search (HDMS), both of which are located in the Gran Sasso laboratory. The Heidelberg-Moscow experiment, which also searches for neutrinoless double beta decay in enriched germanium-76, currently gives the most stringent limits on WIMP-nucleon scattering for raw data.

HDMS, a dedicated dark matter experiment, aims to improve this limit by one order of magnitude. Like the Heidelberg­Moscow experiment, it looks for a small ionization signal inside a high-purity germanium crystal.


With the expected sensitivity, HDMS will, like CDMS, be able to test the complete DAMA evidence region. The new project of the Heidelberg group, GENIUS, presented by Laura Baudis, aims for a sensitivity a thousand times better than that of present experiments. In its dark matter version, GENIUS will operate 40 "naked" germanium crystals (100 kg) in a 12 x 12 m tank of liquid nitrogen. If it reaches the target sensitivity, it could test almost the complete parameter space predicted for certain supersymmetric particles, thus deciding whether WIMPs make up the dominant part of our galactic halo.

Terrestrial indirect detection experiments search for high-energy neutrinos produced when WIMPs annihilate in the centre of the Earth or the Sun. The MACRO experiment in Gran Sasso looks for an excess of neutrino-induced upward-going muons, explained Teresa Montaruli (Bari). No WIMP annihilation signal has been found, but the sensitivity of the experiment sets stringent upper limits on the flux of upward-going muons and thus excludes significant portions of the parameter space predicted for supersymmetric particles.

An alternative indirect signature for dark matter particles would be a distorted spectrum of secondary antiprotons owing to the pair annihilation of neutralinos in the halo. Pierre Salati (Annecy) compared the low-energy antiproton flux measured by the BESS balloon experiment with theoretically predicted fluxes. While there is some room left for a possible signal of exotic origin, this cannot be seen as evidence for a supersymmetry-induced signal, he claimed. To disentangle such a signal from the secondary antiproton flux, much more sensitive detectors, such as the Alpha Magnetic Spectrometer (AMS), are needed.

Superheavy dark matter

Recently a new class of dark matter candidates, superheavy dark matter, has emerged. If one gives up the assumption that the particle was in thermal equilibrium in the early universe, explained Edward Kolb (Chicago), then its present abundance is no longer determined by annihilation, and much heavier particles, the formidable-sounding WIMPZILLAs, are allowed. There are two necessary conditions for WIMPZILLAs: they must be stable, or at least have a lifetime much greater than the age of the universe; and their interaction rate must be sufficiently weak that thermal equilibrium with the primordial plasma was never attained. Kolb presented a number of ways in which such a particle could have been created, such as gravitational production during the transition between an inflationary and a matter- or radiation-dominated universe, or during the defrosting phase after inflation.

Like the new millennium, dark matter could be just around the corner. The next meeting, DARK2000, will take place in Heidelberg. DARK98 was organized by H V Klapdor-Kleingrothaus (with Laura Baudis as scientific secretary) from the Max Planck Institut für Kernphysik, Heidelberg.

CP violation gets clearer


New data from the KTeV experiment at Fermilab blow away some of the fog around the mystery of CP violation and underline the effects suggested by earlier results from CERN. It is now clearly established that CP is violated in the way the six known quarks decay and transform into each other.

In 1956, physicists were shocked to discover that the weak force is sensitive to direction and can differentiate left from right. With theoretical foundations crumbling, physicists proposed a new girder to support their theories: this time the combined CP symmetry mirror that changes particles to antiparticles as it reflects from left to right. In the CP mirror, a right-handed particle reflects as a left-handed antiparticle, and vice versa.

If CP symmetry is good, the neutral kaon should exist in two forms: a longlived one decaying into three pions, and a shortlived one decaying into two pions. In 1964, physicists received another shock when they found that, in the decays of the neutral kaon, CP too is violated. Longlived kaons can decay into two pions.

There are two possible explanations for this CP violation. The longlived kaon could be a mixture of two states that are even and odd under CP reflection. This has long been known to be the case: the even-CP state can decay into two pions and introduce a CP-violating component for the longlived kaon.

The other possibility, called direct CP violation, is that the CP-odd state decays directly into the “forbidden” two pion mode.

What causes the kaon states to mix under CP? Using the six known quarks, this can be accommodated in the quark transformations, which subtly rearrange the incoming and outgoing quark configurations, switching a neutral kaon to its antiparticle. However, CP mixing could also be due to kaons transforming into each other via some other mechanism. In this case, direct CP violation (CP violation in the decay process itself) would not be possible.

To unravel these two alternatives demands the careful measurement of direct CP violation via the "ratio of ratios": the ratio of longlived kaons decaying into two neutral pions to those going into two charged pions, divided by the same ratio for shortlived kaons. If this ratio of ratios turns out to be different from one, this demonstrates that quark transformations are responsible for CP violation.
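In the standard notation, this double ratio is related to the direct CP violation parameter ε′/ε by a good approximation:

```latex
R \;=\; \frac{\Gamma(K_L \to \pi^0\pi^0)\,/\,\Gamma(K_L \to \pi^+\pi^-)}
             {\Gamma(K_S \to \pi^0\pi^0)\,/\,\Gamma(K_S \to \pi^+\pi^-)}
  \;\approx\; 1 \;-\; 6\,\mathrm{Re}(\epsilon'/\epsilon)
```

so a measured departure of R from unity translates directly into a non-zero Re(ε′/ε), the signature of direct CP violation; the factor of 6 is the "numerical factor" by which the experimental numbers quoted below are divided.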

For several years the two main experiments, NA31 at CERN and E731 at Fermilab, begged to differ, the former giving a difference of the ratio from unity (divided by a numerical factor) of (2.3 ± 0.65) × 10⁻³, and the latter a much smaller figure, compatible with zero. Physicists held their breath.

Now the KTeV experiment, using 20% of its data collected in 1996 and 1997, comes in at (2.8 ± 0.41) × 10⁻³, in tune with the earlier CERN figure, but slightly higher. CP would appear to be violated directly in the decay process in such a way that quark mechanisms contribute.

Meanwhile, the big NA48 next-generation CERN study has been collecting data and will be the next to report. Some 35 years after its discovery, CP violation remains a mystery, but at least the mystery is gradually becoming clearer. The new result is good news for new experiments setting out to measure CP violation using B particles, containing the fifth, “beauty”, or “b”, quark, where the levels of CP violation are now expected to be much higher than those using neutral kaons.

Superstrings, black holes and gauge theories

Quantum field theories have had great success in describing elementary particles and their interactions, and a continual objective has been to apply these successful methods to gravity as well.

The natural length scale at which quantum gravity becomes important is the Planck length, l_p ≈ 1.6 × 10⁻³³ cm. The corresponding energy scale is the Planck mass, M_p ≈ 1.2 × 10¹⁹ GeV. At this scale the effect of gravity is comparable to that of the other forces, and it is the natural energy for the unification of gravity with the other interactions. Evidently this energy is far beyond the reach of present accelerators. Thus, at least in the near future, experimental tests of a unified theory of gravity with the other interactions are bound to be indirect.
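Both numbers follow directly from the fundamental constants ħ, G and c via l_p = √(ħG/c³) and M_p = √(ħc/G); a quick arithmetic check:

```python
import math

# Fundamental constants in SI units
hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m s^-1
GeV  = 1.602176634e-10   # joules per GeV

# Planck length: the scale at which quantum gravity becomes important
l_p = math.sqrt(hbar * G / c**3)            # metres

# Planck mass, expressed as an energy via E = m c^2
M_p = math.sqrt(hbar * c / G) * c**2 / GeV  # GeV

print(f"{l_p * 100:.2g} cm")   # -> 1.6e-33 cm
print(f"{M_p:.2g} GeV")        # -> 1.2e+19 GeV
```

For comparison, present accelerators probe energies of order 10³ GeV, sixteen orders of magnitude below the Planck scale, which is why direct tests are out of reach.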

When we try to quantize the classical theory of gravity, we encounter short-distance (high-energy) divergences (infinities) that cannot be controlled by the standard renormalization schemes of quantum field theory. These have a physical meaning: they signal that the theory is only valid up to a certain energy scale. Beyond that there is new physics that requires a different description.

Quantum gravity

Such a phenomenon is not unfamiliar. The short-distance divergences of Fermi’s original theory of the weak interactions (with four particles meeting at one space-time point) signal that the description is only valid for energies less than the masses of the W and Z carrier particles. The divergences are resolved by introducing these particles in the Glashow-Salam-Weinberg theory.

We expect that the divergences of quantum gravity would similarly be resolved by introducing the correct short distance description that captures the new physics. Although years of effort have been devoted to finding such a description, only one candidate has emerged to describe the new short-distance physics: superstrings.


This theory requires radically new thinking. In superstring theory, the graviton (the carrier of the force of gravity) and all other elementary particles are vibrational modes of a string (figure 1). The typical string size is the Planck length, which means that, at the length scales probed by current experiments, the string appears point-like.

The jump from conventional field theories of point-like objects to a theory of one-dimensional objects has striking implications. The vibration spectrum of the string contains a massless spin-2 particle: the graviton. Its long wavelength interactions are described by Einstein’s theory of General Relativity. Thus General Relativity may be viewed as a prediction of string theory!

Because string interactions are spread over an extended region of space-time, the quantum theory of strings sidesteps short-distance divergences and provides a finite theory of quantum gravity. Besides the graviton, the vibration spectrum of the string contains other excited oscillators that have the properties of other gauge particles, the carriers of the various forces. This makes the theory a promising (and so far the only) candidate for a unification of all of the particle interactions with gravity.

In the absence of direct experimental data to confront string theory, research in this field is largely guided by the requirement for the internal consistency of the theory. This turns out to dictate very stringent constraints. Again, this is not unfamiliar to particle physicists. When resolving the short-distance divergences of Fermi weak interaction theory, the space-time and internal symmetries provide stringent constraints and guide us to the solution.

Superstring theories

Two important features of string theory are implied by the consistency requirement. First, superstring theory is consistent only in 9+1 space-time dimensions. This seems to contradict the fact that the world that we see has 3+1 space-time dimensions. Second, there are five consistent superstring theories: Type IIA, Type IIB, Type I, E8 x E8 Heterotic and SO(32) Heterotic. Type I is a theory of unoriented open and closed strings; the others are theories of oriented closed strings. Which should be preferred?

All five possess supersymmetry, hence the name superstrings. According to supersymmetry theory, any boson (integer spin particle) has a fermionic (half-integer spin) superpartner and vice versa. One way to view supersymmetric field theories is as field theories in superspace, a space with extra fermionic quantum dimensions. Many physicists expect that supersymmetry exists at the teraelectronvolt (TeV) scale, so that the new "superpartner" particles could be seen by CERN's LHC proton collider.

Compactification and stringy geometry

The time and the three spatial dimensions that we see are approximately flat and infinite. They are also expanding. Just after the Big Bang they were highly curved and small. It is possible that while these four dimensions expanded, other dimensions did not expand, remaining small and highly curved.

Superstring theory says that we live in 9+1 space-time dimensions, six of which are small and compact, while the time and three spatial dimensions have expanded and are infinite.

How can we see these extra six dimensions? As long as our experiments cannot reach the energies needed to probe such small distances, the world will look to us 3+1-dimensional and the extra dimensions can be probed only indirectly via their effect on 3+1-dimensional physics. It is not known at what energies the hidden compact dimensions will open up. The possibility that new dimensions or strings will be seen by the LHC is not ruled out by the current experimental data.

Superstring theory employs the Kaluza-Klein mechanism to unify gravitation and gauge interactions using higher dimensions. In such theories the higher dimensional graviton field appears as a graviton, photon or scalar in the 3+1-dimensional world, depending on whether its spin is aligned along the infinite or compact dimensions.
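Schematically, with a single compact extra dimension labelled by a coordinate y, the components of the higher-dimensional metric g_MN sort themselves into distinct 3+1-dimensional fields:

```latex
g_{MN} \;\longrightarrow\;
\begin{cases}
  g_{\mu\nu}(x) & \text{both indices along the large dimensions: a graviton,}\\[2pt]
  g_{\mu y}(x) \equiv A_\mu(x) & \text{one index along the circle: a photon-like gauge field,}\\[2pt]
  g_{yy}(x) \equiv \phi(x) & \text{both indices along the circle: a scalar.}
\end{cases}
```

This is the sense in which a gauge field and gravity can share a single higher-dimensional origin.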

The number of consistent compactifications of the extra six co-ordinates is large. Which compactification to choose is the prize question. The answer is hidden in the dynamics of superstring theory. Using a limited (perturbative) framework, one can attempt a qualitative study of the 3+1-dimensional phenomenology obtained from different compactifications. It is encouraging that some of these compactifications result in 3+1 dimensional models that have qualitative features such as gauge groups and matter representations of plausible grand unification models. Interestingly the low-mass fermions appear in families, the number of which is determined by the topology of the compact space.

T-duality

Not all of the compactifications are distinguishable. To illustrate this, take one spatial co-ordinate to be a circle of radius R. There are two types of excitations. The first, which is familiar from theories of point-like objects, results from the quantization of momentum along the circle. These are called Kaluza-Klein excitations. The second type arises from the closed string winding around the circle. These are called winding mode excitations. This is a new feature that does not exist in point-like theories. When we map the size of the radius, R, of the circle to its inverse, 1/R, with the string scale set to one, the two types of excitations are exchanged and the theory remains invariant. There is no way to distinguish the compactification on a circle of radius R from a compactification on a circle of radius 1/R. This means that the classical geometrical concepts break down at short distances and the classical geometry is replaced by stringy geometry.
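The symmetry is explicit in the mass spectrum of a closed string on a circle of radius R (in string units, with the string scale set to one), where n labels the Kaluza-Klein momentum and w the winding number:

```latex
M^2 \;=\; \frac{n^2}{R^2} \;+\; w^2 R^2 \;+\; \text{oscillator contributions},
\qquad n, w \in \mathbb{Z}.
```

The spectrum is unchanged under R → 1/R combined with the exchange n ↔ w, so no measurement can tell the two compactifications apart.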

In physical terms, this implies a modification of the well known uncertainty principle. The spatial resolution, Δx, has a lower bound dictated not just by the inverse of the momentum spread, Δp, but also by the string size. The mapping of the radius of the compactification to its inverse, exchanging Kaluza-Klein excitations with winding modes, is called T-duality.
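This modified relation is often written (with ħ = 1 and α′ denoting the squared string length) as:

```latex
\Delta x \;\gtrsim\; \frac{1}{\Delta p} \;+\; \alpha' \, \Delta p .
```

Increasing Δp beyond the string scale no longer improves the spatial resolution; the bound is minimized at Δp ~ 1/√α′, giving a smallest resolvable distance of order the string length.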

Another example of stringy geometry is "mirror symmetry". In the previous example, both circles had the same topology and different sizes. In contrast, mirror symmetry is an example of stringy geometry in which two six-dimensional spaces, called Calabi-Yau manifolds, with different topology cannot be distinguished by the string probe.
