New precision result on CP violation

The latest precision measurement of CP violation, matter-antimatter asymmetry, by the big NA48 experiment at CERN underlines the importance of understanding this delicate mechanism, which probably played a major role in shaping the particle scenario that emerged from the Big Bang.

In CP (charge/parity) symmetry the physics of left-handed particles is the same as that of right-handed antiparticles. This idea was suggested after physicists had been shocked in 1956 to discover that the weak interactions of atomic nuclei (beta decay) can differentiate between left and right – weak interactions violate P (parity) symmetry.

In 1964, physicists received another shock when they discovered that the more elaborate CP symmetry is also flawed. Under certain circumstances, nature differentiates between matter and antimatter – what are labelled as particles and as antiparticles is not arbitrary.

The reason for CP violation is still not understood, but whatever it is, it probably helped to shape a universe that was matter-antimatter symmetric immediately after the Big Bang into the matter-dominated universe that we know. To our knowledge, antiparticles do not exist naturally and can only be synthesized when high-energy interactions produce particle-antiparticle pairs.

Last year, two major experiments, KTeV at Fermilab and NA48, reported a non-zero value for a vital CP violating parameter that shows that CP is violated directly in the actual decays of the component quarks. CP violation is not simply due to the subtle mixing of neutral particles and antiparticles.

The traditional stage for CP violation is an enigmatic particle – the neutral kaon. It exists in two forms that are particle and antiparticle of each other, distinguished only by a quantum number – strangeness – which is not conserved in weak interactions. Because of this, the two neutral kaons mix.

If CP were an exact symmetry, the neutral kaon would exist in two forms – a short-lived variety decaying relatively easily into two pions, and a long-lived version for which the two-pion decay is forbidden and which has to struggle to produce three pions instead. However, owing to CP violation, this distinction is not total – three parts per thousand of long-lived kaons produce two pions.

To establish whether direct CP violation via the quark decay route contributes to this, physicists must carefully compare two ratios. The first is the rate of long-lived kaons decaying into two neutral pions compared with that into two charged pions, and the second is the equivalent pion-pair decay ratio for short-lived kaons. Making these measurements, involving very similar particle signatures, is extremely difficult.

If these two ratios are not exactly equal, then quark effects do contribute to CP violation. The relevant parameter used by physicists is the difference of this ratio of ratios from unity, divided by a numerical factor. For several years the situation had been unclear, until last year, when KTeV reported a parameter of (28 ± 4.1) × 10⁻⁴ (thought by theorists to be uncomfortably high) and NA48, using data collected in 1997, reported (18.5 ± 7.3) × 10⁻⁴. The new NA48 number, including 1998 data with several times the number of kaon decays seen previously, is (14 ± 4.3) × 10⁻⁴. The resulting new world average is less surprising to theorists.
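
In symbols, the quantity compared is a double ratio of decay rates. A standard way of writing it (a sketch of the conventional definition, not a formula quoted from the experiments) is:

```latex
R \;=\; \frac{\Gamma(K_L \to \pi^0\pi^0)\,/\,\Gamma(K_S \to \pi^0\pi^0)}
             {\Gamma(K_L \to \pi^+\pi^-)\,/\,\Gamma(K_S \to \pi^+\pi^-)}
\;\approx\; 1 - 6\,\mathrm{Re}(\varepsilon'/\varepsilon)
```

The parameter quoted above is Re(ε′/ε); R = 1, i.e. Re(ε′/ε) = 0, would mean no direct CP violation.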

Data from 129 days of NA48 running in 1999 are still being analysed, but after this run was complete an accident destroyed the experiment’s carbon fibre beam pipe and damaged some of the detector modules. The apparatus is being repaired to enable more data to be collected and to improve the precision of its final result.

Not enough stellar mass objects to fill the galactic halo?

The mass of our galaxy (the Milky Way) can be computed from the dynamics of its rotation and of the motion of its satellites. It can also be evaluated by adding up its visible components, primarily stars. That these two estimates disagree by a factor of 5-10 constitutes the problem of galactic dark matter.

Either the Newtonian/Einsteinian laws of dynamics are wrong at the galactic scale, or there exists some form of galactic matter that does not emit or absorb enough electromagnetic radiation to be directly “visible”. Studies of many other spiral galaxies confirm that this problem is not unique to the Milky Way.

Originally proposed in 1986 by B Paczynski of Princeton, gravitational microlensing is a novel and indirect way to search for galactic dark matter through the deflection and magnification of the light of extragalactic stars. The search for microlensing has recently shed new light on the galactic dark matter puzzle.

Microlensing

In 1990, three groups began the search for gravitational microlensing. The main problem was the inherent large scale of such surveys. To produce a detectable magnification of the light of a distant star, an intervening compact massive object has to come closer to the star’s line of sight than one milli-arcsecond, or five nanoradians (the angle subtended on Earth by an Apollo mission lunar jeep); the tighter the alignment, the larger the magnification.

This happens so seldom that one expects less than one star in a million to be affected significantly at any given time, hence the necessity to survey some 10 million stars over many years. In contrast, variable stars are more than a thousand times as frequent and constitute a serious experimental background.

The shape of the microlensing magnification curve is predictable and does not depend on wavelength, unlike that of most variable stars. The phenomenon is transient, because of the motion of the dark lensing object with respect to the distant star. Its duration scales as the square root of the lensing object's mass, and this can be used to estimate these masses.
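
This predictable shape can be made concrete in a few lines of code. The following sketch (in Python; the event timescale and impact parameter are illustrative assumptions, not survey values) evaluates the standard point-lens light curve:

```python
import numpy as np

def magnification(u):
    """Point-source, point-lens magnification for an impact parameter u
    (in units of the Einstein radius): A = (u^2 + 2) / (u sqrt(u^2 + 4))."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def light_curve(t, t0, u0, tE):
    """Magnification versus time as the lens drifts past the line of
    sight: u(t) = sqrt(u0^2 + ((t - t0)/tE)^2).  The Einstein-radius
    crossing time tE scales as the square root of the lens mass."""
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return magnification(u)

# Illustrative event: closest approach at day 0, impact parameter 0.3
# Einstein radii, 40-day timescale (all assumed values).
t = np.linspace(-100.0, 100.0, 201)            # days
A = light_curve(t, t0=0.0, u0=0.3, tE=40.0)
print(f"peak magnification ~ {A.max():.2f}")   # ~3.4 for u0 = 0.3
```

The symmetric, achromatic bump this produces is what separates genuine microlensing from the far more numerous variable stars.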

To simplify, one could say that two of the groups, EROS (Expérience de Recherche d’Objets Sombres – an experiment to search for dark objects) and MACHO (Massive Astronomical Compact Halo Objects), were most concerned with the dark matter problem. To probe the content of the galactic halo, they chose to monitor stars in the Magellanic Clouds – two irregular dwarf galaxies, satellites of the Milky Way, that lie close to the celestial South Pole.

The third group, OGLE (Optical Gravitational Lensing Experiment), chose to look first for microlensing where it was bound to find some – the centre of our galaxy. The microlensing rate there, owing to known low-mass stars, was expected to be about one in a million per year. In contrast, the rate towards the Magellanic Clouds could be anywhere between zero, if there are no compact dark objects in the galactic halo, and about a thousand times the galactic centre rate, if the halo is swarming with lunar-mass dark bodies.

The main goal of EROS and MACHO was to detect a few microlensing events caused by brown dwarfs – would-be stars not massive enough to burn via thermonuclear reactions. These objects, between a tenth and a hundredth of the mass of the Sun, would give rise to a microlensing rate of the order of the expected galactic centre rate.

Divergence

In September 1993 the discovery of the first microlensing candidates by the three groups aroused high hopes that the galactic dark matter problem was about to be solved. However, in the following years a gradual divergence appeared between the EROS and MACHO results. Based on two years of Large Magellanic Cloud (LMC) CCD camera images, the MACHO group presented its result as pointing to a galactic halo half-full of 0.5 ± 0.2 solar mass objects, and compatible with being composed entirely of such objects.

In contrast the EROS group observed such a small number of candidate microlensing events that it published only upper limits, first based on a photographic plate LMC survey (1990-4), and then on a survey, started in 1996, of the Small Magellanic Cloud (SMC), which uses two large CCD cameras. The EROS limits excluded, in particular, a halo full of 0.5 solar mass objects.

Despite these somewhat inconsistent results, agreement was reached on one important point: because all microlensing candidates observed by MACHO and EROS lasted longer than a month, the halo could contain no more than 10-20% of dark objects in the wide mass range between the mass of the planet Mercury and one-tenth that of the Sun, that is from 10⁻⁷ to 0.1 solar masses. This excluded brown dwarfs – the dark matter candidate that had been the main motivation for this search (figure 1).

Reconciliation

Fortunately the past months have witnessed a reconciliation of the MACHO and EROS findings. The latter presented results from the first two years of an ongoing six-year survey of 17 million LMC stars, which produced a meagre crop of two new microlensing candidates. Combined with their previous limits, this enables them to exclude a galactic halo composed entirely of objects of up to four solar masses – quite a respectable mass for a stellar object. (For a halo of 0.5 solar mass objects, the upper limit is now near 30%; figure 1.)

MACHO has analysed almost six years of LMC images out of seven-and-a-half years of surveying, and it has now stopped taking data. From its 13-17 observed microlensing candidates, it now favours a 20% contribution of 0.5 solar mass objects to the halo mass budget, but these are compatible with halo mass fractions ranging from 8% to 50%.

The durations of the candidates are similar, so the two sets of results are compatible. However, the two groups interpret them differently. The MACHO collaboration favours an interpretation in terms of galactic halo objects. The distribution of stellar luminosities of its microlensing candidates agrees with that of LMC stars, as expected, given that the dark lensing objects do not choose which LMC star they lens. The distribution of magnifications is also compatible with a random distance of the lens from the star's line of sight.

These two tests could have revealed a possible contamination of the sample by intrinsic variable stars, but they do not shed light on the position of the dark lenses. This can be achieved by studying the spatial distribution of microlensing candidates, which should follow that of LMC stars in the case of halo lenses, or be more peaked towards the LMC centre if the lenses are low-mass LMC stars.

The MACHO group finds that the observed distribution favours halo lenses, but that it cannot completely exclude LMC lenses. The two options are, of course, very different in terms of the galactic dark matter composition.

With three to four microlensing candidates towards the Magellanic Clouds over eight years, EROS has a harder time comparing measured and expected distributions. However, it makes the following observations. Compared with MACHO, EROS chose to monitor more stars less frequently, spread over a solid angle three times wider. Thus the smaller EROS lensing rate could be interpreted as a spatial dependence of the event rate, favouring the LMC-lens hypothesis. Moreover, while MACHO seems rather confident that its sample is background-free, no such claim is heard from EROS.

Small Magellanic Cloud

Finally, there is the question of the SMC, where one candidate was seen by both groups in 1997. This event is longer than those of all EROS or MACHO LMC candidates, which does not favour its interpretation as a halo lens: as the Magellanic Clouds are separated by only 20° in the sky and are at comparable distances from us, one would expect the characteristics of (halo) microlensing events towards both clouds to be very similar.

More quantitatively, the probability of this event being compatible with the LMC event durations is only 3%. On the contrary, as stellar velocities in the SMC are smaller than those in the LMC, it would be natural for SMC microlensing events to last longer if the lenses belong to the Magellanic Clouds.

In addition, the SMC event lasted long enough that the Earth had time to complete three-quarters of its orbit around the Sun during the magnification. This could have led to observable deformations of the microlensing light curve. Such effects are not seen in the EROS and MACHO data, implying that the lens is either a low-mass SMC star or a halo object of a few solar masses. In the latter case its mass would not be compatible with that deduced from lenses towards the LMC.

Thus EROS concludes that this particular lens lies in the SMC. Much is expected from the comparison of LMC and SMC events but, because the SMC contains only a fifth as many stars, no definitive conclusion can yet be reached. Nevertheless, EROS expects to be able to make a statement with an analysis of four years of data. The MACHO analysis of SMC images is also eagerly awaited.

If the MACHO interpretation is correct and there are plenty of objects of half a solar mass in the galactic halo, the next challenge is to find out what they are. They cannot be ordinary stars, because these would be bright enough to be visible. One exotic scenario is primordial black holes made in the early universe at the time of the quark-hadron transition. Old white dwarf stars are another possibility: there are counter-arguments to their abundant presence in the halo, but they have the advantage that they could be detected by looking for nearby, dim high-velocity objects. Some groups are conducting such searches, including EROS. One group, led by R Ibata (Max Planck, Heidelberg), has claimed the detection of a few halo white dwarfs, but their interpretation as halo objects is unconfirmed.

Valuable results

Whatever the future developments involving galactic dark matter, microlensing surveys have already provided concrete results. The lensing probability towards the centre of the galaxy was found by the OGLE and MACHO groups to be three times as large as expected, so that microlensing can teach us much about galactic structure.

The surveys have also yielded many variable stars. This has allowed, for example, studies with unprecedented statistics of Magellanic Cloud Cepheids, a pivotal cosmic yardstick, as well as the discovery of new types of variable stars.

Finally, the monitoring of long microlensing events of bright stars provides a novel way to look for planets around the lenses. Compared with the current, highly successful searches that use precise measurements of stellar radial velocities, microlensing should be sensitive to lower-mass planets orbiting more distant and more typical stars.

There is a good chance that, after reconciling their results, EROS and MACHO will soon agree. A conclusion that can already be drawn is that the largest part of galactic dark matter is not composed of dark astronomical objects lighter than a few solar masses.

As far as microlensing is concerned, the search should now be extended to longer events corresponding to heavier lensing objects. In parallel, the groups looking for other dark matter candidates, such as Weakly Interacting Massive Particles, with underground or underwater detectors or at large particle accelerators, will certainly be encouraged by the recent microlensing results.

Physicists brim with confidence

The first conference for high-energy physicists devoted entirely to statistical data analysis was the Workshop on Confidence Limits, held at CERN on 17-18 January. The idea was to bring together a small group of specialists, but interest proved to be so great that attendance was finally limited by the size of the CERN Council Room to 136 participants. Others were able to follow the action by video retransmission to a room nearby. A second workshop on the same topic was held at Fermilab at the end of March.

Confidence limits are what scientists use to express the uncertainty in their measurements. They are a generalization of the more common “experimental error” recognizable by the plus-or-minus sign, as in x = 2.5 ± 0.3, which means that 0.3 is the expected error (the standard deviation) of the value 2.5. In more complicated real-life situations, errors may be asymmetric or even completely one-sided, the latter being the case when one sets an upper limit on a phenomenon looked for but not seen. When more than one parameter is estimated simultaneously, the confidence limits become a confidence region, which may assume bizarre shapes, as happens in the search for neutrino oscillations.

Analysis methods

Why the sudden interest in confidence limits? As co-convenors Louis Lyons (Oxford) and Fred James (CERN) had realized, the power and sophistication of modern experiments, probing rare phenomena and searching for ever-more exotic particles, require the most advanced techniques, not only in accelerator and detector design, but also in statistical analysis, to bring out all of the information contained in the data (but not more!).

As invited speaker Bob Cousins (UCLA) wrote several years ago in the American Journal of Physics: “Physicists embarking on seemingly routine error analyses are finding themselves grappling with major conceptual issues which have divided the statistics community for years. While the philosophical aspects of the debate may be endless, a practising experimenter must choose a way to report results.”

One of these “major conceptual issues” is the decades-old debate between the two methodologies: frequentist (or classical) methods, which have been the basis of almost all scientific statistics ever since they were developed in the early part of the 20th century; and Bayesian methods, which are much older but have been making a comeback and are considered by some to have distinct advantages.

With this in mind, the invited speakers included prominent proponents of both of the techniques. In the week leading up to the workshop, Fred James gave a series of lectures on statistics in the CERN Academic Training series. This provided some useful background for a number of the participants. The workshop benefited from a special Web site that was designed by Yves Perrin and that now contains write-ups of the majority of the talks.

One of the items that caused some amusement was the list of required reading material to be studied beforehand. The rumour that participants would be required to pass a written examination was untrue. Also unfounded was the suggestion that the convenors had something to do with the headline of the CERN weekly Bulletin that appeared on the opening day of the workshop: “CERN confronts the New Millennium with Confidence”.

In his introductory talk, Fred James proposed some ground rules: all methods should be based on recognized statistical principles and should therefore be labelled either classical or Bayesian, and for classical methods the frequentist properties should be known, especially the coverage. A method is said to have coverage if the 90% confidence limits are guaranteed to contain the true value of the parameter in 90% of cases, were the experiment to be repeated many times, no matter what the true value is.
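
Coverage is straightforward to check by simulation. The sketch below (Python; the Gaussian measurement model and interval recipe are illustrative assumptions, not a method presented at the workshop) repeats a toy experiment many times and counts how often the quoted interval contains the truth:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def coverage(true_value, sigma=1.0, n_trials=100_000, z=1.645):
    """Fraction of repeated experiments in which the central 90%
    interval [x - z*sigma, x + z*sigma] contains the true value;
    z = 1.645 gives 90% for a Gaussian measurement x ~ N(true, sigma)."""
    x = rng.normal(true_value, sigma, n_trials)
    covered = (x - z * sigma <= true_value) & (true_value <= x + z * sigma)
    return covered.mean()

# A method has coverage only if this holds for *any* true value.
for mu in [0.0, 2.5, 10.0]:
    print(f"true value {mu:5.1f}: empirical coverage {coverage(mu):.3f}")
# each line prints ~0.900
```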

Bayesian methods, on the other hand, require the specification of a prior distribution of “degrees of belief” in the values of the parameter being measured, so proponents of Bayesian methods were invited to explain what prior distribution was assumed and how they justified it.

Bayesians were also asked to explain how they define probability and how they reconcile their definition with the probability of quantum mechanics, which is a frequentist probability. James then offered the view that, in the last analysis, a method would be judged on how well its confidence limits fulfilled three criteria:

* Do they give a good measure of the experiment’s sensitivity?

* Can they be combined with limits given by other experiments?

* Can they be used as input to a personal judgement about the validity of a hypothesis?

Provocative views

Bayesian methods were illustrated with several examples by Harrison Prosper (Florida) and Giulio D’Agostini (Rome). D’Agostini gave some very provocative views, completely rejecting orthodox (classical) statistics. He said: “Coverage means nothing to me.” Prosper presented a more moderate Bayesian viewpoint, explaining that he was led to Bayesian methods because the frequentist approach failed to answer the questions that he wanted to answer. He was aware of the arbitrariness inherent in the choice of the Bayesian prior, but felt that it was no worse than that of the ensemble of virtual experiments necessary for the frequentist concept of coverage. He also found that the practical problem of nuisance parameters (such as incompletely known background or efficiency) is easier to handle in the Bayesian approach.

Bob Cousins (UCLA) summarized the main problems besetting the two approaches, contrasting the questions addressed by the various methods and the different kinds of input required. The influential 1998 article that he wrote with Gary Feldman (Harvard) on a “unified approach” to classical confidence limits was quickly adopted by the neutrino community and already occupied an important place in the 1998 edition of the widely used statistics section of the Review of Particle Physics, published by the Particle Data Group. He summarized his hopes for this workshop with a list of 10 points “we might agree on”.
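
To give a flavour of the unified approach, here is a minimal sketch (Python) for the simplest case it handles – a counting experiment with known mean background and no nuisance parameters; the grids and example numbers are illustrative choices, not part of the published prescription:

```python
import numpy as np
from scipy.stats import poisson

def feldman_cousins(n_obs, b, cl=0.90, mu_max=30.0, n_max=60):
    """Feldman-Cousins interval for a Poisson signal mu over a known
    background b, given n_obs observed events.  For each candidate mu,
    counts n are ranked by the likelihood ratio
    R(n) = P(n | mu + b) / P(n | mu_best(n) + b), mu_best = max(0, n - b),
    and accepted in decreasing R until the summed probability reaches cl;
    mu is in the interval if n_obs lies in its acceptance set."""
    n = np.arange(n_max + 1)
    mu_best = np.maximum(0.0, n - b)
    p_best = poisson.pmf(n, mu_best + b)
    accepted = []
    for mu in np.arange(0.0, mu_max, 0.01):
        p = poisson.pmf(n, mu + b)
        order = np.argsort(p / p_best)[::-1]      # rank by likelihood ratio
        cum = np.cumsum(p[order])
        acc = n[order[: np.searchsorted(cum, cl) + 1]]
        if n_obs in acc:
            accepted.append(mu)
    return min(accepted), max(accepted)

# Example: 4 events observed over an expected background of 3.
print(feldman_cousins(n_obs=4, b=3.0))   # roughly (0.0, 5.6) at 90% CL
```

By construction, the same recipe returns an upper limit when the data are compatible with background alone and a two-sided interval when they are not – which is precisely the “unification”.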

Different approaches

The rest of the workshop largely confirmed Cousins’ hopes, because there was little serious opposition to any of his points, and in the final session the most important practical suggestion – that experiments should report their likelihood functions – was actually put to a vote, with the result that there were no voices against.

Variations of the method of Feldman and Cousins were proposed by Carlo Giunti (Turin), Giovanni Punzi (Pisa), and Byron Roe and Michael Woodroofe (Michigan). Woodroofe and Peter Clifford (Oxford) were the two statisticians invited to the workshop. They played an important role as arbiters of the degree to which various methods were in agreement with accepted statistical theory and practice. From time to time they gently reminded the physicists that they were reinventing the wheel, and that books had been written by statisticians since the famous tome by Kendall and Stuart.

Searches for the Higgs boson are being conducted at CERN’s LEP Collider and at Fermilab’s Tevatron. At LEP, Higgs events are expected to have a relatively clear signature, so selection cuts to reduce unwanted backgrounds can be fairly tight, and the number of candidate events is relatively small.

To make optimum use of the small statistics, it has been found that a modified frequentist approach – the CLs method, described by Alex Read – provides, on average, more stringent limits than a Bayesian method. The required computing time is significant, so techniques to help reduce this were discussed.
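
In its simplest counting-experiment form (a sketch under that assumption; the real LEP analyses use full likelihood-ratio test statistics rather than a single event count), the CLs prescription divides the signal-plus-background confidence by the background-only one:

```python
from scipy.stats import poisson

def cls(n_obs, s, b):
    """CLs = CL_{s+b} / CL_b for a counting experiment, with
    CL_{s+b} = P(n <= n_obs | s + b) and CL_b = P(n <= n_obs | b).
    A signal s is declared excluded at 95% CL when CLs < 0.05."""
    return poisson.cdf(n_obs, s + b) / poisson.cdf(n_obs, b)

# Example: 2 events seen over an expected background of 3; scan the signal.
for s in [1.0, 3.0, 5.0, 7.0]:
    print(f"s = {s}: CLs = {cls(2, s, 3.0):.3f}")
```

Dividing by CL_b protects against excluding a signal to which the experiment has no real sensitivity when the background happens to fluctuate downwards.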

In contrast, Higgs searches in proton-antiproton collisions at the Tevatron are far more complicated and result in a larger background. Because of the larger numbers of events, the use of a Bayesian approach is quite reasonable here. These and other searches in the CDF experiment at the Tevatron were described by John Conway (Rutgers). The final session of the workshop was devoted to a panel discussion that was led by Glen Cowan (Royal Holloway), Gary Feldman, Don Groom (Berkeley), Tom Junk (Carleton) and Harrison Prosper.

Workshop participants had been encouraged to submit topics or questions to be discussed by the panel. As was fitting for a workshop devoted to small signals, the number of submitted questions was nil. This did not prevent an active discussion, with Bayesian versus frequentist views again being apparent. The overwhelming view was that the workshop had been very useful. It brought together many of the leading experts in the field, who appreciated the opportunity to hear other points of view, and to have detailed discussions about their methods. For most of the audience it provided a unique chance to learn about the advantages and limitations of the various methods available, and to hear about the underlying philosophy. Animated discussions spilled over from the sessions into the coffee breaks, lunch and dinner.

New challenges

Several problems clearly remain. Producing a single method that incorporates only the good features of each approach still looks somewhat Utopian. More accessible may be challenges such as how to combine different experiments in a relatively straightforward manner (this will be particularly important in Higgs searches in the Tevatron run due to start in 2001); dealing with “nuisance parameters” (such as uncertainties in detection efficiencies or backgrounds) in classical methods; and reducing computation time. It was also apparent that simply quoting a limit is not going to be enough. If it is to be useful, it is necessary to specify in some detail the assumptions and methods involved.

The organizers are now putting together proceedings, which will include all of the discussions and hopefully summaries of all of the talks. It should appear as a CERN Yellow Report. The audience at this workshop was predominantly from European laboratories. A similar meeting was held at Fermilab on 27-28 March.

Taking a new close look at the proton’s weak magnetism

Ever since Otto Stern surprised his colleagues in 1933 by announcing that the proton’s magnetic moment was some three times as large as expected, physicists have puzzled over the origin of this effect. During the past two summers, the SAMPLE experiment at the MIT-Bates Linear Accelerator Center has shed new light on this question by measuring the proton’s magnetism as seen by the weak, rather than the electromagnetic, interaction.

Although the weak interaction violates parity (left/right symmetry), it still tries to mimic electromagnetism, and this introduces a magnetic-like term, which was called “weak magnetism” by Gell-Mann in 1958.

The new measurement leads to the first direct information on how different quark “flavours” in the proton generate the magnetic moment. Because the electromagnetic and weak interactions are precisely related in the Standard Model, the new result can be combined with the proton’s ordinary magnetic moment (and that of the neutron, the proton’s iso-spin partner) to uncover the magnetic contributions of the separate quarks.

The experiment is an analogue of the classic electron-scattering experiments of Robert Hofstadter and his collaborators at Stanford in the 1950s. In the SAMPLE experiment, the electrons are polarized so that their spins are aligned either parallel or antiparallel to the beam direction. Scattering experiments with these two types of beam are sensitive to the mirror-symmetry (parity) violating nature of the weak interaction. However, the relative differences are only a few parts per million, presenting a significant experimental challenge.
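
Concretely, the measured quantity is the relative difference between the scattering rates for the two helicity states (the standard definition; the parts-per-million scale is the one quoted above, not a value computed here):

```latex
A_{PV} \;=\; \frac{\sigma_{+} - \sigma_{-}}{\sigma_{+} + \sigma_{-}} \;\sim\; \text{a few} \times 10^{-6}
```

where σ₊ and σ₋ are the cross-sections for electron spins parallel and antiparallel to the beam. Since the statistical error on such an asymmetry scales as 1/√N, reaching parts-per-million precision requires of order 10¹² detected events.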

An intense pulsed beam of polarized electrons from the Bates accelerator hits a liquid-hydrogen target. The backward-scattered electrons are detected with a large solid-angle air Cherenkov detector.

It is the strange quark contribution to the magnetic moment that is of the greatest interest, because such effects must come from the proton’s “sea” of virtual quark-antiquark pairs.

In the first SAMPLE experiment, the parity-violating asymmetry of the proton was measured. Using a theoretical estimate for the contribution of the weak axial vector current, the portion of the magnetic moment due to strange quarks comes out to be significantly positive.

To check the axial current contribution, a second measurement was made last summer using a deuterium target, where the strange quark effects from the proton and neutron are expected largely to cancel. The analysis will reveal the strange quark contribution to the proton magnetic moment.

These experiments, plus new measurements from the Jefferson Lab, mark the beginning of a programme to determine the contributions of strange quarks to the proton’s inner distributions of charge and magnetization.

SAMPLE is a collaboration between Caltech, Illinois, Louisiana Tech, Maryland, MIT, William and Mary, and Virginia Tech.

Workshop stresses potential of space technology

April sees a workshop on fundamental physics in space held at CERN and organized jointly by CERN and the European Space Agency (ESA). Interest in fundamental physics missions has never been greater, and the last few years have seen major space agencies make policy moves to encourage the generation of new ideas.

The aim of the workshop is to highlight the possibilities offered by space technology: fundamental physics experiments such as tests of general relativity and the equivalence principle; measurements of cosmological parameters; the study of gamma-ray bursts and high-energy cosmic rays; and the search for dark matter candidates, to name but a few.

On 5-6 April there will be a series of specialized sessions for invited participants, followed by an open session on 7 April in the CERN auditorium with summary talks by the seven special session conveners. The European Physical Society and the European Astronomical Society are also sponsoring the event.

Cracow establishes a tradition

Neutrinos always provide compelling physics. An example of this occurred earlier this year on 7-9 January, just after the Epiphany holiday, when more than 150 high-energy and nuclear physicists and astrophysicists from many countries met in Cracow, Poland, for the Epiphany Conference on Neutrinos in Physics and Astrophysics, which is organized jointly by the Institute of Nuclear Physics and the Jagellonian University of Cracow. First held in 1995, these January Cracow meetings have now become an established feature of the international physics calendar.

Interest in neutrinos, never lacking anyway, has been boosted by new evidence for neutrino oscillations – neutrino species (electron, muon and tau), long thought to be distinct and immutable, transform into each other. As well as surveying the experiments that led to this realization, the conference looked forward to new and planned experiments to investigate this phenomenon further.

Of special interest are the long-baseline studies, in which neutrinos produced by an accelerator beam are observed by detectors installed at a distant point, typically several hundred kilometres away, for a direct measurement of neutrino oscillations. These manifest themselves either by the disappearance of the neutrino species produced at the accelerator site or by the appearance of a different neutrino species, depending on the capabilities of the detector installed.

One such project, K2K in Japan, which uses the neutrino beam produced at KEK and the detectors of the Super-Kamiokande neutrino observatory, has just started operation. The status of this experiment and recent data on solar and atmospheric neutrinos from Super-Kamiokande were presented in Cracow by Danuta Kielczewska (Warsaw). Results on neutrino masses and mixing from other ongoing experiments were discussed by Jochen Bonn (Mainz), Yves Declais (Annecy) and Jonny Kleinfeller (Karlsruhe).

Two planned experiments designed to use neutrinos produced at CERN and detectors installed in the Gran Sasso tunnel in Italy – the ICANOE and OPERA projects – were described by Andre Rubbia (Zurich) and Stavros Katsanevas (Lyon) respectively. Adam Para (Fermilab) summarized US neutrino experiments, including MINOS, the 730 km baseline experiment using neutrinos from Fermilab, while Rob Edgecock (Rutherford Appleton) discussed the potential use of future muon colliders for super-long-baseline neutrino experiments.

Theoretical aspects were covered by Harald Fritzsch (Munich), who discussed potential connections between quark and neutrino mixings, Ferruccio Feruglio (Padova), who reviewed existing theories, and Marek Zralek (Katowice), who discussed the experimental constraints for Dirac neutrinos. The theory of unification and evolution of the neutrino masses was discussed by Stefan Pokorski (Warsaw) and Smaragda Lola (CERN).

On the subject of neutrinos in astrophysics, Wojciech Dziembowski (Warsaw) discussed the tests of the Standard Solar Model and production of solar neutrinos, Edwin Kolbe (Basel) talked about neutrino-nucleus interactions in stars, Henryk Wilczynski (Cracow) presented the neutrino aspect of the Pierre Auger cosmic-ray experiment and Anna Stasto (Cracow) discussed the penetration through the Earth of super-high-energy neutrinos.

The conference was summarized by 1988 Nobel laureate Jack Steinberger (CERN), who recalled the milestones of neutrino physics.

The first Cracow Epiphany Conference, in January 1995, was dedicated to Kacper Zalewski, one of the most creative and influential Cracow theoretical particle physicists, in honour of his 60th birthday. The subject was the physics of heavy quarks, one of Zalewski’s main fields of research. The success of that meeting encouraged Marek Jezabek, longtime Zalewski collaborator and (since last year) his successor as Head of Particle Theory Department of the Institute of Nuclear Physics in Cracow, to start a tradition. The idea was to change the subject of the conference every year and to attract the whole community of Cracow particle physicists working at the Jagellonian University, at the Institute of Nuclear Physics and at the Technical University of Mining and Metallurgy.

In 1996 the conference topic was proton structure, followed by W boson physics in 1997, the spin effects in particle physics in 1998 and electron-positron colliders in 1999.

The next Cracow Epiphany Conference, to be held on 5-7 January 2001, will cover b physics and CP violation. Further information is available from “epiphany@ifj.edu.pl”.

Heavy implications for the first second

About a microsecond after the Big Bang, the universe was a seething soup of quarks and gluons. As this soup cooled, it “froze” into protons and neutrons, supplying the raw material for the nuclei that appeared on the scene a few minutes later.

To check if this imagined scenario is correct, since 1986 experiments at CERN have been accelerating beams of nuclear particles to the highest possible energies and piling them into dense nuclear targets. Recreating what happened in the first microsecond of creation has so far taken many years of careful and painstaking work.

The goal has been to use the energy supplied by the nuclear beams to recreate tiny pockets of primordial quark-gluon plasma about the size of a big nucleus and watch them behave as “Little Bangs”. Theorists using simulation tools predict that this soup/plasma should be formed at a temperature of about 170 MeV (about 10¹² degrees, or 100 000 times the temperature at the centre of the Sun) with an energy concentration of about 1 GeV per cubic femtometre – seven times that of ordinary nuclear matter.
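
The conversion from particle-physics units to degrees is direct. As a worked check of the figure just quoted, using the Boltzmann constant:

```latex
T \;=\; \frac{E}{k_B} \;=\; \frac{170\ \mathrm{MeV}}{8.617\times 10^{-11}\ \mathrm{MeV\,K^{-1}}} \;\approx\; 2\times 10^{12}\ \mathrm{K}
```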

The milestones of the early universe, separated by only fractions of a second, nevertheless stretched over immense energy gaps as the Big Bang temperature plummeted. The Little Bang experiments too have to contend with vast swings of temperature/energy.

The experiments take snapshots of the particle patterns emerging from these Little Bangs, but these patterns, although embedded in the particle behaviour, are quickly masked by the surrounding nuclear debris. The challenge is to peer through this debris to glimpse the signature of the Little Bangs.

Ion beam experiments

The ion beams at CERN serve several large experiments, codenamed NA44, NA45, NA49, NA50, NA52, WA97/NA57 and WA98. Some of these studies use existing multipurpose detectors to investigate the fruit of the heavy-ion collisions. Others are special dedicated experiments to detect rare signatures.

On both the machine and the physics sides, the programme is an excellent example of collaboration in physics research. Scientists from institutes in more than 20 countries, including Italy, Japan, Germany, France, Portugal, Russia, Finland, India, Poland, Greece, Switzerland, the UK and the US, have participated in the experiments. The programme has also allowed a new productive partnership to develop between high-energy physicists and nuclear physicists, and it has considerably extended the number of scientists using CERN as a research base, with new research centres, some of them from far afield, joining the CERN research programme.

Estimates of the energy density established when the colliding nuclei coalesce point to several giga-electron-volts per cubic femtometre, suggesting that the theoretically expected critical energy threshold has been crossed.

One important quark signature is the J/psi particle, which is made of a charm quark and its antiquark. J/psis are rare because charm quarks are heavy. However, theorists suspected that the production of J/psis would be suppressed by the screening of the quark “colour” charge by the surrounding quark-gluon matter. A strong reduction in the number of J/psis leaving the fireball would suggest that hot quark-gluon plasma was initially present. This is exactly what the NA50 experiment saw.

Other particles – phi, rho and omega mesons – are composed of lighter quarks and antiquarks bound together. These mesons can be seen through the surrounding fog of dense matter via their decay into pairs of weakly interacting particles – for example, electron-positron pairs – which pierce through the surrounding strongly interacting material. In a quark-gluon plasma, the quarks and antiquarks find it difficult to lock onto each other and therefore their signals get smeared out, as seen in the NA45 experiment.

Another encouraging sign seen quite early in CERN's heavy-ion experiments was the increased production of particles containing strange quarks. The ion projectiles only contain up and down quarks – no strange quarks. High-energy proton-proton or electron-positron collisions provide enough energy to synthesize strange quark-antiquark pairs, but for the nucleus-nucleus collisions the fraction seen by the WA97 experiment was markedly higher. The greater the strangeness content of the emerging particles, the more their production levels were increased. For example, the yield of Omega baryons containing three strange quarks was 15 times normal.

In principle the cleanest quark signals are the electromagnetic ones, and WA98 has seen some preliminary signs of an increased yield of single photons radiated by quarks.

Quark chemistry

The particles leaving the fireball retain signatures of their past, pointing back in time. In elastic scattering, when particles “bounce” off each other, only their momentum changes. As the fireball expands, the energy density decreases until the hadrons no longer interact – their momenta “freeze out”. The momentum distribution of the particles leaving the fireball gives a snapshot of when this freeze-out occurred, at a temperature of about 100 MeV.

What happened earlier, when the fireball was much hotter and denser and quark chemistry was still operating? Once the resulting subnuclear particles emerged, their composition reflected conditions at the moment the quarks froze. These particle distributions therefore reveal the chemical freeze-out temperature at which quarks became subnuclear particles – around 180 MeV, which agrees with the critical temperature predicted by theory.

Another experimental technique, based on interferometry, is a development of the pioneering astronomical work of Hanbury Brown and Twiss, adapted for particle physics by Giuseppe Cocconi at CERN in 1974. Looking at correlated pairs of particles, this technique measures sizes. The rate of expansion of the system is known, so size information can be extrapolated backwards to reveal the original energy density and to disentangle thermal motion from collective flow.
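
In its simplest form (a sketch of the commonly used Gaussian parameterization; λ and R here are generic fit parameters, not values from the CERN experiments), the two-particle correlation as a function of the pair momentum difference q encodes the source radius R:

```latex
C_2(q) \;=\; \frac{P(p_1, p_2)}{P(p_1)\,P(p_2)} \;\simeq\; 1 + \lambda\, e^{-q^2 R^2}
```

The width of the enhancement at small q is inversely proportional to the source size, which is how correlated pairs “measure sizes”.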

Heavy ions at CERN

The CERN results obtained with lead beams are the culmination of a long programme. A proposal in 1982 from heavy-ion enthusiasts suggested that the CERN machines could be used to accelerate beams of oxygen ions to extend interesting heavy-ion results obtained earlier at Darmstadt’s Unilac and Berkeley’s Bevalac.

Despite CERN’s crowded programme (the SPS proton-antiproton collider was then in full swing) and commitments to new projects such as LEP, development work for heavy-ion beams began at CERN through a Berkeley/CERN/Darmstadt collaboration. An important element was CERN’s Linac 1 injector, which had already learned how to handle deuterons and alpha particles. This was fitted with an electron cyclotron resonance ion source from Grenoble and a radiofrequency quadrupole from Berkeley.

In the mid-1980s, at the same time as CERN’s big machines were learning how to handle electrons and positrons in preparation for LEP, an experimental programme got under way at CERN’s SPS synchrotron using 200 GeV/nucleon oxygen ions. Complementary data came from a programme at Brookhaven’s AGS synchrotron with beams of 14.6 GeV/nucleon.

CERN soon extended the range of its experimental programme by supplying sulphur beams at 200 GeV/nucleon. From 1993, equipped with the new Linac 3 injector and its ion source, and in a collaboration between CERN and institutes in the Czech Republic, France, India, Italy, Germany, Sweden and Switzerland, the reach of the experiments was considerably extended using the much heavier lead projectiles.

The future

These results, announced on 10 February at CERN, resulted in a blaze of media hype. However, they are not definitive and have to be followed up. While all of the pieces of the puzzle seem to fit a quark-gluon plasma explanation, it is essential to study further this new form of matter to characterize its properties fully and confirm the quark-gluon plasma interpretation. Where exactly is the energy threshold for the new state of matter? What are the critical sizes of the produced fireballs? What is the actual transition? In a succinct analogy from theorist Maurice Jacob, “We have seen boiling water but we do not yet know what steam looks like, nor how the boiling goes.”

Although the ion beam experiments at CERN continue, the focus of heavy-ion research now shifts to the Relativistic Heavy Ion Collider at Brookhaven, which starts experiments this year. Due to start in 2005, CERN’s Large Hadron Collider experimental programme will include a dedicated heavy-ion experiment, ALICE.

Making muon rings round neutrino factories

Almost every day, fresh results steadily fuel the progress of science. Less frequently, major breakthroughs in experimental techniques revolutionize the way in which this research is done. Examples of such breakthroughs in particle physics include the development of accelerators in the 1950s, and of colliding rings in the late 1960s, finally culminating with CERN’s proton-antiproton collider, using beam-cooling techniques and opening up a new energy regime.

Although there is still a long research and development road to be negotiated, the first major breakthrough in particle physics experimental techniques for the 21st century looks to be the advent of a new type of machine – the muon storage ring – and using it to provide neutrino beams.

Making accelerators with muons seems crazy at first. Machine builders so far have had the wisdom to store and accelerate particles that are abundant – like the protons and electrons naturally found in matter – or, if not abundant, that at least have the good taste to be stable, like positrons and antiprotons.

It takes at least 30 min to fill CERN's LEP collider and accelerate its beams of electrons and positrons. How could one do such a thing with unstable muons, with their combined inconveniences of being rare and having a lifetime of a mere 2.2 μs?

Progress in accelerator techniques has made this challenge at least conceivable, to the extent that there has been discussion of muon collider rings as a serious future option in the US and at CERN.

When, following the inspiration of the US muon collider collaboration, European physicists started looking at this new route, the obstacles appeared overwhelming, with many new problems to solve simultaneously.

A breakthrough came with the realization that muon decay could be turned into an advantage – muon storage rings would be an abundant source of neutrinos. Coming at the same time as the new awareness of neutrino oscillations, this step forward met with a thunderclap of enthusiasm.

The muon storage ring as neutrino source, nicknamed “neutrino factory”, requires a much lower density of particles and should thus be easier to build than a muon collider. The decay of muons into electrons provides the only known source of high-energy electron-type neutrinos – a unique and powerful new physics tool.

This led the prospective study group mandated by the European Committee for Future Accelerators (ECFA) to propose a three-step approach to muon storage rings, the first being the construction of a neutrino factory (Autin, Blondel and Ellis 1999). This has led to a series of international workshops – Lyon in July 1999 and Monterey, California, in May 2000. Neutrino-factory research and development is now a well recognized and supported project at CERN and further afield, with ECFA-supported study groups investigating the very rich physics opportunities.

Beams from rings

The key requirement is a very intense proton accelerator, delivering several megawatts of beam power. These protons will be used to create pions, which will be magnetically collected. Designing a target to withstand so much power, pulse after pulse, is beyond what has been achieved so far and will require either a liquid-jet target or a very large rotating wheel to dissipate the heat.

Pion collection is optimized for rather low momentum – about 300 MeV/c. These pions rapidly decay into muons of similar momentum. At this point the “beam” is about 1 m across, with a haphazard momentum spread of 100% – more like a big, hot potato than a beam.

The design challenge is to shrink the momentum spread to 5% and the beam size to a few centimetres within a few microseconds to shape the muons into an acceptable beam.

This requires two crucial elements. The first, “phase rotation” (“monochromatization”), uses variable longitudinal electric fields of a few million volts per metre to slow down the fastest particles and accelerate the slow ones. This needs either high-gradient, low-frequency radiofrequency cavities or an induction linac, with considerably improved performance compared with what has been achieved so far.

The second crucial element is beam cooling. Cooling was a key feature of CERN's antiproton project, converting the largest possible number of rare particles produced from a target into a smooth beam. While antiprotons are stable and can be stored almost indefinitely, muons need fast action. However, as muons do not interact strongly with nuclear matter, one can cool them via ionization energy loss, which reduces the momentum in all three dimensions. When this is followed by reacceleration in the beam direction using a longitudinal electric field, the net result is a decrease of transverse momentum. Simulations are promising, but this technique has yet to be demonstrated in practice.
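
The principle can be caricatured in a few lines (a toy model in Python: one cooling channel, a fixed fractional momentum loss per absorber, and RF that restores only the longitudinal component – all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Toy muon bunch: 300 MeV/c longitudinally, with a large transverse spread.
n = 10_000
pz = np.full(n, 300.0)               # longitudinal momentum, MeV/c
px = rng.normal(0.0, 50.0, n)        # transverse momenta, MeV/c
py = rng.normal(0.0, 50.0, n)

loss = 0.10                          # fractional loss per absorber (assumed)
for _ in range(20):                  # 20 cooling cells
    # Ionization energy loss shrinks *all* momentum components...
    px *= 1 - loss
    py *= 1 - loss
    pz *= 1 - loss
    # ...but the RF cavity restores only the longitudinal component,
    # so the beam divergence p_T/p_z shrinks cell after cell.
    pz += loss * 300.0

print(f"r.m.s. px after cooling: {px.std():.1f} MeV/c (was 50)")
# 50 * 0.9**20 ≈ 6 MeV/c, while pz is held near 300 MeV/c by the RF.
```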

This initial conditioning is followed by a series of fast accelerators to take the muon beam to high energy. If well designed, the system spares enough muons after decays or acceptance losses so that, from the original 10¹⁶ protons per second, 10¹⁴ high-energy muons per second can be injected into a storage ring, where during a few hundred turns, positively charged muons, for example, will decay into electrons, accompanied by electron-type neutrinos and muon-type antineutrinos.

Storage ring geometry

The intentionally long, straight sections of the storage “ring” generate a large flux of collimated neutrinos, particularly electron-type ones, with properties very different to those of traditional laboratory neutrino beams (which are mainly composed of muon-type neutrinos). The geometry of the storage ring is left to the designer’s imagination. Bow-tie, triangular and trombone ring configurations have been proposed.

Whatever the geometry, very intense neutrino beams would be available right next to the storage ring, opening a new era of neutrino physics. However, what has made everyone really excited is the prospect of firing neutrino beams through the Earth, serving several underground experiments in several continents and providing different neutrino flight paths – “baselines” – for the study of neutrino oscillations.

For a long time the three neutrino types (electron-, muon- and tau-) were considered massless, and thus immutable.

Following indications from solar neutrinos as early as 1975, experiments studying neutrinos produced by the decay of cosmic-ray pions and muons in the atmosphere finally confirmed in 1998 that neutrinos undergo transmutations. The observed signals can only be understood if neutrinos starting out as muon-type in the upper atmosphere change into another type in transit – probably tau neutrinos. This neutrino “oscillation” can only be understood if the particles have a mass.

Although these masses are probably tiny – a fraction of an electron volt – the consequences are considerable. As neutrinos are one of the most common particles in the universe, their total mass could provide a significant fraction of the whole mass of creation. From the particle physicists’ point of view, neutrinos are very interesting. Since they do not feel electromagnetic or strong forces, one hopes they could provide cleaner clues to the origin of mass.

In quantum mechanical language, neutrinos produced in a weak decay or interacting via weak interaction are well defined – the well known electron, muon and tau neutrino “flavours”. However, if they have mass, neutrinos also feel the mysterious “Higgs” force that generates masses, and the neutrino states emerging with well defined masses need not be the same as those with well defined flavours.

The three flavour neutrinos are therefore mixtures of the three mass neutrinos, and a matrix of parameters connects the two triplets. Moreover, as usual in quantum mechanics, this mixing has time-dependent phases, so that any one neutrino flavour turns into another as time passes – as one type of neutrino disappears, another “appears” to take its place. This is what is meant by neutrino oscillations.

Information on these oscillations is still scanty, but atmospheric neutrino experiments tell us that a muon neutrino of 1 GeV probably turns into a tau neutrino after about 500 km. Experiments with electron neutrinos from nuclear reactors show that these particles are reluctant to oscillate on this timescale. The disappearance of solar neutrinos, which set out as electron-type, shows that these particles have a much longer oscillation timescale. However, solar neutrinos are somewhat ambiguous, since neutrinos produced deep in the stellar interior have to travel through the Sun before emerging into the vacuum of space, and one does not know where the oscillation takes place.
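
These statements can be read off the standard two-flavour oscillation formula (quoted as a consistency check; the Δm² value below is the approximate atmospheric one, not a number from this article):

```latex
P(\nu_\mu \to \nu_\tau) \;=\; \sin^2 2\theta \,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\, L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)
```

With Δm² ≈ 3 × 10⁻³ eV² and E = 1 GeV, the first oscillation maximum (argument π/2) indeed falls near L ≈ 400-500 km.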

New “long baseline” experiments, firing neutrino beams at detectors hundreds of kilometres distant, are setting out to explore these oscillations in more detail. However, these experiments are based on conventional synthetic neutrino beams, composed mainly of muon-type particles, and are expected to validate and sharpen the pattern derived from the combined findings of atmospheric neutrino experiments and reactor neutrino experiments, although surprises cannot be excluded.

Crucial information should come from new reactor and solar neutrino experiments – Kamland, Borexino and SNO – sensitive to the disappearance of electron neutrinos suggested by the solar neutrino experiments.

New neutrino physics

With the neutrino factory, and as new results from solar, reactor and accelerator experiments become available, physicists can plan a much more systematic investigation of neutrino mass differences and mixings. The key is the high-intensity flux of electron neutrinos from a neutrino factory. With this, any appearance of muon neutrinos from oscillation of the electron neutrinos would give an immediately recognizable neutrino interaction signature, producing a muon of opposite sign to that of the original muon beam.

Comparing results using beams of positively and negatively charged muons would contrast the behaviour of electron neutrinos and their antineutrinos. As neutrinos pass through matter, they encounter atomic electrons. The interactions of electron neutrinos and antineutrinos with these electrons are different, and would lead to a matter-induced asymmetry.

Depending on whether the transmutation into muon neutrinos of electron neutrinos and antineutrinos are enhanced or suppressed by matter, one would be able to distinguish between the two mass scenarios shown in the figure.

CP violation with neutrinos

Comparing oscillation rates for electron neutrinos and antineutrinos would open another possibility, which until recently had been almost unthinkable. By comparing the transformations of, say, electron neutrinos into muon neutrinos with the process in the reverse direction, and with the corresponding rates for antineutrino transformations, physicists would for the first time be able to investigate delicate CP and time-symmetry violations in the neutrino sector.

Such effects have been well explored in the quark sector, using the neutral kaon system. CP violation unambiguously differentiates particles and antiparticles, implying that what is called matter and what is called antimatter is not a heads-or-tails call. This is one of the necessary ingredients to explain how a matter-dominated universe evolved from a Big Bang that supposedly produced equal amounts of matter and antimatter.

CP violation is deeply connected to the violation of time-reversal symmetry, under which a “film” of a particle interaction run backwards would look different.

World machine

Neutrino physicists are very excited at these prospects. However, such experiments would require very long baselines (in excess of 3000 km) and preferably two different baselines to unravel different processes. This leads to a vision of a truly world machine with intercontinental beams.

These new neutrino sources are of world-wide interest and a whole network of detailed working groups has been set up to attack the problems. A crash study at Fermilab will shortly make its recommendations, while a wider study involves other US laboratories.

In Europe, CERN has set up a neutrino factory study group with specialized subgroups looking at specific machine components (proton driver, targets, accumulator rings, etc). Other groups, under the sponsorship of the ECFA, look at physics objectives. These studies involve specialists from many European laboratories.

By the time this year's Neutrino Factory meeting in Monterey takes place in May, these plans should have progressed significantly and will hopefully give insight into how difficult the construction of a neutrino factory will be and how long it would take to design and build. A similar effort is necessary to understand what detectors could be built to take best advantage of these fascinating beams. This is certainly a line of physics that will take us well into this century!

Discovery of doubly magic nickel

Just as in the atom, where the electrons fill different energy levels or “shells”, the nucleons (neutrons and protons) in an atomic nucleus are also arranged in similar shells. Each time a shell has the maximum number of particles it can accommodate, the nucleus, like the atom, is particularly stable.

These “magic numbers” (2, 8, 20, 28, 50, 82 and 126) were discovered in the 1940s and soon explained by the nuclear shell model. Unlike the atom, the atomic nucleus consists of two different types of particle – the protons and the neutrons. A nucleus with completely filled shells for protons and for neutrons is called “doubly magic”.

Of the roughly 2500 different nuclear isotopes known to date, only nine had a doubly magic shell structure. Nickel-48, with 28 protons and 20 neutrons, becomes number 10 in this list, and probably, at least for quite a while, the last one.

Beyond the importance of nickel-48, owing to its doubly magic properties, this nucleus is also of particular interest because it is at the extreme limit of nuclear stability, where the nuclear forces are no longer able to bind all protons and neutrons together.

At the “drip lines”, nuclei decay by the emission of excess protons or neutrons. All commonly used models of atomic nuclei predict that nickel-48 lies beyond this proton drip line and is thus unstable with respect to the strong interaction; the nucleus holds together only briefly because the Coulomb barrier, set up by the electrical repulsion between the protons, delays their escape.

Therefore, a possible decay mode of nickel-48 is the emission of two protons forming a helium-2 nucleus, analogous to alpha decay, in which a helium-4 nucleus is emitted. Two-proton radioactivity of this kind has never been observed. In addition, nickel-48 is the only doubly magic nucleus with a bound mirror nucleus, which will allow for interesting mirror-symmetry studies.

In September 1999 a collaboration of French, Polish and Romanian physicists began an experiment at the Grand Accélérateur National d’Ions Lourds (GANIL) in Caen, France, to search for nickel-48, the last doubly magic nucleus accessible with present methods.

A primary beam of nickel-58 with an average intensity of 10¹² ions per second and an energy of 95 MeV per nucleon hit a natural nickel target in the superconducting solenoids of the SISSI device.

The proton-rich projectile fragments were selected by the LISE3 separator and finally identified by their time of flight, their energy loss and their total energy in a detection set-up consisting of a microchannel plate detector and a stack of five silicon detectors. This allowed the measurement of 10 independent parameters to identify each fragment arriving at the focal plane.
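
As an illustration of how such an identification works in principle, here is a minimal sketch of the standard time-of-flight and energy-loss recipe. The function name, the calibration constant and the simple Bethe-like scaling are illustrative assumptions, not the actual LISE3 analysis:

    # Minimal sketch of in-flight fragment identification. Assumes the
    # textbook recipe: time of flight gives the velocity, energy loss
    # (roughly proportional to Z^2/beta^2) gives the charge, and total
    # kinetic energy plus velocity gives the mass number.
    AMU_MEV = 931.494          # atomic mass unit in MeV/c^2
    C_M_PER_NS = 0.299792458   # speed of light in m/ns

    def identify_fragment(tof_ns, path_m, delta_e_mev, total_e_mev, k_cal):
        """Estimate (Z, A) from time of flight, energy loss and total energy."""
        beta = (path_m / tof_ns) / C_M_PER_NS          # velocity from TOF
        gamma = 1.0 / (1.0 - beta ** 2) ** 0.5
        # Invert the Bethe-like scaling dE = k_cal * Z^2 / beta^2;
        # k_cal would be fixed by calibrating on well known fragments.
        z_est = (delta_e_mev * beta ** 2 / k_cal) ** 0.5
        # Total kinetic energy E = (gamma - 1) * A * u gives the mass number.
        a_est = total_e_mev / ((gamma - 1.0) * AMU_MEV)
        return round(z_est), round(a_est)

In the real set-up, redundant measurements of this kind across the microchannel plate and the five silicon detectors supply the 10 independent parameters mentioned above, helping to rule out spurious identifications.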

Features of GANIL

The success of the present experiment results from the combination of specific and powerful features available at GANIL:
* a primary beam intensity never reached before, achieved through an intense ion-source development programme in which a new technique allowed nickel to be treated as a gas in the ion source, yielding a gain of a factor of 20 compared with past experiments;
* the transmission of the GANIL cyclotrons, optimized to accelerate the high-intensity primary beam;
* the efficient production and collection of projectile fragments by the SISSI superconducting device;
* the powerful separation and identification provided by the LISE3 separator, with its velocity filter, and by an efficient detection set-up.

The experiment ran for about 10 days, revealing for the first time four production “events” of this new nucleus. Although the set-up was optimized for the transmission of nickel-48, it also produced other exotic proton-rich nuclei in the vicinity – about 100 events of nickel-49, 50 of iron-45 and 290 of chromium-42. This confirms a similar experiment conducted about three years ago at the GSI laboratory in Darmstadt, where 5, 3 and 12 events of these latter isotopes, respectively, were reported for the first time.

The new observation gives a lower limit for the half-life of nickel-48 of about 0.5 µs. This contradicts a number of models that predicted nickel-48 to be highly unstable, with half-lives of far less than 1 µs, the typical flight time of the projectile fragments between the production target in the SISSI device and the detection set-up at the end of the LISE3 separator.
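
The logic behind such a limit is simple exponential decay, stated here for clarity rather than taken from the analysis: a fragment produced with half-life T_{1/2} survives a flight of duration t with probability

\[
P_{\mathrm{survive}}(t) = 2^{-t/T_{1/2}},
\]

so if the half-life were very much shorter than the flight time, essentially no nickel-48 ions would reach the detectors intact, contrary to what was observed.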

However, the few events observed at GANIL do not allow a detailed comparison with nuclear models. To do this requires higher-statistics experiments to determine, for example, the exact half-life of this nucleus. Such experiments should be possible in the near future at GSI as well as at GANIL, where continuous improvements in source development and the acceleration process should yield even higher production rates.

Discovering new dimensions at LHC

CERN’s new LHC collider, which is scheduled to begin operations in 2005, aims to find the long-awaited “Higgs particle”, which endows other particles with mass. In an entirely new energy range and with its special experimental conditions, the LHC could also discover other new physics effects.

Why is gravity so weak? The traditional answer is because the fundamental scale of the gravitational interaction (i.e. the energy at which gravitational effects become comparable to the other forces) is up at the Planck scale of around 10¹⁹ GeV – far higher than the other forces. However, that only raises another question: what is the origin of this huge disparity between the fundamental scale of gravity and the scale of the other interactions?

A possible explanation currently gaining ground in theoretical circles is that the fundamental scale of gravity is not really up at the Planck scale; it just seems that way. According to this school of thought, what is actually happening is that gravity, uniquely among the forces, acts in extra dimensions. This means that much of the gravitational flux is invisible to us, who are locked into our three dimensions of space and one of time.

Consider, by analogy, what two-dimensional flatlanders would make of three-dimensional electromagnetism. To them, the flux lines of the force between two charges would appear to travel in their planar world, whereas in reality we know that most of the flux lines would spread out through a third dimension, thus weakening the force between the two charges.

Of course, if this third dimension were infinite in size, as it is in our world, then the flatlanders would see a 1/r² force law between the charges rather than the 1/r law that they would predict for electromagnetism confined to a plane. If, on the other hand, the extra third spatial dimension is of finite size, say a circle of radius R, then for distances greater than R the flux lines are unable to spread out any further in the third dimension and the force law tends asymptotically to what a flatlander physicist would expect: 1/r. However, the initial spreading of the flux lines into the third dimension does have a significant effect: the force appears weaker to a flatlander than is fundamentally the case, just as gravity appears weak to us.
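
In symbols (a standard Gauss-law result, written out here for the flatlanders’ case rather than quoted from the article): with one extra dimension of size R, the in-plane force between two charges behaves as

\[
F(r) \propto \frac{1}{r^{2}} \quad (r \ll R), \qquad
F(r) \propto \frac{1}{R\,r} \quad (r \gg R),
\]

that is, the familiar planar 1/r law at long range, but diluted by the size R of the dimension into which the flux first spread.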

Turning back to gravity, the extra-dimensions model stems from theoretical research into (mem)brane theories, the multidimensional successors to string theories (April 1999 p13). One remarkable property of these models is that they show that it is quite natural and consistent for electromagnetism, the weak force and the inter-quark force to be confined to a brane while gravity acts in a larger number of spatial dimensions.

The requirement of correctly reproducing Newton’s constant, G, at long distances relates the size of the extra dimensions in which gravity is free to act to the number of those extra dimensions.
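
Quantitatively – up to factors of order one that differ between conventions, so this is a sketch rather than a precise formula – matching the diluted higher-dimensional force to Newton’s constant requires

\[
M_{\mathrm{Pl}}^{2} \sim M_{*}^{\,n+2} R^{\,n}
\quad\Longrightarrow\quad
R \sim \frac{1}{M_{*}} \left( \frac{M_{\mathrm{Pl}}}{M_{*}} \right)^{2/n},
\]

where n is the number of extra dimensions, M_{*} is the fundamental gravity scale (around a TeV in this scenario) and M_{Pl} is the Planck mass. The larger n is, the smaller the extra dimensions must be.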

If there is just one extra dimension, then the model says that it should be of the order of 10¹³ m, in which case solar-system dynamics would be radically different and we would be taught a 1/r³ law of gravity in school rather than the 1/r² law that we know and love.

So one extra dimension doesn’t work. With two extra dimensions, the scale drops to slightly less than 1 mm and, small though that is, it at first seems surprising that extra dimensions of that size have not already been seen. However, because the extra dimensions only affect gravity, the most direct constraints come from experiments to measure G at short distances, and delving into the historical literature on the subject reveals that no measurements of G at the submillimetre scale have ever been made.
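
A short numerical sketch makes the scaling concrete, covering the one- and two-dimension cases above as well as the six-dimension case discussed below. The choice of a 1 TeV fundamental scale and the dropped order-one factors are assumptions, so the outputs match the sizes quoted in the text only at the order-of-magnitude level:

    # Sizes of n equal compact extra dimensions, assuming
    # M_Pl^2 ~ M_*^(n+2) * R^n with all order-one factors dropped.
    HBARC_GEV_M = 1.9733e-16   # hbar*c in GeV*metres (converts GeV^-1 to m)
    M_PLANCK = 2.4e18          # reduced Planck mass in GeV (an assumption)
    M_STAR = 1.0e3             # assumed fundamental gravity scale, ~1 TeV

    def extra_dimension_size(n):
        """Radius R in metres of n equal compact extra dimensions."""
        r_gev_inv = (M_PLANCK ** 2 / M_STAR ** (n + 2)) ** (1.0 / n)
        return r_gev_inv * HBARC_GEV_M

    for n in (1, 2, 6):
        print(f"n = {n}: R ~ {extra_dimension_size(n):.1e} m")

    # Approximate output:
    # n = 1: R ~ 1.1e+12 m   (solar-system scale: clearly ruled out)
    # n = 2: R ~ 4.7e-04 m   (sub-millimetre: probed by tabletop G tests)
    # n = 6: R ~ 2.6e-14 m   (tens of femtometres: beyond direct G tests)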

A team led by Aharon Kapitulnik at Stanford is currently in the process of accurately measuring G at submillimetre scales for the first time using a tabletop experiment.

For more than two extra dimensions, the predicted size becomes very small – around 1 fm, for example, for six extra dimensions – outside the range of even the improved submillimetre gravity experiments. Nevertheless, the model still makes a number of dramatic predictions. If gravity does have extra dimensions at its disposal, they should manifest themselves at CERN’s LHC proton collider, no matter what the number of extra dimensions might be.

This is because the fundamental scale of the gravitational interaction should be around a few tera-electron volts, so, at TeV energies, gravitational effects will become comparable to electroweak effects. Consequently, gravitons will be produced as copiously as photons, with the difference that the photons will remain in our familiar dimensions while many of the gravitons will escape into extra dimensions, carrying energy with them.

More dramatically still, the LHC could produce fundamental string relatives of our familiar particles, such as higher-spin partners of electrons or photons. There is also a possibility that, owing to the now much stronger gravitational interactions, microscopically tiny black holes could be produced, with striking signals.

Fortunately, such small black holes are not at all dangerous, being much more similar to exotic particles than to large astrophysical black holes, and they decay quite quickly as a result of Hawking radiation. With the recent outburst of ideas in these directions, it is clear that extraordinary discoveries at the LHC may be just around the (extra-dimensional) corner.
