Two major new experiments have provided the first strong indications of the delicate CP violation effect in a totally new domain – the decays of B mesons (containing the fifth or “b” quark). Exploring this still unexplained phenomenon under these conditions could provide fresh insights.
The phenomenon that physicists call CP violation ultimately distinguishes matter from antimatter. In CP (charge/parity) symmetry, the physics of left-handed particles is the same as that of right-handed antiparticles – a natural enough assumption after physicists had been shocked in 1956 to discover that nuclear beta decay is spectacularly left-right asymmetric (P-violating). However, in 1964 new experiments found that CP symmetry is also flawed. Until recently, the only way to explore CP violation was via the study of the neutral kaon, where it was originally discovered in 1964. However, a new generation of experiments at the PEP-II and KEKB electron-positron colliders, at SLAC in Stanford and at KEK in Tsukuba, Japan, tuned to produce copious supplies of B mesons, has opened up a new phase of CP violation research.
These “B-factory” machines achieve unprecedented collision rates for electron-positron machines (luminosities well over 10³³ cm⁻² s⁻¹). At SLAC, the BaBar detector has studied 23 million B pairs produced by PEP-II tuned to the upsilon(4S) resonance. At KEK, with KEKB tuned to the same energy, the Belle detector has investigated 11.1 million B pairs. The experiments look for CP-violating B decays, mainly into a J/psi particle and a short-lived kaon. When comparing the decays of the neutral B meson and its antiparticle, CP violation appears as a time-dependent asymmetry in the decays to a specific CP state.
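For orientation, in a common convention this asymmetry takes a remarkably simple form for a CP eigenstate f such as J/psi plus a short-lived kaon (a standard Standard Model expression, added here for reference rather than taken from the original text):

\[
A_{CP}(\Delta t) = \frac{\Gamma(\bar{B}^0(\Delta t)\to f) - \Gamma(B^0(\Delta t)\to f)}{\Gamma(\bar{B}^0(\Delta t)\to f) + \Gamma(B^0(\Delta t)\to f)} = \sin 2\beta \,\sin(\Delta m_d\,\Delta t),
\]

where Δt is the time between the two B decays, Δm_d is the B⁰–anti-B⁰ oscillation frequency, and β is one of the angles of the triangle described below.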
The quark transitions responsible for CP violation are conventionally described by a 3 × 3 matrix – the Cabibbo-Kobayashi-Maskawa (CKM) matrix – the rows and columns of which correspond to the six types of quark. For the B meson system, the relevant parts of this matrix are conveniently represented by a triangle, the angles of which can be measured via CP violation effects.
One of these angles, β, or rather sin 2β, has been measured by the new experiments. The BaBar result is sin 2β = 0.34 ± 0.20 ± 0.05, while that of Belle is 0.58 +0.32/−0.34 +0.09/−0.10. (For historical reasons, the Japanese prefer to label the angle φ1.) Combined with results from other experiments, including a measurement by the CDF detector at Fermilab’s Tevatron, the world average is sin 2β = 0.49 ± 0.16, which rules out, at the level of 3 standard deviations, the possibility of no CP violation at all.
In reporting this development, CERN Courier has an apology to make. First indications of these B-decay CP violation measurements were announced at last year’s International High Energy Physics Conference in Osaka. These initial measurements still had rather large errors, which meant that the provisional result was still compatible (just) with no CP violation at all. CERN Courier’s report of this meeting jumped the gun when it alleged that CP violation had been “seen” in B decays.
“It is only a 1-sigma effect,” objected the physicists, who are still stopping short of announcing a discovery. For an appraisal of these latest results, see B factories measure an eternal triangle.
The CERN Solar Axion Telescope, CAST, aims to shed light on a 30-year-old riddle of particle physics by detecting axions originating from the 15 million degree plasma in the Sun’s core. Axions were proposed as an extension to the Standard Model of particle physics to explain why CP violation – a phenomenon linked to the dominance of matter over antimatter in the universe – is observed in weak but not strong interactions: the so-called strong-CP problem.
One of the most striking consequences of this is the neutron electric dipole moment, which, due to a CP-violating term in the standard equations, is calculated to be ten orders of magnitude larger than its measured upper limit. This can be overcome by introducing a further symmetry, the spontaneous breaking of which yields the axion – a neutral pion-like particle that interacts very feebly. Owing to their potential abundance in the early universe, axions are also leading candidates for the invisible dark matter of the universe.
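Schematically (a textbook illustration of the numbers behind the statement above, accurate to orders of magnitude only), the offending CP-violating term in the QCD Lagrangian and the resulting constraint read:

\[
\mathcal{L} \supset \bar{\theta}\,\frac{g_s^2}{32\pi^2}\,G^{a}_{\mu\nu}\tilde{G}^{a\,\mu\nu},
\qquad
d_n \sim \bar{\theta}\times 10^{-16}\ e\,\mathrm{cm},
\qquad
|d_n| \lesssim 10^{-25}\ e\,\mathrm{cm}
\;\Rightarrow\;
\bar{\theta} \lesssim 10^{-9},
\]

whereas θ̄ would naturally be of order one – the mismatch of roughly ten orders of magnitude quoted above. The extra symmetry mentioned in the text (the Peccei-Quinn symmetry) relaxes θ̄ dynamically towards zero, with the axion as the associated light particle.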
Searches for solar axions began a decade ago when the US Brookhaven Laboratory first pointed an axion telescope at the Sun – a highly useful source of weakly interacting particles for fundamental research, as the solar neutrino anomaly amply demonstrates. Axions would be produced in the Sun through the scattering of photons from electric charges – the Primakoff effect – and their numbers could equal those of solar neutrinos. The idea behind the Brookhaven experiment, first proposed by Pierre Sikivie, was to put the Primakoff effect to work in reverse, using a magnetic field to catalyse the conversion of solar axions back into X-ray photons of a few kilo-electronvolts.
The Brookhaven telescope was later joined by another in Tokyo, while other experiments continued the search in different ways. Experiments at Brookhaven, the Lawrence Livermore Laboratory and Kyoto, for example, search for relic axions from the early universe. CERN’s NOMAD experiment joined the hunt, looking for axion production in a neutrino beam. Searches based on axion Bragg scattering have been performed by the SOLAX collaboration using a 1 kg single crystal of germanium in an underground laboratory in Argentina, while optical detection techniques are employed by Italy’s INFN experiment, PVLAS.
This list is not complete, but, taken together, earlier experiments have scanned the kinetic energy range from 10⁻¹¹ eV up to 10¹¹ eV, so far without success. CAST, however, could make a difference because of the length and strength of the magnetic field that it will have available by using a prototype magnet for CERN’s LHC collider.
The conversion efficiency for axions increases as the square of the product of the transverse magnetic field component and its length. This makes a 9 tesla, 10 m LHC prototype dipole magnet with straight beam pipes ideal for the task, giving a conversion efficiency exceeding that of the two earlier telescopes by a factor of almost 100.
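The scaling quoted above follows from the standard coherent conversion probability (in natural units, with g_aγ the axion-photon coupling; a familiar result, restated here for convenience):

\[
P_{a\to\gamma} \simeq \left(\frac{g_{a\gamma}\,B\,L}{2}\right)^{2},
\]

so the figure of merit is (BL)². For CAST, BL = 9 T × 10 m = 90 T m; the factor of almost 100 in conversion efficiency quoted above corresponds to roughly a factor of 10 advantage in BL over the earlier telescopes.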
CAST’s LHC magnet will be mounted on a moving platform with X-ray detectors on either end, allowing it to observe the Sun for half an hour at sunrise and half an hour at sunset. The rest of the day will be devoted to background measurements and, through the Earth’s motion, observations of a large portion of the sky. CAST’s X-ray detectors are under development, with the collaboration looking at gas-filled and solid-state options. A chamber using the “micromegas” principle has been tested.
The aperture of the LHC magnet’s beam pipes is around five times the predicted solar axion source size, so its X-ray detectors must be correspondingly large, implying a high level of noise. To overcome this problem, the CAST collaboration is considering using X-ray lenses to focus the converted X-rays emerging parallel from the 50 mm magnet aperture to a submillimetre spot. This will bring a vast signal-to-noise improvement over the original CAST proposal and the earlier solar axion telescopes. An option to recover mirrors constructed for the German orbiting X-ray telescope ABRIXAS is being pursued.
CAST is a new departure for CERN, relying not on the lab’s expertise in accelerators but on its know-how in X-ray detection, magnets and cryogenics. With a discovery potential for axions extending beyond that dictated by astrophysical considerations, the experiment leaves room for surprises and could open up a new field of terrestrial axion astrophysics. CAST should be ready to begin its search this autumn.
As reported in this month’s News, two major experiments have provided the first observation of the delicate CP violation effect in a totally new domain: the decay of B mesons, which contain the fifth (“b”) quark. The obscure phenomenon that physicists call CP violation could have been the mould that formed a universe of matter from the Big Bang’s equal mixture of matter and antimatter. After the shock discovery in 1956 that nuclear beta decay is spectacularly left-right asymmetric (P-violating), the proposition of CP (charge/parity) symmetry, with left-handed particles behaving in the same way as right-handed antiparticles, seemed a natural theoretical handhold for physicists to grasp. However, further experiments carried out in 1964 found that CP symmetry was also flawed. To reach an understanding of why CP violation happens, physicists must first measure exactly how it happens.
Until recently the only particle that showed CP violation was the neutral kaon, where CP violation was originally discovered in 1964. However, according to the conventional wisdom of today’s Standard Model (SM) of particle physics, CP violation should also be observed in the decays of B particles. Only recently have experiments been able to look into this further. CP violation has its own language. In this CP-speak, the quark transitions responsible for CP violation are conventionally described by a 3 × 3 matrix – the Cabibbo-Kobayashi-Maskawa (CKM) matrix – the rows and columns of which correspond to transitions involving the six types of quark. For the B meson system, the relevant parts of this matrix are conveniently represented by a Unitarity Triangle (figure 1), the angles of which can be measured via CP violation effects. One of these angles, β, or rather sin 2β, is now being probed.
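For reference (standard CKM notation, not spelled out in the article), the triangle expresses the unitarity condition between the first and third columns of the CKM matrix, and β is one of its angles:

\[
V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0,
\qquad
\beta = \arg\!\left(-\,\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right).
\]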
A first shot at this measurement was taken by the Opal and Aleph collaborations at CERN’s LEP electron-positron collider. In these experiments the decays of Z particles into b quark-antiquark pairs were harnessed to “tag” B particles, with the decay of neutral B mesons and their antiparticles monitored via their disintegration into J/psi and a short-lived kaon.
LEP results for sin 2β were severely limited by statistics and did not quantitatively test SM predictions. Prior to the advent of the new B-factories (PEP-II at SLAC, Stanford, and KEKB at KEK, Japan), the CDF collaboration at Fermilab’s Tevatron proton-antiproton collider reported the first real measurement – sin 2β = 0.79 +0.41/−0.44 – disfavouring negative values for sin 2β, which are possible in unorthodox scenarios.
First results from the B-factories were reported last summer at the international high-energy conference in Osaka, Japan, and caused quite a stir. In particular, the value reported by the BaBar collaboration at PEP-II – sin 2β = 0.12 ± 0.37 ± 0.09 – was significantly lower than the SM predictions but was not conclusive due to limited statistics.
In February the Belle (KEKB) and BaBar collaborations announced their updated results. The BaBar measurement now makes use of 23 million upsilon(4S) decays into B pairs, while the Belle measurement so far corresponds to approximately half of that number. The principal decay modes used are B decays into J/psi and a short-lived kaon; into J/psi and a long-lived kaon; and into psi(2S) and a short-lived kaon. However, a small number of other modes are also reported.
Their measured values, sin 2β = 0.58 +0.32/−0.34 +0.09/−0.10 (Belle) and 0.34 ± 0.20 ± 0.05 (BaBar), are shown together with the earlier results in figure 2. Compared with last summer, the errors from BaBar and Belle have come down considerably, but the reported values of sin 2β are still consistent with no CP violation within two standard deviations. However, combining the data of all five experiments yields a world average of 0.49 ± 0.16, constituting a measurement at slightly more than three standard deviations. It thus establishes CP violation for the first time in any system other than kaons.
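As a rough illustration of how such a world average is formed, the sketch below combines the three values quoted in this article by inverse-variance weighting. The LEP (Aleph and Opal) results that also enter the published average of 0.49 ± 0.16 are not quoted here, so the numbers only approximate it.

    # Inverse-variance (weighted) average of sin 2beta measurements.
    # Asymmetric and systematic errors are crudely symmetrized and added
    # in quadrature; a real average treats them more carefully.
    measurements = [
        (0.34, 0.21),  # BaBar: 0.20 (stat) and 0.05 (syst) in quadrature
        (0.58, 0.34),  # Belle: +0.32/-0.34 (stat), +0.09/-0.10 (syst)
        (0.79, 0.43),  # CDF: +0.41/-0.44 symmetrized
    ]
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    mean = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
    error = sum(weights) ** -0.5
    print(f"sin 2beta = {mean:.2f} +/- {error:.2f}")  # about 0.46 +/- 0.16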
To unravel the CKM matrix fully and understand all quark transitions, physicists need to look beyond B particles. Theoretical input is needed to translate experimental results into CKM parameters.
Much painstaking work has been done, but fixing one side of the unitarity triangle requires knowledge of the decays of the sixth “top” quark into a light “down” quark. This t-d transition is expected to be a rare process, and in addition the difficulty of tagging a d-quark in top quark decays makes direct measurement daunting.
So far only the dominant CKM matrix element involving the top quark – into b quarks – has been measured directly by the CDF collaboration. Hence information on t-d transitions has to be culled from indirect measurements in which the top quark appears as a virtual, intermediate state. There are three principal means at present to estimate t-d transitions, all based on weak interaction mixings between neutral mesons and their antiparticles. The first method involves measuring the mass difference between neutral Bd mesons (containing a b quark and a d antiquark, or vice versa); the second, the corresponding mass difference in the neutral Bs mesons (containing a b quark and an s antiquark, or vice versa); and the third, CP violation from neutral kaon mixing.
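The first of these methods works because, in the SM, the neutral B mass difference is driven by a box diagram with virtual top quarks, so that, schematically (with f_B and B_B hadronic parameters and S₀ a known function of the top mass; a standard relation quoted here for orientation):

\[
\Delta m_d \;\propto\; f_{B_d}^{2} B_{B_d}\, m_W^{2}\, S_0\!\left(\frac{m_t^{2}}{m_W^{2}}\right) |V_{td}V_{tb}^{*}|^{2}.
\]

A measurement of Δm_d therefore fixes |V_td| once the hadronic factor f²B is supplied by theory – which is exactly where the theoretical input mentioned above enters.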
The SM fits, including direct and indirect measurements, yield a value for sin 2β that lies typically in the range 0.58–0.82 at a 68% confidence level, widening to 0.45–0.95 at the 95% confidence level, as shown in figure 2. Thus the present world average, 0.49 ± 0.16, is, within experimental and theoretical errors, in agreement with the SM. This test will become much more incisive with improved accuracy of the sin 2β measurements, and of the other two angles of the triangle, in future B-factory and hadron collider experiments. Additional valuable input to CKM phenomenology is anticipated from the measurement of mass differences and via the study of rare B and kaon decays.
If new physics is present, the principal way it can enter is via new contributions to the mixing of neutral particles and their antiparticles. Decays, dominated by the exchange of W particles, remain essentially unaffected by new physics. Thus, even with the presence of new physics, those transitions involving charm (c) and b quarks or b and up quarks correspond to their SM values, so that two sides of the unitarity triangle remain unaffected. However, the third side, which depends on top-down quark transitions, is likely to be affected by new physics. Furthermore, the measurements of kaon CP violation and Bs mixing, which provide additional constraints, will also be affected.
Therefore, if new physics is present, the allowed unitarity triangle as obtained from current experimental data may not correspond to the SM version. Likewise, if new weak interaction effects are present, they may show themselves by the inadequacy of describing CP violation in terms of the three angles of a triangle. Either of these two scenarios would be clear evidence for new physics.
The mysterious phenomenon of CP violation – which ultimately distinguishes matter from antimatter – has kept physicists busy since its experimental discovery in kaon decays some 35 years ago.
In CP (charge/parity) symmetry, the physics of left-handed particles is the same as that of right-handed antiparticles. CP symmetry became popular after physicists were shocked to discover in 1956 that nuclear beta decay, a fundamental weak interaction, is spectacularly left-right asymmetric (P-violating). The confusion grew in 1964 when new experiments found that the CP criterion was not 100% reliable either. Ever since then, physicists have sought to understand how and why this symmetry is flawed.
Today the effects of CP violation are expected to manifest themselves in the decays of B mesons (containing the fifth or “b” quark) as well as in the traditional kaons. This change prompted Cracow physicists from the Institute of Nuclear Physics, the Jagellonian University, and the Faculty of Physics and Nuclear Techniques of the University of Mining and Metallurgy – co-organizers of the annual Cracow Epiphany Conference – to choose B physics and CP violation as the topic of this year’s meeting, which was held in Cracow in January.
The past two years have brought a real breakthrough in experimental observations. Participants at the conference heard Bertrand Vallage and Taku Yamanaka describe the latest news from NA48 and KTeV on measuring direct CP violation in neutral kaon decays. The long-awaited measurements of CP violation in B decays by the Belle and BaBar experiments operating at the KEKB and PEP-II B-factories, and from the CDF detector at Fermilab’s Tevatron proton-antiproton collider, were presented by Kenkichi Miabayashi, Vasilii Shelkov and Slawek Tkaczyk.
In addition to new results on rare B decays presented in the talks on BaBar and Belle, many interesting B measurements from CLEO were shown by Karl Berkelman – CLEO has been studying B physics for more than 20 years. For ongoing experiments, Wouter Hulsbergen from HERA-B at DESY, Andreas Schopper from LHCb and Maria Smizanska from ATLAS at LHC presented the prospects in the B sector.
Among many activities in the field of kaon decays, the ambitious KAMI project at Fermilab to measure direct CP violation in ultra-rare long-lived neutral kaon decays was presented by Taku Yamanaka. Furthermore, as discussed by Fabrizio Scuri, new precision results in the kaon sector can be expected soon from the KLOE experiment at DAPHNE. Together with the data collected by the CPLEAR and LEP experiments (reviewed by Andreas Schopper, Tadeusz Lesiak and Celso Martinez-Rivero), it is clear that there will be plenty of new experimental information on B physics and CP violation.
Challenging the Standard Model
Understanding all of this experimental data will present a challenge for the theory. With the most popular theoretical description of CP violation also being the one provided by the conventional Standard Model (SM), different ways of testing this picture were presented.
For B meson decays, the extraction of parameters from the data was discussed in some detail by Ahmed Ali. The topic of radiative B decays and radiative transitions of b to strange quarks was reviewed by Mikolaj Misiak. With the b-quark mass being fairly large, theoretical approaches for infinite b mass were presented by Chris Sachrajda, Bennie Ward and Thomas Mannel, while Andre Hoang discussed the issue of b mass and non-relativistic effective quark theory. Jose Bernabeu showed how B decays can probe not only CP- but also T- and CPT-violating effects.
A few theoretical talks looked at CP problems in physics beyond the SM. These included effects in supersymmetry (Leszek Roszkowski) and Higgs boson interactions (Bohdan Grzadkowski). Peter Minkowski gave a talk on the perpetually intriguing subject of neutrinos. The conference was summarized by Roy Aleksan.
The Cracow Epiphany Conference has had a different topic every year since 1995, when the series was initiated by Marek Jezabek, chairman of the conference organizing committee. By bringing in new subjects and inviting new participants every year, each meeting can offer a general forum to discuss the frontiers of physics, while providing the Polish physics community with the opportunity to broaden its horizons and meet internationally acclaimed experts.
The next Epiphany meeting, to be held on 4-6 January 2002, will concentrate on quarks and gluons in extreme conditions.
The highest-energy particles ever observed are also the rarest of all observed cosmic rays – only a few per square kilometre per century reach the Earth. Understanding the origin of these particles, with energies measured up to 300 × 10¹⁸ eV (more than 1 million times as much energy as accelerators on Earth), is at the heart of questions posed by particle astrophysicists: Where and how are these particles produced and accelerated? How can they reach the Earth without losing their energy? Are they indicative of as yet unknown physics, such as extremely heavy relics of the Big Bang? In response, physicists and astrophysicists have tried to collect as many of these elusive events as possible. They have proposed and built the largest particle detectors ever considered, with volumes measured in cubic kilometres.
Since the early 1960s, astrophysicists have recognized that a special property of radio emission from particle cascades, known as coherence, could be the basis of the largest particle detectors. For detectors of visible light, the intensity of light increases in direct proportion to the energy deposited by a particle showering in a material. However, for radio waves having wavelengths several times as large as the size of the shower, each particle produces electromagnetic radiation in phase, coherently, so that the total electric field increases in direct proportion to the shower energy. The resulting quadratic increase of radio emission with particle energy means that radio emission from an ultrahigh-energy cascade will dominate all other forms of secondary radiation. It is this that drew more than 50 physicists and astrophysicists from around the world in November 2000 to the University of California, Los Angeles, for the First International Workshop on the Radio Detection of High Energy Particles (RADHEP-2000).
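The key point can be put in one line: if the wavelength exceeds the shower size, the fields of the N radiating particles add in phase rather than at random (a standard argument, summarized here for convenience):

\[
E_{\mathrm{tot}} \simeq N\,E_1
\;\Rightarrow\;
P_{\mathrm{radio}} \propto |E_{\mathrm{tot}}|^{2} \propto N^{2} \propto E_{\mathrm{shower}}^{2},
\]

whereas incoherent emission would grow only linearly with N.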
The history of theoretical ideas was reviewed by Boris Bolotovskii (LPI, Moscow) and Jon Rosner (Chicago). In the 1960s several mechanisms were identified that would cause the particles to radiate. The Armenian-Russian theorist Gurgen Askaryan from LPI realized that Compton scattering and positron annihilation by the shower particles in matter would result in a 15-25% excess of electrons over positrons that could emit radio Cherenkov radiation. Kahn and Lerche pointed out that the splitting of the electron and positron trajectories in the Earth’s magnetic field would cause them to radiate like a relativistic dipole.
Giant spark chamber
Robert Wilson, later the founding director of Fermilab, even considered the possibility that cosmic rays could discharge the atmosphere’s electric field gradient like a giant spark chamber. In air, the radiation will be coherent for frequencies of up to about 100 MHz. In dense materials, where the shower is more compact, coherence may be seen up to about 10 GHz. At the workshop banquet, Bolotovskii gave a memorable after-dinner personal reminiscence of Askaryan, whose sharp mind led to many ground-breaking physics ideas as well as to personal and professional struggles. Karo Ispirian (Yerevan, Armenia) translated and recited several of Askaryan’s poems.
Trevor Weekes (Harvard-Smithsonian) recounted the story of the first successful radio detection of cosmic rays in 1964 – the result of his PhD work at Jodrell Bank with Neil Porter in an experiment supervised by John Jelley, one of the pioneers of Cherenkov detection techniques. Weekes showed pulses that were probably the first radio emissions detected from extensive air showers induced by cosmic rays with energies of more than 10¹⁶ eV. The data-acquisition system consisted of a camera mounted on an oscilloscope triggered by a small array of Geiger counters. The camera recorded the scope trace of the summed voltage output of a large field of dipole antennas. Weekes noted that the number of papers presented at cosmic-ray conferences on radio signals dropped precipitously in the mid-1970s, leading A A Watson to remark: “It appears that experimental work on radio signals has been terminated everywhere.”
Yet from 1979 to 1992, the period when few groups were working with radio techniques, the Gauhati University Cosmic-Ray Group collected a large sample of atmospheric events in India. The latest results and implications for production models were transmitted to the workshop in absentia by Kalpana Sinha (Assam Institute, India). Rosner summarized the team’s work as well as from others in Japan and the former Soviet Union in a variety of measurements ranging from 2 to 200 MHz. Rosner also described his own air-shower work at the Dugway Proving Grounds in Utah.
Augmenting Auger array
Using a simple antenna and recording system, Rosner and his collaborators are approaching the sensitivity to electric fields in the 20-250 MHz range necessary to detect coincidences. This work is proceeding with an eye towards augmenting the giant Pierre Auger array under construction in Argentina with a complementary radio detector. Rosner’s talk will probably be a useful resource for future students of the field, because he has collected a large number of useful scaling laws and common experimental details in one place.
Upward-going showers in a large volume of solid material are one of the classic signatures sought to identify neutrino-induced events, which could give birth to high-energy neutrino astronomy. Steve Barwick (Irvine) and Francis Halzen (Wisconsin) reported on current and future experiments – Amanda and IceCube – which search for optical Cherenkov radiation from the showers produced by such events.
Radio searches for upward-going events began in earnest in the 1980s when various groups began to bury dipoles in deep Antarctic ice, which is so pure that it has been measured to be transparent to radio and microwaves for hundreds, even thousands, of metres. Dave Besson (Kansas) reported on the status of the largest and longest running of these, the Radio Ice Cherenkov Experiment (RICE). This collaboration has submerged radio dipoles at the South Pole on the strings used by the Amanda experiment for placing its phototubes 2 km below the surface. It placed a limit on the neutrino flux, although there are a few intriguing events and more data to be analysed.
Masami Chiba (Tokyo Metropolitan) discussed his measurements of nearly lossless propagation in another ultrapure material – salt. He described how natural geological salt formations may be of sufficient purity and size to provide a complementary material to Antarctic ice with more than twice the density.
Askaryan was the first to note that the outer few metres of the Moon’s surface, known as the regolith, would be a sufficiently transparent medium for detecting microwaves from the charge excess in particle showers. The radio transparency of the regolith has since been confirmed by the Apollo missions.
In the late 1980s Igor Zheleznykh (INR, Moscow) and Rustam Dagkesamanskii (LPI, Puschino), who both participated in RADHEP, predicted that radiotelescopes on Earth would be sensitive to such microwave pulses and might discover a flux of cosmic neutrinos with energies of more than 10¹⁹ eV. Tim Hankins (New Mexico Tech) reported on the first such search in 1996 using the 64 m diameter Parkes radiotelescope in Australia. Peter Gorham (Jet Propulsion Lab) reported on a current search using two large and physically separated radiotelescopes in coincidence at NASA’s Deep Space Network in Goldstone, California. Requiring coincident microwave pulses at both antennas all but eliminates triggers due to terrestrial radio interference. (The day after the workshop, many of the participants joined an excursion to the Mojave Desert, where NASA sponsored a tour of the Deep Space Network radio antennas.) Dagkesamanskii described his plans for a search using the Kalyazin and Bear-Lake Radio Telescopes in Russia.
Particle theorists who dared to predict the electric field intensities from particles showering in solids were faced with developing an interface between large shower Monte Carlo programs and Maxwell’s equations.
Enrique Zas (Santiago de Compostela, Spain) summarized the pioneering work by Halzen, Zas, Stanev and Alvarez-Muñiz, whose detailed particle showers and radiation models improved on Askaryan’s early work. Soeb Razzaque (Kansas) described his new work using the modern GEANT Monte Carlo package. Some disagreement with the other simulations points to a modest controversy. Roman Buniy (Kansas) warned all concerned to be careful with lengthscales, such as the Fresnel versus Fraunhofer limits. Buniy showed some beautiful graphical visualizations from his calculations of field intensities and phases that show the “richness of structure” imposed by the different lengthscales. Jaime Alvarez-Muñiz (Bartol), with a major effort in three-dimensional geometry, applied some of the theoretical results to the lunar experiments, describing how parameters could be varied to optimize the detection of neutrinos versus other high-energy particles.
The various techniques of mass spectrometry now provide atomic and nuclear mass measurements of such precision that, as well as enabling theoretical nuclear models to be stringently tested, they also allow the testing of higher-order effects in quantum electrodynamics (QED) – the most precise theory in existence.
Progress in the field was highlighted by the Atomic Physics at Accelerators (APAC) 2000 conference on mass spectrometry held at the Institut d’Etudes Scientifiques de Cargèse, Corsica. APAC2000 was the second of a series of three Euroconferences on atomic physics at accelerators. Completing the triad begun by APAC99 (held near Mainz), which covered laser spectroscopy, will be APAC2001 (to be held in Aarhus), which will cover spectroscopy with highly charged ions. The APAC conferences were initiated by Heinz-Jürgen Kluge, director of the Atomic Physics Division of the GSI Laboratory, Darmstadt, and they are interdisciplinary, linking the study of the nucleus with its influence on the atomic electron cloud.
Atomic mass data are systematically evaluated because their impact on neighbouring masses via reaction and decay energies can be considerable. This important job has long been carried out by Aaldert H Wapstra, now retired from NIKHEF, and Georges Audi (CSNSM-Orsay), who together periodically produce the benchmark Atomic Mass Evaluation publication.
A large part of the funding for APAC2000 was secured for the training and mobility of young European students and researchers, so a tutorial session of six lectures was included. It marked the 78th birthday of Aaldert Wapstra, honouring his unwavering commitment to the field of atomic masses.
The first lecture covered the evaluation of atomic masses (Georges Audi, CSNSM-Orsay), and was followed by an overview of the experimental techniques for such measurements (Alinka Lépine-Szily, Sao Paulo). Penning traps – magnetic storage devices that confine charged particles and determine their mass by making them “dance” at their cyclotron frequency – now dominate the field in precision measurement.
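The “dance” frequency in question is the cyclotron frequency of an ion of charge q and mass m in a magnetic field B (a textbook relation, included here for orientation):

\[
\nu_c = \frac{qB}{2\pi m},
\]

so for a given field and measurement time, a more highly charged ion circulates at a higher frequency – one reason, as discussed later in this article, why highly charged ions can improve the precision.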
Atomic mass provides important information about nuclear structure via the binding energy. The correct treatment of the nucleon-nucleon interaction is required to make a correct calculation. Unfortunately, masses can still be measured with more than 100 times the accuracy that calculations can provide. Some of the reasons were explained by Michael Pearson (Montreal), who described the quest for a microscopic nuclear mass formula. Measurements are even accurate enough to warrant the inclusion of the atomic binding energy, thus requiring theorists to look beyond QED (Gerhard Soff, Dresden). The role of nuclear masses in explosive nucleosynthesis (Stéphane Goriely, UL Brussels) is critical, and the fact that most of the nuclides involved cannot be produced in the laboratory forces a dependence on nuclear models.
Accurate mass measurements also provide stringent probes of the underlying fundamental interactions – a domain that was once solely the concern of those conducting experiments with the world’s largest particle accelerators. The energy of a certain type of beta-decay (super-allowed) is sensitive to up-down quark transitions, and accurate measurements provide important constraints (John Hardy, Texas A&M).
The tutorial session was concluded by Aaldert Wapstra, who shared some of his memories of a long career dedicated to studying atomic masses.
Worldwide effort
Review talks from the numerous groups involved in mass measurement worldwide – all of whom were represented – revealed not only a variety of techniques but also a rich harvest of new results.
In Europe, several groups are pursuing mass measurements for nuclear physics. At GANIL in France, the energy-loss spectrometer SPEG (Hervé Savajols, GANIL-Caen) is being used to study the weakening of shell structure far from stability (Fred Sarazin, Edinburgh), and the CSS2 and CIME cyclotrons (Marielle Chartier, Bordeaux) are being used for the study of isospin symmetry in nuclei (Anne-Sophie Lalleman, GANIL-Caen). At GSI in Germany, the Experimental Storage Ring (ESR) is being used in both Schottky pick-up mode (Yuri Litvinov, GSI Darmstadt, and Guenther Loebner, LMU-Munich) and isochronous mode for shorter half-lives (Marc Haussman, GSI-Darmstadt). At ISOLDE-CERN, such studies are being made using the radiofrequency spectrometer MISTRAL (Dave Lunney, CSNSM/Paris Sud) and the Penning trap spectrometer ISOLTRAP (Georg Bollen, ex ISOLDE, now Michigan State), where a variety of physics themes are explored, notably isospin symmetry and the weak interaction (Frank Herfurth, GSI).
Sophisticated calculations
SMILETRAP in Stockholm (see following article) uses a Penning trap for measuring masses of highly charged stable nuclides (Tomas Fritioff, Stockholm) to test sophisticated atomic binding energy calculations for various atomic charge states. Another key SMILETRAP goal is a fundamental test concerning neutrinoless double beta decay, where an accurate mass difference can help in determining whether neutrinos are their own antiparticles or not. At the GSI on-line facility, masses come as a by-product of extensive nuclear spectroscopy studies (Ernst Roeckl), and similarly at the Swedish Studsvik reactor facility (Konstantine Mezilev, NPI-Gatchina).
Mass-yielding spectroscopy is very important for short-lived isotopes that are inaccessible by other techniques, as shown by a report (Mark Huyse, KU Leuven) on neutron-deficient polonium isotopes measured at SHIP-GSI and at RITU-JYFL in Finland. Although some nuclides are known to be “schizophrenic”, these measurements indicate that some species may have not just two but three shape “personalities”.
In North America, masses are also included in large data harvests of alpha-particle and proton emission from Argonne (Cary Davids) where these nuclides are formed offshore of the island of nuclear stability, then shedding protons to beach themselves and decay along the valley of stability.
Similarly, beta-particle endpoint measurements at Yale’s Wright Laboratory (Daeg Brenner, Clark) provide masses of proton-rich nuclides of importance to the astrophysical rapid proton capture process (Ani Aprahamian, Notre Dame). Argonne will soon initiate an ambitious programme for studying nuclides of interest for testing the fundamental properties of weak interactions, using the Canadian Penning Trap (Guy Savard, Argonne).
An important aspect of using mass values for the better definition of physical constants is pursued at the University of Washington in Seattle (Robert Van Dyck) and at MIT (Simon Rainville). These groups give mass measurements of record accuracy – parts per trillion. A Harvard group working at CERN has used the excellent environment of Penning traps for an ultra-precise comparison of the proton and antiproton masses – an appetizer for the imminent synthesis and study of antihydrogen (Gerald Gabrielse).
A further high-precision Penning trap result, on bound-electron magnetism (Guenther Werth, Mainz), puts the ball back in the court of Penning trappers measuring masses, because the electron mass uncertainty now dominates the error of this measurement. Complementing the masses of these fundamental particles was that of the pion (Guenther Borchert, IKP-Julich), which is of importance for cosmology.
New standards
The very mass unit itself, the kilogram – the last fundamental standard defined by an artefact – was the subject of a prizewinning presentation (Annette Paul, PTB-Braunschweig) on the AVOGADRO project to redefine the kilogram using silicon atoms counted in a lattice.
Atomic masses can also be determined via nuclear reactions with heavy ions (Yuri Penionzhkevich, JINR-Dubna) and with neutrons (Cyriel Wagemans, Gent). New schemes were presented for measurement techniques using tabletop storage rings (Hermann Wollnik, Giessen) and small cyclotrons (Oleg Kozlov, JINR-Dubna).
The sessions on measurements and techniques were complemented by reports on advances in theory. On the atomic physics side, higher-order QED corrections to atomic binding energies dominate the overall errors. These calculations are among the most precise possible (Vladimir Shabaev, St Petersburg; Eva Lindroth, Stockholm; and Paul Indelicato, LKB-Paris). On the nuclear physics side, more affordable computing power opens up the scope and validity of mass predictions. Recent advances on the nuclear theory front include mean-field calculations (Paul-Henri Heenen, UL Brussels), including a separable monopole Hamiltonian (Jirina Rikovska, Oxford), the Monte Carlo shell model for light nuclides (Takaharu Otsuka, Tokyo) and semi-empirical shell model calculations for superheavy nuclides (Nissan Zeldes, Jerusalem).
Future projects
The conference ended with a look to future projects – all involving ion traps – for the DRIBS facility (Nicolai Tarantin, JINR-Dubna) and GSI’s new HITRAP for atomic physics (Wolfgang Quint) and SHIPTRAP for nuclear physics (Gerrit Marx). Some of these are covered by European Research and Training networks, notably EUROTRAPS and EXOTRAPS (Ari Jokinen, Jyvaskyla), and have begun to yield promising results.
The concluding speaker, Catherine Thibault (CSNSM-Orsay), was one of the pioneers of direct on-line mass measurements via mass spectrometry. She lauded the tremendous progress made in recent years, notably thanks to Penning traps.
Abstracts of the conference presentations, as well as all of the posters, are available at the conference Web site. The conference proceedings will be published this year in the journal Hyperfine Interactions.
APAC2000 was funded by the EU Program for Training and Mobility of Young Researchers. Additional support came from the French Institut National de Physique Nucléaire et de Physique des Particules (IN2P3), from the German GSI heavy ion laboratory and from the French host institute, Centre de Spectrométrie Nucléaire et de Spectrométrie de Masse, at the Université de Paris Sud, Orsay.
From time to time, people ask whether particle physics at CERN and other large laboratories produces applications for other sectors of society. Usually they are pointed towards the use of detector technology in medicine and the development of information technology. Everyone knows that the World Wide Web was born at CERN, but even earlier than that the very creation of the laboratory was one of the first demonstrations of how effective European co-operation can be.
Less frequently discussed is how CERN has become such a rich pool of scientific and technical knowledge, drawing on specialists from all over the world. Visitors to CERN exposed to this enormous pool of knowledge and expertise are often inspired with new ideas to implement when they return to their home labs. The impact of large international labs such as CERN is thus boosted by related activities at the smaller home labs.
For more than two decades, Stockholm’s Manne Siegbahn Laboratory was one of the collaborators in the study of exotic atoms, first at CERN’s PS and later at LEAR. (In exotic atoms, the orbital electrons of everyday atoms are replaced by other, heavier particles, such as muons, kaons and antiprotons.)
However, we were often motivated to try to transfer some of CERN’s impressive technological achievements to our own home lab, within a necessarily limited budget. Therefore, sabbatical years for scientists and leave of absence for skilled technicians working with CERN accelerator groups, the vacuum group and in the electronics sector were organized systematically.
From this came the idea to build a storage ring where particles did not make head-on collisions as in particle physics but merged in a soft collision. Though stored ions could have energies of tens of mega-electronvolts per nucleon, and, for example, be merged with electrons in the kilo-electronvolt region, the collision energies, surprisingly, could be as small as fractions of a milli-electronvolt.
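The arithmetic behind this surprise is simple: because the electron is so much lighter than the ion, the reduced mass of the pair is essentially the electron mass, and only the small velocity mismatch counts (a back-of-envelope estimate, not from the original article):

\[
E_{\mathrm{rel}} \simeq \tfrac{1}{2}\, m_e\,(v_e - v_{\mathrm{ion}})^{2},
\]

which vanishes when the electron and ion beams are velocity-matched, however energetic each beam is in the laboratory.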
Stockholm’s CRYRING storage ring was to a great extent based on the design and experience of CERN’s low energy antiproton ring (LEAR), although the size is somewhat smaller (circumference 54 m compared with 80 m for LEAR). Figure 1 shows the CRYRING layout with its ion injectors and low-energy experiments.
An impressive amount of work has been published since CRYRING began running in 1992. From the very beginning, research concentrated on stored highly charged ions produced in an electron-beam ion source (CRYSIS).
CERN asked whether Stockholm could reproduce the seemingly strange LEAR result that the lifetimes of stored lead-208 ions with some even and odd charge states were dramatically different.
CRYRING confirmation
To optimize the beam, the lead ions injected into LEAR were cooled by electrons. This also brings the risk that ions are lost by dielectronic recombination. In such a two-electron resonance process, a cooling electron is captured by the ion and the energy balance is regulated by the excitation of one of the bound electrons. This was studied for the 53+ and 54+ charge states by R Schuch and H Danared. Not only were the LEAR results confirmed, but the reason for the difference was indeed found to be dielectronic recombination resonances.
In the case of 53+, these occurred at the very low energies of 0.1 and 1 meV. Similar resonances are not present for 54+. From the measurements it is possible to determine the energy splitting between the ground state and the first excited state (4s and 4p levels) with an accuracy of about 1 meV. This is the most accurate determination of such a splitting ever measured in a very highly charged ion. The energy splitting includes large quantum electrodynamic effects.
At the ESR storage ring at the GSI Darmstadt lab, researchers have observed an ordered structure of stored ions when only some 5000 ions are left in the ring – a phenomenon sometimes referred to as one-dimensional ion crystallization. This has also been confirmed at CRYRING by studying the behaviour of stored nickel-17+ ions. Recent work has shown the existence of a one-dimensional lattice of xenon-36+ ions with thousands of stored ions.
In 1985 two experiments using electromagnetic traps were launched at CERN. One, at LEAR (G Gabrielse et al), aimed at a precision determination of the ratio of antiproton to proton masses, and it went on to show that they were equal to within 9 × 10⁻¹¹. The other trap experiment (ISOLTRAP) was set up at the ISOLDE on-line ion source for the determination of masses of radioactive nuclei. More than 100 masses of short-lived radioactive atoms have been determined to an accuracy of 10⁻⁷.
Having an electron beam ion source in Stockholm, we asked why everybody was using singly charged ions when it was obvious that the precision in mass measurements in a Penning trap increases linearly with the ionic charge. In collaboration with H-J Kluge’s group, we built another Penning trap (SMILETRAP) at Mainz, which was subsequently connected to the Stockholm electron beam ion source.
In a Penning trap, the mass of a highly charged ion is determined by measuring its cyclotron frequency and comparing it with the cyclotron frequency of a mass reference ion – often a carbon-12 ion of a suitable charge. The atomic mass is then obtained by correcting for the mass of missing electrons and their electron binding energies.
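Spelled out (standard Penning-trap practice, with notation introduced here), the ion mass follows from the frequency ratio, and the neutral atomic mass from adding back the q missing electrons and subtracting their total binding energy E_B:

\[
\frac{m_{\mathrm{ion}}}{m_{\mathrm{ref}}} = \frac{q_{\mathrm{ion}}}{q_{\mathrm{ref}}}\,\frac{\nu_{c,\mathrm{ref}}}{\nu_{c,\mathrm{ion}}},
\qquad
M_{\mathrm{atom}} = m_{\mathrm{ion}} + q\,m_e - \frac{E_B}{c^{2}}.
\]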
In addition to carbon ions, we used helium-4, nitrogen-14, neon-20, silicon-28 and argon-40 ions with various high charges as mass references. Their masses are known to an accuracy of about 10⁻¹⁰. Ionized hydrogen molecules were used as carriers for the proton. The ratio between the hydrogen molecule ion mass and the proton mass is known to better than 0.05 ppb.
The results are shown in figure 2. The proton mass value obtained (1.007 276 466 72(16)(86) u) is close to the currently most accurate value (1.007 276 466 89(13) u), except for the mass calculated using helium-4 as the reference. This deviation seemed puzzling because the listed helium-4 mass claimed an uncertainty of 0.25 ppb.
We therefore remeasured the helium-3, helium-4 and tritium masses using the hydrogen molecule ion as a mass reference. Figure 3 shows the results. The masses of these three light atoms are heavier than previously reported. It is thought that the relatively large mass discrepancy may be due to a day/night variation in the magnetic field that was unknown at the time of the earlier measurements.
This emphasizes how important it is in precision physics to obtain results from a number of different groups, preferably each employing a different technique. Our interest therefore focused on the neutrinoless double beta decay of germanium-76 – a process that, if observed, would signal some unexpected physics.
The most promising results come from the Heidelberg-Moscow experiment in the Gran Sasso underground laboratory. The masses of germanium-76 and selenium-76 were measured with an uncertainty of 1 ppb. However, for mass difference measurements the systematic uncertainties cancel, and the decay energy equivalent (Q-value) becomes very accurate (Q = 2 039 006(50) eV). Our Q-value is so accurate that its uncertainty represents only a few per cent of the half-width of the expected electron peak in the semiconductor detectors used.
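In terms of the measured atomic masses, the relevant quantity is simply (the standard definition for double beta decay):

\[
Q_{\beta\beta} = \left[M(^{76}\mathrm{Ge}) - M(^{76}\mathrm{Se})\right] c^{2},
\]

which is why a systematic shift common to the two mass measurements drops out of the difference.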
Such accuracy may seem excessive, but it should be pointed out that the mass uncertainty reported from measurements with conventional mass spectrometers often carries large, underestimated systematic uncertainties. In this particular case, however, we have confirmed the latest reported Q-value (figure 4), and with an accuracy seven times as good.
Through contact with CERN, a number of scientists and technicians at Stockholm have been able to establish a unique facility for advanced in-beam atomic and molecular physics. Other laboratories could produce similar measurements. Our choice of experiments has often been influenced by current problems in particle and accelerator physics – no doubt as a direct result of our contact with CERN.
The RHIC collider at Brookhaven now dominates the world stage for high-energy, heavy-ion collisions. It is set to run later this year with its sights on the full design energy of 200 GeV per nucleon, following its initial running at a lower energy in 2000. The main goal of this research is to explore the transition from ordinary hadronic matter to quark-gluon plasma (QGP) – matter as it is thought to have existed at the birth of the universe.
Meanwhile, a new experiment at CERN’s Super Proton Synchrotron is also poised to provide new results, following a decision last November to continue CERN’s heavy-ion programme.
The NA60 experiment will complement the new research programme at RHIC, meaning that CERN will continue to be a player in a field currently awaiting the big ALICE experiment at the Large Hadron Collider.
CERN’s heavy-ion programme began in 1986, with the initial goal of identifying the phase transition from confined hadronic matter to the deconfined QGP state. Last year, those conducting the experiments concluded that they had produced compelling evidence for a new deconfined state of matter in which quarks roam freely (see Opening the door to the quark–gluon plasma).
Several signals for the formation of QGP had been anticipated, and many of them were observed by CERN experiments. Among the most important of these was a sharp reduction in the number of charm-anticharm mesons – J/psi and psi prime – emerging from the highest energy-density collisions. This was caused by the tumultuous conditions of the plasma preventing the charm quarks and antiquarks from binding together. Another signal that was observed was an increase in the number of lepton pairs emerging from the collisions. These are the signals on which NA60 will focus.
The NA60 detector builds on the existing apparatus of the NA50 experiment with the addition of two new silicon detectors. One is placed in the beam and measures the interaction point with a precision of 20 µm. The other, placed after the target, is a silicon pixel telescope of almost 1 million channels in a 2.5 T magnetic field, which will vastly improve the mass resolution for muon pairs. Together these detectors will enable the experiment to measure J/psi and psi prime suppression more cleanly, and to measure charmed D-meson production in heavy ion collisions for the first time.
NA60 will run with protons this year, followed by indium and lead ions in 2002 and 2003. NA60 has chosen indium to test the current interpretation of the pattern of suppression observed by NA50 in lead-lead collisions. Verification of the effect seen by NA50 would bring conclusive evidence that the deconfined quark-gluon phase sets in under SPS conditions. It would also provide fundamental information on the mechanisms driving the transition. Furthermore, NA60’s excellent dimuon mass-resolution will allow the experiment to investigate other signals for deconfined matter, including the production of rho, omega and phi mesons, and to check whether the observed intermediate-mass dimuon excess is due to thermal dimuons emitted from free quarks in the plasma.
The NA49 experiment has also received a short extension to complete its energy scan, adding points at 20 and 30 GeV per nucleon to its existing data at 40, 80 and 158 GeV per nucleon. This should allow the onset of the transition to be ascertained with a greater degree of certainty.
Supersymmetry is now 30 years old. The first supersymmetric field theory in four dimensions – a version of supersymmetric quantum electrodynamics (QED) – was found by Golfand and Likhtman in 1970 and published in 1971. At that time the use of graded algebras in the extension of the Poincaré group* was far outside the mainstream of high-energy physics.
Three decades later, it would not be an exaggeration to say that supersymmetry dominates high-energy physics theoretically and has the potential to dominate experimentally as well. In fact, many people believe that it will play the same revolutionary role in the physics of the 21st century as special and general relativity did in the physics of the 20th century.
This belief is based on the aesthetic appeal of the theory, on some indirect evidence and on the fact that there is no theoretical alternative in sight. Since the discovery of supersymmetry, immense theoretical effort has been invested in this field. More than 30 000 theoretical papers have been published and we are about to enter a new stage of direct experimental searches.
The largest-scale experiments in fundamental science are those that are being prepared now at the LHC at CERN, of which one of the primary targets is the experimental discovery of supersymmetry.
The history of supersymmetry is exceptional. In the past, virtually all major conceptual breakthroughs have occurred because physicists were trying to understand some established aspect of nature. In contrast, the discovery of supersymmetry in the early 1970s was a purely intellectual achievement, driven by the logic of theoretical development rather than by the pressure of existing data.
Simultaneous discovery
To an extent, this remains true today. The history of supersymmetry is unique because it was discovered practically simultaneously and independently on both sides of the Iron Curtain. There was very little cross-fertilization – at least in the initial stages. As such, it is not surprising that eastern and western research arrived at this discovery from totally different directions.
While scientific interactions could have been mutually beneficial, they did not occur. Indeed, the political climate of the 1970s precluded such interactions. Of course, once it was recognized that supersymmetry could be integrated into and extend the standard model of fundamental interactions, the progress made on both sides of the Iron Curtain was acknowledged. However, it was only recently that some of the pioneers who opened the gates to the superworld in the early 1970s met face to face for the first time – in Minnesota.
As so often when exploring new ground, some early work on supersymmetry was hit and miss. Golfand and Likhtman initially reported a construction of the super-Poincaré algebra and a version of massive super-QED. The formalism contained a massive photon and photino, a charged Dirac spinor and two charged scalars (spin-0 particles).
Likhtman found algebraic representations that could be viewed as supersymmetric multiplets and he observed the vanishing of the vacuum energy in supersymmetric theories. It is interesting to note that this latter work still only exists in Russian.
Subsequent to the work of Golfand and Likhtman, contributions from the East were made by Akulov and Volkov, who in 1972 tried to associate the massless fermion – appearing due to spontaneous supersymmetry breaking – with the neutrino. Within a year, Volkov and Soroka gauged the super-Poincaré group, which led to elements of supergravity. They suggested that the spin-3/2 superpartner of the graviton becomes massive on “eating” the Goldstino that Akulov and Volkov had discussed earlier. The existence of this “super-Higgs mechanism” in full-blown supergravity was later established in the West.
A mathematical basis for the work of Volkov and collaborators was provided by the paper of Berezin and Katz, written in 1969 and published in 1970, in which graded algebras were studied thoroughly. In his memoirs, Volkov also mentions the impact of Heisenberg’s ideas on the making of Volkov-Akulov supersymmetry.
In the West, a completely different approach was taken. A breakthrough into the superworld was made by Wess and Zumino in 1973. This work was done independently, because western researchers knew little if anything about the work done in the Soviet Union. The prehistory on which Wess and Zumino based their inspiration has common roots with string theory – another pillar of modern theory – which in those days was referred to as the “dual model”.
Around 1969, the dual-resonance model of strong interactions, found by Veneziano, was formulated in terms of four-dimensional harmonic oscillators. Nambu advanced the idea that these oscillators represented a relativistic string. After that the scheme was reformulated as a field theory on the string world sheet. The theory was plagued by the fact that the spectrum contained a tachyon but no fermions and it was consistent only in 26 dimensions. These problems motivated the search for a more realistic string theory.
*Words in italics are explained in the superglossary, next page.
The known elementary particles come in two kinds – fermions, such as quarks, electrons, muons, etc (matter particles), and bosons, such as photons, gluons, Ws and Zs (force carriers). The key feature of supersymmetry is that every matter particle (quark, electron, etc) has a boson counterpart (squark, selectron, etc) and every force carrier (photon, gluon) has a fermion counterpart (photino, gluino, chargino, neutralino, etc). This doubling of the particle gene pool is because supersymmetry is a quantum-mechanical enhancement of the properties and symmetries of the space-time of our everyday experience, such as translations, rotations and relativistic transformations.
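In more formal terms (a textbook relation in two-component notation, not given explicitly in the article), the supersymmetry generators Q turn bosons into fermions and vice versa, and two successive supersymmetry transformations amount to a space-time translation:

\[
Q\,|\mathrm{boson}\rangle = |\mathrm{fermion}\rangle,
\qquad
\{Q_\alpha, \bar{Q}_{\dot\beta}\} = 2\,\sigma^{\mu}_{\alpha\dot\beta}\,P_\mu ,
\]

where P_μ is the generator of translations; this is the precise sense in which supersymmetry extends the familiar symmetries of space-time.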
Supersymmetry introduces a new dimension – one that is only defined quantum mechanically and does not possess classical properties, such as continuous extent. The particle-superparticle twinning can assuage several theoretical headaches, such as why the different forces – gravity and electromagnetism – appear to operate at such vastly different and apparently arbitrary scales (“the Hierarchy Problem”). The extra particles provided by supersymmetry are also natural candidates for exotica, such as the missing dark matter of the universe.
“Season of mists and mellow fruitfulness,” wrote the English poet John Keats (1795-1821) in his famous ode “To autumn”. At CERN, the autumn of the year 2000 brought a very different scenario.
During the latter months of 2000, CERN Courier’s news coverage followed the progress of the experiments at the LEP electron-positron collider as its energy was pushed to unprecedented and unforeseen levels. In the lead-up to the ultimate closure of LEP, several experiments produced some interesting evidence for the long-awaited Higgs particle – a keystone of modern physics theory.
Originally scheduled to close in September 2000, LEP was given a six-week “stay of Higgs execution” so that it could investigate the candidate Higgs signals further. Some additional evidence was found during the extra running time, but not enough to claim a discovery. With the need to commence construction work for CERN’s LHC collider (to be built in the same 27 km tunnel as LEP) becoming increasingly urgent, the decision was finally taken to close LEP for good.
CERN’s 27 km LEP storage ring was built with the initial objective of making precision measurements on the Z particle (the electrically neutral carrier of the weak force discovered at CERN’s proton-antiproton collider in 1983). For this, LEP had energies of about 45 GeV per electron and positron beam when it began operations in 1989.
However, LEP was designed to explore much more than just the Z particle. Even before the machine was formally approved for construction in 1981, a far-sighted programme of research and development had begun to develop superconducting radiofrequency cavities, which would boost the machine’s beams towards 100 GeV per beam and probe far beyond the Z production threshold.
By 1996 the necessary technology had been established and LEP was equipped with a number of gleaming new niobium-covered accelerating cavities. By 1998, 272 of these cavities had been installed in the machine, and the collision energy (the sum of the two colliding beam energies) had attained 189 GeV. This was more than enough for LEP to start seeing production of the W particle, the electrically charged companion of the Z (in electron-positron collisions the W has to be produced in oppositely charged pairs).
By the beginning of its 1999 run, LEP had been equipped with an additional 16 superconducting cavities, bringing the collision energy to 192 GeV. The supercavities lived up to their name and soon began providing accelerating fields greater than the 6 MV m⁻¹ originally planned, and in the summer of 1999 the machine delivered its first 100 GeV beams.
The ultimate goal is to find the missing link in today’s Standard Model picture of particle physics: the Higgs particle, which breaks the underlying electroweak symmetry and makes everyday electromagnetism look very different from the weak interactions. Indeed, light and nuclear beta decay look so different that it took most of the 20th century for the connection between them to be recognized.
The Higgs endows particles with mass, so that the photon carrier of electromagnetism is free to roam, while the weak interaction is mediated by very heavy particles which are confined to subnuclear dimensions. Unfortunately the electroweak theory makes no direct prediction as to what or even where the Higgs may be.
However, the quest is not entirely unguided. All of the parameters of the electroweak theory have to fit together in a consistent way. As these parameters were measured with increasing precision, the region in which the Higgs had to lie became progressively smaller.
By 1999 it had become clear that LEP was operating in the very collision energy region in which the Higgs was most likely to be found. Eagerly, the teams working on the four experiments – Aleph, Delphi, L3 and Opal – scrutinized their new data.
In the mutual annihilation of an electron and a positron, the Higgs particle could emerge back to back with a Z particle. The possibility had been pointed out long before in several prophetic papers, the first published in 1976 by John Ellis, Mary K Gaillard and Dimitri Nanopoulos at CERN.
Shakespeare wrote: “Rumour is a pipe blown by surmises, jealousies, conjectures…that…the still-discordant wavering multitude can play upon.” By the autumn of 1999 the first of such whispers were heard about the Higgs, first in CERN corridors, then on the Internet. Behind the scenes, the CERN publicity machine began to compile material for a Higgs announcement, just in case.
Physicists at CERN report the progress of their experiments via special platforms, in this case the public sessions of the LEP Experiments Committee. Normally held a few times per year, these sessions were traditionally opened by representatives from the machine operations team, who would give the latest news on the machine front, followed by a slot for each of the four major experiments to present their latest findings.
The meeting on 9 November 1999 was particularly well attended, due to the rumours that abounded about the Higgs. However, in spite of these rumblings, nothing new emerged in public that day on the Higgs front.