The BES II spectrometer at the Beijing Electron-Positron Collider (BEPC) has completed a measurement of hadron production rates over the 2–5 GeV energy range, which is valuable input for Standard Model calculations.
Three vital input parameters in the electroweak sector of the Standard Model are α, the electromagnetic coupling strength (which depends on energy); the Fermi constant of weak decay; and the mass of the Z boson, the neutral carrier of the weak force.
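As a reminder of how tightly these three inputs are linked (a textbook tree-level relation, not spelled out in the article), they determine the weak mixing angle through

$$\sin^2\theta_W \cos^2\theta_W = \frac{\pi\alpha}{\sqrt{2}\,G_F M_Z^2},$$

so any shift in the coupling strength feeds directly into every electroweak prediction.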
To test the Standard Model, the electromagnetic coupling strength has to be evaluated at the Z resonance. The LEP measurements of the Z mass are of such high quality that the error on the coupling strength is now a limiting factor in tests of the Standard Model. An accurate determination of α is also critical for the indirect determination of the mass of the Higgs particle: a more accurate value narrows the mass window for Higgs searches.
Of particular importance in the extrapolation of α is the hadronic contribution to the vacuum polarization, from virtual quark-antiquark pairs, which cannot be calculated reliably but can be related to a quantity known as R, the ratio of hadron to muon pair production in electron-positron annihilation. Uncertainties in the measured values of R in the 2–5 GeV energy range contribute to the error in α.
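In standard notation (the article does not display these formulas), the measured quantity is

$$R(s) = \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)},$$

and the hadronic shift of the coupling at the Z is obtained from the dispersion integral

$$\Delta\alpha_{\mathrm{had}}(M_Z^2) = -\frac{\alpha M_Z^2}{3\pi}\, \mathrm{P}\!\int_{4m_\pi^2}^{\infty} \frac{R(s)}{s\,(s - M_Z^2)}\, \mathrm{d}s,$$

whose integrand gives particular weight to the low-energy region covered by the BES scan.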
After the first R scan in spring 1998, the BES collaboration performed a finer scan in the 2–5 GeV energy region, almost the extremes of the energy range that BEPC can cover. The scan began in February and finished in early June. Data were taken at 85 energy points. To subtract background, separated-beam runs were performed at 26 energy points, and single-beam runs for electron and positron beams were carried out at 7 energy points. Special runs were taken at the J/psi resonance to determine the trigger efficiency and calibrate the detector. These runs show that the 12-tracking-layer vertex chamber, rebuilt from the SLAC Mark III endplates and beryllium beam pipe, has a spatial resolution of about 100 µm.
The figure shows the online values from the new R scan. Note that detection efficiency, background subtraction and radiative corrections have not yet been applied. The plot includes the R values for the 6 energy points measured last year. The upgraded BEPC, as well as the good co-operation and hard work of the BEPC staff, were essential to the success of the scan, which continued even through the traditional Chinese Spring Festival.
A regular feature of the nuclear physics scene is the Particles and Nuclei International Conference (PANIC99), which was held in Uppsala, Sweden, on 10–16 June. An appropriate symbol for the turn-of-the-millennium conference was a Swedish rune stone, carved around the last millennium change and placed in front of the main building of the university.
The first such conference was held in 1963 at CERN and was organized by Victor Weisskopf, Amos de Shalit and Torleif Ericson. Five PANIC99 participants had also participated in the first meeting. The conference series was initially called High-Energy Physics and Nuclear Structure, and one of the main themes has been to link nuclear and high-energy physics. These two disciplines have become increasingly important in astrophysics and cosmology, which were given added weight in the programme of PANIC99.
The first plenary talk each day was designed to underline this cross-disciplinary character as well as to show how applications of instruments and methods are important to society. The lectures were “What quantum chromodynamics tells us about nature” (Frank Wilczek), “The theoretical enigma of gamma-ray bursts” (Martin Rees), “Charged particles in radiation medicine” (Gudrun Gotein), “Core collapse in supernovae” (Hans-Thomas Janka), “Transmutation of nuclear waste” (Waclaw Gudowski) and “Where in the world is the oscillating neutrino?” (Janet Conrad).
Wilczek was the keynote speaker, and Gotein’s talk was dedicated to the late Boerje Larsson, who carried out pioneering work in radiation surgery and therapy using charged particles in Uppsala and elsewhere.
Of the 600 contributions, 200 were selected for oral presentations in the afternoon sessions, which were divided into the following topics: real and virtual photon interactions with nucleons and nuclei; meson, nucleon and hyperon interactions with nucleons and nuclei; hadron spectroscopy and structure; dense and hot matter; neutrino physics, nuclear and particle astrophysics; fundamental symmetries and the Standard Model; strong interactions in the non-perturbative region; and experimental techniques, applications and facilities for the next millennium.
Highlights in hadron and photon interactions were summarized by H Toki and A Thomas, two of the convenors of the first two sessions, which were the largest in terms of the number of papers. Four contributions were selected as topical talks in the morning plenary sessions. C Perdrisat presented precise data on the proton electric and magnetic form factor ratio, obtained by polarization transfer in electron-proton scattering at the Jefferson Lab. K Seth reported, on behalf of the E852 collaboration at Brookhaven, on the discovery of new exotic mesons. J Belz talked about new measurements of direct CP violation in K decay and other recent results from Fermilab’s KTeV experiment. A Gillitzer discussed the observation at GSI Darmstadt of well resolved nuclear states.
Prompted by the very recent discoveries at Berkeley and Dubna of three new elements, K Aleklett commented on how they were found and on their impact on research.
In addition to the morning and afternoon parallel sessions, there were poster presentations on the Friday and the Monday, and visitors could vote for the best poster of each session. Prizes were awarded to A Filippi, for the OBELIX collaboration, for a poster on strangeness production and the OZI rule, and to J Bonn, L Bornschein, B Degen, Ch Kraus, E W Otten, H Ulrich and Ch Weinheimer from Mainz for a poster on a sensitive spectrometer for neutrino mass measurements.
The proceedings of the conference will be published in Nuclear Physics. PANIC99 showed that this type of event has an important role to play in bridging different subfields of physics, which was also underlined by the IUPAP C12 meeting. The 16th PANIC conference will be held in Japan and will be organized by the Osaka Research Centre for Nuclear Physics.
During PANIC99, CELSIUS/WASA, a new detector facility at the CELSIUS cooler storage ring, was inaugurated at Uppsala’s Svedberg Laboratory, a Swedish national facility for accelerator-based research. The new facility will be reported on in the next issue.
Almost a century after Einstein introduced the idea of a quantum of light, the photon (its name coined only in 1926) continues to be a rich field of physics. Photon-photon and photon-proton interactions are prolific testing grounds for quantum chromodynamics (QCD), the field theory of quarks and gluons, as well as for the more “classical” quantum electrodynamics (QED).
Experimentalists, mainly from CERN’s LEP electron-positron collider and DESY’s HERA electron-proton collider, discussed their newest results with theorists at the International Conference on the Structure and Interactions of the Photon (PHOTON 99), held recently in Freiburg, Germany.
At close quarters the photon looks as though it contains quarks and gluons as well as electromagnetic particles. The HERA and LEP experiments have reached a new stage of precision in the measurement of photon structure. Claudia Glasman (Madrid) pointed out that the new data pose a challenge to the theorists.
The scattering of two virtual photons at LEP provides a pure test of a particular QCD regime (described by BFKL dynamics). However, there are large discrepancies between theory and data, possibly owing to higher-order effects.
Photon-photon interactions are now studied from the electron-volt up to the giga-electron-volt range. Denis Bernard (Ecole Polytechnique) presented a search for elastic photon-photon scattering, a reaction that has never been observed. The best limit using lasers is still 18 orders of magnitude greater than the value predicted by QED.
At high energies, the physics potential of a “Compton Collider” as part of the next (electron-positron) Linear Collider project was discussed by David Miller (UCL). In a Compton Collider (originally suggested by Telnov, Serbo, Ginzburg and Kotkin from Novosibirsk, three of whom were at PHOTON 99), high-energy photons are produced by scattering laser photons off electron beams.
One of the most exciting possibilities is to produce Higgs particles directly from the scattering of pairs of such high-energy photons, a powerful way of determining fundamental Higgs properties.
Kai Hencken (Basel) pointed out that Brookhaven’s RHIC will also be a rich source of photons. The first results are expected in time for PHOTON 2000, which will be held in Lancaster (UK).
The proceedings of PHOTON 99 will be published by Nuclear Physics B.
Soon after the experiments at Dubna, which synthesized element 114 and made the first footprints on the beach of the “island of nuclear stability”, two new superheavy elements have been discovered at the Lawrence Berkeley National Laboratory.
Element 118 and its immediate decay product, element 116, were manufactured at Berkeley’s 88 inch cyclotron by fusing targets of lead-208 with an intense beam of 449 MeV krypton-86 ions.
Although both new nuclei almost instantly decay into lighter ones, the decay sequence is consistent with theories that have long predicted the island of stability for nuclei with approximately 114 protons and 184 neutrons.
Theorist Robert Smolanczuk, visiting from the Soltan Institute for Nuclear Studies in Poland, had calculated that this reaction should have particularly favourable production rates. Now that this route has been signposted, similar reactions could be possible: new elements and isotopes, tests of nuclear stability and mass models, and a new understanding of nuclear reactions for the production of heavy elements.
The element 118 isotope identified at Berkeley contains 118 protons and 175 neutrons in its nucleus. Less than 1 ms after its creation, it decays by emitting an alpha particle, leaving behind an isotope of element 116 containing 116 protons and 173 neutrons. This alpha-decays in turn to an isotope of element 114. The chain of successive alpha decays is observed down to at least element 106.
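The bookkeeping implied by these numbers, reconstructed here from the proton and neutron counts quoted above, is

$$^{208}\mathrm{Pb} + {}^{86}\mathrm{Kr} \to {}^{293}118 + n,$$

$$^{293}118 \xrightarrow{\alpha} {}^{289}116 \xrightarrow{\alpha} {}^{285}114 \xrightarrow{\alpha} {}^{281}112 \xrightarrow{\alpha} \cdots \xrightarrow{\alpha} {}^{269}106,$$

with each alpha emission removing two protons and two neutrons.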
Vital to the experiment was the newly constructed Berkeley Gas-filled Separator (BGS). Another important factor was the versatility of the 88 inch cyclotron, in operation since 1961 and recently upgraded by the addition of a high-performance ion source.
It is poignant that this new transuranic nucleus was discovered at Berkeley only a few months after the death of Glenn Seaborg, co-discoverer at Berkeley of plutonium and nine other elements heavier than uranium, the heaviest naturally occurring nucleus.
Ever since the discovery of neptunium and plutonium almost 60 years ago, physicists have continually sought to synthesize additional artificial, transuranic elements. Most of these nuclei are highly unstable, but a fundamental nuclear physics prediction says that these superheavy elements would eventually reach an “island of stability” (figure 1).
This intriguing hypothesis, which was proposed more than 30 years ago and has since then been developed intensively, seems to have received recent experimental confirmation at the Joint Institute for Nuclear Research in Dubna near Moscow.
In a 34-day bombardment of a heavy target of plutonium-244 by a calcium-48 beam (total dose 5.2 × 10¹⁸ ions), an unusual decay chain was recorded by a position-sensitive detector array. This decay chain consisted of a heavy implanted atom, three sequential alpha decays and a spontaneous fission (SF), and altogether it lasted for about 34 min (figure 2a).
All five of the signals were correlated in time and position. The large alpha-particle energies and the long decay times, together with the termination of the sequence by spontaneous fission, provide evidence for the decay of nuclei with large atomic numbers. Under the given experimental conditions, the probability that random coincidences could simulate such a decay chain is negligibly small.
Second attempt
The authors consider this to be an excellent candidate for a decay chain originating from the alpha decay of a parent nucleus with atomic number 114 and mass number 289, produced with a cross-section of about 1 pb by the evaporation of three neutrons from the compound nucleus. Another attempt to obtain a second event is planned for a forthcoming experiment in July 1999, after which a final interpretation of the results will be made.
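The production reaction implied by this interpretation reads

$$^{244}\mathrm{Pu} + {}^{48}\mathrm{Ca} \to {}^{292}114^{*} \to {}^{289}114 + 3n,$$

with the excited compound nucleus (114 protons, 292 nucleons) shedding three neutrons.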
The experiment was performed in Dubna’s Flerov Laboratory of Nuclear Reactions in November and December of 1998 in collaboration with the US Lawrence Livermore National Laboratory. The Dubna gas-filled recoil separator (DGFRS), which is capable of separating in flight the evaporation residues of superheavy nuclei from projectiles and other reaction products, was employed to extract single atoms.
The beam intensity at the U400 heavy-ion cyclotron was approximately 4 × 10¹² ions/s at a consumption rate of 0.3 mg/h of the rare calcium-48 isotope in the ion source.
A follow-up experiment was carried out in March and April (with the participation of GSI, Darmstadt; RIKEN, Tokyo; and Comenius University, Bratislava). The objective on that occasion was the synthesis of a new isotope of element 114, with mass number 287, in reactions between calcium-48 and plutonium-242. The VASSILISSA electrostatic recoil separator sifted the reaction products and recorded the decays of the new nuclides.
The experiment lasted for about 30 days and involved a total beam dose of 7.5 × 10¹⁸ ions. Two similar events were recorded as short decay chains, each consisting of a recoil nucleus, an alpha particle emitted a few seconds later and a final SF with a half-life of a few minutes (figure 2b). In each case, all three signals of the decay sequence were correlated in time and position.
Third example
The spontaneously fissioning emitter (which has a lifetime of about 1.5 min) had been observed in an earlier experiment that was performed by the same collaboration in reactions between calcium-48 and uranium-238. On that occasion, the two observed spontaneous fission events had tentatively been assigned to the decay of the new isotope of element 112 with mass number 283 (figure 2c).
In the latest experiment, the same nucleus has been produced as the daughter of the alpha decay of the mother nucleus with mass number 287 and proton number 114. The atomic numbers of the synthesized nuclei will be determined chemically. The first experiment, aimed at the chemical separation of element 112, is now being prepared.
The half-lives of the new nuclides are estimated to range from seconds to tens of seconds. Their daughter nuclei, the decay products, live for minutes: almost a million times as long as the lighter isotopes of elements 110 and 112, with neutron numbers 163 and 165.
This is exactly in line with theoretical predictions. When approaching the closed 184-neutron shell, the increasing neutron number should change the shape of the nucleus from elliptical to spherical. This spherical shell, coming after the 126-neutron shell in the stable lead-208 nucleus, is so strong that its influence, according to the calculations, extends even to those nuclei that have more than 170 neutrons, thus increasing their lifetime by many orders of magnitude.
From this point of view the properties of the new nuclei, synthesized in reactions induced by calcium-48, could be considered a first experimental indication of the existence of the island of stability of superheavy spherical nuclei.
Some of the most compelling questions in particle physics today concern the Higgs boson and supersymmetry (SUSY). Is the Higgs mechanism responsible for electroweak symmetry breaking and the origin of mass? Can SUSY (the symmetry under which bosons and fermions are equivalent in nature, which thus predicts that for every particle there is a partner “sparticle”) point the way to the ultimate unification of forces? Where is the Higgs? Where is SUSY? How can they be found? What if we don’t find them?
To address these questions in depth, more than one hundred physicists gathered at the University of Florida, Gainesville, for the international conference entitled Higgs and Supersymmetry: Search and Discovery, earlier this year.
The goal of the conference was to provide a forum for physicists involved in Higgs and supersymmetry searches in which the present status of the research could be summarized and new directions for searches and the potential for discovery established.
The conference agenda focused on Higgs and SUSY presentations (see “http://www.phys.ufl.edu/~rfield/higgs_susy.html”). One exception was M Spiro’s (CEA/Saclay) talk on dark matter. Even then SUSY was in the limelight: the lightest supersymmetric particle is one of the best candidates for dark matter in the universe.
The elusive Standard Model
G Altarelli (CERN/Rome 3) pointed out in his presentation on “The Standard Electroweak Theory and beyond” that, in spite of the great experimental work done in the 1990s, which included high-precision electroweak measurements at LEP (CERN) and SLC (SLAC, Stanford) and the discovery of the top quark at the Tevatron (Fermilab), a clear view beyond the Standard Model (SM) continues to elude researchers. Even so, there are good reasons for optimism in the coming decade as CERN’s LEP2 electron-positron collider continues towards its highest achievable beam energy (around 100 GeV), and surprises may be just around the corner.
At the same time, physicists at Fermilab’s Tevatron collider, home of the CDF and D0 experiments, are getting ready to finalize their upgrades for Run 2. The increased collision energy (2 TeV) and the large data samples expected in the first two years of running (from integrated luminosities of 2 fb⁻¹ or more: 20 times those gathered previously) may be the key to finding the Higgs and SUSY.
Just over the horizon, the ATLAS and CMS experiments at CERN’s LHC collider await their turn to push back the high-energy frontier. However, the Higgs could be nearby. The most recent fits to precise electroweak measurements, discussed by F Richard (LAL Orsay), favour a relatively light Standard Model (SM) Higgs (91 +71 −41 GeV). The newest results on the SM Higgs search from LEP2, with electron-positron annihilation producing a Higgs together with a Z particle (the former decaying into a beauty (“b”) quark and antiquark and the Z into any quark and antiquark being the primary search channel), give a Higgs mass greater than 95.5 GeV (95% c.l.). LEP’s ultimate sensitivity to the SM Higgs is expected to be between 105 and 110 GeV. The window for discovery remains open.
The current Tevatron Higgs searches, described by E Barberis (Berkeley), focus on Higgs production in association with W or Z bosons. The sensitivity levels for these processes are still an order of magnitude away from the SM expectation. The Tevatron’s reach during Run 2 and beyond, presented by M Carena (Fermilab), has been studied extensively at the Run 2 SUSY/Higgs workshop.
For a Higgs mass of less than about 130 GeV, the Higgs decays predominantly into b quarks. The key to the analysis is to have excellent b-tagging capabilities and good energy resolution for b-quark jets (to reconstruct a narrow Higgs mass peak over the quark-gluon jet background). For the SM Higgs, updated results show that, with an integrated luminosity of 2 fb⁻¹ per experiment, the 95% exclusion limit can reach about 115 GeV. With 10 fb⁻¹ the sensitivity can reach about 185 GeV and, if the Higgs lies in that range, a signal could be detected for a mass of less than 125 GeV or of 150 to 175 GeV (where the Higgs decay into W pairs dominates over the b-quark-antiquark decay favoured at lower Higgs masses).
An integrated luminosity of some 50 fb⁻¹, although difficult to envisage, would make it possible to exclude a Higgs below 180 GeV. Beyond the SM, where new particles enter the game, the coupling strengths of the Higgs particle(s) can change and affect the outcome. In the popular Minimal Supersymmetric extension of the SM (MSSM), two Higgs doublets result in five physical states, whose masses can be determined by two parameters. Radiative corrections owing to the sixth (“top”) quark and the “stop” (supersymmetric top) can be large.
The model predicts that the lightest SUSY Higgs weighs no more than about 130 GeV. However, the reaction rates at the Tevatron for the MSSM channels can, at best, be only slightly larger than the SM rates in some parts of the kinematically allowed region; mostly they are expected to be somewhat smaller. Still, with enough collisions, the lightest SUSY Higgs seems to be within the Tevatron’s reach in Run 2.
Limits from LEP on MSSM Higgs masses currently reach about 85 GeV and will increase. The Higgs searches at LEP and the upcoming Tevatron run are definitely worth keeping close track of. A surprise could be in store before the onset of the LHC, which has the goal of ultimately reaching 300 fb⁻¹ in ten or so years of running at a collision energy of 14 TeV, and of finding all there is to be found.
The prospects for Higgs and SUSY at the LHC were covered by K Jakobs (Mainz/ATLAS) and D Denegri (Saclay/CMS). Extensive detector simulations of all relevant Higgs channels have been performed to understand the various detector efficiencies and resolutions needed to optimize the physics yield. Excellent lepton and photon detection and b-quark tagging are paramount. With 30 fb⁻¹ of integrated luminosity expected after three years of running, and by combining all signatures, both detectors show the capability of detecting the SM Higgs unequivocally up to a mass of 1 TeV.
The SM Higgs mass can be measured with a precision of 0.1% up to masses of some 400 GeV.
If the decays of MSSM Higgs bosons to SUSY particles are not allowed kinematically, then at the LHC the full parameter space can readily be covered. Open decay channels producing SUSY particles complicate the Higgs searches, but still a good fraction of the available parameter space can be probed.
If SUSY exists at the electroweak scale, the LHC will discover it easily. Gluinos and squarks (the supersymmetric partners of gluons and quarks), up to masses of about 2 TeV, will be copiously produced and their decays will give signatures that differ significantly from the SM. Sleptons (the leptons’ SUSY partners) can be detected directly up to about 400 GeV.
The status of the LEP SUSY searches was reviewed by M Schmitt (Harvard). A slew of searches for gauginos, squarks and sleptons have yielded limits on SUSY masses in the 80 to 95 GeV range. In addition to the MSSM model, other models have been addressed but no signals have been observed.
SUSY searches at the Tevatron, covered in part by D Stuart (Fermilab), are also a busy industry, and in many cases the kinematic region excluded by the LEP experiments has already been extended significantly: the sbottom (stop) mass limits have reached 148 (119) GeV and the gluino limit almost 270 GeV.
Promising indications
At DESY’s HERA electron-proton collider, “looking for non-Standard Model effects is alive and well,” affirmed F Sciulli (Columbia), who showed that promising indications in the large-x (large mass in the electron-proton collision) region persist in data from both the ZEUS and the H1 experiments.
Higgs and SUSY prospects at the proposed Next Linear Collider (NLC), Muon Collider and Very Large Hadron Collider were covered by D Burke (SLAC), J Lykken (Fermilab) and D Denisov (Fermilab) respectively.
H Baer (Florida State) discussed the interface between theorists and experimentalists in the context of simulations beyond SM physics, and G Kane (Michigan) predicted that, unless we are missing some basic ideas, in the next six years or so SUSY particles and the light SUSY Higgs will be discovered either at LEP or at Fermilab.
In the final presentation of the conference, J Bagger (Johns Hopkins) argued that perhaps not all sparticles may be light, given that several rare processes, such as lepton-flavour violation and proton decay, prefer unnaturally heavy scalar particles (in the 5–10 TeV range). Supernaturally superheavy supersymmetry provides a scenario in which the superparticle mass spectrum is the reverse of what is encountered with ordinary particles, but which still has enough particles below the searchable 1 TeV scale.
As conference participants headed for other Florida attractions, the feeling was that the near future in particle physics was as bright as the sunshine.
As reported in the June issue, in mid-April the new DAFNE phi-meson factory at Frascati began operation, with the KLOE detector looking at the physics.
The DAFNE electron-positron collider operates at a total collision energy of 1020 MeV, the mass of the phi-meson, which prefers to decay into pairs of kaons. These decays provide a new stage for investigating CP violation, the subtle asymmetry that distinguishes between matter and antimatter. More knowledge of CP violation is the key to an increased understanding of both elementary particles and Big Bang cosmology.
Since the discovery of CP violation in 1964, neutral kaons, produced as secondary beams from accelerators, have been the classic arena for its study. This is now changing as new CP violation scenarios open up with B particles, which contain the fifth quark: “beauty”, “bottom” or simply “b” (June).
Although still on the neutral kaon beat, DAFNE offers attractive new experimental possibilities. Kaons produced via electron-positron annihilation are pure and uncontaminated by background, and having two kaons produced coherently opens up a new sector of precision kaon interferometry. The data are eagerly awaited.
Strange decay
At first sight the fact that the phi prefers to decay into pairs of kaons seems strange. The phi (1020 MeV) is only slightly heavier than a pair of neutral kaons (498 MeV each), so this decay is kinematically very constrained. Decay via a rho-meson (770 MeV) and a pion (140 MeV) should apparently be much easier.
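The arithmetic behind the puzzle, using the masses quoted above, is simple:

$$Q(\phi \to K^0\bar K^0) \approx 1020 - 2 \times 498 \approx 24\ \mathrm{MeV}, \qquad Q(\phi \to \rho\pi) \approx 1020 - (770 + 140) \approx 110\ \mathrm{MeV}.$$

The kaon channel has far less energy to spare, yet it is the one that dominates.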
The phi-meson was first seen in bubble chamber experiments at Brookhaven in 1962. A subsequent paper, published the following year, outlined the properties of the new particle.
In 1963 the name of the game was to assign particles into SU3 multiplets, the Eightfold Way of Gell-Mann and Ne’eman. SU3 had to be some reflection of an underlying symmetry. At the time, physicists had little idea what this symmetry was, other than that it had to have a threefold structure. Quarks had not yet been talked about.
The 1963 paper said that the phi was a “vector” (spin-one, negative parity) meson. It could be accommodated as an SU3 singlet, supplementing an octet of other vector mesons, the rho and its cousins.
George Zweig, just completing his PhD at Caltech under Richard Feynman, looked at the paper and was immediately intrigued by the huge signal for the decay into two kaons, right at the edge of the kinematically allowed region, while the apparently easier rho-pi decay was suppressed.
“Feynman taught me that in strong interaction physics everything that can possibly happen does, and with maximum strength,” said Zweig. “Only conservation laws suppress reactions. Here was a reaction that was allowed but did not proceed.”
This worried Zweig, who had been quietly toying with the idea that perhaps strongly interacting particles had constituents. The bizarre decay behaviour of the phi convinced him that constituents made sense. Moreover, he felt that, instead of being just some mathematical symmetry reflected as SU3 multiplets, these constituents had to be real.
His idea was that the phi has something in common with the kaons, but not with the rho and the pi. This constituent, whatever it is, has to survive phi decay, and this can only be done by producing kaons, which are known to carry the quantum number strangeness.
This legacy of the phi we now call the strange quark: the phi contains a strange quark-antiquark pair. When the phi decays, the strange quarks have to go somewhere, and the kaon route is the only possibility. The strange quark goes into one neutral kaon and the strange antiquark into a neutral antikaon.
Gell-Mann had focused on a threefold underlying symmetry and had pounced on the “quark” word from a call at the bar in James Joyce’s Finnegans Wake: “Three quarks for Muster Mark”.
Trying to sell his quark ideas, Gell-Mann recalled his earlier experience, when he had invented the name “strangeness” for a new quantum number. Physicists were used to dealing with particles denoted by Greek letters. If a new word had to be invented, they looked for derivations from classical Greek. Not Gell-Mann, who had gleefully flung his unorthodox “strangeness” into the arena and was accused of being frivolous. Remembering those difficulties, Gell-Mann, working in the US, sent his quark paper to the European journal Physics Letters, where it was published in 1964.
Four aces
However, in 1963 Zweig did not know about the quark ideas then going around in Gell-Mann’s head. Zweig thought that there should be four constituents (because there were four weakly interacting particles, or leptons) and called them “aces”. After leaving Caltech, Zweig worked at CERN for a while, bringing his aces idea with him. His paper appeared as a CERN internal preprint in January 1964. Wanting to publish in a US journal, Zweig came under pressure from CERN to publish in Europe. With no reputation behind it, his idea fell on infertile ground, and the paper was not formally published until several years later, when the value of his ideas had been recognized and the CERN “aces” preprint was reproduced.
In Israel in the early 1960s, Yuval Ne’eman and Haim Goldberg had also concocted a threefold underlying symmetry pattern, but it too fell by the wayside, despite being accepted for publication.
Zweig’s suspicion that there were four basic constituents, rather than three, was ahead of its time, anticipating the elucidation and discovery of the fourth quark “charm” in the early 1970s.
Later, Zweig wrote: “The reaction of the theoretical physics community to the ace model was not benign. Getting the CERN report published in the form I wanted was so difficult that I finally gave up.”
Despite the demise of the ace picture, Zweig’s contributions are still remembered. The selection rule that recognizes that constituent quarks have to survive (for example that phi prefers to decay into two kaons) is called the “Zweig rule”.
Zweig subsequently turned his talents to research in sensory physiology where, in 1975, his work led to the development of what is now called the continuous wavelet transform, a way of displaying and extracting time and frequency information in a signal. More recently he has proposed a model of cochlear mechanics that predicts that the ear makes sounds when it listens to sounds; these predictions were confirmed experimentally. He became a fellow of the US Los Alamos National Laboratory in 1985.
Simulating the immediate aftermath of the Big Bang, strongly interacting particles at high temperature or density are expected to produce weakly interacting “deconfined” quarks and gluons, the famous quark-gluon plasma.
Existing experiments using high-energy beams of heavy ions at CERN’s SPS synchrotron, CERES/NA45 (electron-pair production in high-energy heavy-ion collisions) and NA38, NA50 and NA51 (muon production in high-energy heavy-ion collisions), have achieved results that may indicate that the plasma has already been observed.
In a new energy domain, the Relativistic Heavy Ion Collider at Brookhaven will begin its search later this year and, looking further ahead, understanding the plasma is a primary focus of heavy-ion physics at CERN’s LHC.
Awaiting definitive observation and measurements, more work is necessary to develop a theoretical understanding of the plasma’s properties and to provide unambiguous signals of its production. In quantum chromodynamics (QCD), the field theory of quarks and gluons, which is the framework for this understanding, complementary tools are provided by numerical simulations of QCD on spacetime lattices and by QCD modelling.
A workshop at the European Centre for Theoretical Studies (ECT), Trento, Italy, followed a 1998 meeting in Bielefeld and attracted interested theorists. The SPS experimental collaborations were also represented.
ECT funding covered the bulk of participants’ local expenses. Members of the lattice-QCD community benefited from the European Union’s Finite Temperature Phase Transitions in Particle Physics training and mobility network.
Proceedings will be published by World Scientific.
With no ruler in the sky, astronomers have to locate visible cosmic milestones to help them to measure distances. These distance indicators can be visible objects, such as stars or galaxies, or effects, such as explosions, gravitational lensing or the thermal scattering of background radiation in cluster cores.
As the light from these distant milestones rushes towards us, its wavelength is stretched (redshifted) by the expansion of the universe. In the classic picture this expansion was thought to be decelerating owing to the pull of gravity.
In 1929 Edwin Hubble showed that the redshift and distance of nearby galaxies obeyed a linear law. From the assumption that the universe was expanding, one could show that Hubble’s comparison gave a direct estimate of the expansion rate of the universe and, as a consequence, a measurement of its age.
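In modern notation (not used in the article), Hubble’s linear law and the age estimate that follows from it read

$$v = cz \simeq H_0 d, \qquad t_0 \sim \frac{1}{H_0},$$

up to a numerical factor that depends on the expansion history.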
The difficulties of these measurements are reflected in the result that Hubble obtained: a universe only a thousand million years old, younger than most astrophysical objects. Current measurements lead to a generally accepted age of about 15 gigayears, although different methods still give different values.
Cosmic fireworks
Massive stars are doomed to a violent death as supernovae. The immense gravitational crush inside such a star heats up its core, tripping new thermonuclear switches that release fusion energy. When its nuclear fuel is spent, the star can no longer resist the remorseless pull of gravity and so it implodes, cooking a stew of heavy nuclei and compressing its component atoms into mere neutrons.
Like a rubber ball that has been squeezed, the remnant of the star springs back in a huge shockwave that flings nuclear residue far out into space and releases a characteristic light into the sky. The duration and brightness of nearby supernovae are well known, so comparing the observed variation in brightness (the light curve) of a distant supernova with those of closer ones gives an accurate measurement of its distance.
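The standard-candle logic sketched here is usually expressed through the distance modulus, a textbook relation not quoted in the article: with the peak absolute magnitude M calibrated on nearby supernovae, the measured apparent magnitude m of a distant one gives its distance d via

$$m - M = 5 \log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right).$$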
The drawback of this method is that these events are rare. Only a few supernovae occur every thousand years in a typical galaxy, so finding them means looking at large numbers of galaxies at a time.
The Supernova Cosmology Project and the High-Z Supernova Project have developed a method to look for and then follow supernova explosions. These groups carefully scan large patches of the sky for sudden supernova flashes, then carefully monitor their evolution with optical telescopes, obtaining accurate measurements of the light curves and spectra.
In January 1998 the Supernova Cosmology Project presented its 1997 harvest: the analysis of 42 newly discovered distant supernovae. To everyone’s surprise, these supernovae looked dimmer than expected in the standard, decelerating model of the universe. Instead of slowing down, the expansion of the universe appeared to be speeding up. The results were confirmed by the rival High-Z Supernova team and have become an essential ingredient of current cosmological model building.
The systematic analysis of these results has been thorough, but a few sceptics remain. Two main criticisms are raised: first, distant supernovae could look dimmer because intervening dust scatters their light; second, are we completely sure that these distant supernovae explode in the same way as those that we see closer to us?
Both groups have set up campaigns to understand supernovae better, and results at different wavelengths will map out the characteristics of intergalactic dust with great precision.
With an accelerating universe, physicists had some explaining to do. Acceleration or deceleration is dictated by the sum of the energy density of the matter in the universe and the pressure exerted by the matter in all three directions. If this sum is positive, the universe decelerates; if it is negative, the universe accelerates (the energy density itself is always positive).
For example, for a universe consisting mostly of ordinary massive particles, the pressure is essentially negligible, while for a universe dominated by photons or massless neutrinos the pressure is equal to one-third of the energy density. For neither type of matter is the pressure negative; therefore, if they dominate, the universe will decelerate.
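This is the content of the Friedmann acceleration equation of standard cosmology (written here with c = 1; the article does not display it):

$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p),$$

so matter (p ≈ 0) and radiation (p = ρ/3) both decelerate the expansion, and acceleration requires a component with p < −ρ/3.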
An accelerating universe needs a negative pressure to counterbalance the energy density. The standard ploy is the cosmological constant. Introduced by Einstein into his general theory of relativity to counterbalance the pull of gravity and thereby allow a static universe, the cosmological constant has been on the one hand a convenient fix for Big Bang cosmology when the theory doesn’t fit the data, and on the other hand one of the major conceptual problems in particle physics and cosmology.
The problem has been stated many times: if we add up the vacuum fluctuations from all of the various quantum fields we know of, we naturally obtain a cosmological constant with an energy density of some 10²⁹ eV⁴. With such a large value, Einstein’s theory would essentially predict that our universe is flying apart, with absolutely no possibility of forming galaxies, stars or planets.
This was always an embarrassment for theorists, but if the cosmological constant were zero, there was always the hope that they had “nothing” to worry about.
There have been many attempts to get rid of the cosmological constant. One of the more promising possibilities is that supersymmetry, an as yet unobserved symmetry between bosons and fermions, may lead to an exact cancellation between all of the various contributions to the vacuum energy density. Suggestions abound, but a compelling model has yet to emerge that could explain why the cosmological constant is so small.
A highly speculative possibility relies on a particular feature of the quantum theory of gravity. Fluctuations in the fabric of spacetime modify the local topology, creating a quantum foam of holes and handles (wormholes). The overall effect is to drive the cosmological constant to zero.
The problem of how to incorporate the cosmological constant into a sensible theory of matter remains unresolved and, if anything, has become even harder to tackle with the supernova results. Until now an exact cancellation was needed, so that one could argue that some fundamental symmetry might forbid the constant from being anything other than zero. With the discovery of an accelerating universe, however, a very special cancellation is necessary: a cosmic coordination of very big numbers that add up to one small number. If we are indeed measuring a constant energy density with the supernova results, it is more at the level of 10⁻³ eV⁴.
A large part of the dark matter that seems to dominate the universe is expected to be in the form of relic particles. This “dark universe” could be a unique window through which we will be able to look for new physics. The DAMA experiment at the INFN Gran Sasso National Laboratory is investigating this new frontier and sees an annual variation, which is suggestive of the Earth’s motion against a background “wind” of particles.
Particles from the dark universe
Measurements of luminous matter (stars) lead to the conclusion that the universe does not contain enough stellar matter to halt the residual Big Bang expansion. Without enough such gravitational braking, the universe will continue to expand forever. However, many experimental observations suggest that luminous matter is not the end of the story. To account for the observed motion in the cosmos, gravitational fields much stronger than those attributable to luminous matter are required: more than about 90% of the mass in the universe should be in the form of invisible dark matter.
This conclusion is further supported by simulations using cosmological models, which point to the necessity for large numbers of relic particles, weakly interacting massive particles (WIMPs), left over from the early universe.
This scenario implies that our galaxy should be completely embedded in a large WIMP halo. Our solar system, which is moving with a velocity of about 232 km/s with respect to the galactic system, feels a continuous WIMP “wind”.
The quantitative study of this wind would provide information on the evolution of the universe and open a window on new physics. The lightest neutral particle (the neutralino) expected in the supersymmetric extension of the Standard Model is the best WIMP candidate.
How to catch a WIMP?
Direct detection of WIMPs is very difficult because they interact only rarely. WIMP searches must therefore be shielded from cosmic rays and operate in an environment of very low radioactivity, with detectors built from low-radioactivity materials.
DAMA’s home is deep underground in the INFN Gran Sasso National Laboratory in Italy. The collaboration (involving the University and INFN-Roma2, the University and INFN-Roma, and IHEP-Beijing) is mainly devoted to the search for WIMPs in the same mass and cross-section region as is probed by accelerator experiments, and several results have already emerged.
High-atomic-number target nuclei, such as iodine (in the form of NaI(Tl)) and xenon (in the form of a liquid xenon scintillator), are therefore used. The search mainly focuses on WIMP-nucleus elastic scattering off the target nuclei of the detector, which would show up as nuclear recoils with energies in the kilo-electronvolt range.
The main feature that helps to isolate a possible WIMP signal from the background is the annual modulation of the WIMP wind. As the Earth orbits the Sun, it is crossed by a higher WIMP flux in June (when its orbital velocity adds to that of the solar system with respect to the galaxy) and by a smaller one in December (when the two velocities partially cancel). The fractional difference in the rate is some 7%.
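A back-of-envelope check of that figure (a sketch in Python; the Earth’s orbital speed and the projection factor onto the solar motion are assumed textbook values, not numbers from the article):

```python
# Rough check of the annual modulation quoted above. V_SUN is the 232 km/s
# given in the article; the Earth's orbital speed and the ~0.5 projection
# factor onto the solar motion are assumed textbook values.
import numpy as np

V_SUN = 232.0        # km/s, solar system speed through the WIMP halo
V_ORB = 29.8         # km/s, Earth's orbital speed around the Sun
COS_GAMMA = 0.51     # projection of the orbital velocity onto V_SUN

def earth_speed(t_days, t_max=152.5):
    """Earth's speed through the halo; maximum near June 2 (day ~152)."""
    omega = 2.0 * np.pi / 365.25
    return V_SUN + V_ORB * COS_GAMMA * np.cos(omega * (t_days - t_max))

t = np.arange(365.0)
v = earth_speed(t)
# To first order the interaction rate tracks the WIMP flux, i.e. the speed,
# so the fractional modulation amplitude is roughly:
print(f"modulation amplitude: {(v.max() - v.min()) / (2 * v.mean()):.1%}")
# -> about 6.5%, of the order of the "some 7%" quoted in the text.
```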
Seeing such a modulation requires heavy, stable detectors with appropriate features and stability control; the 100 kg highly radiopure NaI(Tl) DAMA set-up is an example. A clear signature of this kind overcomes the difficulties of comparing different experiments and techniques.
The 100 kg DAMA NaI(Tl) set-up
DAMA uses highly radiopure NaI(Tl) scintillators that are produced in collaboration with the CRISMATEC company. All of the materials and the crystal growth and handling procedures have been studied carefully. A major effort has gone into optimizing the detectors and the electronics to give a relatively high number of photoelectrons per kilo-electronvolt and a low noise level.
The low background photomultiplier tubes employed in the experiment (two for each detector, working in coincidence at a single photoelectron threshold) have been developed by Electron Tubes Ltd. The materials were preselected by the company and their radioactivity was measured deep underground.
The other main parts of the experiment are the passive shield (to exclude environmental contributions to the counting rate), which surrounds the copper box housing the detectors, and the glovebox placed on top of the shield for calibration. The whole apparatus is kept in highly pure nitrogen at a slight overpressure with respect to the atmosphere in order to keep out radioactive radon gas.
The materials of the shield have been selected and monitored for low radioactivity. The upper glovebox is used to insert radioactive sources to calibrate the detectors in the same experimental conditions as those occurring during the production measurements.
This set-up is mainly devoted to the study of the WIMP annual modulation, so particular care has been taken over the stability and monitoring of the running conditions, such as the operating temperature, the high-purity nitrogen flux, the glovebox overpressure, the total and single hardware rates above the single photoelectron threshold, the environmental radon level and so on. All of the information on these parameters is continuously recorded along with the data.
The experiment takes data from the single photoelectron threshold up to several mega-electronvolts, although the hardware conditions are optimized for the lowest energy region. Pulse shape information is recorded over a period of 3250 ns for the lowest energy events.
Searching for annual modulation
Any annual modulation of the WIMP-nucleus differential energy spectrum should simultaneously satisfy several criteria attributable to WIMP interactions: in particular, a cosine-like time dependence with a period of one year, a maximum near the beginning of June and an amplitude confined to the low-energy region of the spectrum.
Single-hit events, in which only one detector fires, are the ones of interest in the WIMP search, the probability of a WIMP interacting in more than one detector being negligible.
After checking the monitored parameters, the time-dependent component of the rate is extracted from the collected data by grouping the events into cells of one day, 1 keV and one detector. The number of events in each cell is then compared, via a maximum likelihood analysis, with the expectation from the standard WIMP model. The limit on the neutralino mass (the most favoured WIMP candidate) achieved at accelerators is taken into account.
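A minimal sketch of such a fit on toy data (illustrative Python, not the collaboration’s code; the cell structure follows the text, while the rates and exposures are invented for the example):

```python
# Toy version of the modulation fit described above: events are grouped in
# (day, 1 keV, detector) cells and a binned Poisson maximum likelihood is
# used to extract a constant rate S0 and a modulated amplitude Sm.
# The data below are simulated; rates and exposures are illustrative only.
import numpy as np
from scipy.optimize import minimize

OMEGA = 2.0 * np.pi / 365.25   # 1/day, one-year period
T_MAX = 152.5                  # day of the expected maximum (~June 2)

def neg_log_likelihood(params, counts, t_days, exposure):
    """Binned Poisson -log(L) for one energy bin of one detector."""
    s0, sm = params
    mu = (s0 + sm * np.cos(OMEGA * (t_days - T_MAX))) * exposure
    mu = np.clip(mu, 1e-12, None)            # keep the log well defined
    return np.sum(mu - counts * np.log(mu))

# Toy data: two annual cycles of daily cells for one detector/energy bin.
rng = np.random.default_rng(0)
t_days = np.arange(730.0)
exposure = np.full_like(t_days, 9.7)          # kg*day per cell (invented)
true_rate = 1.0 + 0.02 * np.cos(OMEGA * (t_days - T_MAX))
counts = rng.poisson(true_rate * exposure)

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0],
               args=(counts, t_days, exposure), method="Nelder-Mead")
s0_hat, sm_hat = fit.x
print(f"S0 = {s0_hat:.3f}, Sm = {sm_hat:.3f} (true: 1.000, 0.020)")
```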
The analysis of a first dataset suggested the possible presence of a signal compatible with the features of a neutralino with dominant spin-independent interaction. A second year of data taking with larger statistics has underlined this possibility. The signal modulation is shown in figure 1 and the neutralino interpretation in figure 2.
The combined analysis of these two years of data (total exposure: 19,511 kg·day) gives, in the framework of the standard WIMP model, a confidence level of 99.6% for a neutralino mass of 59 (+17 −14) GeV and a WIMP-proton cross-section of 7.0 (+0.4 −1.2) × 10⁻⁶ pb.
Possible systematic effects and alternative explanations have been investigated, as discussed, for example, at the 3K Cosmology International Conference in Rome last October. None of the effects considered could simulate all six of the criteria for the annual modulation signature and so provide the observed modulation.
Despite this, the collaboration has been very cautious, mindful of the difficulties of dealing with rare events, and has increased its efforts to investigate all aspects of this intriguing result.
The region singled out by DAMA is consistent with the hypothesis of a relic neutralino as a dominant component of the cold dark matter in the galactic halo, as has been pointed out by a Turin group, which also says that some properties of the relevant supersymmetric particles should be accessible at present accelerators and in WIMP indirect searches.
The inclusion of these relic neutralinos in supergravity models has also been considered by American physicists. A group from Rome and Moscow suggested that the effect could be the result of a heavy neutrino. Finally, the effect of the uncertainties in the local dark halo density and in the WIMP velocity distribution has recently been examined, with the conclusion that the relic neutralinos possibly involved in the annual modulation effect would have a mass in the 30–130 GeV range, with an upper bound extending to some 180 GeV when possible bulk rotation of the dark matter halo is introduced.
Perspectives and plans
The analysis of further statistics is in progress, as are further data taking and an upgrade of the apparatus. If ongoing research into improved radiopurification of NaI(Tl) is successful, the active mass could be increased to 250 kg.
A final result would mean reproducing the effect over several annual cycles, including all of the consistency checks.