Electromagnetism and the weak force might appear to have little to do with each other. Electromagnetism is part of our everyday world – it holds atoms together and produces light – while the weak force was for a long time known only through the relatively obscure phenomenon of beta-decay radioactivity.
The successful unification of these two apparently highly dissimilar forces is a significant milestone in the constant quest to describe as much as possible of the world around us from a minimal set of initial ideas.
“At first sight there may be little or no similarity between electromagnetic effects and the phenomena associated with weak interactions,” wrote Sheldon Glashow in 1960. “Yet remarkable parallels emerge…”
Both kinds of interactions affect leptons and hadrons; both appear to be “vector” interactions brought about by the exchange of particles carrying unit spin and negative parity; both have their own universal coupling constant, which governs the strength of the interactions.
These vital clues led Glashow to propose an ambitious theory that attempted to unify the two forces. However, there was one big difficulty, which Glashow admitted had to be put to one side. While electromagnetic effects were due to the exchange of massless photons (electromagnetic radiation), the carrier of weak interactions had to be fairly heavy for everything to work out right. The initial version of the theory could find no neat way of giving the weak carrier enough mass.
Then came the development of theories using “spontaneous symmetry breaking”, where degrees of freedom are removed. An example of such symmetry breaking is the imposition of traffic rules (drive on the right, overtake on the left) on a road network where in principle anyone could go anywhere. Another example is the formation of crystals in a freezing liquid.
These symmetry-breaking theories at first introduced massless particles which were no use to anybody, but soon the so-called “Higgs mechanism” was discovered, which gives the carrier particles some mass. This was the vital development that enabled Steven Weinberg and Abdus Salam, working independently, to formulate their unified “electroweak” theory. One problem was that nobody knew how to handle calculations in a consistent way…
…It was Gerardus ’t Hooft’s and Martinus Veltman’s work that put this unification on the map, by showing that it was a viable theory capable of making meaningful predictions.
Field theories have a habit of throwing up infinities that at first sight make sensible calculations difficult. This had been a problem with the early forms of quantum electrodynamics and was the despair of a whole generation of physicists. However, its reformulation by Richard Feynman, Julian Schwinger and Sin-Itiro Tomonaga (Nobel prizewinners in 1965) showed how these infinities could be wiped clean by redefining quantities like electric charge.
Each infinity had a clear origin, a specific Feynman diagram, the skeletal legs of which denote the particles involved. However, the new form of quantum electrodynamics showed that the infinities can be made to disappear by including other Feynman diagrams, so that two infinities cancel each other out. This trick, difficult to accept at first, works very well, and renormalization then became a way of life in field theory. Quantum electrodynamics became a powerful calculator.
For such a field theory to be viable, it has to be “renormalizable”. The synthesis of weak interactions and electromagnetism, developed by Glashow, Weinberg and Salam – and incorporating the now famous “Higgs” symmetry-breaking mechanism – at first sight did not appear to be renormalizable. With no assurance that meaningful calculations were possible, physicists attached little importance to the development. It had not yet warranted its “electroweak” unification label.
The model was an example of the then unusual “non-Abelian” theory, in which the end result of two field operations depends on the order in which they are applied. Until then, field theories had always been Abelian, where this order does not matter.
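The order dependence that defines a non-Abelian theory is the same behaviour shown by ordinary rotations in three dimensions, which do not commute. The following minimal sketch (an analogy added here for illustration, not part of the original account) makes the point numerically:

```python
# Illustrative analogy: like non-Abelian field operations, 3D rotations
# give different results depending on the order in which they are applied.
import numpy as np

# 90-degree rotations about the x-axis and the y-axis
Rx = np.array([[1, 0, 0],
               [0, 0, -1],
               [0, 1, 0]])
Ry = np.array([[0, 0, 1],
               [0, 1, 0],
               [-1, 0, 0]])

print(np.array_equal(Rx @ Ry, Ry @ Rx))  # False: the order matters
```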
In the summer of 1970, ’t Hooft, at the time a student of Veltman in Utrecht, went to a physics meeting on the island of Corsica, where specialists were discussing the latest developments in renormalization theory. ’t Hooft asked them how these ideas should be applied to the new non-Abelian theories. The answer was: “If you are a student of Veltman, ask him!” The specialists knew that Veltman understood renormalization better than most other mortals, and had even developed a special computer program – Schoonschip – to evaluate all of the necessary complex field-theory contributions.
At first, ’t Hooft’s ambition was to develop a renormalized version of non-Abelian gauge theory that would work for the strong interactions that hold subnuclear particles together in the nucleus. However, Veltman believed that the weak interaction, which makes subnuclear particles decay, was a more fertile approach. The result is physics history. The unified picture based on the Higgs mechanism is renormalizable. Physicists sat up and took notice.
One immediate prediction of the newly viable theory was the “neutral current”. Normally, the weak interactions involve a shuffling of electric charge, as in nuclear beta decay, where a neutron decays into a proton. With the neutral current, the weak force could also act without switching electric charges. Such a mechanism has to exist to assure the renormalizability of the new theory. In 1973 the neutral current was discovered in the Gargamelle bubble chamber at CERN and the theory took another step forward.
The next milestone on the electroweak route was the discovery at CERN’s proton–antiproton collider of the W and Z, the carriers of the charged and neutral components of the weak force respectively. For this, Carlo Rubbia and Simon van der Meer were awarded the 1984 Nobel Prize for Physics…
…At CERN, the story began in 1968 when Simon van der Meer, inventor of the “magnetic horn” used in producing neutrino beams, had another brainwave. It was not until four years later that the idea (which van der Meer himself described as “far-fetched”) was demonstrated at the Intersecting Storage Rings. Tests continued at the ISR, but the idea – “stochastic beam cooling” – remained a curiosity of machine physics.
In the United States, Carlo Rubbia, together with David Cline of Wisconsin and Peter McIntyre, then at Harvard, put forward a bold idea to collide beams of matter and antimatter in existing large machines. At first, the proposal found disfavour, and it was only when Rubbia brought the idea to CERN that he found sympathetic ears.
Stochastic cooling was the key, and experiments soon showed that antimatter beams could be made sufficiently intense for the scheme to work. With unprecedented boldness, CERN, led at the time by Leon Van Hove as research director-general and the late Sir John Adams as executive director-general, gave the green light.
At breathtaking speed, the ambitious project became a magnificently executed scheme for colliding beams of protons and antiprotons in the Super Proton Synchrotron, with the collisions monitored by sophisticated large detectors. The saga was chronicled in the special November 1983 issue of the CERN Courier, with articles describing the development of the electroweak theory, the accelerator physics that made the project possible and the big experiments that made the discoveries.
• Extracts from CERN Courier December 1979 pp395–397, December 1984 pp419–421 and November 1999 p5.
High-energy proton–proton (pp) and antiproton–proton (p̄p) elastic-scattering measurements have been at the forefront of accelerator research since the early 1970s, when pp elastic scattering was measured at the Intersecting Storage Rings (ISR) at CERN – the world’s first proton–proton collider – over a wide range of energy and momentum transfer. This was followed by measurements of p̄p elastic scattering in a fixed-target experiment at Fermilab, by p̄p elastic-scattering measurements at the Super Proton Synchrotron (SPS) at CERN operating as a p̄p collider and, finally, in the 1990s by p̄p elastic-scattering measurements at Fermilab’s Tevatron. Table 1 chronicles this sustained and dedicated experimental effort, which extended over a quarter of a century as the centre-of-mass energy increased from the giga-electron-volt region to the tera-electron-volt region.
With the first collisions at CERN’s LHC on the horizon, pp elastic scattering will come under the spotlight at the experiment known as TOTEM – for TOTal cross-section, Elastic and diffractive scattering Measurement. The TOTEM collaboration has detailed plans to measure pp elastic scattering at 14 TeV in the centre-of-mass – that is, seven times the centre-of-mass energy at the Tevatron – over a range of momentum transfer, |t| around 0.003–10.0 GeV². By contrast, the ATLAS collaboration at the LHC plans to measure pp elastic scattering at 14 TeV in the small momentum-transfer range, |t| around 0.0006–0.1 GeV², where the pp Coulomb amplitude and strong-interaction amplitude interfere.
A phenomenological investigation of high-energy pp and p̄p elastic scattering commenced in the late 1970s with the goal of quantitatively describing the measured elastic differential cross-sections as the centre-of-mass energy increased and as one proton probed the other at smaller and smaller distances with increasing momentum transfer. This three-decade-long investigation has led to both a physical picture of the proton and an effective field-theory model that underlies the picture (Islam et al. 2009 and 2006).
The proton appears to have three regions, as figure 1 indicates: an outer region consisting of a quark–antiquark (q̄q) condensed ground state; an inner shell of baryonic charge – where the baryonic charge is geometrical or topological in nature (similar to the “Skyrmion model” of the nucleon); and a core region of size 0.2 fm, where the valence quarks are confined. The part of the proton structure comprising a shell of baryonic charge with three valence quarks in a small core has been known as a “chiral bag” model of the nucleon in low-energy studies (Hosaka and Toki 2001). What we are finding from high-energy elastic scattering, then, is that the proton is a “condensate-enclosed chiral bag”.
The proton structure shown in figure 1 leads to three main processes in elastic scattering, illustrated in figure 2. First, in the small-|t| region, i.e. in the near-forward direction, the outer cloud of q̄q condensate of one proton interacts with that of the other, giving rise to diffraction scattering. This process underlies the observed increase of the total cross-section with energy and the equality of pp and p̄p total cross-sections at high energy. It also leads to diffraction minima, as in optics, which are visible in figure 5. Second, in the intermediate momentum-transfer region, with |t| around 1–4 GeV², the topological baryonic charge of one proton probes that of the other via ω vector-meson exchange. This process is analogous to one electric charge probing another via photon exchange. The spin-1 ω acts like a photon because of its coupling with the topological baryonic charge. Third is the process in the large-|t| region – where |t| is around 4 GeV² or larger. Here one proton probes the other at transverse distances around or less than 1/q, where q = √|t|, i.e. at transverse distances of the order of 0.1 fm or less. Elastic scattering in this region originates from the hard collision of a valence quark from one proton with a valence quark from the other proton – a process that can be better visualized in momentum space (figure 3).
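The distance scale quoted above follows from the uncertainty principle. As a rough numerical check (an illustrative sketch added here, using the standard conversion constant ħc ≈ 0.1973 GeV·fm):

```python
# In natural units the transverse distance probed at momentum transfer |t|
# is d ~ 1/q = hbar*c / sqrt(|t|), with hbar*c ~ 0.1973 GeV.fm.
import math

HBAR_C = 0.1973  # GeV.fm

def transverse_distance_fm(t_GeV2):
    """Transverse distance probed for momentum transfer |t| in GeV^2."""
    return HBAR_C / math.sqrt(t_GeV2)

print(transverse_distance_fm(4.0))  # ~0.10 fm: the large-|t|, hard qq region
print(transverse_distance_fm(1.0))  # ~0.20 fm: the onset of omega exchange
```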
We have considered two alternative quantum-chromodynamical processes for the qq-scattering mechanism (represented by the blob in figure 3). One is the exchange of gluons in the form of ladders – called the “BFKL ladder” – as figure 4a shows. The other process we have considered is where the “dense low-x gluon cloud” of one quark interacts strongly with that of the other, as in figure 4b. The low-x gluons accompanying a quark are gluons that carry tiny fractions of the energy and longitudinal momentum of the quark. The finding of the high-density, low-x gluon clouds surrounding quarks is one of the major discoveries at the HERA collider at DESY.
The solid curve in figure 5 shows our predicted elastic differential cross-section at the LHC at a centre-of-mass energy of 14 TeV and in the momentum-transfer region |t| = 0–10 GeV², arising from the combination of the three processes – diffraction, ω-exchange and valence quark–quark scattering (from low-x gluon clouds). The figure also indicates separately the differential cross-sections for each of the three processes. It shows that diffraction dominates in the small-|t| region (0 < |t| < 1 GeV²), ω-exchange dominates in the intermediate-|t| region (1 < |t| < 4 GeV²) and valence qq scattering dominates in the large-|t| region (5 GeV² < |t|).
In figure 6 we compare our predicted differential cross-section at the LHC with the predictions of several prominent dynamical models (Islam et al. 2009). A distinctive feature of our prediction is that the differential cross-section falls off smoothly beyond the bump at |t| around 1 GeV². By contrast, the other models predict visible oscillations. Furthermore, these models lead to much smaller differential cross-sections than ours in the large-|t| region, i.e. where |t| is greater than or about 5 GeV².
If the planned measurement of the elastic differential cross-section by the TOTEM collaboration in the momentum-transfer range of |t| around 0–10 GeV² shows quantitative agreement with our prediction, then it will support the underlying picture of the proton as depicted in figure 1. The consequent discovery of the structure of the proton at the LHC at the beginning of the 21st century would be analogous to the discovery of the structure of the atom from “high-energy” α-particle scattering by gold atoms at the beginning of the 20th century.
• The authors wish to thank the members of the TOTEM collaboration for discussions and comments.
The nuclear shell model remains an essential tool in describing the structure of nuclei heavier than carbon, with shells corresponding to the “magic” numbers of protons (Z) or neutrons (N) associated with particular stability. A good way to probe the shell model is through the study of the magnetic dipole moment of a nucleus. Indeed, the model should describe particularly well the magnetic dipole moment of an isotope with a single particle outside a closed shell, as in this case the moment should be solely determined by this last nucleon. Copper isotopes (Z = 29), with one proton outside the closed shell of nickel (Z = 28), provide an example of such a system, which has been systematically studied at CERN’s ISOLDE facility with the Resonance Ionization Laser Ion Source (RILIS). The COLlinear LAser SPectroscopy (COLLAPS) collaboration uses collinear laser spectroscopy on fast beams and the NICOLE facility employs nuclear magnetic resonance on oriented nuclei.
Unfortunately, the tendency for chemical compounds to form in the thick target of the ISOLDE facility does not permit the efficient release of the short-lived isotope 57Cu (T½ = 199 ms). This isotope is of particular interest because it can be described simply as the doubly magic 56Ni plus one proton, but a recent measurement of its magnetic moment strongly disagreed with this picture (Minamisono et al. 2006).
The Leuven Isotope Separator On-Line (LISOL), a gas-cell-based laser ionization facility at the Cyclotron Research Centre in Louvain-la-Neuve in Belgium, is perfectly suited to producing 57Cu. Beams of protons at 30 MeV and of 3He at 25 MeV impinge on a thin target of natural nickel. The radioactive copper isotopes produced recoil directly out of the target and are thermalized and neutralized in the argon buffer gas. The flow of the buffer gas then transports the isotopes to a second chamber, where two laser beams, tuned to atomic transitions specific to the element of interest, give rise to resonance ionization of the atoms.
Resonance ionization has provided very pure beams of radioactive isotopes for more than a decade. It also enables in-source resonance ionization laser spectroscopy, as at ISOLDE’s RILIS. The new feature recently developed at LISOL is the implementation of laser spectroscopy in a gas-cell ion source (Sonoda et al. 2009). Its first on-line application has been the measurement of the magnetic dipole moment of the interesting copper isotopes 57,59Cu.
A team at LISOL observed the hyperfine structure spectra of several isotopes of copper, namely 57,59,63,65Cu, and extracted the hyperfine parameters, which yield the magnetic dipole moments. They were able to perform the measurement of 57Cu with yields as low as 6 ions a second, showing the high sensitivity of the technique (Cocolios et al. 2009). The accuracy is demonstrated by the very good agreement with known hyperfine parameters for 63,65Cu and with the measured magnetic dipole moments for the stable isotope 65Cu and for the radioactive isotope 59Cu, studied previously at ISOLDE. This meant that the team at LISOL was able to disprove with confidence the previous measurement of the magnetic dipole moment of 57Cu. Moreover, the new value is in agreement with several nuclear shell model calculations based on the N = Z = 20 40Ca core and the N = Z = 28 56Ni core, thereby confirming understanding of nuclear structure in this region.
This new technique opens the door for short-lived isotopes of refractory elements, which are not accessible at ISOLDE, to be probed at new radioactive-ion-beam facilities, such as the accelerator laboratory at the University of Jyväskylä (JYFL), GANIL in Caen, RIKEN in Japan and the National Superconducting Cyclotron Laboratory at Michigan State University.
Understanding the fundamental structure of matter requires determining how the quarks and gluons of QCD are assembled to form hadrons – the family of strongly interacting particles that includes protons and neutrons, which in turn form atomic nuclei and hence all luminous matter in the universe. Leptons have proved to be an incisive probe of hadron structure because their electroweak interaction with the hadronic constituents is well understood. Experiments to probe the quarks and gluons within the hadrons require high-intensity, high-energy lepton beams incident on nucleons; and if the leptons and nucleons are polarized, then measurements of spin-dependent observables are possible, so casting light on the spin structure of the hadrons.
Current experiments with polarized leptons focus predominantly on the valence quarks. To learn more about the sea quarks and gluons, physicists who study hadron structure have identified a high-luminosity, polarized electron–ion collider (EIC) as the next-generation experimental facility for exploring the fundamental structure of matter. The proposed EIC would be unique in that it would be the first to collide highly energetic electrons and nuclei, and be the first to collide high-energy beams of polarized electrons on beams of polarized protons and, possibly, a few other polarized light nuclei. It would be designed to achieve at least 100 times the integrated luminosity of the world’s first electron–proton collider, HERA, over a comparable operating period.
The EIC would offer unprecedented opportunities to study, with precision, the role of gluons in the fundamental structure of matter. Without gluons, matter as we know it would not exist. Gluons collectively provide a binding force that acts on a quark’s colour charge but – unlike the photons of QED – they also possess a colour charge, so can self-interact. These self-interactions mean that gluons are the dominant constituents of matter, making QCD equations extremely difficult to solve analytically. Recent theoretical breakthroughs indicate that analytic solutions may be possible for systems in which gluons collectively behave like a very strong classical field – the so-called “colour glass condensate (CGC)”. This state has weak colour coupling despite the high gluon density and is characterized by a “saturation” momentum scale, Qs, which is related to the gluon density. QCD also predicts a universal saturation scale where all nuclei, baryons and mesons have a component of their wave function with identical behaviour, implying that they all evolve into a unique form of hadronic matter.
The discovery of the CGC would represent a major breakthrough in the understanding of the role of gluons in QCD under extreme conditions. To probe the CGC optimally requires collisions of high-energy electrons and heavy ions (with large atomic number, A), resulting in large centre-of-mass energy (i.e. small gluon momentum fraction, x). The EIC would allow exploration of this novel regime of QCD because the use of heavy nuclei in experiments amplifies the gluon densities significantly over electron–proton collisions at comparable energies. Figure 1 shows the dependence of the saturation scale Qs² on x and A and indicates the region that would be accessible to the EIC.
The ability to collide spin-polarized proton and light-ion beams with polarized electrons (and possibly also positrons) would give the EIC unprecedented access to the spatial and spin structure of protons and neutrons in the gluon-dominated region, complementary to the existing polarized-proton collider, RHIC, at Brookhaven National Laboratory (BNL). Figure 2 illustrates how the EIC would extend greatly the kinematic reach and precision of polarized deep-inelastic measurements compared with present (and past) polarized fixed-target experiments at SLAC, CERN, DESY and Jefferson Lab.
The polarizations measured so far for the sea quarks and gluons are consistent with zero, albeit with large uncertainties. Given that the quarks contribute only about 30% to the spin of the proton, this is surprising. The EIC is ideally suited to resolve this puzzle: it would measure with precision the contribution of the quarks and gluons to the nucleon’s spin deep in the non-valence region (figure 3) and also study their transverse position and momentum distributions, which are thought to be associated with the partonic orbital angular momentum. This could provide tomographic images of the nucleon’s internal landscape beyond the valence-quark region, which will be probed with the 11 GeV electron beam at Jefferson Lab’s Continuous Electron Beam Accelerator Facility (CEBAF). Both measurements are essential to understand the constitution of nucleon spin.
Excited by these prospects, physicists came together in 2006 to form the Electron–Ion Collider Collaboration (EICC) to promote the consideration of such a machine in the US. They have developed the scientific case for an EIC with a centre-of-mass energy in the 30–100 GeV range and a luminosity of about 10³³ cm⁻²s⁻¹. The flagship US nuclear-physics laboratories BNL and Jefferson Lab have developed preliminary conceptual designs based on their existing facilities, namely RHIC and CEBAF, respectively. These early concepts have since evolved into significantly more advanced designs (figure 4). Options include the possibilities of realizing electron–nucleus and polarized electron–proton collisions at lower energies and at lower initial costs. Considerable effort is underway to achieve the highest luminosities – up to 10³⁵ cm⁻²s⁻¹ – which would maximize the access to the physics and help make the strongest possible case for the EIC.
Future prospects
The scientific argument for the EIC has been discussed in the US nuclear-physics community since its first formal presentation at the Nuclear Science Advisory Committee’s (NSAC) 2002 long-range planning exercise and most recently in a similar exercise held in 2007. The result is that the EIC has been embraced as embodying the vision for reaching the next QCD frontier. The community recognizes that the EIC would provide unique capabilities for the study of QCD well beyond those available at existing facilities worldwide and would be complementary to those planned for the next generation of accelerators in Europe and Asia. NSAC has recommended that resources be allocated to develop the necessary accelerator and detector technology for the EIC.
Two separate proposals for EICs are being considered in Europe. In the LHeC, the existing LHC hadron beam would collide with a new 70–140 GeV electron beam; collisions with the 7 TeV proton beam would give a centre-of-mass energy of about 1.4 TeV. Such a high energy would enable the study of gluons and their collective behaviour at their highest possible densities (lowest possible x). It would also allow exploration of possible physics beyond the Standard Model with a lepton probe at very high Q². The other European EIC proposal is motivated by the spin structure of the nucleon. The European Nucleon Collider (ENC) would make use of the High-Energy Storage Ring (HESR) and the PANDA detector at the proposed Facility for Antiproton and Ion Research (FAIR) at GSI. The centre-of-mass energy proposed for this facility is around 14 GeV, which lies between those of the fixed-target experiments HERMES at DESY and COMPASS at CERN. The primary goal of the ENC is to explore the 3D structure of the nucleons, including the transverse-momentum distributions and generalized parton distributions for the quarks.
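The LHeC figure quoted above follows from the standard relation for head-on collisions of two ultra-relativistic beams, s ≈ 4EeEp, with Ee and Ep the beam energies. A quick numerical check (an illustrative sketch, not taken from either proposal):

```python
# Centre-of-mass energy for asymmetric head-on collisions:
# sqrt(s) ~ 2*sqrt(E_e * E_p) when both beams are ultra-relativistic.
import math

def sqrt_s_TeV(E_e_GeV, E_p_GeV):
    return 2.0 * math.sqrt(E_e_GeV * E_p_GeV) / 1000.0  # result in TeV

print(sqrt_s_TeV(70.0, 7000.0))   # ~1.4 TeV, as quoted for the LHeC
print(sqrt_s_TeV(140.0, 7000.0))  # ~2.0 TeV with the higher electron energy
```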
Since 2007 the EICC has met approximately every six months at Stony Brook University in New York, Hampton University in Virginia, Lawrence Berkeley National Laboratory and, most recently, at GSI. The next meeting is scheduled to take place at Stony Brook University in January 2010. The directors of BNL and Jefferson Lab have formed an EIC International Advisory Committee (EICAC) to help prepare the case for the project in the US. The EICAC met for the first time in Washington DC in February 2009 and will meet again in November at Jefferson Lab. The EICC is working towards the consideration of the EIC by NSAC as a priority for new construction in its next long-range plan anticipated in 2012 or 2013.
Heavy-ion collisions at ultrarelativistic energies explore the transition from ordinary matter to a plasma of deconfined quarks and gluons – a state of matter that probably existed in the first few microseconds of the universe. Early experiments of this kind began 25 years ago at CERN, at the Super Proton Synchrotron (SPS), and at Brookhaven, at the Alternating Gradient Synchrotron, followed by the Relativistic Heavy Ion Collider in 2000 – and now the LHC at CERN is preparing for heavy-ion collisions in 2010. Studies of the hadrons produced have given insight into numerous aspects of the medium formed in the collisions, including collective behaviour and thermalization. They have also indicated that the temperatures reached at beam energies above about 40A GeV may already exceed the critical temperature Tc for deconfinement into a quark–gluon plasma.
Electromagnetic probes such as photons and dileptons (l⁺l⁻ pairs) have long held the promise of a more direct insight. Escaping without final-state interactions, they can reveal the entire space–time evolution of the produced medium, from the early partonic (quark–gluon) phase to the final freeze-out of hadrons, when all interactions cease. In the case of dileptons, experimental difficulties associated with low signal-to-background ratios (from high multiplicity densities), the superposition of nonthermal sources and a lack of sufficient luminosity have hindered clear insight in the past. Nevertheless, experiments at CERN observed an encouraging excess above known sources: CERES/NA45 in the mass region below 1 GeV, NA38/NA50 in the region above 1 GeV and HELIOS/NA34-3 in both mass regions. The very existence of an excess gave a strong boost to theory, leading to hundreds of publications, and provoked a number of open questions.
For masses below 1 GeV, thermal dilepton production is dominated by the hadronic phase and mediated mainly by the light vector meson ρ (770 MeV). With its strong coupling to π⁺π⁻ and a lifetime of only 1.3 fm – much shorter than that of the “fireball” produced – the ρ is the key test particle for “in-medium” changes of hadron properties such as mass and width close to the transition where chiral symmetry is restored, as Robert Pisarski first proposed. However, questions about how the ρ changes in the medium – does it shift in mass or broaden? – remained open. Above 1 GeV, thermal dileptons could be produced as “Planck-like” continuum radiation in both the early partonic and late hadronic phases, so offering access to the expected deconfinement transition, as first Edward Shuryak, and later Keijo Kajantie and many others, have pointed out. However, the origin of the dilepton excess observed above 1 GeV was not clear. Does it arise from the enhanced production of open charm or from thermal radiation? Is it from partonic or hadronic sources? The status of thermal dilepton production in both mass regions at RHIC is even less clear.
Novel detectors
The NA60 experiment at CERN’s SPS was built specifically to follow up on these open questions. By taking a big step forward in technology this third-generation experiment has achieved completely new standards of data quality in the field. Approved in 2000, it took data on indium–indium collisions at 158A GeV for just one running period, in 2003. Briefly, the apparatus complements the muon spectrometer (MS) previously used by NA10/NA38/NA50 with a novel radiation-hard, silicon-pixel vertex telescope (VT), placed inside a 2.5 T dipole magnet between the target region and the hadron absorber (Arnaldi et al. 2009a). The VT tracks all of the charged particles before they enter the absorber and determines their momenta independently of the MS, free from multiple-scattering effects and the energy-loss fluctuations that occur in the absorber. The associated read-out pixel chips were originally developed for the ALICE and LHCb experiments.
The matching of the muon tracks in the VT and the MS, in both co-ordinate and momentum space, greatly improves the dimuon mass resolution in the region of the vector mesons ρ, ω and φ, reducing it from approximately 80 MeV to around 20 MeV. It also significantly reduces the combinatorial background from π and K decays and makes it possible to measure the muon offset with respect to the primary interaction vertex, thereby allowing the tagging of dimuons from simultaneous semileptonic decays of DD̄ pairs – that is, open charm. The additional bend by the dipole field gives a much greater acceptance for opposite-sign dimuons at low mass and low transverse momentum than was possible in all previous dimuon experiments. Finally, the selective dimuon trigger and the radiation-hard vertex tracker, with its high read-out speed, allowed the experiment to run at high rates for extended periods, enabling a high luminosity.
Starting with the low-mass region, M < 1 GeV, figure 1 shows the net dimuon mass spectrum from NA60, integrated over centrality, after subtraction of the two main background sources: combinatorial background and fake matches between the two spectrometers (Arnaldi et al. 2006 and 2008). The plot contains about 440,000 dimuons in this mass region and exceeds previous results by up to three orders of magnitude in effective statistics, depending on mass. The spectrum is dominated by the known sources: the electromagnetic two-body decays of the η, ω and φ resonances, which are completely resolved for the first time in nuclear collisions, and the Dalitz decays of the η, η’ and ω. While the peripheral, “p–p like” data – the very glancing collisions – are quantitatively described by the sum of a “cocktail” of these contributions together with the ρ and open charm, this is not true for the more centrally weighted – more “head on” – total data shown in figure 1. This is because of the underlying dilepton excess observed previously.
Now, for the first time, the high data quality allows this excess to be isolated without any assumptions about its nature and without fits. The cocktail of decay sources is subtracted from the total data using local criteria that are based solely on the measured mass distribution itself; the ρ is not subtracted. Figure 2 shows the excess for one region in centrality (Arnaldi et al. 2006 and 2009b). The peaked structure seen here appears for all centralities, broadening strongly for the more central collisions, but remaining centred on the nominal pole position of the ρ. At the same time, the total yield relative to the cocktail ρ increases with centrality, becoming up to six times larger than for the most peripheral collisions.
All of this is consistent with an interpretation of the dilepton excess as arising predominantly from π⁺π⁻ annihilation via intermediate ρ mesons, which are continuously regenerated throughout the hadronic phase of the expanding fireball. (This is the “ρ-clock”, which “ticks” at the rate of the ρ’s lifetime and is presumably the most accurate way to measure the lifetime of the fireball.) It is important to point out that the data as plotted, i.e. without any acceptance correction and pT selection, can be directly interpreted as the space–time averaged spectral function of the ρ, owing to a fortuitous cancellation of the mass and pT dependence of the acceptance filtering by the photon propagator and Bose factor associated with thermal dilepton emission (Damjanovic et al. 2007).
Figure 2 also shows the two main theoretical scenarios for the in-medium spectral properties of the ρ: dropping mass, suggested by Gerald Brown and Mannque Rho, and broadening, as proposed by Ralf Rapp, Jochen Wambach and colleagues. The dropping-mass scenario, which ties hadron masses directly to the value of the chiral condensate (with vanishing values as chiral restoration is approached), leads to a shifted and broadened distribution that is clearly ruled out. The unmodified ρ, defined as the full amount of regenerated ρ mesons without any in-medium spectral changes (“vacuum ρ”), is also clearly ruled out. Only the broadening scenario, based on a hadronic many-body approach, describes the data well, up to about 0.9 GeV where processes other than 2π set in, as described below.
The results from NA60 thus end a decades-long controversy about the spectral properties of hadrons close to the QCD phase boundary. In general terms, chiral restoration should restore the degeneracy between chiral partners such as the vector ρ and the axialvector a1, which are normally split by 0.5 GeV. Whether this happens by moving masses or by a complete “melting” with full overlap of the two partners has always been open to debate, but the question is now answered for the ρ – and with it probably for all light hadrons. Meanwhile, a more explicit connection between chiral-symmetry restoration and the hadron “melting” observed is under discussion by Rapp, Wambach and others.
Turning now to the mass region above 1 GeV, the use of the silicon VT has allowed NA60 to measure the offset between the muon track and the primary interaction vertex and thereby disentangle, for the first time in nuclear collisions, prompt dimuons from offset pairs from D-meson decays (Arnaldi et al. 2009a). The results are perfectly consistent with no enhancement of open charm relative to the level expected from scaling up the results from NA50 for masses above 1 GeV in proton–nucleus collisions. The dilepton excess, previously observed by NA34-3 and NA38/NA50, is therefore solely prompt, with an enhancement over Drell–Yan processes by a factor of 2.3 ± 0.08. This excess can be isolated, rather as for masses below 1 GeV, by subtracting the expected known sources, here Drell–Yan and open charm, from the total data. The resulting mass spectrum is quite similar to the shape of open charm and much steeper than that for Drell–Yan.
A true thermal spectrum
In the absence of resonances, the signature of any thermal source should be a Planck-like radiation spectrum. Now a 25-year-old dream has become reality with NA60’s measurement of such a spectrum in high-energy nuclear collisions, isolated from all other sources. Figure 3 shows the mass spectrum of the excess dileptons for the complete range 0.2 < M < 2.6 GeV, corrected for experimental acceptance and normalized absolutely to the charged-particle rapidity density (Arnaldi et al. 2009a). The shape is mainly a pure exponential, indicative of a flat spectral function as in the black-body case, except for the slight modulation around the nominal pole position of the ρ.
The figure also shows recent theoretical results from the three major groups working in this field. The general agreement between the data and these theoretical results, which are not normalized to the data but are calculated absolutely, is remarkable, both for the spectral shapes and the absolute yields, and strongly supports the term “thermal”. At the level of the detailed description of the dominant dilepton sources, all three groups agree on π⁺π⁻ annihilation for M < 1 GeV, one doing somewhat better than the others below 0.5 GeV through additional secondary sources and a larger contribution from ρ–baryon interactions. Above 1 GeV, 2π processes become negligible, and other hadronic processes such as 4π (including vector–axialvector mixing) and partonic processes such as quark–antiquark annihilation, qq̄ → l⁺l⁻, take over.
All three models explicitly differentiate between the hadronic and partonic processes. But while the spectral shape and total yield for M > 1 GeV are described about equally well, the fraction of partonic processes relative to the total varies from 25% to more than 85%, depending on the model. The large variations arise from differences both in the underlying spectral functions and the fireball dynamics, which at least partially compensate each other in the total yields. However, the space–time trajectories are not the same for genuine partonic and hadronic processes, the former being “early” (i.e. from the initial temperature Tinit to Tc) and the latter only “late” (i.e. from Tc to thermal freeze-out at temperature Tf). The question therefore arises whether these differences leave a measurable imprint on the dileptons that could reveal the dominant source.
The answer is “yes”. Unlike real photons, lepton pairs are characterized by two variables: mass and transverse momentum pT. Quite different from mass, pT not only contains contributions from the spectral functions, but also encodes the key properties of the expanding fireball: temperature and transverse expansion (“radial flow”). The latter causes a blue-shift of pT, which is well known from hadron production. However, in contrast to hadrons, which receive the full flow reached at the moment of decoupling, dileptons are continuously emitted during the evolution of the fireball and so reflect the space–time integrated temperature-flow history in their final pT spectra. Because flow builds up monotonically during this evolution – being small in the early partonic phase (in particular at SPS energies, owing to the “soft point” in the equation-of-state) and increasingly larger in the late hadronic phase – the final pT spectra keep a memory of the time ordering of the different dilepton sources, thereby offering a diagnostic tool for the emission region.
The variable commonly used here is mT = √(pT² + M²) and all mT spectra for the dilepton excess are found to be nearly exponential (Arnaldi et al. 2008, 2009a, 2009b). The full information can therefore be reduced to one parameter, the inverse slope Teff, obtained by fitting the spectra with the expression 1/mT dN/dmT ∝ exp(–mT/Teff). Figure 4 shows the mass dependence of Teff for the complete mass range 0.2 < M < 2.6 GeV. It also includes the hadron data for π and for η, ω, φ, obtained as a by-product of the cocktail-subtraction procedure. A separate value is added for the ρ peak visible in figure 2, which is generally interpreted as the “freeze-out ρ” without in-medium effects. It is obtained by disentangling the peak from the underlying continuum through a side-window method.
Taken together, the dilepton data and the hadron data suggest the following interpretation. The parameter Teff is roughly described by a temperature part and a radial-flow part: Teff ≈ T + Mv², where v is the average flow velocity. The general rise of Teff with mass up to about 1 GeV is therefore consistent with the expectations for radial flow. Maximal flow (about half of the speed of light) is reached for the ρ, owing to its maximal coupling to pions, while all other hadrons freeze out earlier. The dilepton values rise nearly linearly up to the pole position of the ρ, but always stay well below the ρ line (dotted). This is exactly what would be expected for radial flow of an in-medium, hadron-like source (here π⁺π⁻ → ρ) decaying continuously into dileptons. The average temperature associated with this region is 130–140 MeV.
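A minimal sketch of these two steps – the log-linear fit for Teff and the radial-flow relation – using made-up numbers rather than NA60 data, might look as follows:

```python
# Sketch of the analysis described above (illustrative numbers only):
# (1) extract the inverse slope T_eff from (1/mT) dN/dmT ~ exp(-mT/T_eff);
# (2) compare with the radial-flow expectation T_eff ~ T + M*v**2.
import numpy as np

# (1) fit a synthetic exponential mT spectrum generated with T_eff = 0.25 GeV
mT = np.linspace(0.8, 2.0, 20)                  # GeV
spectrum = np.exp(-mT / 0.25)                   # (1/mT) dN/dmT, arbitrary units
slope, _ = np.polyfit(mT, np.log(spectrum), 1)  # log-linear fit
print(-1.0 / slope)                             # recovers T_eff = 0.25 GeV

# (2) expected T_eff for a hadron-like source with T = 0.135 GeV, v = 0.5c
T, v, M_rho = 0.135, 0.5, 0.775                 # GeV, units of c, GeV
print(T + M_rho * v**2)                         # ~0.33 GeV for the rho
```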
For M > 1 GeV, i.e. beyond the 2π region, the dilepton values fall suddenly by about 50 MeV down to a level of 200 MeV – an effect that is even more abrupt for the pure in-medium continuum (Arnaldi et al. 2009b). The trend set by a hadron-like source in the low-mass region makes it extremely difficult to reconcile such a fast transition with emission sources that continue to be of predominantly hadronic origin above 1 GeV. A much more natural explanation is a transition to a mainly early, i.e. partonic, source with processes such as qq̄ → l⁺l⁻, for which flow has not yet built up. The observed slope parameter Teff of around 200 MeV, which is essentially independent of M in this region, is then perfectly reasonable and reflects the average thermal values in the fireball evolution between a Tinit of around 220–250 MeV and a Tc of about 170 MeV. All in all, these findings on Teff may well represent a further breakthrough, pointing to a partonic origin of the observed thermal radiation for M > 1 GeV and thus, rather directly, to deconfinement at SPS energies.
One final point further underlines the thermal-radiation character of the observed excess dileptons. The study of the dimuon angular distributions in NA60 has yielded complementary information on the production mechanism and the distribution of the annihilating particles, again a first in the field of nuclear collisions (Arnaldi et al. 2009c). Because of the lack of sufficient statistics for higher masses, the study is restricted to the region M < 1 GeV, but it finds that all coefficients describing the distributions (the “structure function parameters” λ, μ and ν, related to the spin-density matrix elements of the virtual photon) are zero and the projected distributions in |cos θ| and |φ| are uniform (figure 5). This is a non-trivial result: the annihilation of partons or pions along the beam direction would lead to λ = +1, μ = ν = 0 (the well known lowest-order Drell–Yan case) or λ = –1, μ = ν = 0, corresponding to transverse and longitudinal polarization of the virtual photon, respectively. The absence of any polarization is consistent with the interpretation of the excess dimuons as thermal radiation from a randomized system, as Paul Hoyer first suggested.
To summarize, the NA60 experiment, a latecomer at the SPS, has provided answers to all of the major questions left over by previous dilepton experiments: on the spectral function of the ρ in connection with the chiral transition; on the origin of the excess dileptons for M > 1 GeV in connection with the deconfinement transition; and on the thermal-radiation character of all excess dileptons. In addition, there has been major progress on charmonia. The answers are probably as clear as they could be at this stage of the field, but they will surely benefit from further progress in theory.
The Belle collaboration at KEK has recently analysed the angular distribution of leptons in the decays of B mesons into a K* meson and a lepton–antilepton pair, where the lepton is an electron or a muon. The team finds that the measured asymmetries, which were presented in August at the Lepton–Photon International Symposium in Hamburg, are larger than expected from the Standard Model.
The figure shows the forward–backward asymmetry of the positively charged lepton with respect to the direction of the K* in B → K*l⁺l⁻, based on the analysis of 660 million pairs of B and anti-B mesons. The measured data points lie above the Standard Model expectation (solid blue curve). In the Standard Model this decay mode proceeds via a “penguin diagram” involving intermediate virtual particles, such as a Z boson or a W boson, which are much heavier than the B meson. New heavy particles beyond those already known in the Standard Model should also participate in a similar way. The difference between the measurements and the Standard Model expectation might indicate that such new particles are indeed produced in addition to Z and W bosons. Indeed, the data points are closer to the prediction that includes supersymmetric particles (green curve).
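The asymmetry plotted in the figure follows the standard definition AFB = (NF – NB)/(NF + NB), where NF (NB) counts events with the positively charged lepton emitted forward (backward) relative to the K* direction. A minimal sketch with hypothetical counts (for illustration only):

```python
# Standard forward-backward asymmetry from event counts.
def forward_backward_asymmetry(n_forward, n_backward):
    return (n_forward - n_backward) / (n_forward + n_backward)

print(forward_backward_asymmetry(150, 100))  # hypothetical counts -> 0.2
```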
This rare decay process was discovered by Belle in 2002. However, the measurement of the lepton forward–backward asymmetry has been quite difficult owing to the small decay rates. It has become possible only with increased data samples that the experiment has gathered, thanks to the improved performance of the KEKB accelerator. The analysis so far has yielded about 250 signal events. To clarify whether the results are indeed hinting at new physics, the Belle collaboration is continuing its measurements with a larger sample of accumulated data.
The 13th-century merchants’ town and former capital of Krakow is now one of the largest and oldest cities in Poland. The scenic city centre – a UNESCO World Heritage Site – with its fascinating history and pleasant climate provided the perfect setting for discussing new results and future developments at the biennial European Physical Society (EPS) conference on High Energy Physics (HEP). The event was held on 16–22 July at the new conference centre of the Jagiellonian University – the Auditorium Maximum.
The conference began with 35 parallel sessions and more than 350 contributions over two and a half days. Then, as is tradition, the EPS and the European Committee for Future Accelerators scheduled a joint plenary meeting for Saturday afternoon. This focused on talks concerning the future of the field: Christian Spiering of DESY on “Astroparticle physics and relations with the LHC”; CERN’s director-general, Rolf Heuer, on “The high-energy frontier”; Alain Blondel of Geneva University on “The future of accelerator-based neutrino physics”; and Tatsuya Nakada of the École Polytechnique Fédérale de Lausanne on “Super-B factories”. Each presentation was followed by a lively discussion.
Sunday provided the opportunity for several sightseeing trips in and around Krakow. Monday saw a fresh start to the week and time for another tradition: the presentation of the EPS awards. For the first time, the High Energy and Particle Physics (EPS HEPP) prize was awarded to an experimental collaboration, Gargamelle, for the observation of the weak neutral current. After the awards ceremony, Frank Wilczek, the 2004 Nobel laureate, gave a special talk on “some ideas and hopes for fundamental physics”. This provided an excellent start to three days of plenary sessions, with around 35 presentations.
The Tevatron proton–antiproton collider continues its smooth operation. With more than 6 fb⁻¹ of integrated luminosity delivered and peak luminosities exceeding 3.5 × 10³² cm⁻²s⁻¹, the CDF and DØ experiments are steadily increasing their statistics. Both collaborations are pushing forward on the analysis of their latest data in a joint effort to confirm and enlarge the previously reported exclusion region for the Higgs mass of around 160–170 GeV. At the same time, several new ideas are emerging on how to improve the sensitivity of these experiments to more challenging Higgs decay channels. In addition to the direct search for the Higgs boson, both collaborations reported on new mass measurements of the W boson (MW = 80.399 ± 0.023 GeV) and confirmed the combined experimental result for the top-quark mass (mt = 173.1 ± 1.3 GeV), pushing the error below the 1% level. These values lead to a further reduction of the preferred mass region for the Standard Model Higgs, as John Conway of the University of California, Davis pointed out in his plenary presentation. Moreover, these and other precision measurements of the weak parameters (sin²θW = 0.2326 ± 0.0018(stat) ± 0.0006(sys), as compared with the theoretical prediction of sin²θW = 0.23149 ± 0.00013) show growing evidence that the Standard Model prefers a light Higgs, which, as Conway concluded, will make life difficult. Even for the large LHC experiments, ATLAS and CMS, this region of the window on the Higgs mass will require high statistics, combining different decay modes and sophisticated analyses.
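As a quick check of the quoted precision (an illustrative calculation added here, not from the conference report), the relative error on the combined top-quark mass is indeed below 1%:

```python
# Relative error on the combined Tevatron top-quark mass
m_top, err = 173.1, 1.3   # GeV
print(100 * err / m_top)  # ~0.75 per cent, below the 1% level
```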
A number of sophisticated statistical procedures are being developed and becoming available as complete software packages – for example, GFITTER – to simplify or fine-tune multidimensional analyses of experimental data. At the same time, there is impressive progress in calculating amplitudes for multileg processes and loops. A rather complete set of automatically derived “2 → 4 particle” cross-sections (the “Les Houches 2007 wish list”) demonstrates that higher-order corrections to important physics processes at the LHC cannot be ignored.
Increasing statistics at the Tevatron are also consolidating the observation of single top production, but at the same time the parameter space for new physics at or below the 1 TeV scale is becoming smaller, as Volker Büscher of Mainz explained. CDF and DØ have conducted studies that probe mass values for the charginos of supersymmetry up to 176 GeV; they find no evidence for neutralino production in their current data sample. In addition, the studies shift the possibility of quark compositeness or large extra dimensions further towards a higher energy scale.
While the latest updates on analyses of data from RHIC and the SPS were presented in the parallel sessions, Urs Wiedemann of CERN covered theoretical aspects of collective phenomena in his plenary talk. He summarized the motivation for experiments at RHIC (√sNN = 200 GeV) and the LHC (√sNN = 5500 GeV) to study the QCD properties of dense matter at the 150 MeV scale, which will be accessible at these high collision energies.
A wealth of new data is also emerging from the experimental analysis of B-physics – from both hadron colliders and e⁺e⁻ machines – ranging from analyses of rare exclusive decay modes to spectroscopy and physics related to the Cabibbo–Kobayashi–Maskawa (CKM) matrix. The results further confirm oscillations in the neutral D and Bs sectors. This is another area where the Standard Model seems not to be seriously challenged: the CKM triangle appears to remain “closed” (within experimental errors). Nevertheless, as Andrzej Buras of TU Munich pointed out in his talk on “20 goals in flavour physics for the next decade”, there are still many challenges ahead. A breakthrough could come with firm experimental evidence for flavour-changing neutral currents in excess of Standard Model predictions. Buras’s message is clear: stay focused on the many observables that are not yet well measured and the decay modes that are so far unstudied or only poorly studied; spectacular deviations from the Standard Model remain possible.
With a new series of experiments under construction and several experiments producing new data, neutrino physics remains an experimentally driven enterprise. The neutrino sessions were – not surprisingly – very well attended. Better mass measurements are coming within reach, be it from obtaining upper limits by measuring time shifts in neutrinos from supernovae (mν < 30 eV) or from measuring the tritium β-decay spectrum (mν < 2 eV) or mass differences from oscillations (all Δm² < 1 eV²). Because neutrinos are abundant in the universe, even a small neutrino mass will have implications in astrophysics. Dave Wark of Imperial College summarized the broad spectrum of neutrino physics experiments and their discovery potential. Anticipating the various experimental approaches and progress, he explained under what conditions the Majorana phases, for example, could be determined.
While the large LHC experiments are commissioning their triggers, new ideas on the future of the LHC machine are being explored. These include high-luminosity schemes and higher beam energies, which will have different implications for future upgrades of both machine and experiments. R&D on accelerators is focusing not only on higher-energy frontiers and currents, but also on more efficient beam-crossing (“crab”) scenarios.
In a worldwide effort, the International Linear Collider collaboration aims to present a Technical Design Report in 2012 for a high-energy e⁺e⁻ machine. The Compact Linear Collider Study (CLIC) based at CERN, which aims for a Conceptual Design Report at the end of 2010, investigates different approaches and may reach a higher beam energy (3 TeV vs 1 TeV). However, the physics simulations and detector designs for the two schemes face equal challenges.
The development of “super factories” is an ongoing effort that is complementary to the high-energy machines. These facilities should provide high-statistics experiments on, for example, the neutrino, charm and bottom sectors, with the necessary infrastructure for high-precision measurements. Caterina Biscari of Frascati presented a comprehensive overview of existing machines and (possible) future accelerators, in which she compared their main parameters.
The conference saw substantial contributions from astroparticle physics. The Auger experiment, probing the highest-energy cosmic rays (10²⁰ eV), shows growing evidence for the Greisen–Zatsepin–Kuzmin cut-off. The energy spectrum agrees well (within the 25% calibration uncertainty on the energy scale) with results from the HiRes collaboration. Active galactic nuclei are now also observed by the High Energy Stereoscopic System (HESS) and the Large Area Telescope on the Fermi Gamma-ray Space Telescope. In particular, the core of Centaurus A appears to be extremely interesting owing to the bright radio source in its centre. High-energy cosmic rays are predominantly produced by “nearby” (< 100 Mpc) sources, while there is a slight indication that the composition changes towards heavier nuclei with increasing energy.
PAMELA (launched in 2006), the Advanced Thin Ionization Calorimeter balloon experiment (2008), Fermi (launched in 2008) and HESS show some excesses in the e± spectrum. The interpretation of these signals remains uncertain. Are they related to the nature of non-baryonic dark matter, or can the spectra be explained by astrophysical phenomena such as pulsars or supernova remnants? The PAMELA data have generated huge theoretical interest, resulting in a multitude of dark-matter models. However, much more data are needed from both space-based experiments and ground-based searches for decaying weakly interacting massive particles. The Alpha Magnetic Spectrometer, finally scheduled to be launched in 2010, should at least provide much improved limits on the antiproton flux.
The next international Europhysics conference on high-energy physics will take place in Grenoble on 21–27 July 2011. After last year’s successful injection of proton beams into the LHC, followed by the unfortunate incident and subsequent repairs and consolidation, the starting date for high-energy collisions at the LHC is now rapidly approaching. At the meeting in Grenoble there will be lively discussions of Tevatron data – perhaps with surprises – and extensive reports on, among other things, dark-matter searches. Of course, we all look forward to reports on the first data analyses by the LHC experiments.
• The local organization of EPS-HEP 2009 by the Institute of Nuclear Physics PAN, Jagiellonian University, the AGH University of Science and Technology and the Polish Physical Society is acknowledged.
Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University have succeeded in making and measuring the production rates of 15 new neutron-rich isotopes. Several of these rare isotopes were produced at significantly higher-than-expected rates. The results suggest the existence of a new “island of inversion” – a region of isotopes with enhanced stability in a sea of mostly fleeting and unstable nuclei at the edge of the nuclear map.
Motivation to explore this region of nuclides was provided in part by an earlier experiment at NSCL that produced and measured the production rates of three new isotopes of magnesium and aluminium. In particular, the aluminium isotope measured (42Al) was beyond the limit of stability predicted by one of the leading theoretical models. It was therefore logical to ask: how well do existing theories describe the behaviour of heavier, neutron-rich nuclei?
Perhaps not so well, according to the results of continued studies at NSCL, which have investigated the nuclei of elements from chlorine to manganese. Most of the nuclei in this region were expected to be characterized by low binding energies, and thus be exceedingly unstable and difficult to produce. However, the experiments revealed unexpectedly higher production rates for several isotopes of potassium, calcium, scandium and titanium (Tarasov et al. 2009).
The results could imply the existence of a new island of inversion for neutron-rich nuclei. The island would be the result of changes in the interaction strength between protons and neutrons, which is already known to depend on the number of protons and neutrons inside the nucleus. Nearest the stable isotopes, the change is often small enough to go unnoticed, but in very neutron-rich nuclei the effects can be amplified in localized areas, leading to small groupings of isotopes with very distinctive properties.
The team that discovered element 112 at GSI Darmstadt has proposed naming it “copernicium”, with the element symbol “Cp”, in honour of the scientist and astronomer Nicolaus Copernicus. The International Union of Pure and Applied Chemistry (IUPAC) is expected to endorse the new element’s name officially in around six months, the period set to allow the scientific community to discuss the proposal.
Copernicus, who lived from 1473 to 1543, paved the way for the modern view of the universe when he firmly planted the Earth in orbit about the Sun in his famous work De revolutionibus orbium coelestium. With its planets revolving around the Sun on different orbits, the solar system became a model for other physical systems, in particular the atom, with electrons in orbit around the nucleus. Although this model of the atom soon became surpassed by quantum mechanics, it still provides a strong visual image. In an atom of the new element, 112 electrons surround the nucleus.
Element 112 was first observed 13 years ago but has only recently received official recognition from IUPAC. It is the heaviest element discovered so far in the periodic table, being 277 times heavier than hydrogen. Produced by nuclear fusion, by bombarding a lead target with zinc ions, the element rapidly decays, so its existence can be proved only with the help of extremely fast and sensitive analysis methods. Twenty-one scientists from Germany, Finland, Russia and Slovakia were involved in the experiments at GSI that led to the discovery.
The CDF collaboration has announced the observation of a new particle, the Ωb– baryon, containing three quarks: two strange quarks and a bottom quark (ssb). The sighting of this “doubly strange” particle, predicted by the Standard Model, is significant because it strengthens physicists’ confidence in their understanding of how quarks form matter. However, it conflicts with a result announced in 2008 by CDF’s sister experiment, DØ.
The Ωb– is the latest entry in the “periodic table of baryons” illustrated in the figure. The Tevatron is unique in its ability to produce baryons containing the b quark, and the large data samples now available after many years of successful running have enabled experimenters to find and study these rare particles. The discovery of the Ωb– follows the first observations of two types of Σb baryons at the Tevatron in 2006 and the discovery there of the Ξb– baryon in 2007.
Combing through almost 5 × 10¹¹ proton–antiproton collisions produced by the Tevatron, the CDF collaboration isolated 16 examples in which the particles emerging from collisions reveal the distinctive signature of the Ωb–, which travels only a fraction of a millimetre before it decays into lighter particles. CDF has performed the first ever measurement of the Ωb– lifetime and obtained 1.13 +0.53 –0.40 (stat.) ± 0.02 (syst.) × 10⁻¹² s.
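The “fraction of a millimetre” quoted above can be checked directly from the measured lifetime (an illustrative calculation added here): the mean decay length of an unboosted particle is cτ, and relativistic boosts at the Tevatron only lengthen it.

```python
# Mean decay length c*tau for the measured Omega_b lifetime (central value)
c = 2.998e8              # speed of light, m/s
tau = 1.13e-12           # measured lifetime, s
print(c * tau * 1e3)     # ~0.34 mm, indeed a fraction of a millimetre
```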
In August 2008, the DØ experiment announced its own observation of the Ωb– based on a smaller sample of data from the Tevatron. Interestingly, the new observation from CDF conflicts with this earlier result. The CDF collaboration measures the mass of the Ωb– to be 6054.4 ± 6.8(stat.) ± 0.9(syst.) MeV/c², compared with DØ’s finding of 6165 ± 10(stat.) ± 13(syst.) MeV/c². These two results are statistically inconsistent, leaving the teams from the two experiments wondering whether they are measuring the same particle. Furthermore, the experiments observed different rates of production for this particle. Perhaps most interesting is that neither experiment sees a hint of evidence for a particle at the mass value measured by the other.
Although the latest result announced by CDF agrees with theoretical expectations for the Ωb–, both in the measured production rate and in the mass value, further investigation is needed to solve the puzzle of these conflicting results.