The old spa area of Herlany in Slovakia welcomed more than 50 physicists from over 30 countries last September for the 2002 Hadron Structure conference, which took place in the Educational Centre of the Technical University. The area is famous for its cold-water geyser, which is unique in Europe, and which did not disappoint as it erupted four times during the conference. Nor did the conference itself disappoint, with its mix of theoretical talks and experimental reviews.
The Hadron Structure conferences, which have become one of the major events in the Slovak high-energy physics community, are based on a tradition of more than 30 years. The origins of the conferences can be traced back to the late 1960s, when informal meetings of theoreticians from Bratislava, Budapest and Vienna – the so-called Triangle Meetings – were organized three to six times a year and moved between the different locations. The meetings held in Slovakia were called the Hadron Structure meetings and they gradually developed into a series of conferences.
Although the Triangle Meetings were predominantly devoted to theoretical topics, at Hadron Structure 2002 the theoretical reports were balanced by impressive experimental review talks. The following is only a brief report of the scientific programme, which involved a wide range of high- and medium-energy particle physics and heavy-ion physics.
The LEP experiments presented reports on W boson physics, Higgs boson mass limits, and on the searches for neutralinos and large extra dimensions, as well as electroweak, heavy flavour and QCD measurements at LEP. The results are in good agreement with the Standard Model expectations. The H1 and ZEUS experiments at HERA reviewed results on proton structure functions, inclusive diffraction measurements, open charm and beauty, as well as vector meson production. The beauty results seem in general to be above perturbative QCD predictions. Recent spin physics results from HERMES, as well as the latest results from the HERA-B experiment, were also presented.
In B physics, the two dedicated spectrometers BaBar and Belle presented their results on CP violation in B0 decays, the B0 lifetime and branching fractions. Their measurements of the unitarity triangle angle β are found to be consistent with the expectations of the Standard Model and can be used to constrain extensions of the model.
Moving on to heavy-ion collisions at RHIC, in Brookhaven, the STAR collaboration reported results on transverse momentum distributions, hadronic yields and correlations. The azimuthal correlations at moderately high transverse momenta demonstrate the existence of hard scattering processes at RHIC, while the disappearance of di-jets and the suppression of single inclusive particle production are consistent with the jet-quenching scenario. PHENIX presented results on high-pt charged-particle azimuthal correlations, which may indicate a novel particle production mechanism.
In relativistic nuclear physics, selected problems studied at the Veksler and Baldin Laboratory of High Energies at JINR, Dubna, were reported. These studies make use of the Synchrophasotron-Nuclotron acceleration complex. An upgrade of the Nuclotron and the organization of a user centre for relativistic nuclear physics and applied research with ions of a few GeV energy are foreseen.
Two review talks at the conference were presented on behalf of the ATLAS collaboration. One of these concerned the overall detector concept, the status of the subsystems and the magnet. The second talk was an overview of the ATLAS physics potential for searches at the LHC for the Higgs boson(s), supersymmetric particles, quark and lepton compositeness, new gauge bosons and extra dimensions.
The conference was organized by the Nuclear Physics Department in the Faculty of Sciences at P J Safárik University in Kosice, in association with the Department of Subnuclear Physics, Institute of Experimental Physics, Slovak Academy of Sciences, Kosice, the Physics Institute, Slovak Academy of Sciences, Bratislava, the Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, and the Physics Department, Faculty of Electrical Engineering and Informatics, Technical University, Kosice.
In 1966/7 Steven Weinberg, Abdus Salam and John Ward proposed a local gauge theory, SU(2) x U(1), for a unified description of electromagnetic and weak interactions, with a Higgs mechanism to give mass to the (weak) field quanta. When I arrived as a student at Johns Hopkins University in 1966, Ward was a professor there. I could understand that something exciting was going on from the discussions at the physics seminars, but could not appreciate the importance it would subsequently acquire.
The most striking feature of the weak interactions is their very short range of less than around 10⁻¹⁵ cm, i.e. less than 1% of the size of a nucleon. This compares with a range of around 10⁻¹³ cm for nuclear (strong) forces, and is in stark contrast to the “infinite” range of the electromagnetic force. The short range of the weak interactions implied very massive mediating particles or quanta, W+ and W–, for the charged current, the only known weak interaction at the time. However, the unified description of Weinberg, Salam and Ward had four field quanta, two charged and two neutral, implying that a new type of “neutral current” weak interaction should exist. This would be mediated by the Z0 – a particle closely related to the massless photon, in fact almost identical except for being very massive. The renormalizability of the theory, proved in 1971 by Gerard ‘t Hooft and Martin Veltman, together with the discovery of the weak neutral currents at CERN in 1973, made this unified electroweak scheme appear plausible. But what could the masses of the W and Z particles be?
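As a hedged aside (the standard Yukawa range-mass estimate, not spelled out in the original argument), the range R of a force carried by a quantum of mass m is of order its reduced Compton wavelength:

```latex
R \sim \frac{\hbar}{mc}
\quad\Longrightarrow\quad
mc^{2} \sim \frac{\hbar c}{R}
\approx \frac{197\ \mathrm{MeV\,fm}}{10^{-2}\ \mathrm{fm}}
\approx 20\ \mathrm{GeV},
```

so a range below 10⁻¹⁵ cm (10⁻² fm) already pointed to mediators at least some tens of GeV heavy.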
Where and how?
The observed linear increase of the neutrino-nucleon cross-sections with incident energy up to Eν ~ 350 GeV, which was consistent with the (old) Fermi four-fermion point interaction, could not last forever. At a neutrino-nucleon (or rather, neutrino-quark) centre-of-mass energy of the order of 300 GeV the cross-section would reach the S-wave unitarity limit, so the effects of W exchange had to come in to modify this unacceptable behaviour. The absence of any deviation from linearity in the measured cross-section indicated mW > 50 GeV, and was consistent with an infinite mW. Meanwhile, the charged-current and neutral-current data from neutrino interactions, when incorporated into the Weinberg-Salam-Ward scheme, were giving a weak mixing angle sin²θW ~ 0.3-0.6, which implied mW,Z ~ 60-100 GeV. Subsequently, measurements of sin²θW narrowed its value down to around 0.23, providing by 1982/3 a much better estimate of mW ~ 80 GeV and mZ ~ 90 GeV to within a few GeV. In the late 1970s and early 1980s the forward-backward angular asymmetry, due to γ-Z interference, in e+e– → µ+µ– at the top PETRA energies (√s ~ 30-40 GeV) also indicated mZ < 100 GeV rather than an infinite mZ. So, the question was where could these W and Z intermediate vector bosons be produced and how could they be detected?
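For concreteness, these mass estimates follow from the tree-level relations of the model (a textbook sketch; the numerical steps here are ours, using G_F ≈ 1.166 x 10⁻⁵ GeV⁻² and α ≈ 1/137):

```latex
m_W = \left(\frac{\pi\alpha}{\sqrt{2}\,G_F}\right)^{1/2}\frac{1}{\sin\theta_W}
\approx \frac{37.3\ \mathrm{GeV}}{\sin\theta_W},
\qquad
m_Z = \frac{m_W}{\cos\theta_W},
```

so sin²θW ≈ 0.23 gives mW ≈ 78 GeV and mZ ≈ 89 GeV at tree level, with electroweak radiative corrections pushing both up by a couple of GeV.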
In 1976 CERN’s SPS began operating with particle beams of energies up to 350-400 GeV onto a fixed target, i.e. with centre-of-mass energies of √s ~30 GeV, which was insufficient for W and Z production. The same year David Cline, Carlo Rubbia and Peter McIntyre proposed transforming the SPS into a proton-antiproton collider, with proton and antiproton beams counter-rotating in the same beam pipe to collide head-on. This would yield centre-of-mass energies in the 500-700 GeV range. Provided the antiproton intensity was sufficient, the W and Z particles could be produced through their couplings to quarks and antiquarks, and detected through their couplings to leptons as prescribed by the Weinberg-Salam-Ward model. Then, in 1979, Weinberg, Salam and Sheldon Glashow were awarded the Nobel prize for electroweak unification and the prediction of weak neutral interactions, which implied the existence of the Z particle. (Ward was no doubt of the same class, but the Nobel prize can only be awarded to three people at most.) This indicated that the theoretical community was more convinced of the existence of the W and Z than most of the experimentalists at the time.
The proton-antiproton collider
CERN meanwhile went ahead with the proton-antiproton collider, and by the summer of 1981 the heroic endeavour of transforming the SPS into a proton-antiproton collider had been accomplished, despite the many uncertainties, including unknown and unpredictable beam-beam effects. There is no doubt that Carlo Rubbia, with his enthusiasm, power of conviction and charisma, played a key role in this phase of the project. The first proton-antiproton collisions occurred on 9 July 1981, almost exactly three years after the project had been officially approved. Within hours, the first events seen, detected and reconstructed in UA1’s central tracker were shown by Rubbia at the Lisbon conference (UA1 collaboration 1981).
The PS proton beam at 26 GeV was used on a fixed target to produce antiprotons at ~3.5 GeV, creating about one antiproton per 10⁶ incident protons. The antiprotons were then stacked and stochastically cooled in the antiproton accumulator at 3.5 GeV, and this is where the expertise of Simon van der Meer and coworkers played a decisive role. With a few times 10¹¹ antiprotons accumulated per day, the cooled (phase-space compactified) antiprotons were reinjected into the PS, accelerated to 26 GeV and injected into the SPS, counter-rotating in the same beam pipe with a proton beam. Both beams were then accelerated to 270 GeV and brought into collision in two interaction regions at √s = 540 GeV. Sufficient luminosity remained for about half a day. The initial luminosity in November/December 1981 was about 10²⁵ cm⁻² s⁻¹, but it subsequently increased by a factor of 10⁵ over the following years.
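To put these numbers together, here is a hedged back-of-the-envelope sketch in Python (the total cross-section value, ~60 mb, is our assumption; the other figures are from the text):

```python
# Back-of-the-envelope numbers for the SPS proton-antiproton collider.
# Values come from the text where available; sigma_tot is an assumed,
# illustrative total ppbar cross-section at 540 GeV, not from the text.

MB_TO_CM2 = 1e-27            # 1 millibarn in cm^2

lumi_initial = 1e25          # cm^-2 s^-1, first collisions (late 1981)
lumi_gain = 1e5              # improvement factor over the following years
sigma_tot = 60 * MB_TO_CM2   # assumed total cross-section (~60 mb)

# Interaction rates at initial and final luminosity
rate_initial = lumi_initial * sigma_tot            # ~0.6 Hz
rate_final = lumi_initial * lumi_gain * sigma_tot  # ~60 kHz

# Antiproton economics: one antiproton per 1e6 incident protons,
# "a few times 1e11" accumulated per day (taking 3e11)
pbar_per_day = 3e11
protons_needed = pbar_per_day * 1e6                # ~3e17 protons/day

print(f"initial interaction rate  ~ {rate_initial:.2g} Hz")
print(f"final interaction rate    ~ {rate_final:.2g} Hz")
print(f"protons on target per day ~ {protons_needed:.2g}")
```

Even at the final luminosity the W production rate was many orders of magnitude below this total interaction rate, which is why the trigger and selection described below mattered so much.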
The UA1 (Underground Area 1) detector was conceived and designed in 1978/9, with the proposal submitted in mid-1978. At that time we were in barracks on the parking lot in front of building 168, at the same time and place that CDF was designed, for Alvin Tollestrup was spending a year at CERN. UA1 was approved in 1979, and was constructed and essentially functional – including the reconstruction software – by the summer of 1981 (although part of the tracker electronics was still missing). At the time of approval there was general incredulity in the particle physics community (although not obviously in UA1) that UA1 could be built – let alone operated – in time, given the much more focused design and modest size of the UA2 detector. That this was possible was largely thanks to Rubbia’s enlightened absolutism (or, more diplomatically, to his unrelenting efforts), and to his unbelievable intellectual and professional capabilities and stamina.
The two detectors
UA1 was a huge (~10 x 6 x 6 m³, ~2000 tonnes) and extremely complex detector for its day, exceeding any other collider detector by far. The design was simple, beautiful, economical and, as it turned out, very successful. In the days of initial construction, the collaboration counted around 130 physicists from Aachen, Annecy, Birmingham, CERN, College de France, Helsinki, London/QMC, UCLA-Riverside, Rome, Rutherford, Saclay and Vienna. There was a large, normally conducting dipole magnet with a field of 7 kG perpendicular to the beamline. The collision region was surrounded by a central tracker – a 5.8 m long, 2.3 m diameter drift chamber with 6176 sensitive wires organized in horizontal and vertical planes. Tracks were sampled about every centimetre and could have up to 180 hits, with a resolution of 100-300 µm in the bending plane. This detector was at the cutting edge of technology; it was the first “electronic bubble chamber” and the reconstruction software was written by ex-bubble chamber track reconstructors. The tracker was surrounded by electromagnetic (27 radiation lengths deep) and hadronic calorimeters (about 4.5 interaction lengths deep) down to 0.2° to the beamline. This almost complete coverage in solid angle became known as “hermeticity”. The central electromagnetic calorimeter – which was to play a key role in the subsequent discoveries – was very effectively and economically designed as a lead-scintillator stack in the form of two cylindrical half-shells, each subdivided into 24 elements (gondolas). The entire detector was doubly surrounded by ~800 m² of muon drift chambers with a spatial resolution of ~300 µm. The overall cost of the detector was about 30 million Swiss francs, and the central ECAL about 3 million – probably the best investment ever made in particle physics.
While UA1 was designed as a general-purpose detector, UA2 was optimized for the detection of e± from W and Z decays. The emphasis was on calorimetry with a spherical projective geometry – much simpler than that in UA1. There was full coverage in solid angle, except for 20° cones along the beamlines. There were about 500 calorimeter cells with a granularity of about 10° by 15° in polar and azimuthal angles, with a three-fold segmentation in depth in the central region (40-140°) and two-fold segmentation in the forward regions (20-40° and 140-160°) to allow electron-hadron separation. The central calorimetry was, in total, about 4.5 interaction lengths deep, while the forward one was about 1 interaction length (two sections of 18 and six radiation lengths). There was no central magnetic field, but the two forward regions were equipped with magnetic spectrometers (two sets of 12 toroid coils). In the central part there was a vertex detector made of coaxial drift and proportional chambers to detect charged tracks and the collision vertex. Preshower counters improved electron identification through the spatial matching of tracks and clusters. The collaboration counted about 60 physicists, with groups from Bern, CERN, Copenhagen, Orsay, Pavia and Saclay.
The jet run
The first real physics run was in December 1981. Known as the jet run, it was devoted to the search for jets arising from the hard scattering and fragmentation of partons, as expected from QCD. The integrated luminosity was about 20 µb⁻¹, i.e. 20 events per µb of cross-section. The main initial effort in UA1 was based on the tracker, i.e. the measurement of high-momentum tracks and the correlations in azimuth and rapidity between charged particles. Within the collaboration, not enough attention was paid to searches based on energy clusters in the calorimeters. The UA2 search, based exclusively on calorimetry, was simpler and gave more telling results. At the Paris conference in the summer of 1982, UA2 had clear back-to-back two-jet events, one of which was particularly spectacular, with a total transverse energy (Et) of about 130 GeV. The UA1 result was somewhat less elegant. The subsequent studies by UA1 and UA2 were based on calorimetric jet algorithms, with the data selected by total Et or localized Et depositions. This gave an excellent confirmation of QCD expectations in terms of cross-sections, fragmentation functions, angular distributions, etc. But what about the W and Z particles?
On the trail of the W
In the case of the W particle, both experiments looked for Drell-Yan production – that is, ubar d → W– and u dbar → W+, with the antiquarks, qbar, largely the valence antiquarks in the incident antiprotons, and the quarks from the incident protons, each with a fractional momentum x ~ mW,Z/√s ~ 0.2. This identification of the incident partons was to facilitate the unambiguous identification of a possible resonance mass peak with the expected properties of the W± – namely spin 1 and the V-A nature of weak interactions – which should manifest themselves through characteristic forward-backward asymmetries in the decays of the W to a charged lepton and neutrino (W → lν). For the running period at the end of 1982 we expected a luminosity in excess of 10²⁸ cm⁻²s⁻¹ and an experimental sensitivity of more than ~10 events/nb – a thousand-fold increase compared with the previous run. The theoretical prediction for the W → lν cross-section was ~0.5 nb, so only a few events were expected.
In the run in November/December 1982 the collider attained a peak luminosity of 5 x 10²⁸ cm⁻²s⁻¹. UA1 collected 18 nb⁻¹ of data, with about 10⁶ triggers recorded for 10⁹ interactions in the detector. The electron trigger in UA1 was two adjacent gondolas or bouchon petals with more than 10 GeV, at a rate of ~1 s⁻¹. The criteria that in December allowed UA1 to select the first five W → eν candidates unambiguously required an ECAL cluster of more than 15 GeV, a hard isolated track of pt > 7 GeV/c roughly pointing to the cluster, missing Et > 14 GeV, and no jet within 30° of back-to-back in the plane transverse to the electron candidate. This became known as the Saclay missing-Et method. This selection in fact gave six events, five of which turned out to be fully compatible with e±. In these five events the electron had an Et of ~25 GeV in one case and between 35 and 40 GeV in the others, closely balanced event-by-event by the missing Et. Thanks to the hermeticity of the UA1 design, the resolution on missing Et was 7 GeV in hard/jetty events, so the observed missing Et was highly significant in each event (> 5σ). The sixth event had 1.5 GeV of leakage in the HCAL and, upon detailed inspection, turned out to be a case of W → τν → π±π⁰ν.
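Purely for illustration, the December cuts can be written as a small filter. This is a minimal sketch, not UA1 code: the event-record fields are hypothetical, and only the cut values come from the text.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Hypothetical flat event record; the field names are invented
    for illustration -- only the thresholds below are from the text."""
    ecal_cluster_et: float            # GeV, electromagnetic cluster Et
    track_pt: float                   # GeV/c, hardest isolated track
    track_points_to_cluster: bool     # track roughly points to cluster
    missing_et: float                 # GeV
    jet_within_30deg_backtoback: bool # jet opposite the electron candidate

def passes_saclay_missing_et(ev: Event) -> bool:
    """Sketch of the December 1982 UA1 W -> e nu selection."""
    return (ev.ecal_cluster_et > 15.0
            and ev.track_pt > 7.0
            and ev.track_points_to_cluster
            and ev.missing_et > 14.0
            and not ev.jet_within_30deg_backtoback)

# Example: an event resembling one of the five W candidates
cand = Event(38.0, 30.0, True, 36.0, False)
print(passes_saclay_missing_et(cand))   # True
```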
In the first weeks of January 1983 an independent search – based not on a missing-Et selection but on stringent electron-selection requirements – was performed at CERN. It found the same events, without the tau event, but with an additional event in the endcaps that fell below the Saclay missing-Et selection cuts. These events were announced later the same month at the Rome conference and went into the publication announcing the discovery of the W (UA1 collaboration 1983a). The key to this success was the built-in redundancy of UA1 – which allowed the same events to be found by two largely independent methods, resulting in clean samples with no nearby background events – and the fact that the reconstruction software was ready and working. The already perceptible Jacobian-peak behaviour, giving mW = 81 ± 5 GeV, clinched the day.
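A word on the Jacobian peak (standard two-body kinematics, added here for the reader): for a W decaying at rest to eν, the electron transverse energy is Et = (mW/2) sinθ*, and the decay-angle distribution maps into

```latex
\frac{dN}{dE_T} \;\propto\; \frac{E_T}{\sqrt{(m_W/2)^{2} - E_T^{2}}},
```

which piles up at the kinematic edge Et ≈ mW/2 ≈ 40 GeV, smeared by the W width and its transverse motion – exactly where the observed electrons clustered.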
In the same run UA2 had four W → eν candidates (UA2 collaboration 1983a). The electron identification was based on a calorimetric cluster of more than 15 GeV, with longitudinal and transverse shower profiles consistent with e±, track-preshower-calorimetric cluster spatial matching, and electron isolation within a cone of 10°. In the forward-backward regions, where there was a magnetic field, momentum/energy (p/E) matching was enforced but the electron was not required to be isolated. Moreover, events with significant Et opposite to the electron were rejected. These events also had missing Et, but the 20° forward openings resulted in poorer resolution, and thus the separation of events from the background was not as good. In fact, one consequence of UA1’s hermeticity, and of the selective power it provided for W → lν events, was that the D0 detector at Fermilab, which was designed in 1983/4, was made as hermetic as possible.
Catching the Z
In April/May 1983 came the next run, with 118 nb⁻¹ of integrated luminosity for UA1. This gave an additional sample of 54 W → eν events, yielding mW = 80.3 +0.4/−1.3 GeV – and the angular asymmetry in the W decay due to the V-A coupling was unmistakable. The first W → µν events were also seen, but most importantly the first Z → e+e– events and one Z → µ+µ– were found. An express line selected events with two electromagnetic clusters of Et > 25 GeV with small HCAL deposition, as well as muon-pair events, thereby allowing very fast analysis. The selection of Z → e+e– was much easier than the W selection. The additional requirements of track isolation in the tracker, track-cluster spatial matching and < 1 GeV in the HCAL cell behind the cluster selected four Z → e+e– events with no visible experimental background in 55 nb⁻¹ of data. At this stage UA1 decided to publish its evidence for the Z. The first mass determination gave mZ = 95.5 ± 2.5 GeV, and the cross-section for Z decay to lepton pairs was about one-tenth that of the W, as theoretically expected (UA1 collaboration 1983b).
UA2 accumulated a comparable integrated luminosity during April/May 1983. In the UA2 selection for Z events, while one electron candidate again had to satisfy the same stringent requirements as in the W → eν search, the requirements on the second electron candidate were much looser, essentially a narrow electromagnetic cluster and a cluster-cluster invariant mass of more than 50 GeV. This procedure selected eight events altogether, all clustering in mass around 90 GeV. For three out of these eight events, the second electron candidate in fact also satisfied all the tight electron requirements (UA2 collaboration 1983b). With results from UA1 and UA2, the Z particle was definitely found.
This period, around the end of 1982 and throughout 1983, was an amazing time from both a professional and personal point of view. It was an unforgettable time of extreme effort, tension, excitement, satisfaction and joy. Subsequent runs allowed us to nail down the properties of the W and Z better and initiate other searches that were not always as successful but still extremely interesting and exciting.
The discovery of the W and Z particles was a definitive vindication of the idea of gauge theories as appropriate descriptions of nature at this level, and the unified electroweak model combined with QCD became known as the Standard Model. In the 10 years of experimentation at LEP, this Standard Model became one of the most thoroughly tested theories in physics, down to the level of a part in a thousand. However, in the SU(2) x U(1) scheme with spontaneous symmetry breaking, the one scalar of the original four that did not disappear into the W± and Z masses has still to be found – and the discovery of the Standard Model Higgs in the ATLAS and CMS detectors at CERN should eventually complete this story. The discovery of the W and Z at CERN also signalled that the “old side” of the Atlantic had regained its eminence in particle physics. “…L’espoir changea de camp, le combat changea d’âme….” (Victor Hugo, “Waterloo”.)
Physicists working at the spallation neutron source at the Research Centre for Nuclear Physics at Osaka University in Japan have for the first time produced ultracold neutrons using phonon excitations in a quantum liquid. A group led by Yasuhiro Masuda of KEK succeeded in the efficient production of ultracold neutrons in superfluid helium, which is free from the limitation of previous ultracold neutron sources imposed by Liouville’s theorem.
Ultracold neutrons (UCN) are important experimentally because, although neutrons are very small when compared with the interatomic distances in a material, UCN can be confined in a material bottle due to their wave properties. The attractive nuclear force inside a nucleus in a material distorts the wave associated with a neutron, pushing it back from the centre of the nucleus. Moreover, neutrons of long wavelength (low energy) see the nuclear force of many nuclei in a material. As a result, neutrons below a critical energy – UCN – are completely reflected from a material surface and can be confined in a bottle. UCN are also confined by the magnetic potential in a magnetic bottle (figure 1).
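Quantitatively (a standard result, quoted here as a hedged aside rather than taken from the article), the coherent effect of many nuclei on a slow neutron is summarized by the Fermi pseudopotential of the wall material,

```latex
V_F = \frac{2\pi\hbar^{2}}{m_n}\, N b,
```

where N is the number density of nuclei and b the bound coherent scattering length. For good wall materials such as beryllium or nickel, V_F is of order 100-300 neV, so neutrons slower than a few metres per second are totally reflected at any angle of incidence – these are the UCN.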
As neutrons are a fundamental constituent of the universe, confined neutrons can be used in various experiments to study the creation of matter in the universe, nucleosynthesis after the Big Bang, and the burning of the Sun. The energy available at the time of the Big Bang created a huge number of particle and antiparticle pairs, which annihilated and transformed back to energy. However, a CP-violating interaction broke the balance of particle and antiparticle numbers, and in due course quarks and leptons were formed. The quarks then condensed into protons and neutrons, and the protons and neutrons formed the nuclei of heavier elements in the process of nucleosynthesis. The nuclei later joined with electrons to form atoms, and eventually stars were born.
The neutron lifetime and the neutron cross-sections of nuclei together played a crucial role in nucleosynthesis immediately after the Big Bang. The neutron lifetime is also relevant to the proton-proton chain in the burning of the Sun. In addition, the same CP-violating interaction that created the imbalance between matter and antimatter in the early universe induces an electric dipole moment (EDM) in the neutron. UCN are used for precision measurements of both the EDM and the lifetime of the neutron, and can be used in neutron cross-section measurements. They are also useful for other precision experiments on neutron beta-decay and gravity, and are used in research in surface physics.
In any of these experiments, a high UCN density is very desirable. At the reactor at the Institut Laue-Langevin (ILL) in Grenoble, France, UCN have been extracted from a cold neutron source using gravity and a mechanical decelerator to produce the world’s highest UCN density – 10 UCN per cubic centimetre in an experimental bottle. Further improvement in the density is, however, not expected because of the limitations imposed by Liouville’s theorem, which says that the density in phase space should remain constant.
Now the Japanese group has employed a new UCN production method. Neutrons are produced in a spallation reaction, which generates a smaller photon (γ) to neutron production ratio than in a reactor. A pulsed proton beam, with a typical pulse width of 40 s and a power of 78 W, was used for the spallation reaction. The spallation neutrons, with energies in the MeV region, were then moderated down to cold neutron energies by collisions in thermal (300 K) and cold (20 K) heavy water (figure 2). The cold neutrons were further cooled down to UCN velocities through phonon interactions in 1.2 K superfluid helium. This cooling process is not limited by Liouville’s theorem because the decrease of neutron phase space is compensated by the increase in phase space of the phonons.
The UCN were then extracted, with negligible losses, through a guide tube into an experimental bottle, where the number of UCN was counted using two (15 and 24 mm diameter) solid-state detectors behind a ⁶Li film. A typical UCN count time was 60 s, and the UCN were found to remain in the bottle with a decay time constant of 14 s (figure 3). The UCN density was 0.7 UCN per cubic centimetre at the beginning of the counting, and this doubled to 1.4 UCN per cubic centimetre when the proton beam power was doubled.
The new UCN source is expected to produce a UCN density of greater than 10,000 UCN per cubic centimetre, through improvements in the proton beam power, the UCN lifetime in the bottle, etc. The main limitation comes from the ability of the cryostat cooling to remove γ heating in the superfluid helium after the spallation reaction. The above expectation is based on practical values of superfluid helium temperature (0.8 K) and proton beam power (30 kW).
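As a rough cross-check of this projection (our own scaling sketch; the linear-scaling assumptions and the improved storage time are ours, not the group's): in a bottle fed at a constant volumetric production rate R and emptying with time constant τ, the density saturates at ρ = Rτ, so one can scale the measured figures with beam power and storage time.

```python
# Rough scaling of the measured UCN density, using numbers from the text.
# The linear scalings (production ~ beam power, density ~ storage time)
# and the improved storage time tau_goal are illustrative assumptions.

rho_measured = 0.7       # UCN/cm^3, measured at 78 W beam power
tau_measured = 14.0      # s, observed bottle decay time constant
power_now = 78.0         # W, beam power during the measurement
power_goal = 30e3        # W, quoted practical beam power
tau_goal = 100.0         # s, hypothetical improved storage time

# Saturated density rho = R * tau implies a production rate density of
R = rho_measured / tau_measured        # ~0.05 UCN/cm^3/s

rho_projected = R * (power_goal / power_now) * tau_goal
print(f"projected density ~ {rho_projected:.0f} UCN/cm^3")  # ~1900
```

On these assumptions one reaches order 2000 UCN per cubic centimetre; the full factor to more than 10,000 would have to come from the additional gains in the UCN lifetime in the bottle and in extraction mentioned above.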
The discovery of the J/Ψ particle in November 1974 by the teams led by Burton Richter at SLAC and Sam Ting at Brookhaven came as a great surprise. However, after a period of uncertainty, ended by the discovery of the Ψ′ at SLAC, the J/Ψ was identified as a bound state of a charm quark and antiquark, ccbar, which had been explicitly predicted in 1970 by Sheldon Glashow, John Iliopoulos and Luciano Maiani. In the J/Ψ and the Ψ′, the spins of the c and cbar are parallel and form a triplet state (spin 1) associated with a space wave function of orbital angular momentum l = 0. However, as in positronium (e+e–), there also exist singlet states in which the spins are antiparallel, with orbital angular momentum l = 0 or l ≥ 1, as shown in figure 1.
The story of the experimental search for the l=0 singlet states and the efforts of theoreticians to explain the successive and contradictory experimental results, is an interesting one. The table summarises the history of the ground and first excited singlet states, ηc and ηc′ (or ηc(2S)). Δm and Δm′ give the hyperfine splittings, or in other words, the mass differences between these singlet states and the related triplet states.
In the late 1970s, experiments found Δm, the mass difference between the ηc and the J/Ψ, to be about 300 MeV (Braunschweig et al. 1977, Apel et al. 1978). However, this result was difficult to swallow for two reasons. First, naive estimates of the hyperfine splitting give much smaller values, and second, the radiative decay width J/Ψ → ηc + γ is proportional to Δm³, so any theory correctly predicting Δm ~300 MeV would overestimate this width. This is why most theoreticians were extremely sceptical about the result from the DASP experiment (Braunschweig et al. 1977). Fortunately, the Mark II and Crystal Ball groups found in J/Ψ → ηc + γ what we believe is the true ηc, with a splitting of Δm = 119 MeV (Himel et al. 1980, Partridge et al. 1980).
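The Δm³ dependence is just the phase-space factor of a magnetic dipole (M1) transition; schematically, with photon energy Eγ ≈ Δm,

```latex
\Gamma(J/\Psi \to \eta_c\,\gamma)\;\propto\;\alpha\,\frac{E_\gamma^{3}}{m_c^{2}},
```

so a splitting of 300 MeV instead of ~120 MeV would inflate the predicted width by roughly (300/120)³ ≈ 16 (our arithmetic).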
A little later, the Crystal Ball group also found a candidate for the ηc′, again via radiative decay, but from the Ψ′ (Edwards et al. 1982). The splitting of Δm′ ≈ 90 MeV they found is acceptable – for instance Wilfried Buchmuller, Yee Jack Ng and Henry Tye found 80±10 MeV in a QCD-inspired calculation (Buchmuller, Ng and Tye 1981). However, the ratio Δm′/Δm seems difficult to accept. First, a naive estimate using a Fermi-like hyperfine interaction suggests that Δm′/Δm is related to the ratio of the leptonic widths of the Ψ′ and J/Ψ. This gives Δm′/Δm ≈ 0.6±0.1, which is hardly consistent with the Crystal Ball result. In addition, there are effects due to the coupling of the ccbar bound states to the charm-anticharm meson pairs, D(*)Dbar(*), as we pointed out in 1981. The coupling to the very close DDbar threshold is allowed for a vector state, so this should make the Ψ′ lower than predicted by naive potential-model calculations. The pseudoscalar ηc′, on the other hand, does not couple to DDbar and so is shifted much less. Using the Cornell model (Eichten et al. 1978, 1980), we found that this effect reduces Δm′ by at least 20 MeV.
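The link between Δm′/Δm and the leptonic widths can be made explicit (a textbook estimate under the usual contact-interaction assumptions): the hyperfine splitting of an S-wave level is proportional to |ψ(0)|², while Γ(V → e+e–) ∝ |ψ(0)|²/m_V², so

```latex
\frac{\Delta m'}{\Delta m}
\approx \frac{|\psi_{2S}(0)|^{2}}{|\psi_{1S}(0)|^{2}}
\approx \frac{\Gamma_{ee}(\Psi')\,m_{\Psi'}^{2}}{\Gamma_{ee}(J/\Psi)\,m_{J/\Psi}^{2}},
```

which, with the measured leptonic widths, gives the quoted 0.6 ± 0.1.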
The puzzling Crystal Ball result on ηc′ was never confirmed. Searches for the ηc′ in formation experiments in proton-antiproton collisions, first at the ISR and then at the Fermilab accumulator, were unsuccessful. This may be because these experiments had too high a resolution in energy, and perhaps because of prejudice that the ηc′ would not be too close to the Ψ′. The coupling of the ηc′ to proton-antiproton might also be less favourable than for ηc. Meanwhile the ηc was seen at LEP, in its γγ decay mode, but no signal was found for the ηc′.
Charmonium can also be investigated through B decay, as proposed by several authors (e.g. Eichten et al. 2002). The Belle experiment at KEK, whose primary purpose is to study the CP violation in B decays, has seen both the ηc and ηc′ in two distinct channels, which we can call Belle I and Belle II. The BaBar experiment at SLAC should also produce similar results.
In Belle I, the decays B → Kηc(ηc′) → K KS K–π+ reveal two main peaks, as in figure 2 (Choi et al. 2002). The first is clearly the ηc, while the second is most likely the ηc′, as the background from B → K + J/Ψ or K + Ψ′ is expected to be rather small. This implies that m(ηc′) = 3654±6 MeV, i.e. Δm′ = 32±6 MeV, which is much smaller than the Crystal Ball value, and even smaller than we expected from the effect of the coupling to charm-anticharm channels.
In Belle II, the reaction studied is e+e– → J/Ψ + ccbar, i.e. double ccbar production with one pair constrained to match the J/Ψ (Abe et al. 2002). The spectrum of masses recoiling against the J/Ψ gives a set of ccbar bound states. If the process takes place via e+e– annihilation into one photon, charge conjugation conservation strictly forbids J/Ψ and Ψ′ in the recoil, and three peaks corresponding to the ηc, χc0 and ηc′ can be seen (figure 2). This time Δm′ is somewhat higher, about 60 MeV, which is more consistent with our 1981 expectation. On the other hand, the ηc is shifted with respect to the standard value of the Particle Data Group. The imperfect agreement between Belle I and Belle II will hopefully disappear in the final analysis, and in particular it should be decided whether or not a background B → Ψ′ + K or (unlikely) e+e– → J/Ψ + Ψ′ contributes to the observed spectrum. In any case, we are very close to a complete clarification of the ηc′, with a mass much closer to the Ψ′ than was indicated by the Crystal Ball group.
Theory also predicts a ccbar singlet P-state called the hc. Paradoxically, the corresponding state in positronium was only observed relatively recently (Conti et al. 1993). The first indications for the hc came from the R704 experiment at the ISR, in which a cooled antiproton beam collided with a gas-jet target (Baglin et al. 1986). This was at the time when the ISR was to be stopped and dismantled. At the request of one of us (A M), a few extra days of running were granted by the director-general, Herwig Schopper, but no firm conclusion could be reached. Years later a similar experiment, E760, was carried out at Fermilab and gave strong indications of the hc at a mass that happens to agree with the most naive prediction, i.e. the weighted average of the triplet P-state masses (Armstrong et al. 1992). However, these indications have disappeared in the latest experiment, E835 (Patrignani et al. 2001). Assuming that E760 was right, it is tempting to wonder whether the same scenario will repeat itself with the Higgs search: an indication in the last runs of LEP of a Higgs at 115 GeV, which might be right and so be seen definitively years later at the LHC.
Quantum chromodynamics (QCD), the theory of the strong force, is a marvellous example of how the physical laws that describe a large variety of complex phenomena can be condensed into a very simple and elegant mathematical structure, known as non-abelian gauge theory. The fundamental equations can be written down in a single line, yet they describe how the nucleons acquire their masses from “nothing”, or how two nucleons smashed together at high energies disintegrate into dozens of new particles bundled into “jets” – the visible manifestations of the quarks and gluons. The fundamental equations are extremely hard to solve. At higher energies where the strong force weakens, the equations may be expanded in a perturbation series, where each new term demands more sophisticated analytical or numerical methods of computation. At energies of the order of the proton mass, the equations can only be solved by large-scale computers.
From 24-27 September 2002, approximately 130 high-energy physicists gathered in Hamburg at the annual DESY Theory Workshop to discuss their recent advances in the development of computational methods, and their successes (and sometimes failures) in comparing their calculations with experiments that continue to become more precise or to explore new phenomena. As emphasized by many talks at the workshop, these efforts go far beyond understanding how hadronic phenomena work. As the high-energy community gathers its resources to attack the fortress of the Standard Model, which has stood unconquered for the past 30 years, the strong interaction is a faithful, though not always loved, companion. Whether protons collide at the LHC to produce perhaps the Higgs boson or new particles, whether B mesons decay at SLAC and KEK to reveal the subtle asymmetry of matter and antimatter, or whether the anomalous magnetic moment of the muon is measured to a part in a billion, an accurate computation of strong-interaction effects will be required to finally ascertain a failure of the Standard Model.
Inside the proton
What is a proton? The answer is more difficult than just “three quarks”. In high-energy collisions the proton appears as a bunch of quarks and gluons collectively called partons. The (longitudinal) momentum distributions of these partons are fundamental input to the computation of any proton collision. James Stirling of Durham reviewed the current knowledge of parton distributions and concluded that the global fit is satisfactory. Methods are now being developed to assign reliable errors to these functions, which may soon be known with higher (“next-to-next-to-leading order”) theoretical accuracy. Closely related to the conventional parton distributions are the diffractive parton distributions, which give the probability of finding a parton in the proton under the additional condition that the proton stays intact in the collision. One of the surprising results of DESY’s HERA experiments is that this probability remains large, even at the highest momentum transfers. The physical interpretation of this was provided by John Collins of Penn State, who also emphasized that models of soft interactions in diffractive scattering should be taken as models for the corresponding parton distributions.
At high collision energies the number of partons with small momentum fraction x of the proton increases rapidly and the conventional, perturbative equations should break down. They should be replaced by an equation that sums logarithms in x, known as the BFKL equation. In the leading approximation, the solution to the BFKL equation overestimates the growth of high-energy cross sections. Victor Fadin of Novosibirsk discussed the progress made towards a next-to-leading approximation. With many parts now being completed, the calculation of the so-called photon impact factor is required before a comparison with experiments can be attempted. Whatever the result, at very small momentum fraction the growth of parton densities must stop. As explained by Alfred Mueller of Columbia, this occurs for quarks because the Pauli principle limits the number of fermions per phase space cell. For gluons, however, “saturation” already occurs classically when the density of gluons is so high that their combined field strength is non-perturbatively large. Mueller discussed the applicability of a classical description and estimates of the saturation scale during various stages of the collision process under the conditions at HERA and at Brookhaven’s RHIC collider.
The experimental verification of these phenomena at HERA remains ambiguous, according to Brian Foster of Bristol. He also showed an impressive amount of jet data – all in agreement with QCD computations – and demonstrated that the strong coupling constant can now be determined from electron-proton collisions with high accuracy. The HERA collider has become a veritable QCD factory, providing data over many orders of magnitude in momentum transfers and for many final states that probe different aspects of the strong interaction. Understanding the transition to soft, non-perturbative physics remains one of the most difficult challenges. This transition appears to be surprisingly smooth. Hans-Günther Dosch of Heidelberg showed that a simple model which views the QCD vacuum as an ensemble of Gaussian gauge field fluctuations, allows many features of soft hadronic interactions at high energy to be related to properties of the QCD vacuum.
The spin of the proton is 1/2, but how is it distributed over the various partons? A decade ago the “spin crisis” was proclaimed, after it was observed that the quarks carry only a fraction of the total spin. The talks by Elke-Caroline Aschenauer of DESY and Daniel Boer of Amsterdam highlighted that experimentalists and theorists still struggle to account for the remainder. For example, the gluon’s contribution to the spin remains largely unknown and its direct determination requires less inclusive measurements than polarised deep-inelastic scattering. Getting hold of orbital angular momentum is even harder and demands the introduction of new theoretical concepts (“generalized parton distributions”), which can be constrained by observing Compton scattering of virtual photons off protons.
Lattice calculations
Perturbative approximations are not adequate for ab initio calculations of hadron masses or, more generally, hadronic matrix elements, which are governed by strong-coupling physics. In these cases numerical simulation of QCD on a discrete space-time lattice provides the only systematic approach. Lattice QCD benefits greatly from the increasing speed of computers, where the scale of machines is currently set by teraflops (10¹² operations per second). However, as emphasized by several speakers at this workshop, conceptual progress and the improvement of simulation algorithms play at least an equally important role.
Most calculations are still performed with a truncated version of QCD, which neglects quark-antiquark quantum fluctuations. Allowing quarks to fluctuate is costly, as discussed by Sinya Aoki of Tsukuba, and forces the use of smaller, coarser space-time lattices. Aoki showed that the computed hadron spectrum is in much better agreement with observations for dynamical quarks, but pointed to the need for better algorithms that would allow the simulation of light quarks with masses closer to their real values.
A different avenue was pursued by Hartmut Wittig of DESY, who reviewed the various methods of putting massless (but still non-dynamical) quarks on the lattice. This became a real possibility a few years ago when it was discovered that QCD at finite lattice spacing has an exact symmetry that approaches the conventional chiral symmetry in the continuum limit. Wittig showed that the efforts to put this into practice are now bearing fruit for quantities such as the strange quark mass or the quark condensate, where the chiral behaviour is particularly important. Chiral symmetry is also important for kaon physics, where results from lattice calculations have a large impact on the interpretation of direct and indirect CP-violating effects. Direct CP violation in kaon decay to two pions poses a particular challenge to lattice theorists, since the relevant matrix elements include final-state interactions. Chris Sachrajda of Southampton described new ideas for extracting these matrix elements by exploiting the finite size of the lattice.
Another impressive demonstration of progress in lattice gauge theory was given by Martin Lüscher of CERN. Using a new algorithm that allows the computation of large Wilson loops, he showed that the large-distance behaviour is consistent with the assumption that the low energy limit of SU(N) gauge theory is a bosonic string theory. Moreover, the perturbative regime joins smoothly to the string regime at a distance of about 0.5 fm.
Charm and bottom quarks are produced in large numbers at today’s high-energy colliders. The theory of single-inclusive heavy meson production and of quarkonium production was reviewed by Bernd Kniehl of Hamburg, who described the efforts to treat heavy-quark mass effects correctly at all energy scales. He also concluded that, with the exception of polarisation measurements, the non-relativistic factorization approach to quarkonium production appears to be supported by existing data. A particularly interesting quarkonium system consists of a top-antitop pair. Although this system decays after little more than 10⁻²⁵ s, the strong Coulomb force the quarks exert on each other leaves a visible enhancement in the energy dependence of the production cross-section. This can be used (at an electron-positron collider) to determine the top quark mass to an accuracy of better than a per mille. Thomas Teubner of CERN showed that many of the theoretical difficulties involved in the calculation of the threshold cross-section have now been solved with non-relativistic effective field theory. Very similar calculations also determine the bottom and charm quark masses from quarkonium systems.
A complementary method uses inclusive heavy quark production in e+e– collisions far above the production threshold. The corresponding hadronic spectral functions also provide an indispensable source of information for other fundamental constants, such as the strong coupling, the hadronic contribution to the electromagnetic coupling (at the scale of the Z mass) or the anomalous magnetic moment of the muon. The accuracy needed for these quantities is reflected in the development of sophisticated symbolic manipulation programs, which enable the computation of thousands of multi-loop Feynman diagrams. Matthias Steinhauser of Hamburg discussed recent advances, particularly in including quark mass effects and their impact on precision determinations of the coupling constants. Similar methods of algebraic reduction of Feynman integrals are now also being applied for jet physics, where many further difficulties come from the more complicated kinematics. The new frontier, stated Nigel Glover of Durham, is set by next-to-next-to-leading order calculations. He explained that while all the two-loop virtual effects are now completed, the construction of a usable Monte Carlo program that combines them with bremsstrahlung effects will probably require another few years of hard work.
QCD and the Standard Model
Many processes that would otherwise provide clean probes of fundamental interactions are ultimately sensitive to QCD through quantum fluctuations. One particularly well known example is the flavour-changing neutral current process B → Xsγ, reviewed by Mikolaj Misiak of Warsaw, where strong interaction effects double the predicted branching fraction. Experiment and theory currently agree, but to what precision can one compute strong interaction effects? Misiak explained how quark mass renormalization prescriptions influence the prediction, but concluded that the dominant uncertainties can still be reduced by perturbative calculations. They would however be very difficult. The discussion was continued with a review of exclusive heavy meson decays, where the problem of hadronization is even more direct. Understanding decays such as B → ππ, which can now be studied in detail at the B-factories, is crucial in order to ascertain the (in)consistency of the Kobayashi-Maskawa mechanism for CP violation in the quark sector. Gerhard Buchalla of Munich reported progress in applying QCD factorization methods to exclusive B decays, which have led to new insights into dynamical details of these reactions.
The anomalous magnetic moment of the muon has remained a hot topic since the announcement of the result by Brookhaven in 2001 (CERN Courier April 2001 p4 and September 2002 p8). The experimental value, precise to 0.7 ppm, is not quite in agreement with the theoretical result, but whether the discrepancy is the first signal of a breakdown of the Standard Model is a matter of debate. The blame could once more be on the strong interaction. The current status of theoretical calculations was presented by Eduardo de Rafael of Marseille and Fred Jegerlehner of DESY. A controversy over the sign of the so-called light-by-light scattering contribution, a tiny but relevant quantum effect, has now been settled, bringing the prediction into better agreement with the data. However, the size of the effect itself remains quite uncertain. Another important development concerns hadronic photon vacuum polarisation effects, which must be determined from low-energy data, in particular around the ρ meson resonance. This year has seen new results from CMD-2 and from an analysis of τ-decays at LEP. Unfortunately, the two do not agree, the difference being more than twice the estimated error. Depending on the input, the theoretical muon anomalous magnetic moment is now 1 to 3 standard deviations below Brookhaven’s experimental result.
QCD also matters in the production of new particles, foremost the Higgs boson. Michael Krämer of Edinburgh reviewed higher order calculations of Higgs, Higgs with top-antitop, and supersymmetric particle production. All these processes are now under good theoretical control. Krämer emphasized, however, that the calculation of signal processes must be accompanied by an equally detailed understanding of backgrounds.
Extreme conditions
The investigation of quark or nuclear matter under extreme conditions of temperature and density has a long history, with possible applications to neutron stars and to quark-to-nuclear matter phase transitions in the early universe. With the advent of heavy-ion collisions, most recently (and on-going) at Brookhaven’s RHIC collider, these phenomena are now subject to terrestrial exploration. An interpretation of the first RHIC results was given by Miklos Gyulassy of Columbia, who described the geometric and saturation effects that appear in the collisions of large nuclei. Some of these effects are clearly seen in the data. He also explained how the pattern of energy loss should reveal information about the matter density in the collision region. While the dynamics of a nuclear collision is extremely complicated, the thermodynamics of strongly interacting matter is amenable to simulations in lattice QCD. The critical temperature and energy density at the phase transition are now rather well determined, said Edwin Laermann of Bielefeld, at least in the approximation that all quarks are massless. The influence of the strange quark mass on the phase diagram is a very interesting question. Recent theoretical developments concern lattice simulations at finite chemical potential. The difficulty lies in numerical cancellations that occur for a complex action. Laermann explained that it is now possible to investigate small chemical potentials using expansions, reweighting methods or analytic continuation from imaginary chemical potentials.
The phase diagram in the direction of chemical potential was illustrated in the concluding talk by Krishna Rajagopal of MIT, who showed that gluon exchange makes the Fermi surface unstable, rendering dense quark matter a BCS-like colour superconductor. Many more phenomena can occur, depending on the number of quark flavours or the strange quark mass, such as a condensation of colour and flavour quanta in an intertwined pattern. The workshop concluded with the tantalizing speculation that quark matter could actually be crystalline, and a review of the possibilities of detecting this phenomenon in supernova explosions or pulsar quakes.
The plenary talks were preceded by introductory lectures on Deep Inelastic Scattering and Jets (Keith Ellis of Fermilab), Lattice QCD (Karl Jansen of NIC and DESY), Non-perturbative Methods (Andreas Ringwald of DESY) and Finite-temperature Field Theory (Dietrich Bödeker of Bielefeld), which were very well received both by students and experts. The interest of the community in strong interaction physics was also reflected by around 35 parallel session talks given by young researchers from different countries.
The first workshop of the recently founded Quarkonium Working Group (QWG) took place at CERN on 8-10 November 2002, nearly 30 years after the observation of charmonium – the first of the heavy quarkonium states. Almost 100 experimentalists and theorists from places as far away as Japan and Hawaii came together to discuss recent advances and open problems in the field of quarkonium physics, which should eventually also include studies of toponium. The topics covered ranged from the spectroscopy and decays of quarkonium to its production in quark-gluon plasma. With 58 plenary talks, parallel talks and discussion sessions, this successful first workshop has already achieved the QWG’s first goals: to bring together experts from the various branches of the field, to clarify the status of experiments and theory, and to formulate the key questions that should be addressed in the framework of the QWG. Specific projects are now being organized in sub-groups, and future meetings as well as a comprehensive write-up are planned.
The first results from six months of data-taking by the KamLAND experiment in Japan indicate that electron antineutrinos from distant nuclear reactors are “disappearing” on their way to the detector. This is the first observation of such a disappearance in a reactor-based experiment. The results support evidence from solar neutrino experiments for neutrino oscillations, in which the electron neutrinos change into another type.
KamLAND, which consists primarily of a 13 m diameter “balloon” filled with liquid scintillator viewed by more than 1800 photomultiplier tubes, is located on Japan’s main island of Honshu, near the city of Toyama. It is exposed to electron antineutrinos emitted by some 51 nuclear reactors in Japan, plus 18 in South Korea, at a variety of distances. While experiments detecting solar neutrinos have for more than 30 years found fewer electron neutrinos reaching the Earth than expected, there had been no evidence for a similar effect in experiments studying neutrinos from nuclear reactors. However, the mounting evidence for oscillations from experiments with solar and atmospheric neutrinos shows that in these earlier experiments the detectors were simply too close to the reactors to observe an effect. Now KamLAND has found a clear deficit in the number of electron antineutrinos arriving from an average distance of about 180 km.
KamLAND detects electron antineutrinos through the inverse beta-decay process, in which an electron antineutrino interacts with a proton to create a positron and a neutron. For data collected over 145.1 days between March and October 2002, the experiment recorded 54 electron antineutrino events in the energy range 1-10 MeV, as opposed to around 86 events predicted in the absence of oscillations. More precisely, the ratio of the number of observed inverse beta-decay events to the number expected without disappearance was found to be 0.611 ± 0.085 (stat) ± 0.041 (syst), for antineutrino energies greater than 3.4 MeV.
These results agree well with recent best-fit predictions of the large mixing angle (LMA) oscillation solutions, and indeed reduce the allowed LMA region for the oscillation parameters sin²2θ and Δm². The best fit to the KamLAND data in the physical region for the parameters gives sin²2θ = 1.0 and Δm² = 6.9 x 10⁻⁵ eV². Further analysis with more data should reduce the errors and provide a higher-precision measurement of these key parameters.
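These parameters feed the standard two-flavour survival probability. A minimal sketch in Python: the 180 km baseline is the average quoted above, while the 4 MeV energy is a representative choice of ours, not a fit.

```python
import numpy as np

def survival_probability(E_MeV, L_km, sin2_2theta, dm2_eV2):
    """Two-flavour electron-antineutrino survival probability.
    The 1.27 factor absorbs hbar, c and the unit conversions
    (dm2 in eV^2, L in km, E in GeV)."""
    E_GeV = E_MeV * 1e-3
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Best-fit values from the text; baseline and energy are illustrative
p = survival_probability(E_MeV=4.0, L_km=180.0,
                         sin2_2theta=1.0, dm2_eV2=6.9e-5)
print(f"P(nu_e-bar survives) ~ {p:.2f}")   # ~0.5 at this energy
```

Averaged over the actual reactor spectrum and the spread of baselines, a suppression of this order is consistent with the observed ratio of 0.611.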
At the end of last year, the first images from the INTEGRAL gamma-ray satellite were released to enthusiastic astronomers. The first observations were of Cygnus X-1, a nearby black hole, just 10,000 light-years from Earth. Fittingly, the observations coincided with the emission of a gamma-ray burst from that very same region of sky.
Gamma-ray bursts are among the exotic and poorly understood phenomena that INTEGRAL was launched to investigate. They are by far the most powerful events known to have occurred since the Big Bang, and the mechanisms fuelling them are still unknown. Right from the moment of first light, INTEGRAL has shown promise of many interesting discoveries to come.
Launched last autumn, INTEGRAL is designed to detect hard X-ray and gamma-ray sources in the energy range 15 keV-10 MeV. The satellite contains an imager and a spectrometer, plus X-ray and optical monitors. Gamma-ray sources are often highly variable, fluctuating on timescales of minutes or seconds, despite their size. This makes it crucial to record information simultaneously at different wavelengths.
Picture of the month
The light from these distant galaxies has been bent by a huge cluster of intervening matter which acts as a gravitational lens. The lensing helps bring the distant universe into focus, revealing faint galaxies that would otherwise be missed. The image was taken by the Hubble Space Telescope’s new Advanced Camera for Surveys with a 13 h exposure time. Some of the distant galaxies in the image are thought to be twice as faint as those on the original Hubble Deep Field images, and to have a redshift greater than 6. This is a new milestone for the Hubble, improving once more our view of the early universe. (NASA/ESA.)
The natural constants are, to some extent, anomalous features of the theories considered today. On the one hand they are needed to describe the theories, but on the other hand nobody understands their rather strange values. Indeed, no-one knows if they are accidents, or whether they can be calculated from some basic principles – a question that ranks in the top 10 unsolved problems for string theorists.
Recent observations in astrophysics suggest that α, the fine structure constant (see “The magic number” box), which is of fundamental importance in describing the electromagnetic interaction, was in earlier epochs a little smaller than it is today. A research group from Australia, the UK and the US analysed the spectra of distant objects, obtained in particular at the Keck I telescope in Hawaii. They studied around 150 quasars, some of them about 11 billion light-years away (Webb et al. 2001). The redshifts of the objects varied between 0.5 and 3.5, which corresponds to ages varying between 23 and 87% of the age of the universe. The team used the so-called “many multiplet” method – in particular on the spectra of iron, nickel, magnesium, zinc and aluminium – and found a value of α at early times close to 1/137.037, as opposed to near 1/137.036, as is observed today. This is a small departure – the observations indicate Δα/α = (-0.72 ± 0.18) x 10⁻⁵ – but it could have important consequences for theory.
The idea that certain fundamental constants are not constant at all, but have a certain cosmological time dependence, is not new. In the 1930s the idea was discussed by Paul Dirac (Dirac 1937) and by Arthur Milne (Milne 1937), although with respect to the gravitational constant. Dirac wrote his article during the holiday that followed his marriage, prompting his colleague George Gamow to remark: “That happens if people get married.”
At around the same time, Pascual Jordan discussed the possibility that other constants could also be time-dependent (Jordan 1937; 1939), but he refused to consider that the constant of the weak interactions or the ratio of the electron and proton masses might be time-dependent. Later, Lev Landau considered the possibility of a time dependence of α in connection with the renormalization of the electric charge (Landau 1955).
We can also say something about the time dependence of α by studying the remains of the natural reactor found near Oklo in Gabon, west central Africa, which was in operation about 2 billion years ago. The isotopes of the rare earths, for example of samarium, were produced there by the fission of uranium. The distribution of the isotopes observed today is consistent with calculations assuming that the isotopes were exposed to a strong neutron flux. The value of α that one can deduce agrees rather precisely with the value observed today. According to the calculations by Thibault Damour and Freeman Dyson (Damour and Dyson 1996), the change of α has to be smaller than about 10⁻¹⁷ per year. Taking the astrophysics values and the Oklo data together, one arrives at the curious possibility that the value of α increased in the early universe by a few parts in 10⁵, but has remained constant during the past 2 billion years.
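A rough rate comparison (our arithmetic) makes the tension explicit: spread over the roughly 10 billion years of look-back time of the quasar sample, the measured shift corresponds to an average rate of

|Δα/α|/Δt ≈ (0.72 × 10⁻⁵)/(10¹⁰ yr) ≈ 0.7 × 10⁻¹⁵ per year,

about a hundred times larger than the Oklo limit of 10⁻¹⁷ per year for the past 2 billion years.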
However, the significance of the Oklo data becomes less clear if, besides a change of α, changes of other parameters are also considered – for example, the parameters of the strong interaction. The limit for the change in α comes from the observation that the cross-section for the scattering of thermal neutrons off samarium-149 is dominated by a nuclear resonance. According to the experimental data, the position of this resonance cannot have shifted appreciably during the past 2 billion years, and this limits the change of α. Because of the Coulomb repulsion in the nucleus, an increase of α would lead to an increase in the energy of the resonance. However, a change of the strong coupling constant, αs, could easily compensate for this effect.
Observing a time dependence of α would certainly be an important, if not spectacular, result, but a measure of scepticism should be retained. If the fundamental constants really do depend on time, rather severe consequences are to be expected for cosmological evolution since the Big Bang. Nevertheless, the data should be taken seriously, as there are no strong theoretical arguments why the constants should be absolutely constant.
Grand unification
In the Standard Model of the elementary particles, the overall gauge group is given by SU(3) × SU(2) × U(1), and the electromagnetic and weak interactions are described by the subgroup SU(2) × U(1). Both the Z boson and the photon are superpositions of the neutral SU(2) component and the U(1) boson. This means that the electromagnetic coupling constant e, and with it the fine structure constant, is not a basic coupling constant. It is related to the basic coupling constant g₂ of the SU(2) theory by the relation e = g₂ sinθW. Experiments give the value of the weak angle, renormalized at the mass of the Z boson, as sin²θW(Q² = MZ²) = 0.23113 ± 0.00015.
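In natural units, and writing g′ for the U(1) coupling (standard definitions, spelled out here for convenience):

e = g₂ sinθW = g′ cosθW,  α = e²/4π.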
The three coupling constants of the strong and the electroweak interactions vary with energy, but they converge if they are extrapolated to very high energies (about 10¹⁶ GeV). This is precisely what one expects if the three interactions are unified. Such a “grand unification” is realized if the gauge group of the strong interaction, i.e. the colour group SU(3), and the two gauge groups of the electroweak interactions, SU(2) and U(1), are subgroups of a simple group that unifies the three interactions.
Two groups are of particular interest – SU(5) (Georgi and Glashow 1974) and SO(10) (Fritzsch and Minkowski 1975). The group SU(5) has the property that the fermions of one generation are described by two representations. The group SO(10) has an interesting property: the leptons and quarks of one generation can be described by a single representation, the so-called spinor or 16-representation. For the fermions of the first generation, for example, this contains six quarks (u and d in three colours) and six antiquarks, together with the electron, the positron, a left-handed electron-neutrino and a right-handed electron-neutrino. Note the introduction, in addition to the normal left-handed neutrino, of a right-handed neutrino, which does not appear in the normal weak interaction. Its existence, however, is important for the appearance of a mass for the neutrino. In fact, in the SO(10) theory one expects in general that neutrinos have a mass, in accordance with evidence from current experiments.
The coupling constants of the Standard Model seem to converge if extrapolated to high energies. It turns out that in the SU(5) model they do not come together at one point, but in models based on the SO(10) group a convergence can be achieved, since in those theories a new energy scale besides the unification energy plays a role at high energies. However, one can also achieve a convergence of the coupling constants in the SU(5) model if supersymmetry is realized at energies above about 1 TeV. The contributions of the supersymmetric particles change the renormalization coefficients so that a convergence takes place at about 10¹⁶ GeV.
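As a back-of-envelope illustration of this statement (our sketch, not part of the original analysis; the input values and beta coefficients are standard textbook numbers), one can run the three inverse couplings at one loop from the Z mass:

import numpy as np

# One-loop running of the inverse couplings:
# 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / 2pi) * ln(mu / MZ)
MZ = 91.19                                  # Z mass in GeV
inv_alpha = np.array([59.0, 29.6, 8.5])     # 1/alpha_1 (GUT-normalized), 1/alpha_2, 1/alpha_3 at MZ
b_sm   = np.array([41/10, -19/6, -7.0])     # Standard Model coefficients
b_susy = np.array([33/5,   1.0,  -3.0])     # supersymmetric (MSSM) coefficients

def run(b, mu):
    """Inverse couplings at scale mu (GeV), one loop, no thresholds."""
    return inv_alpha - b / (2 * np.pi) * np.log(mu / MZ)

mu = 1.5e16  # GeV, roughly the unification scale quoted below
print("SM:  ", run(b_sm, mu))    # stays spread out: no common meeting point
print("SUSY:", run(b_susy, mu))  # all three close to ~24

With the supersymmetric coefficients the three inverse couplings come out nearly equal (about 24, i.e. αun ≈ 0.04, close to the value quoted below); with the Standard Model coefficients they fail to meet at a common point.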
If we take the idea of grand unification seriously, it implies that the variation of α in time should go parallel to a variation in time of the unifying coupling constant gun – otherwise the grand unification would only work at a particular time, which does not make much sense. Consequently we would expect that all three coupling constants g1, g2 and g3, would be time-dependent. Of particular interest here is a time dependence of the QCD coupling, i.e. of αs, since this coupling determines the hadronic mass scale and many other parameters in hadronic and nuclear physics.
Consider now the behaviour of αs in lowest order only. It is given by the renormalization group equations as follows:
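αs(µ²) = −4π/(β₀ ln(µ²/Λs²))

(the standard lowest-order expression, written with the sign convention for β₀ defined below).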
Here µ is a reference scale, β₀ = −11 + (2/3)nf (nf is the number of quark flavours), and Λs is the QCD scale parameter.
Experiments, especially the measurements carried out at LEP, give αs = 0.116 +0.003/−0.005 (exp.) ± 0.003 (theory). A typical value for the scale parameter is Λs = 213 +38/−35 MeV. Of course, if αs is not only a function of the reference scale but also of time, then the scale parameter Λs also varies with time. We find for the time dependence:
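(1/Λs) dΛs/dt = ln(µ/Λs) (1/αs) dαs/dt

(obtained by differentiating the lowest-order expression at fixed reference scale µ).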
The relative time dependencies are related by δΛ/Λ = (δαs/αs) ln(µ/Λ). It follows that the relative change of αs cannot be uniform, i.e. identical for all reference scales, but must change logarithmically as the reference scale changes. We could, for example, consider a relative change of αs at very high energies, say close to the energy where grand unification sets in. The corresponding change of Λ would then be larger by a factor ln(µ/Λ) ≈ 38.
Further time dependencies
In QCD the proton mass, as well as all other hadronic mass scales, is proportional to Λ if the quark masses are neglected. In fact the masses of the light quarks, mu, md and ms, are different from zero, but the mass terms contribute only a little to the total mass, typically less than 10%. We shall not consider these contributions, and we shall also neglect a small contribution of electromagnetic origin to the nucleon mass.
So if the QCD coupling or the QCD scale parameter changed in time, we would expect a corresponding change in time of the nucleon mass and of the masses of the atomic nuclei (Calmet and Fritzsch 2002). Such a change could be observed through a measurement of the mass ratio me/mp. Since a change in the QCD parameters would not influence the electron mass, the result would be a change in this mass ratio.
Independent of the details of the unification scheme, one would expect that a time variation of the couplings would in particular imply a time variation of the unified coupling constant, defined for example at the point of unification. To be specific, consider as an example the SU(5) theory with supersymmetry, broken at about 1 TeV to yield the Standard Model. The evolution of the three gauge couplings is shown in figure 1. The unification takes place at ΛGUT = 1.5 × 10¹⁶ GeV, where the coupling constant is αun = 0.03853.
A variation in time can occur through a time dependence of the unified coupling constant, but also through a time dependence of the energy at which unification takes place. In the case where only the coupling constant varies with time, one finds that the time changes of α and αs are related. In fact, the two are linked by the ratio (8/3) × (α/αs), which is about 1/10; that is, the time change of the strong coupling constant is roughly an order of magnitude larger than the time change of the electromagnetic coupling constant.
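In symbols, the ratio just quoted reads

δα/α = (8/3) (α/αs) (δαs/αs) ≈ (1/10) (δαs/αs),

so any drift in α implies a roughly tenfold larger relative drift in αs.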
In the case where the coupling constant remains invariant, but the energy at which the unification takes place depends on time, one finds that the time change of the scale Λ of the strong interactions is about 31 times larger than the time change of α, but has the opposite sign. This is interesting. While α increases at a rate of about 10⁻¹⁵ per year, Λ and the nucleon mass both decrease at a rate of about 2 × 10⁻¹⁴ per year. At the same time, the magnetic moments of the proton and of the nuclei would slowly increase, at a rate of about 3 × 10⁻¹⁴ per year.
Future observations
A change in time of the proton mass and of α could be observed through precise measurements in quantum optics. The wavelength of light emitted in hyperfine transitions, for example in the transitions that are measured in caesium clocks, is proportional to α⁴me/Λ, which would be time-dependent via both α and Λ. On the other hand, the wavelength of light that is generated in atomic transitions depends only on α, and would vary in time accordingly. We would expect that light emitted in hyperfine transitions should vary in time about 17 times more strongly than light emitted in normal atomic transitions, but in the opposite direction, i.e. the atomic wavelength becomes smaller with time, while the hyperfine wavelength increases.
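The factor of about 17 can be traced directly (our reading of the argument, taking the scalings above at face value and assuming a Rydberg-like 1/(α²me) dependence for the atomic wavelengths): with δΛ/Λ ≈ −31 δα/α from the scenario above,

δλhfs/λhfs = 4(δα/α) − δΛ/Λ ≈ +35 (δα/α),
δλat/λat ≈ −2 (δα/α),

so the two fractional changes indeed have opposite signs, and their ratio is about 17.5.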
The second is currently defined as the duration of 9,192,631,770 cycles of the microwave radiation emitted in the hyperfine transition of caesium-133. If Λ were to change in time, it would mean that the flow of time measured with caesium clocks does not fully correspond to the flow of time tested in atomic transitions. Experiments to look for an effect of this kind will soon be carried out at the Max Planck Institute for Quantum Optics in Munich, under the leadership of Theodor Hänsch.
If such an effect is discovered, it will be important to determine the sign and magnitude of the double ratio R (equation 2). If one obtains R ≈ −20, it would be a strong indication for the unification of the strong and electroweak interactions. Furthermore, this value would be of great interest for a better understanding of any changes of the constants of nature with time.
The fine structure constant α is composed of e, h/2π and c (in Gaussian units, α = 2πe²/hc). Thus, if α depends on time, at least one of these numbers must depend on time. Today we usually start from the hypothesis that h/2π and c are fundamental units, which in suitable systems of units can also be set to 1. A time dependence of α would then correspond to a change of e.
In the theories of “superstrings” there is in fact an additional motivation for believing that fundamental constants are not really constant. In these theories, dimensionless coupling constants such as α are related to functions of vacuum expectation values of scalar fields, which could easily depend on time. Furthermore, a time dependence could also easily arise if, besides the three space dimensions, there are further hidden dimensions.
It would be particularly interesting to obtain information about coupling constants such as α or αs in the early universe. A direct measurement is not possible, but recent measurements of the cosmic microwave background, which has its origin in the early universe, show no time dependence of α within an accuracy of about 10%. Data from the MAP satellite, launched in 2001, will allow us to improve this limit or to find an effect. Further hints of a time dependence of α or αs, or both, would have important consequences.
Last September, the 265 seats of Chicago’s Adler Planetarium, on the Lake Michigan shoreline, were filled with participants at the COSMO-02 International Workshop on Particle Physics and the Early Universe. The conference was co-organized by the Center for Cosmological Physics at the University of Chicago, the Adler Planetarium and the Theoretical Astrophysics Group at Fermi National Accelerator Laboratory. COSMO conferences provide a forum for particle physicists, cosmologists and astrophysicists to discuss new results in the exciting and fast-moving field of particle astrophysics and cosmology. One of the new features this year was the presence of string theorists, showing that the latest cosmological observations have attracted the attention of a very large and diverse physics community.
The conference opened with a talk by Wendy Freedman of Carnegie, who addressed the recent emergence of a “standard model” in cosmology. From an observational point of view, our universe can be described by only a few parameters, such as the Hubble “constant” and the contributions of the different constituents of the universe to the total energy density. As Robert Kirshner of Harvard, David Weinberg of Ohio and Tim McKay of Michigan pointed out, a combination of the results of different cosmological observations already allows us to measure those parameters with unprecedented accuracy (by cosmological standards). Moreover, ongoing or planned projects, such as large-scale structure catalogues (2dF, SDSS), cosmic microwave background maps (MAP, Planck) and supernova surveys (ESSENCE, SNAP), will soon allow further significant reductions in the error bars. These precision measurements will help us to refine our understanding of the universe, and will certainly shed light on what is currently the most challenging puzzle for cosmologists and particle physicists – the nature of dark energy, the dominant energy component of the universe today, which causes its expansion to accelerate.
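For orientation (standard definitions, not specific to the talks), the density parameters in question are defined relative to the critical density that makes the universe spatially flat:

Ωi = ρi/ρcrit,  ρcrit = 3H₀²/8πG,

so that a flat universe has ΣΩi = 1, with the sum running over matter, radiation and dark energy.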
On the theoretical side, the standard model of cosmology rests on two pillars: cold dark matter (CDM) and inflation. In a CDM cosmology, most of the matter of the universe consists of non-baryonic, non-relativistic and collisionless particles. Numerical simulations show that the gravitational attraction between these particles yields structures – galaxies, clusters and superclusters – that agree with those observed in the universe, apart from possible discrepancies at subgalactic scales. The potential problems of the CDM scenario and the properties of some alternative scenarios, such as self-interacting dark matter or modified Newtonian dynamics, were critically discussed by Marc Kamionkowski of Caltech and Arthur Kosowsky of Rutgers. At this stage it is still disputed whether the CDM scenario is free of problems, but as the talk by Andreas Albrecht of Davis suggested, it is fair to say that theorists continue to be in the dark regarding dark energy.
Inflation goes on and on
Inflation remains one of the cornerstones of modern cosmology. According to the inflationary paradigm, the early universe experienced a stage of accelerated expansion. As a result of this expansion, inflation produces a homogeneous and flat universe, as confirmed by cosmic microwave background (CMB) measurements. Inflation also explains the origin of the tiny primordial density fluctuations that developed into galaxies and clusters by gravitational instability. David Wands of Portsmouth described how inflation relates these primordial density perturbations to quantum fluctuations of the scalar field that drives inflation. Although there is no theoretically preferred inflationary scenario, most inflationary models make definite predictions about the properties of these primordial perturbations: they should be Gaussian, adiabatic and nearly scale-invariant. These predictions have been confirmed in an impressive series of experiments, and as Lloyd Knox of Davis reported, new CMB missions, such as the MAP and Planck satellites, will further test, scrutinize and constrain inflationary models.
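In the standard slow-roll notation (a textbook summary, not from the talks), “nearly scale-invariant” means a primordial power spectrum

P(k) ∝ k^(ns−1), with ns ≈ 1 and ns − 1 = 2η − 6ε,

where ε and η are the slow-roll parameters of the inflaton potential; exact scale invariance corresponds to ns = 1.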
Essentially the same mechanism that explains the origin of primordial density perturbations – quantum fluctuations of the inflaton field – seems to imply that inflation is eternal. As discussed by Alan Guth of MIT, who also delivered a widely attended public lecture at the Adler Planetarium, an inflating universe resembles a fractal. In a given inflating region of the universe, inflation has a finite lifetime, but at any given moment there are always patches of the universe that continue to inflate. It is unclear whether such a prediction can be experimentally tested, but it certainly offers a dramatic view of the global structure of the universe.
An important confirmation that our theoretical understanding about CMB fluctuations is on the right track came with the announcement by John Carlstrom of Chicago of the first measurement of CMB polarization by the DASI experiment. According to the standard theory, the temperature anisotropies we observe in the CMB are due to acoustic oscillations of the primordial baryon-radiation plasma. If this is true, the light that last scattered at the time of recombination – i.e. the CMB – should be partially polarized. The measurement of such polarization is a success of the standard theory, and represents the first step towards more ambitious measurements of the properties of the CMB polarization. As Alessandra Buonanno of Paris pointed out, the sea of relic gravitational waves that inflation predicts should leave a characteristic imprint on the polarization pattern of the CMB. This imprint could be used to determine the amplitude of gravitational waves produced during inflation, which in turn fixes the energy scale at which inflation took place.
Because of the high energy scale at which inflation is expected to take place (around 10¹⁵ GeV in the simplest models), the primordial perturbations generated during inflation might be our only hope of probing physics close to the Planck scale. This possibility was explored in a plenary talk by Nemanja Kaloper of Davis. Although in some inflationary models Planck-scale suppressed corrections may leave an observable imprint in the primordial spectrum, Kaloper argued that generically such an imprint is expected to be too small to be observable in ongoing experiments. This conclusion was also the subject of a lively debate in the parallel sessions.
Neutrinos, neutralinos and WIMPs
The major experimental accomplishment in particle physics in recent years has been the evidence for non-vanishing neutrino masses from solar and atmospheric neutrinos. This has provided the first solid hint of physics beyond the Standard Model. While neutrino oscillation experiments provide information on the neutrino mass-squared differences, the absolute scale of neutrino masses is so far unknown. To date, as Alexander Dolgov of INFN Ferrara mentioned in his talk, “astronomy opens the best way to measure mν”. Big Bang nucleosynthesis, large-scale structure and the CMB radiation constrain the contribution of massive neutrinos to the total mass density. A recent limit obtained in the 2 Degree Field (2dF) galaxy redshift survey gives an upper bound on the sum of the neutrino mass eigenvalues, Σmi < 1.8 eV. In the near future, the Sloan Digital Sky Survey, combined with the CMB data of the MAP satellite, should reach a sensitivity of Σmν ≈ 0.65 eV. As far as sterile neutrinos are concerned, George Fuller of San Diego devoted an entire plenary talk to their effects on the dynamics of the universe and how cosmology can constrain them.
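Such bounds rest on the standard relation between the neutrino mass sum and the neutrino contribution to the cosmological energy density (a textbook formula):

Ων h² ≈ Σmν / 94 eV,

where h is the Hubble constant in units of 100 km s⁻¹ Mpc⁻¹; the 2dF bound of 1.8 eV thus corresponds to Ων ≲ 0.04 for h ≈ 0.7.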
One of the fundamental unsolved questions of astroparticle physics is the origin of ultra-high-energy cosmic rays, a topic that was reviewed by Günter Sigl of Paris. To understand the acceleration and sky distribution of cosmic rays, a better knowledge of the strength and distribution of cosmic magnetic fields is needed. Sigl stressed that ultra-high-energy cosmic rays with energies above 10¹⁸ eV involve centre-of-mass energies above 1 TeV, which are beyond the reach of accelerator experiments. They thus provide a low-cost laboratory to probe potential new physics beyond the electroweak scale.
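The centre-of-mass figure follows from elementary kinematics. For a proton primary of energy E striking a nucleon at rest (our worked example):

√s ≈ √(2Emp) ≈ √(2 × 10¹⁸ eV × 0.94 × 10⁹ eV) ≈ 4 × 10¹³ eV ≈ 40 TeV,

far beyond the reach of any existing or planned collider.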
The question “How can particle accelerators directly attack major cosmological issues?” was addressed by Joe Lykken of Fermilab. The two main topics about which both theorists and experimentalists in particle physics have much to say are dark matter and baryogenesis. If supersymmetry has anything to do with the stabilization of the electroweak scale, the superparticles are expected to be seen at the LHC. The hypothesis of a neutralino as a dark-matter candidate – also discussed by Leszek Roszkowski of Lancaster – will then be covered by the LHC, with a great degree of complementarity between direct (elastic scattering) and indirect (signals from cosmic annihilation) neutralino searches. The status of other supersymmetric dark-matter candidates was reviewed (sneutrinos: ruled out; gravitinos: safe), as well as that of the recently proposed TeV-mass Kaluza-Klein dark-matter candidate, which will also be probed at the LHC. As for non-accelerator searches for CDM candidates, Maryvonne De Jesus of Lyon reported the results from, and prospects for, the numerous ongoing and planned direct searches for WIMPs via elastic scattering, while Georg Raffelt of MPI, Munich, described the status of axion searches.
Regarding baryogenesis, the theory of electroweak baryogenesis in the Minimal Supersymmetric Standard Model (MSSM), which was reviewed by Mariano Quiros as well as Mark Trodden, has exciting prospects. The very tiny corner of parameter space for which it still works corresponds to a light Higgs and a light stop; these should be found at Tevatron Run II if the MSSM is to be consistent with electroweak baryogenesis.
Other important activities led by high-energy physicists were emphasized at the conference – in particular, B physics will teach us about the sources of CP violation. Still in the domain of flavours, experiments with neutrino beams (such as MiniBooNE at FNAL) will help us to understand neutrino flavours. And finally, we heard that “electroweak precision measurements are not boring”: the measurements of the anomalous magnetic moment of the muon at Brookhaven, of the electroweak mixing angle by the NuTeV collaboration, and of the bottom-quark forward-backward asymmetry at LEP all look like anomalies in the present global fit to electroweak data, and could be a sign of new physics.
Extra dimensions and strings
The field of extradimensional cosmology was well represented in plenary talks by Ruth Gregory of Durham, Lev Kofman of Toronto and Lisa Randall of Harvard. Extradimensional cosmology is very rich, but it is still in its infancy, and there is much left to explore. The evolution of the universe at late times can be described within the context of extradimensional cosmology, or in other words, the presence of extra dimensions can be reconciled with constraints from late-time cosmology. On the other hand, extradimensional cosmology at early times is much more difficult to understand. There is no experimental constraint to guide model-builders overwhelmed by an excessive freedom. Kofman reported new ideas on inflation from extra dimensions (for example colliding branes and radion potentials), as well as recent work on string signatures on cosmological observations.
Regarding attempts by particle theorists to explain dark energy with something other than a cosmological constant, Maxim Perelstein of Berkeley discussed networks of domain walls – quite generic in attempts to go beyond the Standard Model of particle physics. Another proposal to interpret supernovae data, presented by John Terning of Los Alamos, is to introduce photon-axion oscillations in an intergalactic magnetic field as a way of rendering supernovae dimmer, an explanation that does not need cosmic acceleration (but still requires a dark energy component of negative pressure).
Joe Polchinski of Santa Barbara addressed the question “Does string theory have vacua like ours, i.e. with (nearly) zero cosmological constant, a non-supersymmetric spectrum and a stable (or long-lived) vacuum?” To date, there is no satisfying positive answer to this question. Polchinski also showed how the simplest string moduli potentials have difficulty in describing “quintessence”.
Will string theory lead to a theory of the Big Bang? Nathan Seiberg of Princeton explained how string theorists are trying to address the problem of cosmological singularities, and presented the new challenges and recent explorations in the field of time-dependent solutions in string theory. In a very different approach, Willy Fischler of Texas presented a new cosmological model in which the primordial universe is dominated by a dense gas of black holes.
The question of whether string theory will yield the principle that determines the history of the universe was also raised by David Gross of Santa Barbara, who gave the closing talk of the conference. Gross confessed that his major feeling at the end of the conference was envy: “This is a golden age of cosmology: beautiful observations and the emergence of a standard model.” He made the comparison with the situation he experienced 30 years ago when the Standard Model of particle physics was emerging. Astrophysical observations represent a testing ground for fundamental physics; experimental cosmology will provide increasingly precise tests of the Standard Model and constraints on new physics.