
D0 sharpens its top-quark measurement


The D0 collaboration at Fermilab has applied a new technique for measuring the mass of the top quark that yields a more precise result than previously. The new result affects constraints on the mass of the Higgs boson, increasing slightly the most likely value of its mass.

In the D0 experiment pairs of top quarks and antiquarks – ttbar – are produced in head-on proton-antiproton collisions in the Tevatron. The t (tbar) swiftly decays to a bottom quark, b (bbar), and a W+ (W-) boson. In this new analysis the team has re-examined events from Run I in which one of the W particles decays into a charged lepton (electron or muon) and a neutrino, while the other decays into a quark and an antiquark. The new technique is based on ideas developed several years ago by Kunitaka Kondo at Waseda University in Japan, and independently by Richard Dalitz and Gary Goldstein at Oxford. The method gives more weight to well measured events and allows more information to be extracted from each event. Basically, the team calculates as a function of the top mass the probability that the measured variables in any event correspond to a signal. The best estimate of the mass is then given by the maximum of the product of these probabilities.
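
The weighting idea can be sketched in a few lines. The toy below is only an illustration, not D0's actual matrix-element calculation: each simulated event is given a Gaussian per-event probability whose width stands in for how well that event is measured, and the mass estimate is the value that maximizes the summed log of these probabilities. All numbers are invented for the sketch.

```python
import numpy as np

# Toy illustration of the per-event weighting idea (not D0's matrix-element code):
# each event contributes a probability density P_i(m_t) for its measured variables
# given a hypothesised top mass; the best estimate maximises the product of these
# densities, i.e. the summed log-likelihood.
rng = np.random.default_rng(0)
true_mass = 180.0  # GeV/c^2, illustrative

resolutions = rng.uniform(5.0, 25.0, size=90)   # mix of well and poorly measured events
measured = rng.normal(true_mass, resolutions)   # toy "measured" mass-like variable

def log_likelihood(m_t):
    # Gaussian stand-in for the per-event signal probability; better-measured
    # events (small sigma) automatically carry more weight in the sum.
    return np.sum(-0.5 * ((measured - m_t) / resolutions) ** 2
                  - np.log(resolutions * np.sqrt(2 * np.pi)))

masses = np.linspace(150, 210, 601)
best = masses[np.argmax([log_likelihood(m) for m in masses])]
print(f"best-fit top mass (toy): {best:.1f} GeV/c^2")
```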

The new analysis yields an improvement in statistical uncertainty for this data sample that is equivalent to collecting 2.4 times as much data. The result for Mt is 180.1 ± 5.3 GeV/c², which, when combined with the dilepton sample also collected by D0 in Run I, gives Mt = 179.0 ± 5.1 GeV/c², and a new world average of 178.0 ± 4.3 GeV/c² (D0 collaboration 2004). The effect on constraints on the mass of the Higgs boson is to increase the most likely value from 96 to 117 GeV/c². This is clear of the mass range that is excluded experimentally.
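
For illustration, an uncorrelated inverse-variance combination of two measurements looks like the sketch below. The published world average treats correlated systematic errors more carefully, so the two input values here are hypothetical stand-ins, not the actual inputs behind the 178.0 ± 4.3 GeV/c² figure.

```python
import numpy as np

def combine(values, errors):
    # Simple inverse-variance weighted average (assumes uncorrelated errors,
    # unlike the real world-average combination).
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    return np.sum(w * values) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# Hypothetical inputs in GeV/c^2, for illustration only
mass, error = combine([180.1, 176.1], [5.3, 6.6])
print(f"combined mass (toy): {mass:.1f} +/- {error:.1f} GeV/c^2")
```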

The method used is now being applied to data collected in Run II, in both the D0 and CDF experiments, offering the possibility of an ultimate precision on the top-quark mass of about 2 GeV/c².

Latest K2K results support neutrino oscillations

K2K, the KEK to Kamioka long-baseline neutrino-oscillation experiment, has announced results based on data collected from the start-up in 1998 through to February 2004. During this time the experiment, which uses the underground Super-Kamiokande detector to record the interactions of a beam of neutrinos generated about 250 km away at KEK, in Tsukuba, has observed 108 beam-induced neutrino interactions.

In the absence of neutrino oscillations, in which one type of neutrino can change to another, the expected number of such events would be 150.9 (+11.6, -10.0). The observations therefore show a deficit consistent with the oscillation effects previously reported by Super-Kamiokande using data from naturally produced (atmospheric) neutrinos. K2K also reported the first significant evidence for the energy dependence of the oscillation effect. Taking into account measurements of the beam obtained from “near” detectors on the KEK site, the probability that the observed data are consistent with the hypothesis of no oscillations, and hence massless neutrinos, is negligible, at the level of 10⁻⁴.
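
The quoted 10⁻⁴ comes from a full fit to the event rate and energy spectrum, including systematic errors. A counting-only toy, which assumes a pure Poisson fluctuation around the no-oscillation expectation and ignores everything else, already gives a number of the same order.

```python
from scipy.stats import poisson

# Counting-only toy: probability of seeing 108 or fewer events when 150.9 are
# expected, ignoring systematic errors and spectral information (the real K2K
# analysis includes both, so this is only indicative).
expected = 150.9
observed = 108
print(f"Poisson probability of such a deficit: {poisson.cdf(observed, expected):.1e}")
```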

The Super-Kamiokande detector suffered a major accident in November 2001 when many of its 11 200 photomultiplier tubes were destroyed. Rebuilding began during 2002 so that operation of the K2K experiment could re-start in January 2003, albeit with a reduced detector. The K2K collaboration expects to increase the number of observed events in a run starting in October 2004 and ending before the anticipated shutdown of the KEK proton accelerator in 2005. These additional data will include special studies to refine plans for the next-generation experiment, T2K (Tokai to Kamioka), at the new J-PARC accelerator facility under construction in Tokaimura, Japan. A new scintillating bar detector system – SciBar – was installed in the K2K “near” detector and commissioned in September 2003. SciBar is intended in part as an in-service prototype for a critical detector element in the T2K experiment.

MEG goes in search of the forbidden


This year marks 30 years of successful operation of the 590 MeV proton accelerator complex at Switzerland’s national user laboratory, the Paul Scherrer Institut (PSI) in Villigen. Originally designed for proton currents of 100 µA, the ring cyclotron is now routinely producing beam currents of close to 2 mA. This megawatt beam is the progenitor of the world’s most intense direct current (DC) pion and muon beams and makes possible the measurement of rare decays and the search for “classical” forbidden decays. If found, the latter would signal new physics and allow access to these phenomena in a regime complementary to that possible with high-energy colliders.

PSI has a long tradition in this field, especially in searches for lepton-flavour violation (LFV); some of the most competitive limits in the charged lepton sector stem from the laboratory’s high-intensity muon beams. Notable examples of such searches are for µ → 3e, µ → e conversion and muonium (µ+e-) → anti-muonium (µ-e+) conversion. Although all of the results so far are still limits, the next generation of charged LFV searches may promise more. Recent results from the neutrino sector – for example at Super-Kamiokande, SNO and KamLAND – demonstrate flavour-changing processes in the realm of neutral leptons, which are associated with non-zero neutrino mass. There is also a continued interest in the results from the Muon g-2 Collaboration at Brookhaven.

While in simple extensions to the Standard Model the interesting rates are prohibitively small, in supersymmetric extensions (SUSY) and especially in grand-unified theories (SUSY-GUT) – which turn out to be particularly favourable to LFV – the branching ratios are predicted to lie only one to two orders of magnitude lower than the present experimental bounds. In particular, compared with other LFV processes, such as µ → 3e, µ → e conversion in a nucleus and τ → eγ, the decay process µ → eγ is expected to have a higher sensitivity to supersymmetric unification. A new generation of LFV searches is therefore now in preparation, notably the MEG experiment (µ+ → e+γ search) at PSI and MECO (µ → e conversion search) at Brookhaven.

The MEG collaboration, comprising some 50 members from 11 institutes in Italy, Japan, Russia and Switzerland, is currently commissioning the initial part of the beam line, as well as the first detector components, which have already arrived at PSI. The ambitious goal of this experiment is to achieve a single-event sensitivity for the decay µ+ → e+γ that is more than two orders of magnitude below the current best limit on the branching ratio, 1.2 x 10⁻¹¹, achieved by the MEGA collaboration at the Los Alamos Meson Physics Facility (LAMPF).

For the MEG detector to be able to distinguish the coincident back-to-back µ+ → e+γ events at a high rate from the main combinatorial background of normal and radiative muon decays, a high-current DC duty-cycle machine, such as the Ring Cyclotron at PSI, combined with the highest intensity surface muon beam is a prerequisite. On the detection side, two main components – an 800 litre liquid xenon (LXe) photon calorimeter using scintillation light together with a gradient-field, thin-coil superconducting positron spectrometer – make possible the required energy/momentum, spatial and timing resolutions.


The gradient magnetic field of the COBRA (Constant Bending-Radius) spectrometer allows the decay positrons to execute spiral paths of constant projected bending radius and increasing axial pitch, which depend entirely on the particle’s total momentum while being independent of its emission angle. This allows a background of lower energy Michel positrons to be swept away more effectively from the fiducial tracking volume of the azimuthally spaced, staggered-cell drift chambers. Timing information and hence trigger information for events is provided by a set of fast, double-layered, orthogonally placed timing-counter arrays, positioned at either end of the magnet.

The LXe photon calorimeter, which is viewed from all sides by some 800 photomultiplier tubes (PMTs) immersed in the cryogenic fluid, allows a homogeneous measurement of the energy, spatial and timing co-ordinates of the photon. A milestone in the development of the calorimeter was recently achieved with beam tests in Japan and at PSI. A large prototype detector with about one-tenth of the volume and 228 PMTs has yielded a preliminary value of better than 4.5% full width at half maximum for the energy resolution at 55 MeV, as well as a position-dependent spatial resolution, σ, of 2-4 mm, demonstrating that the required resolutions can be achieved.

The R&D phase for MEG is now slowly drawing to a close and the production phase is beginning to gain impetus. Initial engineering runs are planned during 2005, with full detector assembly expected for the end of 2005. Data taking will start in 2006, almost 60 years after the first attempt by E P Hincks and Bruno Pontecorvo to see what was then known as a “meson” decay to an electron and photon, using cosmic rays. Naturally, we hope not to measure “zero”!

The beta-decay route to a high-flux neutrino source


Neutrino physics is very much in vogue. The discovery that these elusive particles can oscillate between their three established identities has made the mixing of neutrino flavours more than just “flavour of the month” for theorists and experimentalists alike. Such behaviour introduces the possibility of differences between the matter and antimatter versions of neutrinos. This charge-parity violation, already observed in quarks, may in turn help to explain the disparity between the amounts of matter and antimatter we observe in our universe.

For experimentalists, particle accelerators and nuclear reactors already supplement the neutrinos that come from the Sun or are produced by cosmic rays in the Earth’s atmosphere. However, to investigate properly the new phenomena, the demands on neutrino flux far outstrip supply and some distinctly unconventional sources have been proposed to meet the shortfall. The most recent of these is the “beta-beam” concept, which envisages the production of a pure beam of electron neutrinos (or their antiparticles) through the beta decay of radioactive ions circulating in a high-energy storage ring.

Several factors determine a suitable choice of ion. The flux of neutrinos from the decay ring is determined solely by the rate at which ions can be accumulated, but the flux at a detector a given distance away also depends on the average energy with which the neutrinos are emitted in the rest frame of the parent ion. Further constraints on ion choice are set by the decay losses that can be tolerated in the accelerator chain and by the decay products that could create long-lived contamination in the low-energy part of that chain. Together these considerations suggest two isotopes of particular interest: ¹⁸Ne, giving electron neutrinos, and ⁶He for antineutrinos. Both can be produced in large quantities by the ISOL (isotope separator online) method, which has been used routinely at CERN for more than 35 years.

Efficient production of the helium isotope requires a two-stage target. Spallation neutrons released in a heavy-metal converter by a very intense proton beam can produce large amounts of ⁶He in a secondary target of beryllium oxide. The advantage of the converter technology is that the primary proton beam does not impinge directly on the sensitive beryllium oxide. This prolongs the lifetime of the target considerably. The proton-rich neon isotope, ¹⁸Ne, can be produced directly by spallation in a suitable target material, e.g. magnesium oxide, but this requires irradiation in a primary proton beam. Consequently, it is possible to produce about a factor of 10 more ⁶He atoms than ¹⁸Ne ones because the proton beam power must be limited for the latter.

The next steps, following isotope production, are to strip off all the remaining electrons and bunch the beam prior to acceleration. It is hoped that both tasks can be accomplished efficiently by a high-frequency (60 GHz) electron cyclotron resonance source. While such a system does not exist today, theoretical calculations are encouraging and a first feasibility test could be envisaged in the near future. Once fully stripped, the ions would be accelerated in a linac to increase their lifetime. The acceleration of high-intensity radioactive ion beams to around 100 MeV/u using a linac has already been examined as part of the EU-financed EURISOL study.

Further acceleration could be achieved using the existing CERN accelerator infrastructure. However, space-charge effects at injection dictate a beam energy of at least 300 MeV/u before the intensities required for the beta-beam can be digested by CERN’s Proton Synchrotron (PS). This constraint, together with the need for bunches much shorter than those provided by the linac, means that an additional stage of bunching and acceleration is required. The most promising scenario involves multi-turn injection of the linac beam into a rapid-cycling synchrotron, followed by bunching, acceleration and transfer to the PS. This procedure is repeated until all RF buckets are filled and the beam is then accelerated to its top energy and sent to the next synchrotron at CERN, the Super Proton Synchrotron (SPS).


Injection into the SPS is an established space-charge bottleneck, so the bunches must fill the maximum available transverse aperture. It is also foreseen to keep them as long as possible using a new RF system during the early part of acceleration until the standard high-frequency system can take over near the transition energy.

Finally, the decay ring will be an accumulator for the bunches delivered by the accelerator chain. Accumulation is possible because the half-life of the highly relativistic stored ions is more than an order of magnitude longer than the cycling time of the injectors. It is proposed to stack the ions by asymmetric bunch pair merging (figure 2). This relies on a dual-harmonic RF system to combine adjacent bunches in longitudinal phase space such that a fresh, dense bunch is embedded in the core of a much larger one, while diluting the emittance of the beam as little as possible. Each new bunch must be injected into the RF bucket adjacent to an existing bunch in the stack, but this cannot be done using conventional kicker magnets and septa because of the short risetime that would be required.
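
The accumulation argument rests on simple time dilation: in the laboratory frame the half-life is stretched by the Lorentz factor γ. A back-of-envelope check is sketched below; the Lorentz factor of about 100 (a value commonly quoted in beta-beam studies) and the injector cycling time are assumptions made for illustration, while the ~0.8 s rest-frame half-life of ⁶He is its measured value.

```python
# Back-of-envelope check of the accumulation argument (assumed numbers):
# time dilation stretches the rest-frame half-life by the Lorentz factor.
gamma = 100           # Lorentz factor in the decay ring (assumption, for illustration)
t_half_rest = 0.81    # s, approximate rest-frame half-life of 6He
t_half_lab = gamma * t_half_rest

injector_cycle = 6.0  # s, illustrative cycling time of the injector chain
print(f"lab-frame half-life ~ {t_half_lab:.0f} s, "
      f"about {t_half_lab / injector_cycle:.0f}x the assumed injector cycle")
```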

An alternative injection scheme exploits the fact that the stack is located at only one azimuth in the decay ring and that the revolution period is relatively long. The new bunches are off-momentum and are injected in a high-dispersion region on a matched dispersion trajectory. This allows a full turn to bring the off-momentum orbit inside the machine at the entry point of the beam. Subsequently, each injected bunch rotates a quarter of a turn in longitudinal phase space until the initial conditions for merging the bunch pairs are met and stacking can proceed.

The aim is to collect an unprecedented 10¹⁴ helium ions (5 x 10¹² neon ions) in just four bunches, each only 10 nanoseconds long. This is to ensure that the neutrinos are localized in time well enough to overcome the background issues at the detector in Fréjus, some 130 kilometres away in France.

CDMS II narrows search for WIMPs


With the first data from their underground observatory in northern Minnesota, the scientists of the Cryogenic Dark Matter Search (CDMS) have peered with greater sensitivity than ever before into the suspected realm of WIMPs, or weakly interacting massive particles. The results show, however, that if they do exist WIMPs are still staying out of sight.

WIMPs are of interest for the two extremes of the very large and the very small. There is strong evidence for large amounts of nonluminous dark matter in the universe, which cannot consist of normal matter (baryons) but seems likely to consist of WIMPs. At the opposite end of the scale supersymmetry yields a range of massive new particles, but the lightest – such as the neutralino – could be stable and therefore a good candidate to be a WIMP.

The CDMS II experiment, which is run by a collaboration of 48 scientists from 13 institutions, plus 28 engineering, technical and administrative staff, is located nearly 780 m below ground in a former iron mine in Soudan, Minnesota. The experiment uses four 250 g germanium detectors and two 100 g silicon detectors, which are cooled to less than 50 mK so that molecular motion becomes negligible. Substantial shielding and the 780 m of rock together reduce the background due to cosmic rays and radioactivity.

The detectors simultaneously measure the ionization and vibration (phonons) produced by particle interactions within the crystals. WIMPs should reveal their presence by creating less ionization than other particles for the same amount of vibration. This is because the WIMPs will scatter from nuclei in the detectors while other particles are more likely to scatter from electrons, and recoiling electrons create more ionization than recoiling nuclei. The timing of the phonons also provides a means of distinguishing between WIMPs and other particles.
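
A schematic version of this selection, with invented numbers rather than CDMS calibration values, might look like the following: the discriminating variable is the ionization yield, the ratio of ionization to phonon signal, which is lower for nuclear recoils than for electron recoils.

```python
# Illustrative event selection based on ionization yield; the band limits
# here are invented for the sketch, not CDMS calibration values.
def classify(ionization, phonon, nuclear_band=(0.1, 0.5)):
    yield_ratio = ionization / phonon
    lo, hi = nuclear_band
    # Nuclear recoils (WIMP candidates, neutrons) give less ionization per unit
    # phonon energy than electron recoils from gammas and betas.
    return "nuclear-recoil candidate" if lo < yield_ratio < hi else "electron recoil"

print(classify(ionization=3.0, phonon=10.0))   # low yield  -> candidate
print(classify(ionization=9.0, phonon=10.0))   # high yield -> background
```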

The CDMS II results show with 90% certainty that the interaction cross-section for a WIMP with a mass of 60 GeV must be less than 4 x 10⁻⁴³ cm², or about one interaction every 25 days per kilogram of germanium (Akerib et al. 2004). This measurement is at least four times more sensitive than the best previous measurement offered by the EDELWEISS experiment in the Fréjus Underground Laboratory in France.

The results, which are described in a paper submitted to Physical Review Letters, were presented at the April Meeting of the American Physical Society on 1-4 May in Denver. The data set the world’s lowest exclusion limits on the cross-section for coherent WIMP-nucleon scalar interactions for all WIMP masses above 15 GeV. They thereby rule out a significant range of neutralino supersymmetric models.

Joining up the dots with the strong force


Experimental particle physicists are well used to the fact that many years must elapse between the planning of a big experiment and the analysis of the results. Theorists do not generally have to wait so long for their ideas to bear fruit.

The attempt to solve the theory of the strong force by numerical simulation, however, has been a long-running saga. The technique, called lattice quantum chromodynamics, or more usually lattice QCD, was suggested 30 years ago and first attempted numerically in the late 1970s. Since then particle theorists have tried to monopolize each generation of the world’s fastest supercomputers with their calculations, developing improved algorithms and battling sources of systematic error. Now they are close to a solution at last.

New calculations, which have simulated the most realistic QCD vacuum to date, have shown agreement with experiment for simple hadron masses for the first time at the level of a few percent. This is an important milestone. Only when well known quantities are accurately reproduced can we have faith in the calculation of other quantities that experiment cannot determine. A number of such calculations are eagerly awaited by the particle-physics community as a new era of high-precision calculations in lattice QCD begins.

Precision testing

Precise lattice QCD calculations are needed as part of the worldwide programme of testing the Standard Model so rigorously that flaws are exposed that will allow us to develop a deeper theory, explaining nature more completely. QCD is a key component of the Standard Model because any experiment aimed at the study of quark interactions must necessarily confront the issue that quarks are confined inside hadrons by the strong interactions of QCD. It is this feature of QCD that makes it hard to tackle theoretically as well. Although perturbation theory works well for QCD when high energies are involved, such as in jet physics, it is not an appropriate tool for physics at the hadronic scale. QCD interactions are so much stronger there that a fully non-perturbative calculation must be done using lattice QCD.

An important example of where lattice QCD calculations are needed is the attempt by experiments at B-factories to test the self-consistency of the Standard Model through the determination of the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. This is now seriously limited by the precision with which QCD effects can be included in theoretical calculations of B-meson mixing and decay rates. Errors of a few percent are needed to match the experimental precision. Another important task is that of the determination of quark masses and the QCD coupling constant, αs. Like the CKM matrix elements, they are fundamental parameters of the Standard Model that are a priori unknown and must be determined from experiment. Because quarks cannot be observed as isolated particles we cannot determine their masses or colour charges directly and theoretical input is required.

In lattice QCD the procedure is straightforward but numerically challenging. We must solve QCD for well known observable quantities, such as hadron masses, as a function of the quark masses and the coupling constant in the QCD Lagrangian. The value of a quark mass is then that which gives a particular hadron mass in agreement with experiment. The scale of QCD, Λ, is likewise determined by the requirement for another hadron mass to have its experimental value, and this is equivalent to determining the coupling constant. Quark masses, particularly those for u, d, s and c, are rather poorly known at present and this hampers a number of phenomenological studies. Precision of a few percent, rather than the current 30%, on the s quark mass would greatly reduce theoretical errors in the CP-violating parameter ε´/ε of kaon physics, for example.
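
In pseudocode the tuning step is just root-finding: treat the lattice prediction for a hadron mass as a function of the input quark mass and solve for the value that reproduces experiment. The function below is an invented stand-in for what is really an expensive lattice computation; only the structure of the procedure is meant to be faithful.

```python
from scipy.optimize import brentq

def pion_mass_from_lattice(m_q):
    # Stand-in for an expensive lattice computation: an invented, monotonic
    # relation between the input quark mass and the predicted pion mass (GeV).
    return 0.02 + 2.4 * m_q ** 0.5

m_pi_experiment = 0.140  # GeV, experimental pion mass

# Solve for the quark mass that reproduces the experimental pion mass.
m_q_tuned = brentq(lambda m: pion_mass_from_lattice(m) - m_pi_experiment, 1e-4, 0.1)
print(f"tuned light-quark mass (toy): {m_q_tuned * 1000:.1f} MeV")
```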

The space-time lattice

Lattice QCD proceeds by approximating a chunk of the space-time continuum using a four-dimensional box of points, similar to a crystal lattice. The quark and gluon quantum fields then take values only at the lattice points or on the links between them, and the equations of QCD are discretized by replacing derivatives with finite differences. In this way a problem that would be infinitely difficult is reduced to something tractable in principle. We still have to perform a many-dimensional integral over the fields that we have, however, and this is done by an intelligent Monte Carlo process that preferentially generates field configurations of the QCD vacuum that contribute most to the integral.
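
The discretization step can be illustrated with a one-dimensional scalar field: derivatives are replaced by finite differences between neighbouring lattice points. Real lattice QCD does this in four dimensions with gauge links between the sites, so the sketch below only shows the basic idea.

```python
import numpy as np

# Minimal illustration of discretization: on a periodic 1D lattice with spacing a,
# the derivative of a field phi is replaced by a finite difference between
# neighbouring sites.
a = 0.1                            # lattice spacing
x = np.arange(0, 2 * np.pi, a)     # lattice sites
phi = np.sin(x)                    # a field sampled on the lattice

# Symmetric finite difference with periodic boundary conditions
dphi = (np.roll(phi, -1) - np.roll(phi, 1)) / (2 * a)

print(np.max(np.abs(dphi - np.cos(x))))  # discretization error, of order a^2
```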


This part of a lattice QCD calculation increasingly resembles a particle-physics experiment. Collaborations of theorists with access to a powerful supercomputer generate these configurations and store them for the second phase, which resembles the data analysis that experimentalists perform. In this phase hadron correlation functions are calculated on the configurations and fits are performed to determine hadron masses and properties. Many hundreds of configurations are typically needed to reduce the statistical error from the Monte Carlo to less than 1%.

Lattice QCD results are calculated for a lattice with a finite volume and a finite lattice spacing, but we want them to be relevant to the infinite volume and zero lattice spacing of the real world. For the accuracy we need we must understand how the results depend on the volume and lattice spacing, and reduce the systematic error from this dependence below the 1% level. The dependence on volume of most results falls very rapidly for large-enough volumes, so lattices 2.5 fm across or larger are thought to be sufficient for calculations at present. The dependence on lattice spacing is more difficult to remove and this was the subject of a great deal of work throughout the 1990s. The development of higher order, “improved” discretizations of QCD has allowed calculations to be performed that give answers close to continuum QCD, with values for the lattice spacing of around 0.1 fm. These are feasible on current supercomputers. With unimproved discretizations we would need to work with lattice spacing values 10 times smaller to achieve the same systematic error. This would cost, even naively, a factor of 10,000 in computer time, and in practice much more.

One key problem remained at the end of the 1990s. This was the huge computational cost of including the effect of dynamical (sea) quarks: u, d and s quark-antiquark pairs that appear and disappear through energy fluctuations in the vacuum (c, b and t quarks are too heavy to have any significant effect). The anticommuting nature of quarks, as fermions, means that their fields cannot be represented directly on the computer as the gluon fields are. Instead the quark fields are “integrated out” and the effect of dynamical quarks then appears as the determinant of the enormous quark interaction matrix that connects quark fields at different points. The dimension of the matrix is the volume of the lattice times 12 for quark colour and spin, typically of order 10⁷.
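
That order of magnitude is easy to check for an illustrative lattice size; the 32³ x 64 geometry below is an assumption for the sketch, not the actual lattice used by any particular collaboration.

```python
# Order-of-magnitude check of the quark-matrix dimension quoted in the text,
# for an illustrative (assumed) lattice size.
nx, ny, nz, nt = 32, 32, 32, 64     # lattice points per dimension (assumption)
colours, spins = 3, 4               # quark colour and spin components

sites = nx * ny * nz * nt
dimension = sites * colours * spins
print(f"quark matrix dimension ~ {dimension:.1e}")   # ~2.5e7, i.e. of order 10^7
```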

Honing techniques

The “quenched approximation”, used extensively in the past, misses out the quark determinant entirely, and this is clearly inadequate for precision results. However, it has been a useful testing ground for theorists to hone their analysis techniques. More recently, dynamical quarks have been included, but often these are only u and d quarks with masses many times heavier than the very light values in the real world because the cost of including the quark determinant rises as the quark mass falls. A figure of merit for modern lattice calculations is how light the u and d quark masses (almost always taken to be the same) are in terms of the s quark mass (ms). Given a range of u/d masses below ms/2, it should be possible to extrapolate down to physical results using chiral perturbation theory. Early simulations with dynamical quarks struggled to reach ms/2 from above and so it was hard to distinguish results from the quenched approximation, although encouraging signs of the effects of dynamical quarks were seen.

Recent simulations by the MILC collaboration, however, have managed to include u, d and s dynamical quarks with four different values of the u/d quark mass below ms/2 and going as low as ms/8. This has allowed well controlled extrapolations to find the physical u/d quark mass where the (isospin averaged) pion mass agrees with experiment. Two different values of the lattice spacing have been simulated to check discretization errors and two different volumes (2.5 and 3.5 fm across) to check finite volume errors.

These results have been made possible by a new formulation of quarks in lattice QCD called the improved staggered formulation. The staggered formulation is an old one and has always been very quick to simulate. It uses the most naive discretization of the Dirac action possible on the lattice and the quark spin degree of freedom is then in fact redundant, immediately reducing the dimension of the quark interaction matrix by a factor of four. The discretization errors were originally very large with this formalism, however, and it is the realization that these can be removed using the improvement methodology discussed above that has enabled the improved staggered formalism to be a viable one for precision calculations.


One caveat remains. Because of the space-time lattice and the notorious “doubling” problem, the staggered formalism actually contains four copies (called “tastes”) of each quark. This fourfold overcounting is removed by taking the fourth root of the determinant of the quark matrix when generating configurations that include dynamical quarks. Some theorists object that the fourth root, despite being correct in perturbation theory, may introduce errors in a non-perturbative context. Extensive testing is required to be sure that we do have real QCD, but this is exactly the testing of lattice QCD that is necessary anyway to assure ourselves that precision calculations are possible. So far the formalism has passed with flying colours.

Figure 1 shows the results of an analysis using the MILC configurations by the MILC, HPQCD, UKQCD and Fermilab collaborations. Nine different quantities are plotted, covering the entire range of the hadron spectrum from light hadrons represented by the π and K decay constants (related to the leptonic decay rate) all the way to the heavy hadrons represented by orbital and radial excitation energies in the ϒ system. Light baryons, B-mesons and charmonium are also included. These quantities have been chosen to be “gold-plated” – that is, masses or decay constants of hadrons that are stable in QCD and therefore well defined both theoretically and experimentally. Lattice QCD calculations of these must work if lattice QCD is to be trusted at all. The quark masses and QCD scale have been fixed (as they must be) using other gold-plated hadron masses that do not appear in the plot. These are the masses of the π, K, Ds and ϒ, and the splitting between the ϒ´ and the ϒ. In the figure on the left-hand side are plotted results in the quenched approximation, in which dynamical quarks are ignored. Some quantities disagree with experiment by 10% and there is internal inconsistency in the sense that quantities can be shifted in or out of agreement with experiment by changing the hadrons used to fix the parameters of QCD. On the right are the new results, which include u, d and s dynamical quarks. Now all the quantities agree with experiment simultaneously, as they must if we are simulating real QCD, and this is tested with a precision of a few percent.

Calculating with confidence

This is a major advance. Now calculations of other quantities can be carried out, knowing that the correct answer in QCD should be obtained. For example, calculations of leptonic and semileptonic decay rates for B and D mesons and B-mixing rates are in progress. Checks of the D results, providing confidence in the B results for the B-factories, will be possible against measurements from the CLEO-c experiment at Cornell. Masses of hadrons that are unstable or close to threshold will be more difficult to calculate with high precision, but many of these, such as glueballs and pentaquarks, are very interesting states. Other quark formulations that escape the doubling problem but are much more costly to simulate will provide important checks once the necessary computational resources are available. Supercomputing for lattice QCD is just entering the teraflops era, and it promises to be a very productive one in which precision calculations are possible at last.

The search for the disappearing neutrinos


In late March in Japan only a few buds from the cherry blossom trees are beginning to show their shades of pink, but in Niigata this year new ideas for neutrino experiments at nuclear reactors were in full bloom. It was here that an international group of physicists met to discuss these ideas at a workshop hosted by the University of Niigata and the Tokyo Electric Power Company. This was the third in a series on future low-energy neutrino experiments that had begun in Alabama in April 2003 and proceeded to Japan via Munich, in October 2003.

The basic idea being considered is to use several detectors to search for electron-antineutrino disappearance, as this can provide evidence for a non-zero value of the parameter θ13 in the Maki-Nakagawa-Sakata (MNS) mixing matrix, the analogue for neutrinos of the Cabibbo-Kobayashi-Maskawa matrix for quark mixing. In its simplest form the 3 x 3 neutrino MNS matrix can be parameterized with three angles and one phase. Experiments using atmospheric neutrinos have shown clear evidence for neutrino oscillations, with the mixing angle – the parameter θ23 – near its maximal value of 45°. The long-standing solar neutrino problem has also been solved by neutrino oscillations with a large value of the parameter θ12. This result has been confirmed by the reactor neutrino experiment KamLAND, which has an average distance to the reactors of 180 km.

The current best limit for θ13 comes from the reactor experiment CHOOZ. This was originally designed to look for a large signal from θ13 related to the atmospheric neutrino anomaly, and used only one detector. Now, however, it has been realized that an experiment with two (or more) detectors could greatly reduce the dominant systematic uncertainties from the reactor fuel cycle and detector efficiencies. This would allow a more sensitive search for θ13.
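
A small sketch shows why the second detector helps: an unknown overall flux normalization cancels in the far/near ratio, leaving mainly the oscillation effect. The two-flavour survival probability below is the standard textbook expression, but the baselines, energy, mixing angle and normalization factor are all illustrative assumptions.

```python
import numpy as np

def survival(L_km, E_GeV, sin2_2theta13, dm2_eV2=2.5e-3):
    # Two-flavour electron-antineutrino survival probability,
    # P = 1 - sin^2(2*theta13) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
    return 1.0 - sin2_2theta13 * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

flux_norm = 0.97          # unknown reactor flux normalisation (illustrative)
E = 0.004                 # GeV, a typical reactor antineutrino energy
sin2_2theta13 = 0.1       # assumed value, for illustration only

near = flux_norm * survival(0.20, E, sin2_2theta13)   # ~200 m: little oscillation
far = flux_norm * survival(1.05, E, sin2_2theta13)    # ~1 km: near oscillation maximum

print(f"far/near ratio: {far / near:.3f}   (the flux normalisation cancels)")
```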

The meeting in Niigata began with a number of talks reviewing the theoretical situation. Hisakazu Minakata from Tokyo Metropolitan University described how a neutrino measurement of θ13 could be combined with measurements from future long-baseline accelerator experiments to measure the sign of Δm² and the CP parameter δ. He also introduced a concept for a θ12 experiment that would use the Kashiwazaki-Kariwa nuclear-reactor complex near Niigata and a detector on Sado Island, about 70 km away. Morimitsu Tanimoto from Niigata University then addressed the issues of why θ13 is so small, why θ23 is near maximal and why θ12 is not maximal. He covered several theoretical frameworks: anarchy (in effect, random oscillations), radiative origins, grand unified theories and texture zeros (very small entries in the mass matrices). In one example of texture zeros the small value of θ13 could be related to the smallness of the neutrino masses. However, even in that case he concluded that only experiment could reveal the real size of θ13.

The recent increase in interest in a new reactor experiment goes hand-in-hand with ideas for large “off axis” long-baseline neutrino experiments at accelerators to measure νµ → νe oscillations. Unlike reactor experiments, accelerator experiments are also sensitive to CP-violating effects in the neutrino sector and to matter effects. This is both an advantage and a disadvantage. The advantage is that there is obviously a richer physics programme to investigate. The disadvantage is that a particular measurement is more difficult to interpret due to ambiguities and degeneracies. Takashi Kobayashi from KEK described the status of the T2K experiment, which was recently approved to send a beam from the new J-PARC accelerator at Tokai to the Super-Kamiokande facility in the Japanese Alps, a distance of 295 km. The first beam is currently expected in 2009. Bob McKeown from Caltech then showed how reactor and accelerator measurements could be combined to provide greater precision and insights. As examples, he used both the Japanese experiment T2K and the proposal for an off-axis experiment at the NuMI beam in the US, now called NOνA.


The main parameters of a reactor neutrino disappearance experiment were outlined in the talk by Karsten Heeger from the Lawrence Berkeley Laboratory. This also served as an excellent summary for the three workshops on this subject as he addressed three main questions. Why do a new reactor experiment? How would such an experiment be configured? What are the experimental challenges that new multi-detector reactor experiments face? Heeger concluded that a disappearance measurement of θ13 with reactor neutrinos is a promising method to measure the true value of sin²2θ13, but it is experimentally challenging. Combined with the result of a five-year experiment with a high-intensity neutrino “superbeam”, reactor measurements can provide significant new constraints and perhaps even decide the neutrino mass hierarchy and yield information on the CP-violating phase angle in the MNS matrix. The sensitivity is better for a normal mass hierarchy. An optimized baseline of 1.7 km helps to reduce the impact of systematics, and limits of the order of sin²2θ13 < 0.014 are achievable. Smaller, quicker reactor experiments will yield sin²2θ13 < 0.04.

Talks and discussion at the meeting also explored six projects that are under development in five countries and on three continents (see table “A comparison of the features of some of the proposed reactor sites”). In Japan, the Kashiwazaki-Kariwa nuclear-power complex consists of seven nuclear reactors. Located about an hour from Niigata, it is the highest power nuclear site in the world. The Tokyo Electric Power Company arranged a tour for conference participants, who – after meeting the appropriate security and radiation-protection requirements – were able to stand on top of one of the seven cores, 20 m from the release of enough energy to power 3% of the Tokyo area. Fumihiko Suekane from Tohoku University showed the design for the KASKA (Kashiwazaki-Kariwa) project, in which 8 tonne gadolinium-loaded liquid-scintillator detectors would be placed deep in shafts at two near-detector locations and one far location, at an average distance of 1.3 km from the reactor cores. Osamu Yasuda from Tokyo Metropolitan University demonstrated that for multiple reactors and near-detectors the uncorrelated error is reduced and there is no loss of precision.

Turning to the US, Ed Blucher from the University of Chicago and Jonathan Link from Columbia University described progress on the proposal to use a site at Braidwood in Illinois, about 80 km from Chicago. One way to control systematic errors is to move the detectors between the near and far sites, and the relatively flat terrain in Illinois allows this to be done relatively inexpensively. A cost estimate has been made for the underground construction of two shafts, 300 and 1800 m from the centre of the two reactor cores at the Braidwood nuclear plant. The next step is to drill boreholes to full depth at the positions of both shafts, to provide information about geology, radioactivity and density. A site-specific estimate of isotope production by muons is being used to calculate the optimum depth for the detectors. Karsten Heeger described similar considerations for the Diablo Canyon site in California.


In Europe the original CHOOZ experiment, 1050 m from the reactor cores, is still available. Thierry Lasserre from CEA/Saclay and Herve de Kerret of APC (AstroParticule et Cosmologie)/Collège de France presented the Double CHOOZ concept, with a larger detector and the possibility of placing a relatively shallow near-detector about 100-200 m from the reactors. A dense mound of shielding would probably be needed to reduce backgrounds. A letter of intent by a proto-collaboration from France, Germany, Italy and Russia is nearing completion, and early stages of approval have already been obtained.

For South America, a site at Angra in Brazil is a possibility, as described by Orlando Peres from the State University of Campinas (UNICAMP) and David Reyna from Argonne National Laboratory. Due to a favourable local geology, two 50 tonne detectors could be placed 350 and 1350 m from the reactor core. Back in Asia, the site at Daya Bay in China was described by Yifang Wang from the Institute for High Energy Physics in Beijing. Located near Hong Kong, the power plant has four reactor cores in two clusters, providing a total thermal power of 11.6 GW, with two further cores (6 GW) planned for 2011. A tunnel that would service two near-detector locations and one far-detector location is being considered, as well as a design for multiple 10 tonne detectors.

In addition to describing the site characteristics, most speakers also addressed a myriad of issues, including optimal distances, detector design, scintillator properties, backgrounds, calibration and systematic errors. Two talks focused on the progress in understanding gadolinium-loaded liquid scintillators. The neutron absorption cross-section on gadolinium is so high that it provides an attractive target for this kind of experiment. Both the high cross-section and large energy release (8 MeV) provide a high efficiency to look for the neutron in coincidence with the positron in inverse beta-decay. However, previous experiments have found large degradations as a function of time of the light attenuation length in gadolinium scintillators, which would make a precision experiment more difficult. Francis Hartmann from the Max Planck Institute in Heidelberg described the progress that is being made there in scintillator chemistry, including the promising metal beta-diketone structure that is being investigated. Dick Hahn from the Brookhaven National Laboratory reported on a series of tests undertaken at Brookhaven and elsewhere to understand the optical properties of gadolinium-loaded scintillators in various solvents and with a variety of concentrations.

The unit of luminosity for reactor experiments is gigawatt-tonne-years, a product of the reactor power, the detector size and the running time. Manfred Lindner from the Technical University Munich had shown in earlier workshops that there were two limiting cases, low luminosity (below 400 GW-tonne-years) and high luminosity (above 8000 GW-tonne-years). The former allows a measurement of rates, while the latter allows the shape of the energy distribution to be studied. However, different systematic errors are important for the different ranges of luminosity. David Reyna from Argonne National Laboratory focused on the advantages of using larger detectors to get enough statistics to see the change in shape of the energy distribution due to electron-antineutrino disappearance.


Although the main goal of the experiments being discussed in Niigata is to discover and measure θ13, there are other physics goals that could be pursued. Valery Sinev from the Kurchatov Institute considered the sensitivity of such an experiment to sterile neutrinos. He also looked at the issues of “burn-up” (changes in fissile content based on changes in the antineutrino rate) and changes in the energy distribution at the near-detector as studies in reactor physics. Michael Shaevitz from Columbia University presented a study showing that the events from the near-detector, if it is deep enough, could be used to measure neutrino-electron elastic scattering with an accuracy good enough to make a measurement of the weak mixing angle. This could be valuable as the NuTeV neutrino experiment has a measurement of this angle that is somewhat in conflict with other ways to measure it. A measurement of the antineutrino flux from reactors could also prove useful for the International Atomic Energy Agency in its monitoring of the fuel cycle of nuclear reactors, as Thierry Lasserre described.

The three workshops in this series have been useful in providing motivation for the experiments and sharing strategies for how to go about them. While the theorists refused to give a firm prediction for θ13, the experimentalists in Niigata conducted a poll of their expectation of what θ13 might turn out to be. More than 80% of their values were within the sensitivity of the proposed new reactor experiments. Since no large civil construction is needed, the quickest opportunity is for the Double CHOOZ experiment, with a detector that could be taking data in 2008. Participants were also in agreement that another experiment beyond Double CHOOZ was necessary in order to cover the range of parameter space that is reasonably accessible. They left Niigata convinced that they needed to form the collaborations, get the experiments approved and find the value of θ13.

The international working group of 126 physicists from 40 institutions in nine countries has collaborated on writing a white paper entitled “A New Nuclear Reactor Neutrino Experiment to Measure θ13”, which was published in January 2004.

Theory and experiment peer across the frontier


New developments in extensions of the Standard Model, through supergravity, superstrings and extra dimensions, were among the highlights of “Beyond the Desert 03 – Accelerator, Non-accelerator and Space Approaches”, which was held last year in Castle Ringberg in Tegernsee, Germany. Supergravity had recently celebrated its 20th birthday and two of its “inventors” – Pran Nath and Richard Arnowitt – were among the participants at the conference.

Nath, of Northeastern University, Boston, summarized the developments of minimal supergravity grand unification (mSUGRA) and its extensions since the formulation of these models in 1982, while Arnowitt, from Texas A&M, highlighted the connection to dark matter and the value of g-2 of the muon. Focusing on quantum gravity, Alon Faraggi of Oxford argued that the experimental data of the past decade suggest that the quantum-gravity vacuum should possess two key ingredients – the existence of three generations and their embedding into SO(10) representations. He explained that the Z2 x Z2 orbifold of the heterotic string provides examples of vacua that accommodate these properties. He also showed that three generations require a non-perturbative breaking of the grand unification gauge group, and in this context examined the issue of mass and mixing in the neutrino versus the quark systems.

Fundamental physics, including fundamental symmetries, formed another important aspect of the meeting. Peter Herczeg from Los Alamos reviewed CPT-invariant, and CP- and P-violating electron-quark interactions in extensions of the Standard Model. Turning to fundamental constants, Harald Fritzsch of Munich discussed astrophysical indications that the fine structure constant has undergone a small time variation during the cosmological evolution, within the framework of the Standard Model and grand unification. The case where the variation is caused by a time variation of the unification scale is particularly interesting.

Interferometry

The potential of neutron interferometry for tests of fundamental physics was outlined by Helmut Rauch of Vienna. Recent experiments in neutron interferometry, based on post-selection methods, have renewed the discussion about quantum non-locality and the quantum measuring process. It has been shown that interference phenomena can be revived when the overall interference pattern has lost its contrast. This indicates a persistent coupling in phase space, even in cases of spatially separated Schroedinger-cat-like situations.

Interesting developments in general relativity and aspects of special relativity were also discussed at the conference. Mayeul Arminjon of Grenoble presented a new “scalar ether theory” of gravitation. One of the motivations for trying such an alternative approach is to solve problems that occur in general relativity and in most extensions of it – namely the existence of singularities and the interpretation of the gauge condition. Arminjon showed that this scalar theory fits nicely with observations on binary pulsars. Lorenzo Iorio of Bari reported on new perspectives in testing the general relativistic Lense-Thirring effect. Turning to experiment, the present status of the search for gravitational waves was outlined by Peter Aufmuth of Hannover. Only astrophysical events, such as supernovae, or compact objects, for example, black holes and neutron stars, produce detectable gravitational wave amplitudes. The current generation of resonant-mass antennas and laser interferometers has reached the sensitivity necessary to detect gravitational waves from sources in the Milky Way. Within a few years the next generation of detectors will open the field of gravitational astronomy.

Cosmological connections

Talks about the early universe included cosmological, quantum-gravitational and other possible violations of CPT symmetry. Nick Mavromatos of King’s College, London, discussed the various ways in which CPT symmetry may be violated, and reviewed their phenomenology in current or near-future experimental facilities, both terrestrial and astrophysical. First he outlined violations of CPT symmetry due to the impossibility of defining a scattering matrix as a consequence of the existence of microscopic or macroscopic space-time boundaries, such as Planck-scale black-hole event horizons or cosmological horizons due to the presence of a positive cosmological constant in the universe. Second he discussed CPT violation due to the breaking of Lorentz symmetry, which may characterize certain approaches to quantum gravity. He stressed that although most of the Lorentz-violating cases of CPT breaking are already excluded by experiment, there are some (stringy) models that can evade these constraints.

Trans-Planckian physics was discussed by Ulf Danielsson of Uppsala, who outlined how the cosmic microwave background radiation might probe physics at or near the Planck scale. Danielsson reviewed a potential modulation of the power spectrum of primordial density fluctuations generated through trans-Planckian (maybe stringy) effects during inflation.


Margarida Rebelo of Lisbon discussed CP violation in the leptonic sector at both low and high energies in the framework of the “seesaw” mechanism. She pointed out that leptogenesis is a possible and likely explanation for the observed baryon asymmetry of the universe. It seems to be one of the most promising scenarios, in view of the fact that several other alternative proposals are on the verge of being ruled out. The leptogenesis scenario implies constraints on both light and heavy neutrino masses, which, as she showed, are consistent with the present value obtained from the double beta decay of ⁷⁶Ge.

Cosmoparticle physics was another major theme of the conference. Maxim Khlopov of Rome and Moscow gave a broad overview of the topic, calling it the “Challenge for the Millennium”, and results linking particle-physics experiments with cosmological problems, and vice versa, were among the experimental highlights.

The existence of dark matter in the universe has for many years been an intriguing problem. Rita Bernabei of Rome presented the final results of the DAMA dark-matter experiment, which confirm their first indications for the observation of cold dark matter at a 6 σ level. Measurements of the cosmic microwave background by the Wilkinson Microwave Anisotropy Probe (WMAP), which are revealing the proportions of dark matter – and dark energy – in the universe, were presented by Eiichiro Komatsu of Princeton. Neutrino parameters are also deducible from this experiment, as well as from current large-scale galaxy surveys, as Steen Hannestad of Odense described. However, the cosmic microwave background experiments cannot at present differentiate between the different neutrino-mass scenarios.

Neutrino highlights

Moving on to ground-based studies of neutrino properties, the Heidelberg-Moscow double beta decay experiment in the Gran Sasso Laboratory has results for the period 1990-2003, which were presented by Hans Volker Klapdor-Kleingrothaus of MPI Heidelberg. With three additional years of data included in this analysis, the evidence for neutrinoless double beta decay has now improved to a 4.2 σ level. For 10 years this experiment has been the most sensitive double beta experiment worldwide, and with the statistics now reached, it has essentially already achieved scientifically what was expected from the larger GENIUS project proposed in 1997. The conclusion from this result is that the total lepton number is not conserved (neutrino oscillations reveal only the violation of family lepton number). This has fundamental consequences for the early universe. Furthermore, according to the Schechter-Valle theorem, the existence of neutrinoless double beta decay implies that the neutrino is a Majorana particle. (The announcement of the start of the GENIUS Test Facility in Gran Sasso, in May 2003, was now of most interest in the context of the search for dark matter. The goal of the GENIUS Test Facility is to confirm the DAMA result by looking for the seasonal modulation signal.)


On the theoretical side Mariana Kirchbach of San Luis Potosi in Mexico stressed the importance of double beta decay for fixing the absolute scale of the neutrino mass spectrum. She showed that in the case of Majorana neutrinos the mass might lead to unexpected results in single beta decay. In this scenario a sensitive tritium decay experiment should see no mass if the neutrino is a Majorana particle, while the neutrinoless double beta decay rate would still depend on the Majorana mass. Ernest Ma of Irvine outlined how a rather precise knowledge of neutrino oscillation parameters, i.e. the correct form of the 3 x 3 neutrino mass matrix, may be obtained from symmetry principles. He showed that the latter predict three nearly degenerate Majorana neutrinos with masses in the 0.2 eV range. This theoretical result is of great interest, in view of the results from double beta decay, WMAP, etc.

Contributions to fundamental physics, obtained using Penning traps, were outlined by one of the pioneers of the field, Ingmar Bergstrom of Stockholm. A Penning trap is a storage device in which frequency measurements can be used to determine the mass of electrons and ions, as well as g-factors of electrons and positrons, with extremely high accuracy. Bergstrom has recently measured, for example, the Q value of the double beta decay of ⁷⁶Ge with unprecedented precision.

Other experimental highlights on neutrinos included the results obtained for solar neutrinos by the Sudbury Neutrino Observatory (SNO). As George Ewan of Kingston, Canada, described, SNO now has strong evidence at a 5.3 σ level, and independently of the details of solar models, that neutrinos change flavour on their way from the Sun to the Earth. These results, together with those of other neutrino experiments, among them the Japanese 250 km long-baseline experiment that was presented by Takashi Kobayashi of KEK, mean that our knowledge of neutrino properties has improved considerably over the past few years. In this context, Oliver Manuel of Missouri gave a highly interesting, non-mainstream view of the structure of the solar core.

Supernova and relic neutrinos were the topic of another session. Irina Vladimirovna Krivosheina of Heidelberg and Nishnij-Novgorod, who was a member of the Baksan group that was one of three groups which observed neutrinos from the supernova SN1987A, gave a retrospective view of this exciting event and some insider details of its discovery. Mark Vagins of Irvine and Shinichiro Ando of Tokyo discussed further the observation of relic and supernova neutrinos, one of the future tasks of the Super-Kamiokande experiment in Japan.

Accelerator approaches

Turning to the physics of nuclei, results on superheavy elements have reached an exciting level. Dieter Ackermann showed that elements 107-112 have been synthesized and unambiguously identified at GSI, Darmstadt. The observation of elements 112, 116 and 118 by the Oganessian group at Dubna was also announced by Vladimir Utyonkov. At the interface between nuclear physics and particle physics, the status of the search for a phase transition between hadronic matter and a quark-gluon plasma at Brookhaven’s Relativistic Heavy Ion Collider was outlined by Raimond Snellings of Amsterdam, and compared with measurements at CERN’s Super Proton Synchrotron.


Several sessions were devoted to the search for new physics with colliders. The final analyses of the search for Higgs bosons, R-parity violation, leptoquarks and exotic couplings at CERN and Fermilab, presented by Rosy Nikolaidou of CEA Saclay, Silvia Costantini of Rome “La Sapienza”, Stefan Soeldner-Rembold of Manchester and others, show no indication of physics beyond the Standard Model. This reinforces the observation that the only new physics to emerge recently is from underground experiments.

Particles from space

Nearly a century after the discovery of cosmic rays, their origins are still unknown. Eckart Lorenz of Munich reviewed the status and perspectives of ground-based gamma-ray astronomy, where new telescopes under construction, such as MAGIC, should lead to a big step in sensitivity. At gamma-ray energies of around 10-30 GeV the universe becomes basically transparent, so gamma-emitting objects out to redshifts of more than three should become visible, that is, back to a time when star and galaxy formation was particularly strong. New projects like MAGIC will allow the gap to be closed between satellite-borne instruments and previous, ground-based telescopes. Exciting results from the CANGAROO experiment, an array of four imaging Cherenkov telescopes in Australia, were presented by Ken’ichi Tsuchiya of Tokyo. The team has observed TeV gamma rays from the supernova remnant SN1006 and from new types of objects, such as a normal spiral galaxy showing starburst activity, NGC253. This is the first detection of gamma rays from an extragalactic object other than active galactic nuclei, and the source is the largest structure ever detected at these energies.

The Auger Observatory, which will look for cosmic rays at the highest energies, is under construction. In its final configuration it will be the largest cosmic-ray detector ever built, covering 3000 square kilometres with sites in both the southern and northern hemispheres. Johannes Bluemer of Karlsruhe described the present status of construction at the southern site in Argentina, where work began in 1999.

The highest cosmic-ray energies, beyond the Greisen-Zatsepin-Kuzmin (GZK) limit, find an interesting theoretical explanation in the Z-burst scenario, in which a large fraction of the cosmic rays are decay products of Z bosons produced in the scattering of ultra-high-energy neutrinos on cosmological relic neutrinos. This was discussed by Daniel Fargion of Rome and Sandor Katz of DESY and Budapest. Interestingly, they find that the neutrinos should have a mass in the range 0.1-1 eV – consistent with the result of the HEIDELBERG-MOSCOW double-beta-decay experiment – for this explanation to work properly.
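
The quoted mass range follows directly from the kinematics of the Z-burst mechanism, which relies on resonant annihilation of an ultra-high-energy neutrino with a relic antineutrino at the Z pole. The resonance condition below is the standard one in the Z-burst literature and is reproduced here only for illustration:

\[ E_\nu^{\mathrm{res}} = \frac{M_Z^2}{2\,m_\nu} \approx 4.2\times10^{21}~\mathrm{eV}\left(\frac{1~\mathrm{eV}}{m_\nu}\right), \]

so for $m_\nu$ between 0.1 and 1 eV the resonance lies at roughly $4\times10^{21}$-$4\times10^{22}$ eV, high enough for the Z-decay secondaries to reach the energies observed beyond the GZK cut-off.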

Hunting for antimatter

The search for antimatter (and dark matter) with the Alpha Magnetic Spectrometer (AMS), which is planned to be installed on the International Space Station in 2005/2006 for a three-year mission, was discussed by Frank Raupach of Aachen. The existence of large domains of antimatter in the universe is still an open question. The observed uniformity of the cosmic microwave background indicates that there are no voids separating matter and antimatter regions, so annihilation at their boundaries would be inevitable and the resulting diffuse gamma-ray spectrum might be observable.

Returning to neutrinos, but this time from space, Christian Spiering of Zeuthen gave an overview of results from AMANDA, the neutrino telescope at the South Pole, and Jan-Arys Dzhilkibaev reviewed the status and prospects of the Baikal Neutrino Project. Finally, Yoshitaka Kuno from Osaka outlined the goals of future neutrino and muon factories. A neutrino factory would have great potential for examining the neutrino mass hierarchy, matter effects and CP violation in the neutrino sector. A rich physics programme would also be possible with a high-intensity muon beam at a muon factory, ranging from searches for lepton-flavour-violating muon processes (such as µ to e conversion) and for a muon electric dipole moment to further precision measurements of the anomalous magnetic moment of the muon (g-2). Lepton flavour violation in the charged sector will also be studied by the muon-to-electron-conversion experiment MECO, presented by Michael Herbert of Irvine.

In summary, the lively and highly stimulating atmosphere during this Beyond meeting reflected a splendid scientific future for particle physics. The proceedings of Beyond 03 are now available as a book, Beyond the Desert 2003, Springer Proceedings in Physics, vol 92.

Has HERA found a charmed pentaquark?

cernnews1_5-04

The H1 experiment studying electron-proton collisions at DESY’s HERA accelerator may have discovered a new five-quark particle. On 11 March the H1 collaboration reported clear evidence for a charmed pentaquark – a bound state of two up quarks, two down quarks and a charm antiquark (uuddcbar), with a mass close to 3100 MeV. However, in a preliminary investigation physicists from ZEUS, the other major experiment at HERA, found no evidence for such a particle in their data. The hunt is now on to discover whether the charmed pentaquark does indeed exist.

The search for the pentaquark became a hot topic in 2003, when several experiments reported evidence for a narrow five-quark state, the Θ+, containing a strange antiquark. This immediately raised the question of whether similar states exist in which the strange antiquark is replaced by a charm antiquark – the charmed analogue, Θ0c, of the Θ+. Since DESY’s electron-proton collider HERA is a copious producer of charm quarks and antiquarks, H1 and ZEUS were quick to take up the search.

The evidence for the new pentaquark in the H1 data is a resonance in the invariant-mass combinations of D*− mesons (dcbar) with protons (uud), and of the antimatter equivalent, D*+ mesons with antiprotons. The resonance is remarkably strong and narrow, sitting on a moderate background at a mass of 3099 ± 6 MeV, and is unlikely to be produced by statistical fluctuations of the background. The peak contains roughly equal contributions from D*−p and D*+pbar combinations. It survives all reasonable variations of the selection criteria and many other careful tests, and a resonance with compatible mass and width is also observed in an independent photoproduction data sample from H1. This makes it all the more surprising that the ZEUS team could find no such resonance. Both HERA experiments are now carrying out further studies in an attempt to understand the results.
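
For readers unfamiliar with the technique, the quantity in which the peak appears is simply the relativistic invariant mass of each D*-proton pair, computed from the measured energies and momenta; the definition below is generic and not specific to the H1 analysis:

\[ M(D^{*}p) = \sqrt{\,(E_{D^{*}}+E_{p})^{2} - |\vec{p}_{D^{*}}+\vec{p}_{p}|^{2}\,} \qquad (c=1), \]

and a genuine resonance shows up as a narrow peak in this distribution on top of the smooth combinatorial background.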

Other experiments now have to look in their data for such a state. Its decay to a D*− and a proton implies that its minimal quark composition is uuddcbar; not much more is yet known, and the exact interpretation will depend on more detailed measurements and theoretical work. Is this the Θ0c, an excited state with spin 3/2 instead of spin 1/2, or something completely different? If the state is confirmed, it is likely to be the first step towards a whole new spectroscopy of charmed pentaquarks, which could lead to an improved understanding of the forces that bind quarks together.

Experiment catches third glimpse of ‘one in ten billion’ decay

cernnews2_5-04

The E949 collaboration at the Brookhaven National Laboratory has reported further evidence for the very rare kaon decay, K+→π+ννbar. The rate observed for this decay may indicate new forces beyond those in the Standard Model – which predicts the frequency of such decays to be half that observed – although it is still too soon to say if a deviation has occurred.

The decay K+→π+ννbar is extremely important because of its sensitivity to |Vtd|, the poorly known strength of the mixing between the top and down quarks, and to many hypothetical new physics effects not accounted for in the Standard Model. |Vtd| can be extracted from the K+→π+ννbar branching ratio with minimal theoretical uncertainty, since the hadronic matrix element can be obtained from the well measured K+→π0e+ν decay, and the higher-order corrections have been calculated by Andrzej Buras, Gerhard Buchalla and others.
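
Schematically, the Standard Model prediction for this decay can be written in the form used in the theoretical literature (the notation here is illustrative and not taken from the article):

\[ \mathrm{BR}(K^{+}\to\pi^{+}\nu\bar{\nu}) = \kappa_{+}\left[\left(\frac{\mathrm{Im}\,\lambda_t}{\lambda^{5}}\,X(x_t)\right)^{2} + \left(\frac{\mathrm{Re}\,\lambda_c}{\lambda}\,P_c + \frac{\mathrm{Re}\,\lambda_t}{\lambda^{5}}\,X(x_t)\right)^{2}\right], \]

where $\lambda_q = V_{qs}^{*}V_{qd}$, $\lambda = |V_{us}|$, $X(x_t)$ is the top-quark loop function, $P_c$ the charm contribution, and $\kappa_{+}$ contains the hadronic matrix element fixed by the measured K+→π0e+ν rate. The dominant dependence on Vtd enters through $\lambda_t$, which is why a precise branching ratio translates almost directly into |Vtd|.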

The new result on K+→π+ννbar comes from the E949 experiment – an upgraded version of E787, which reported two earlier sightings of the same decay. The two experiments, run by a collaboration of some 70 scientists from Canada, Japan, Russia and the US, have taken place at Brookhaven’s Alternating Gradient Synchrotron (AGS), the world’s highest-intensity proton synchrotron. The improved E949 apparatus has exploited higher beam intensities and achieved greater detection efficiency than any previous experiment of this type.

Although a neutrino and an antineutrino are emitted in the process K+→π+ννbar, these particles interact too weakly to be detected. It therefore had to be established beyond reasonable doubt that one positive pion – and only one positive pion – was produced in the kaon decay, and that no other detectable particles were present. This required the most efficient particle-detector system ever built, as well as analysis techniques capable of confirming the required suppression. For example, the detection efficiency achieved for neutral pions was such that fewer than one in a million was missed. To establish the validity of the observations, all backgrounds had to be suppressed by a factor of 10^11. This was among the first modern analyses to apply careful “blind”, or unbiased, techniques, which are now standard practice in high-energy physics.

Out of all the data analysed, involving nearly 10^13 kaons, three events explicable by the decay K+→π+ννbar have now been seen by E787 and E949. This indicates that the K+→π+ννbar process occurs with a branching ratio of (1.47 +1.30 −0.89) × 10^-10, making it one of the rarest particle decays ever observed. The result continues to suggest a possible discrepancy with the Standard Model, although with only three events it is still consistent with the prediction of (7.7 ± 1.1) × 10^-11.
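
As a rough back-of-envelope check – not the collaboration’s calculation, which involves background subtraction and candidate-by-candidate weighting – the quoted numbers imply an overall signal acceptance of order a few per mille:

\[ A \approx \frac{N_{\mathrm{obs}}}{N_{K}\times\mathrm{BR}} \approx \frac{3}{10^{13}\times1.5\times10^{-10}} \approx 2\times10^{-3}, \]

which gives a sense of how tight the selection must be to keep the backgrounds below a signal at this level.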

The goal of E949 was to increase the experimental exposure of E787 five-fold. If the K+→π+ννbar signal were to continue at the current rate, 20 or more events would eventually be observed. Such a result could alter our current picture of particle physics, forcing an expanded view of the fundamental constituents of the universe and their interactions. The detector and collaboration are ready to complete the experiment; however, further running is currently not possible because the US Department of Energy discontinued high-energy physics operations at the AGS in 2002, before E949 was completed.

Future work on the related process K0L→π0ννbar, supported by the US National Science Foundation, is now getting under way, with construction of the KOPIO experiment due to begin at the AGS next year.
