Heavy-ion workshop looks to the future

When the LHC starts up, heavy-ion physics will enter an era where high transverse-momentum (pT) processes contribute significantly to the nucleus–nucleus cross-section. The LHC will produce very hard, strongly interacting probes – the attenuation of which can be used to study the properties of the quark–gluon plasma (QGP) – at sufficiently high rates to make detailed measurements. At the LHC, high rates are for the first time expected at energies at which jets can be fully reconstructed against the high background from the underlying nucleus–nucleus event.


To prepare for the new high pT and jet analysis challenges, the physics department at the University of Jyväskylä, Finland, organized the five-day Workshop on High pT Heavy-Ion Physics at the LHC. More than 60 participants attended the workshop, ranging from senior experts in heavy-ion physics to doctoral students. It brought together physicists from operating facilities – mainly RHIC at the Brookhaven National Laboratory (BNL) – as well as from future LHC experiments (ALICE, ATLAS and CMS), and included valuable contributions from theorists. Jyväskylä in early spring, coupled with reindeer-meat dinners and animated student lectures in the evening, created a superb atmosphere for many discussions of physics, even outside of the official programme.

Mike Tannenbaum of BNL gave an opening colloquium which looked back to the 1970s. He listed old results that raised the same questions that are the focus of today’s discussions. Many recent questions in high-pT physics can be traced back to the 1970s at CERN, with proton–proton (pp) collisions at the ISR, which were followed in the early 1980s by proton–antiproton collisions at the SPS. This was when jet physics was born and the first methods of jet analysis were developed. It was reassuring to learn that many CERN results remain valid and that recent thinking is really based on those early understandings. On the other hand, many ideas still remain in a premature state. Only the high-luminosity experiments at RHIC and the LHC are – or will be – able to investigate certain phenomena and measure their effects more precisely. These pp data are therefore very important, not merely because they serve as a baseline for understanding results in heavy-ion collisions.

Striking gold at RHIC

Several presentations at the workshop reviewed results from RHIC on single-particle spectra and two-particle correlations at high pT. Striking effects have been observed in central gold–gold collisions. Among the most prominent are the suppression of high-pT particles and the suppression of back-to-back correlations. These results show that the jet structure is strongly modified in dense matter, consistent with perturbative QCD calculations of partonic energy loss via induced gluon radiation.

The first photon data have shown no nuclear effects up to 10–12 GeV/c, in line with the general expectation that photons (with no colour charge) have no final-state interaction with the deconfined matter that is produced. However, the recent measurement by the PHENIX experiment indicates unexpected suppression, by a factor of two, of photon production in the region above 15 GeV/c – this is almost as large as in the case of light mesons (figure 1). This surprising observation ignited great excitement at the workshop, leading to further discussion of what the possible consequences for LHC physics might be. Any detailed study, however, should await the release of the final data.

Data on heavy flavours from RHIC experiments have also provided puzzles. The measured suppression of heavy-flavour pT spectra, which is close to that of light flavours, cannot be explained by radiative energy loss alone and requires a contribution from elastic scattering. Further issues can be addressed by analysis of dijet topology or by the use of two- or multi-particle correlation techniques. Several experimental and theoretical presentations given at the meeting examined the possibility of using multi-particle correlation and photon–hadron correlations to study the partonic pT distributions, fragmentation functions, jet shape and other parton properties sensitive to the details of parton interactions with excited nuclear matter.

Another series of talks investigated the features of the parton coalescence process, which is supported by a large amount of experimental data on particle spectra and azimuthally asymmetric flow. On the other hand, jet-orientated analysis of different data (for example, correlations between Ω baryons and charged hadrons) does not show the behaviour expected from quark coalescence. Further work is therefore needed to understand this puzzling situation before the new experiments begin at the LHC.

Towards the LHC


Among the four large LHC experiments, ALICE is the one that is optimized for heavy-ion physics. The CMS and ATLAS collaborations have also established a heavy-ion programme, which will certainly strengthen the field. The workshop heard about the capabilities of the three experiments for jet reconstruction and analysis of jet structure. The large background from the underlying event is a challenge for all experiments, requiring the development of new techniques for background subtraction. The strength of the ATLAS and CMS experiments is their full calorimetric coverage, and therefore large measured jet rates, which will allow them to measure jets in central lead–lead collisions up to 350 GeV and to perform Z⁰–jet correlation studies. ALICE will use the combination of its central tracking system and an electromagnetic calorimeter to measure jets. The smaller acceptance of the detector will limit the energy range to about 200 GeV. The strength of ALICE lies in its low-pT and particle-identification capabilities. These allow ALICE to measure fragmentation functions down to small momentum fractions and to determine the particle composition of jets (figure 2).

A consistent theoretical approach to describing jet measurements in heavy-ion collisions can only be obtained through detailed Monte Carlo studies of jet production and in-medium modifications. They are needed to optimize the data analysis and to discriminate between different models. Some new event generators adapted to the challenges of LHC physics (PyQuench, HydJet, HIJING-2) were also discussed during the meeting.

The workshop also examined the recent interest in understanding strongly interacting particles using conjectures from string theory and higher-dimensional physics. Stan Brodsky of SLAC gave a summary of his understanding of the many QCD effects that appear in kinematical regions not testable by perturbative QCD, where anti-de Sitter space/conformal field-theory models could come into consideration. In the duality picture, due to Juan Maldacena, the intensely interacting quark and gluon fields produced in heavy-ion collisions can be treated as a projection onto a higher-dimensional black-hole horizon. The equation of motion on the black-hole horizon could become analytically solvable, in contrast to the vastly complicated numerical (lattice) approach in non-perturbative QCD. The new experiments at LHC energies may shed more light on the role of extra dimensions in curved space and could initiate a revolution in the description of strongly interacting matter.

The next workshop on this topic will be in Budapest in March 2008 and will offer the opportunity to display the latest theoretical results before the LHC is running with pp collisions at 14 TeV.

DIS 2007: physics at HERA and beyond

Exceptionally beautiful weather, Munich’s Holiday Inn hotel and the Gasteig, a modern cultural centre, combined to provide a pleasant and stimulating atmosphere for DIS 2007, the 15th International Workshop on Deep-Inelastic Scattering (DIS) and Related Subjects. Held on 16–20 April, the workshop united more than 300 physicists from around the world, including an encouraging number of students. The programme contained reviews of progress in DIS and QCD, as well as presentations of the latest results from HERA, the Tevatron, Jefferson Lab, RHIC and fixed-target experiments. It also covered related theoretical topics and future experimental opportunities.

With two full days of plenary sessions and six streams of parallel sessions on the other three days, the meeting followed the traditional style of DIS workshops. The parallel sessions covered structure functions and low-x physics, electroweak measurements and physics beyond the Standard Model, heavy flavours, hadronic final states, diffraction and spin physics. A special session that looked to the future of DIS was particularly topical in view of the shutdown at DESY of HERA, the world’s only electron–proton collider, at the end of June.

Yuri Dokshitzer, of the University of Paris VI and VII, opened the scientific programme with a review of recent developments in perturbative QCD (pQCD). He explained his motto “1-loop drill, 2-loop thrill, 3-loop chill” and expressed the hope that higher-order corrections can be calculated with the help of N = 4 super-Yang–Mills quantum field theory.

Latest results


Appetizing glimpses of the many new results from the two collider experiments at HERA featured in talks by Cristinel Diaconu from the Centre for Particle Physics in Marseille, and by Massimo Corradi of INFN Bologna, for the H1 and ZEUS experiments, respectively. Both experiments have accumulated a total of 0.5 fb⁻¹ at a proton beam energy of 920 GeV, and analyses of the entire data sample are in full swing. The first combined H1 and ZEUS analysis of xF₃ was a clear highlight of the conference (figure 1). This is the structure function that is dominated by photon–Z interference and is sensitive to the valence quarks at low Bjorken-x.

Further highlighted results included new data on neutral-current and charged-current inclusive scattering, jets and heavy-flavour production. These data will serve as input for the next generation of more precise fits for parton distribution functions (PDFs) for the proton – essential for studying physics at the LHC at CERN.

Since mid-March the proton beam energy at HERA has been lowered to 460 GeV to enable, in conjunction with the high-energy data at 920 GeV, a model-free determination of the longitudinal structure function FL. This measurement is essential for a direct extraction of the gluon distribution within the proton and as a consistency check of DIS theory. Beyond the Standard Model, H1 continues to see, with the full statistics at high energy, the production of isolated leptons at a level of 3 σ above the expectation. In contrast, ZEUS sees no deviation from the Standard Model.
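
The measurement works because, at fixed x and Q², the reduced cross-section depends on the beam energy only through the inelasticity y. The relation below is the standard leading-order expression (with Z exchange neglected), given here as a reminder rather than as the H1/ZEUS analysis itself:

```latex
% Reduced neutral-current DIS cross-section at leading order,
% with inelasticity y = Q^2 / (s x):
\sigma_r(x,Q^2) \;=\; F_2(x,Q^2) \;-\; \frac{y^2}{Y_+}\,F_L(x,Q^2),
\qquad Y_+ \equiv 1 + (1-y)^2 .
% Measuring sigma_r at the same (x, Q^2) with two proton beam energies
% (920 and 460 GeV) gives two values of y, hence two linear equations
% for the two unknowns F_2 and F_L -- no model input required.
```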

With the Tevatron proton–antiproton collider at Fermilab performing well, Giorgio Chiarelli of INFN Pisa was able to show a sample of beautiful new results from the CDF and DØ experiments. For this conference, he presented data corresponding to up to 2 fb⁻¹, covering neutral B-meson oscillations, electroweak physics, jets, searches and results on the production of the top quark, with a new world average for its mass of 170.9 ± 1.8 GeV/c². This new (low) value is interesting since, together with the mass of the W particle, it favours the minimal supersymmetric model.

William Zajc from Columbia University addressed current understanding of particle production in heavy-ion collisions, as studied at RHIC at Brookhaven National Laboratory (BNL). He highlighted several interesting experimental observations, such as “away-side” jet suppression, that cannot be described within current models, but which may be interpreted as a signal for the production of a nearly perfect quark–gluon fluid with extremely low viscosity.


Turning to spin physics, Jörg Pretz from the University of Bonn gave an overview with emphasis on the nucleon spin puzzle. He presented recent data on helicity distributions for quarks (Δq) and gluons (ΔG), from the HERMES experiment at DESY and COMPASS at CERN, respectively, as well as direct measurements of ΔG from RHIC. He also showed the first combined results on transversity using data from both HERMES and COMPASS as well as from the BELLE experiment at KEK. In a related overview of the rich programme at Jefferson Lab, Zein-Eddine Meziani of Temple University in Philadelphia covered measurements of unpolarized and polarized structure functions and transversity, as well as deeply virtual Compton scattering and generalized parton distributions.

Theoretical input

Andreas Vogt of Liverpool University spoke about progress and challenges in determining and understanding the PDFs of the proton at next-to-leading order (NLO) and next-to-NLO. An important improvement in the extraction of PDFs, implemented by the Coordinated Theoretical–Experimental Project on QCD (CTEQ), is the inclusion of the effects of charm-mass suppression in DIS, which results in an increase in the PDFs for the u and d quarks. A dramatic consequence is an increase of about 8% in the W/Z cross-sections expected at the LHC. Rates of W/Z events are foreseen to serve as precision “luminosity meters” during LHC data-taking.

Gustav Kramer of Hamburg University discussed recent developments in heavy-flavour production and explained the various heavy-flavour schemes used for pQCD calculations. He stressed the importance of interpolating schemes with variable-flavour number and massive heavy quarks (like the general-mass variable-flavour-number scheme) and showed successful comparisons of calculations with data from HERA and the Tevatron.


To allow comparison with experiment, pQCD calculations usually need to be implemented in Monte Carlo generators. Zoltan Nagy from CERN covered this important subject and critically reviewed the various approximations of current implementations of parton showers and their matching to leading order or NLO matrix elements. Nagy expressed concern that current Monte Carlo tools might fail at the LHC and he argued for the development of a new shower concept that allows the shower to be matched to Born and NLO matrix elements.

Raju Venugopalan from BNL covered small-x physics and the expected non-linear effects beyond the conventional Dokshitzer–Gribov–Lipatov–Altarelli–Parisi evolution. He discussed the question of saturation in the context of various models (e.g. colour glass condensate) and data from HERA and RHIC. He also pointed to excellent opportunities at a possible future electron–ion collider (EIC) or even at a “super-HERA” collider such as a large hadron–electron collider (LHeC).

Peter Weisz and Johanna Erdmenger, both from MPI Munich, discussed non-perturbative aspects of QCD. Weisz presented recent algorithmic advances and various results in lattice QCD, indicating progress in the simulation of dynamical quarks beyond the quenched approximation. Erdmenger looked at new approaches that connect string theory and QCD by establishing a connection between a strong coupling (non-perturbative) theory, such as N = 4 SYM (“QCD”), and a “dual” weak coupling theory, such as supergravity. Such a relation – the anti-de Sitter/conformal field theory correspondence – can provide new tools to address problems within QCD.

The seven threads of parallel sessions contained a total of 260 talks. Despite the wonderful weather, the sessions had very good attendance, with many lively and fruitful discussions. The spontaneous formation of two additional topical sessions was very much in the spirit of the workshop. One of these was on αS measurements from HERA and LEP, and one was on the complications involved when dealing with a variable number of quark flavours in QCD fits. On the last day the convenors, usually a theorist and an experimentalist for each working group, summarized the parallel sessions.

Life after HERA

Concluding a special session on the future of DIS, Joel Feltesse of DAPNIA gave a detailed and critical view of future opportunities in DIS. In his opinion DIS will not stop with the end of data-taking at HERA. Jefferson Lab will continue with its upgrade to 12 GeV, and new machines, such as the EICs proposed at Jefferson Lab and BNL, are on the horizon. An LHeC at CERN would offer an attractive physics programme, particularly if the LHC provides an additional physics case for it. The workshop itself concluded with a talk from Graham Ross of Oxford University, who discussed open questions beyond the Standard Model, which provide motivation for the next round of high-energy physics experiments at the LHC.

For the coming years, much careful analysis remains to be done with the data from HERA to achieve the best possible precision. This is expected to yield valuable information for the understanding of QCD and of the data to be produced at the LHC. HERA’s final legacy will be an important asset to high-energy physics. Although the LHC will, we hope, find the Higgs boson and “explain” the mass of gauge bosons, quarks and leptons, it remains the case that the mass of hadronic matter – about 99% of the mass of the visible universe – is entirely dominated by effects due to the strong interaction between gluons and quarks. Deep-inelastic scattering is the tool to study these interactions. It remains to be seen how much progress will be achieved in the future without new data from an electron–hadron collider.

VERITAS telescopes celebrate first light


For three days in late April, collaboration members gathered with invited guests and the general public to celebrate the completion of the Very Energetic Radiation Imaging Telescope Array System (VERITAS). Located near Mount Hopkins in southern Arizona, the new array joins HESS in Namibia, MAGIC in the Canary Islands and CANGAROO-III in Australia in the exploration of the gamma-ray skies at energies from 100 GeV to beyond 10 TeV. The First Light Fiesta included a one-day scientific symposium, a well-attended public lecture, public tours of the new detector and a formal inauguration ceremony followed by an outdoor banquet.

Cherenkov astronomy

VERITAS is the latest stage in the evolution of very-high-energy (VHE) gamma-ray astronomy, a field where many aspects are closer to particle physics than to traditional astronomy. The basic idea is to use the Earth’s atmosphere as the “front end” of the detector, much like a calorimeter in a collider experiment. At high energies, gamma rays initiate extensive air showers in the upper atmosphere, and relativistic particles in these showers radiate Cherenkov photons that penetrate to ground level. An imaging detector located anywhere in the light pool can use the size and pattern of hits in its camera to reconstruct the energy and direction of the shower and, by extension, the primary particle that spawned it. This is the principle of the imaging atmospheric Cherenkov telescope (IACT). The effective area of the detector is the size of the light pool, which is of the order of 100,000 m².
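
As a rough illustration of where that number comes from, the light-pool radius is set by the Cherenkov emission angle and the distance to shower maximum. The values below are generic round numbers for illustration, not VERITAS specifications:

```python
import math

# Back-of-envelope size of the Cherenkov light pool of an air shower.
CHERENKOV_ANGLE_DEG = 1.0         # roughly 1 degree at shower-max altitudes
DISTANCE_TO_SHOWER_MAX_M = 10e3   # roughly 10 km from the telescope

radius = DISTANCE_TO_SHOWER_MAX_M * math.tan(math.radians(CHERENKOV_ANGLE_DEG))
area = math.pi * radius ** 2

print(f"light-pool radius ~ {radius:.0f} m")    # ~175 m
print(f"light-pool area   ~ {area:.0f} m^2")    # ~1e5 m^2, as quoted above
```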

The main background comes from charged cosmic rays, energetic protons and light nuclei, which typically outnumber gamma rays by a factor of more than 1000. These can be rejected by using differences in the morphology of gamma-initiated and hadron-initiated showers that are manifest in the image at the camera’s focal plane. Indeed, it is the cosmic-ray rejection power afforded by multiple views of the shower that has motivated the construction of the modern arrays of IACTs.

The basic technique traces back to the early 1950s, when pioneering measurements were made using instruments built with war-surplus searchlight mirrors. Following a long learning curve, the Whipple collaboration announced the detection of the first VHE source, the Crab Nebula, in 1989. The detector used a 10 m mirror and a pixellated camera, allowing the use of imaging to improve the signal-to-noise ratio. The Crab Nebula, a strong and steady source with a spectrum extending to beyond 50 TeV, has since become the “standard candle” in the field.

During the 1990s, more imaging Cherenkov detectors were built as interest in high-energy gamma-ray astronomy intensified around the world. This was partly because the Compton Gamma-Ray Observatory had been placed in orbit and was discovering dozens of sources at giga-electron-volt energies. Notable among the second-generation detectors was HEGRA, an array of five small telescopes constructed on La Palma in the Canary Islands, which demonstrated the power of “stereo” observations.


Towards the end of the decade, plans began for a third generation of detectors, exploiting the advantages of arrays of large reflectors viewed by fine-grained cameras. The VERITAS collaboration, with members from institutes in the US, Canada, Ireland and the UK, combined the original Whipple group with new collaborators from gamma-ray, cosmic-ray and particle-physics research. Together, they proposed a detector for southern Arizona, built a prototype at the Whipple Base Camp in the summer of 2003 and obtained funding for a four-telescope array later that year.

The final array consists of four IACTs, each of which uses a 12 m-diameter mirror to focus light onto a camera comprising 499 close-packed 29 mm photomultiplier tubes (PMTs). Each mirror is tessellated, with 350 identical hexagonal facets mounted on a steel frame. PMT pulses are digitized by custom-built 500 MS/s flash analogue-to-digital converters and readout is initiated by a three-level trigger, which starts with discriminators on each channel, proceeds to pattern recognition in each camera and finally makes an array-based decision.

First and future light

Although the First Light ceremony was held in April, VERITAS has been making observations in a variety of configurations since 2003, as each telescope has been commissioned. The first stereo observations were made in 2006, when the second telescope was completed, and came in time to detect the blazar Mrk 421 in an active state. More importantly, VERITAS detected a similar source, Mrk 501, during a quiescent phase with a flux of only 0.8 gammas per minute. Such a measurement had not been possible with only one VERITAS telescope. During the 2006–2007 observing season, with two and then three telescopes, VERITAS has measured phase-dependent variable VHE flux from the micro-quasar candidate LSI +61°303, and has detected VHE gamma rays from the giant radiogalaxy M87, as well as the distant active galaxy 1ES 1218+30.4. Analyses of these and other topics are well in hand for the summer conferences, and the collaboration presented its preliminary findings at the First Light symposium.


The fourth telescope was completed in early 2007 and the array is now the most sensitive gamma-ray telescope in the northern hemisphere. It is able to make a 5 σ detection of a source with a flux level a tenth that of the Crab Nebula in under an hour (the original Whipple detection of the Crab Nebula required more than 50 hours). In the energy range from 100 GeV to 30 TeV, VERITAS’s effective area rises from around 30,000 m² to well over 100,000 m² and its energy resolution is 10–20%. Single-event angular resolution is better than 0.14°, and sources with reasonable flux will be located to better than 100 arc-seconds. The 3.5° field of view, with off-axis acceptance above 65% out to 1° from the centre, will allow sky surveys as well as the mapping of extended sources.
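
The comparison with the original Whipple detection follows from the usual background-limited scaling, in which significance grows as flux × √time, so the time to a fixed significance scales as 1/flux². A minimal sketch, taking as an assumed reference point a 5 σ detection of a 0.1-Crab source in one hour:

```python
# Background-limited scaling: significance ~ flux * sqrt(time), so the
# time needed for a fixed significance scales as 1/flux^2.
T_REF_HOURS = 1.0      # assumed reference: 5 sigma in 1 h ...
FLUX_REF_CRAB = 0.1    # ... on a source of 0.1 Crab units

def hours_to_5_sigma(flux_crab):
    """Observing time needed for a 5 sigma detection at the given flux."""
    return T_REF_HOURS * (FLUX_REF_CRAB / flux_crab) ** 2

print(hours_to_5_sigma(1.0))    # Crab-strength source: ~0.01 h
print(hours_to_5_sigma(0.05))   # a 5%-of-Crab survey target: ~4 h
```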

In contrast to collider experiments, where data on different physics topics are accumulated simultaneously with different triggers, telescopes are pointed instruments and a scheduling committee decides where they point. For the first two years of observations, VERITAS will spend half of the available hours on four Key Science Projects (KSPs). The remaining time will be given over to observations proposed by groups within the collaboration.

One KSP is a survey of part of the Milky Way visible from the northern hemisphere, which will search for new sources with fluxes greater than about 5% of the Crab Nebula. Another KSP is an indirect search for dark matter. WIMPs could cluster in gravitational wells such as nearby dwarf galaxies or globular clusters and then annihilate, producing a continuum of gamma rays that may be strong enough to be seen by VERITAS. Although less direct than a search for supersymmetric particles at an accelerator, the gamma-ray technique targets a larger range of candidate masses.


Another KSP concerns galactic sources such as pulsar-wind nebulae and supernova remnants (SNRs), while yet another deals with extragalactic sources known as active galactic nuclei (AGNs). SNRs are interesting because they could possibly be the source of most galactic cosmic rays. With the new-generation detectors, their morphologies can be resolved and this will aid in the understanding of particle acceleration models. Gamma rays from AGNs are thought to originate in their relativistic plasma jets, which are powered by accretion of host-galaxy material by a supermassive black hole. These sources are notoriously time-variable, so the plan is to conduct multi-wavelength campaigns using contemporaneous X-ray, optical and radio observations to uncover the physics processes at work in these high-energy objects.

All observations will be pre-empted when a gamma-ray burst (GRB) occurs in a visible part of the sky, at which point VERITAS will turn its attention to it. The telescope is connected to a network that relays GRB detections from spacecraft, and the array can slew to any part of the sky at a rate of 1°/s.

Later this year the Gamma-ray Large Area Space Telescope (GLAST) should join the hunt for high-energy gamma rays from a vantage point in orbit around the Earth. With its wide field of view and sensitivity in energy from 20 MeV to more than 100 GeV, it will provide complementary data and increase the scientific reach of the new ground-based observatories. After many years of design, construction and commissioning, the VERITAS collaboration anticipates a rewarding future.

Keeping antihydrogen: the ALPHA trap

Suppose, as the villain of a story, you absolutely needed to transport a macroscopic amount of antimatter, for whatever sinister purpose. How would you go about it and could you smuggle it, for example, into the Vatican catacombs? The truth is that we will probably never have a macroscopic amount of antimatter for such a scenario to ever become reality.

According to the preface to the popular novel Angels and Demons (2000), author Dan Brown was apparently inspired by the imminent commissioning of CERN’s “antimatter factory”, the Antiproton Decelerator (AD). The real-life AD has now been fully operational for about five years, and the experiments there have produced some notable physics results. One of the big stories along the way was the synthesis in 2002 of antihydrogen atoms by the ATHENA and ATRAP collaborations.

This feat was an important step towards one of the ultimate goals of present-day antimatter science: precision comparisons of the spectra of hydrogen and antihydrogen. According to the CPT theorem, these spectra should be identical. To get an idea of what precision means in this context, take a look at the website of 2005 Nobel laureate Theodor Hänsch, which has the following cryptic headline: f(1S–2S) = 2 466 061 102 474 851(34) Hz. This may look like a puzzle from Brown’s fiction, but it simply means that the frequency of one of the n = 1 to n = 2 transitions in hydrogen has been measured with an absolute precision of about 1 part in 10¹⁴. This is impressive, but where do we stand with antihydrogen?

Storing antihydrogen


Both ATHENA and ATRAP produced antihydrogen by mixing antiprotons and positrons in electromagnetic “bottles” called Penning traps. Penning traps feature strong solenoidal magnetic fields and longitudinal electrostatic wells that confine charged particles. The antiprotons come from CERN’s AD, and the positrons come from the radioactive isotope ²²Na. The whole process involves cleverly slowing, trapping and cooling both species of particles (Amoretti et al. 2002 and Gabrielse et al. 2002). But here’s the rub: when an antiproton and a positron combine, the neutral antihydrogen is no longer confined by the fields of the Penning trap, and the precious anti-atom is lost. The ATHENA experiment demonstrated antihydrogen production because it could detect the annihilation of the anti-atoms when they escaped the Penning trap volume and annihilated on the walls.

To study antihydrogen using laser spectroscopy, the anti-atoms need to survive for much longer. In the 1S–2S transition mentioned above, the excited state (2S) has a lifetime of about a seventh of a second, whereas in ATHENA an anti-atom would annihilate on the walls of the Penning trap within a few microseconds of its creation. The next-generation antihydrogen experiments therefore include the provision for trapping the neutral anti-atoms that are produced in a mixture of charged constituents.

The Antihydrogen Laser Physics Apparatus (ALPHA) collaboration has recently commissioned a new device designed to trap the neutral anti-atoms. ALPHA takes the place of ATHENA at the AD and features five of the original groups from ATHENA (Aarhus, Swansea, Tokyo, RIKEN and Rio de Janeiro) plus new contributors from Canada (TRIUMF, Calgary, UBC and Simon Fraser), the US (Berkeley and Auburn), the UK (Liverpool) and Israel (Nuclear Research Center, Negev).


Neutral atoms – or anti-atoms – can be trapped because they have a magnetic moment, which can interact with an external magnetic field. If we build a field configuration that has a minimum of magnetic-field strength, from which the field grows in all directions, some quantum states of the atom will be attracted to the field minimum. This is how hydrogen atoms are trapped for studies in Bose–Einstein condensation (BEC). The usual geometry is known as an Ioffe–Pritchard trap: a quadrupole winding and two solenoidal “mirror coils” produce the fields that provide transverse and longitudinal confinement, respectively. The same assembly also contains the electrodes that provide the axial confinement in the Penning trap for the charged antiprotons and positrons. The idea is that the antihydrogen produced in the Penning trap is “born” trapped within the Ioffe–Pritchard trap – if its kinetic energy does not exceed the depth of the trapping potential.

This is a big “if”. A ground-state hydrogen atom has a magnetic moment that gives us a trap depth of only about 0.7 K for a magnetic well depth of 1 T. The superconducting magnetic traps that we can build and squeeze into our experiments will give 1–2 T of well-depth for neutral atoms. All antihydrogen experiments to date occur in devices cooled by liquid helium at 4.2 K, but there are strong indications that the antihydrogen produced by direct mixing of antiprotons and positrons is warmer than this, with temperatures of at least hundreds of kelvin. ATRAP has devised a laser-assisted method of producing antihydrogen that may give colder atoms, but their temperature has not yet been measured. (Note that the highly excited antihydrogen atoms produced in both experiments can have significantly larger magnetic moments, thus experiencing higher trapping potentials. The trick, then, is to keep them around while they decay to the ground state.) Both groups are investigating new ways to produce colder anti-atoms, and the 2007 run at the AD (June–October) promises to be revealing.
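
The 0.7 K figure follows directly from the ground-state magnetic moment, which is close to one Bohr magneton – a quick numerical check:

```python
# Trap depth for ground-state (anti)hydrogen: U = mu_B * B, expressed in
# kelvin by dividing by Boltzmann's constant.
MU_B = 9.274e-24   # Bohr magneton, J/T
K_B = 1.381e-23    # Boltzmann constant, J/K

for b_well in (1.0, 2.0):               # achievable well depths, in tesla
    depth_k = MU_B * b_well / K_B
    print(f"B = {b_well} T  ->  trap depth ~ {depth_k:.2f} K")
# 1 T gives ~0.67 K, so anti-atoms at hundreds of kelvin simply escape.
```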

Designer magnets

A second important issue facing both collaborations is the effect on the charged particles of adding the highly asymmetric Ioffe–Pritchard field to the Penning trap. Penning traps depend on the rotational symmetry of the solenoidal field for their stability. As ALPHA collaborator Joel Fajans of Berkeley initially pointed out, the addition of transverse magnetic fields to a Penning trap can be a recipe for disaster, leading either to immediate particle loss, or to a slower, but equally fatal, loss due to diffusion. Fajans’ solution, adopted by the ALPHA collaboration, is to use a higher-order magnetic multi-pole field for the transverse confinement. A higher-order field can, in principle, provide the same well-depth as a quadrupole while generating significantly less field at the axis of the trap, where the charged particles are confined.


To construct such a magnet, the ALPHA collaboration surveyed the experts in fabrication of superconducting magnets for accelerator applications. It turns out that the Superconducting Magnet Division at the Brookhaven National Laboratory (BNL) had previously developed a technique that is almost tailor-made to our needs. The key here is to use the proper materials in the construction of the magnet. To detect antiproton annihilations, ALPHA incorporates a three-layer silicon vertex detector similar to those used in high-energy experiments. However, the annihilation products (pions) must travel through the magnets of the atom trap before reaching the silicon. Therefore, it is highly desirable to minimize the amount of material used in the magnet construction to minimize multiple scattering between the vertex and the detector. So bulky stainless-steel collars for containing the magnetic forces, as used in the Tevatron or the LHC, cannot be used.


The Brookhaven process uses composite materials to constrain the superconducting cable that forms the basis of the magnet. Using a specially developed 3D winding machine, the team at BNL was able to wind an eight-layer octupole and the mirror coils directly onto the outside of the ALPHA vacuum chamber. The mechanical strength is provided by pre-tensioned glass fibres in an epoxy substrate. Only the superconducting cable is metal.


The new ALPHA device was designed and constructed during the AD shutdown of November 2004 to July 2006 and commissioned during the physics run at the AD in July–November 2006. The Brookhaven magnets performed beautifully, demonstrating that charged antiprotons and positrons can be stored in the full octupole field for times far exceeding those necessary to synthesize antihydrogen. We even made the first preliminary attempt to produce and trap antihydrogen in the full field configuration; but we have yet to observe evidence for trapping.

Meanwhile, the ATRAP collaboration worked hard to commission a new quadrupole trap for antihydrogen and succeeded in storing clouds of antiprotons and electrons in their new device. The 2007 physics run at the AD promises to be an exciting one for antihydrogen physics. Both ALPHA and ATRAP should have operational devices that are capable – in theory – of trapping neutral antimatter for the first time.

Back to Dan Brown

So let’s look at what is possible in experiments with antimatter today, leaving the speculation to aficionados of sci-fi and NASA. If you wanted to take antimatter to the offices of your national funding agency, you might consider taking some antiprotons, since most of the mass-energy of an antihydrogen atom is in the nucleus. This might be tempting, since our charged-particle traps are certainly deeper than those for neutral matter or antimatter. ATRAP and ALPHA initially capture antiprotons in traps with depths of a few kilo-electron-volts, corresponding to tens of millions of kelvin. But density is an issue. A good charged-particle trap for cold positrons has a particle density of about 10⁹ cm⁻³. Antiproton density is much smaller, but we’ll be optimistic and use this number. So to transport a milligram of antiprotons – of the order of 10²¹ particles – you would need a trap volume of 10¹² cm³, or 10⁶ m³. That means a cube 100 m wide, which will not fit in your luggage. Incidentally, a milligram of antimatter, annihilating on matter, would yield an energy equivalent to about 50 tonnes of TNT.
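
The arithmetic is easy to verify – a minimal sketch using the article’s own optimistic numbers:

```python
# Checking the transport arithmetic for a milligram of antiprotons.
M_PROTON_KG = 1.67e-27
n_antiprotons = 1e-6 / M_PROTON_KG      # ~6e20 particles, order 1e21
density_per_cm3 = 1e9                   # optimistic trap density, cm^-3

volume_cm3 = n_antiprotons / density_per_cm3    # ~6e11 cm^3
volume_m3 = volume_cm3 * 1e-6                   # ~6e5 m^3
print(f"cube side ~ {volume_m3 ** (1 / 3):.0f} m")   # ~85 m: a ~100 m cube

# Annihilating 1 mg of antimatter on 1 mg of matter releases E = mc^2:
energy_j = 2e-6 * (3e8) ** 2                    # ~1.8e11 J
print(f"~{energy_j / 4.2e9:.0f} tonnes of TNT")      # ~43, 'about 50 tonnes'
```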

So, what about transporting some neutral antimatter? Neutral-atom traps certainly have higher densities. The first BEC result for hydrogen, at MIT, reported a density of the order of 10¹⁵ cm⁻³ for about 10⁹ atoms in the condensate. This is better, but still far less than a milligram, even if you could get the atoms from a gas bottle. The size of the trap is now down to 10⁵ cm³, which is more manageable. Note, however, that the BEC transition in this experiment was at 50 μK – far below the 4.2 K that we hope to achieve with antihydrogen. Unfortunately, getting really cold and dense atomic hydrogen requires evaporative cooling – throwing hot atoms away to cool the remaining ones in the trap. With anti-atoms, the rejects annihilate, damaging your lab before you ever send the surviving, trapped anti-atoms to their final, cataclysmic fate. And don’t forget that the total history of antiproton production here on Earth amounts to perhaps a few tens of nanograms in the past 25 years or so. Unfortunately, the antiproton production cross-section is unlikely to change.

How many anti-atoms can we trap? The Japanese-led ASACUSA experiment, using an extra stage of deceleration after the AD, can trap around a million of the 30 million decelerated antiprotons that the AD delivers every 100 s or so. Suppose we could make all of these into antihydrogen (in comparison, ATHENA achieved about 15%). The trapping efficiency for neutral antihydrogen is anybody’s guess at this point – we would be grateful for 1%. This is why the very notion of having a dense cloud of interacting antihydrogen atoms will bring a weary smile to the face of anyone working in the AD zone. Using the above figures, it would take us 10¹⁹ s – about 300 billion years – to accumulate just one milligram. One might also question whether anyone could engineer a device reliable enough to safely contain an explosive quantity of antimatter – not in my lab, thanks.
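
Again, the accumulation time can be checked step by step; the rates below are the ones quoted above, with the hoped-for 1% trapping efficiency taken at face value:

```python
# Time to accumulate a milligram of trapped antihydrogen at quoted rates.
pbars_per_cycle = 1e6      # ~1e6 of the 3e7 delivered antiprotons trapped
cycle_s = 100              # one AD cycle, roughly 100 s
conversion = 1.0           # optimistic: every antiproton -> antihydrogen
trapping_eff = 0.01        # "we would be grateful for 1%"

trapped_per_s = pbars_per_cycle * conversion * trapping_eff / cycle_s  # 100/s
seconds = 1e21 / trapped_per_s          # ~1e19 s for ~a milligram
print(f"{seconds:.0e} s = {seconds / 3.15e7:.0e} years")   # ~3e11 years
```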

Back down to the sober reality here at CERN, we would be happy just to demonstrate trapping of antihydrogen in principle. This means initially trapping just a few anti-atoms – not making a BEC or antihydrogen ice. The future of our emerging field seems to depend on this, although ASACUSA is developing a plan to do spectroscopy on antihydrogen in flight. Time will tell which approach proves more promising. Two things are certain: the real technology of antimatter production and trapping lags far behind Dan Brown’s imagination; and the Vatican is safe from us.

DØ and CDF find same new baryon

After several years of independently gathering and analyzing data at Fermilab’s Tevatron, the DØ and CDF collaborations have reported the observation of the same new baryon within days of each other.

The DØ collaboration announced the first direct observation of the strange b baryon, Ξb, in a paper submitted to Physical Review Letters on 12 June. Then, at a packed Fermilab seminar on 15 June, no sooner had Eduard De La Cruz Burelo reported on DØ’s discovery than Dmitry Litvintsev from the CDF collaboration rose to present independent evidence for the very same particle. Consisting of three quarks – d, s and b – the Ξb is the first observed particle to be formed of quarks from all three generations.


The ALEPH and DELPHI experiments at LEP had previously found indirect evidence for the Ξb in the form of an excess of events with a Ξ and a lepton of the same sign. Now the two experiments at the Tevatron have been able to reconstruct fully the specific decay Ξb⁻ → J/ΨΞ⁻, where the products decay in their turn: J/Ψ → μ⁺μ⁻, Ξ⁻ → Λ⁰π⁻ and Λ⁰ → pπ⁻. While the Λ⁰ and Ξ⁻ have decay lengths of a few centimetres, the Ξb travels only a millimetre or so before it decays.

The analysis basically consists of searching for events with muon pairs that correspond to a J/Ψ, together with a proton and two pions of the same sign. One pion and the proton must have an invariant mass consistent with the Λ⁰ and come from a vertex that is an appropriate distance from the origin of the second pion. The Λ⁰ and this pion should then have an invariant mass consistent with a Ξ⁻ and a common vertex corresponding to the Ξ⁻’s decay point. The next step is to match the origins of the Ξ⁻ candidates with those of the J/Ψs to reconstruct the decay of the Ξb.
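
In outline, the selection is a chain of invariant-mass and vertex requirements. The sketch below is purely illustrative – the mass windows, function names and vertex logic are assumptions, not the experiments’ actual cuts:

```python
import math

def inv_mass(tracks):
    """Invariant mass of a list of (E, px, py, pz) four-vectors, in GeV."""
    e, px, py, pz = (sum(t[i] for t in tracks) for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

M_LAMBDA, M_XI, M_JPSI = 1.116, 1.322, 3.097   # PDG masses, GeV

def xi_b_candidate(mu1, mu2, proton, pi_lambda, pi_xi):
    # Mass windows are arbitrary illustrative choices.
    ok_jpsi = abs(inv_mass([mu1, mu2]) - M_JPSI) < 0.10
    ok_lambda = abs(inv_mass([proton, pi_lambda]) - M_LAMBDA) < 0.01
    ok_xi = abs(inv_mass([proton, pi_lambda, pi_xi]) - M_XI) < 0.02
    # A real analysis additionally requires displaced, properly ordered
    # decay vertices for the Lambda, Xi and Xi_b candidates.
    return ok_jpsi and ok_lambda and ok_xi
```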

In an analysis of 1.3 fb⁻¹ of data collected during 2002–2006, the DØ collaboration found 19 candidate events for the Ξb, while the CDF collaboration found 17 candidates in 1.9 fb⁻¹. Both experiments measured the mass of the new particle, with consistent results. In their submitted paper, DØ gives the measured mass as 5.774 ± 0.011 (stat.) ± 0.015 (syst.) GeV/c², and quotes a significance of 5.5 σ for the observed signal. At the seminar, CDF presented a preliminary mass value of 5.7929 ± 0.0024 (stat.) ± 0.0017 (syst.) GeV/c², with a significance of 7.8 σ. DØ has also measured the cross-section times branching ratio of its observed Ξb events relative to that of the well-known Λb baryons; the measured ratio is 0.28 ± 0.13.

The discovery of the Ξb is the latest in a chain of discoveries made by CDF and DØ over the past few years. Last October, the CDF collaboration reported the observation of Σb particles, related to the Ξb. As the Tevatron delivers more and more data, the possibilities increase for the observation of even rarer processes.

Harnessing the power of the plasma wakefield

Experiments at accelerators have produced many key breakthroughs in particle physics during the past 50 years. Today, as exploration begins of physics at the “terascale”, the machines needed are extremely large, costly and time-consuming to build. In 1982, however, recognizing that this is how the field would evolve, the US Department of Energy (DOE) began a programme to develop new ideas for particle acceleration, which has now become extremely active. From the outset it was clear that developing an entirely new concept for accelerating charged particles would be a multi-disciplinary endeavour, requiring a sustained research effort of several decades to come to fruition (HEPAP 1980). Here I would like to examine just how far one advanced concept – plasma-based particle accelerators – has come after 20 or so years of research, and to indicate how it is likely to develop in the next decade.

Historical background

The first suggestions for using “collective fields” generated by a medium-energy electron beam to accelerate ions to high energies can be traced to Gersh Budker and Vladimir Veksler. However, plasma-based accelerators did not take off until John Dawson and his co-workers at the University of California, Los Angeles (UCLA) proposed the use of a space-charge disturbance, or a “wakefield”, to accelerate electrons (Joshi 2006; Joshi and Katsouleas 2003). Serendipitously, the ideas that Dawson developed between 1978 and 1985 coincided with the DOE’s initiative on advanced accelerator techniques and were supported first in the US and then in other countries.


Wakefields in a plasma can be driven by an intense laser pulse (the laser-wakefield accelerator) or an electron-beam pulse (the plasma-wakefield accelerator) that is about half a plasma wavelength long. In the former it is the radiation pressure of the laser pulse that pushes away the plasma electrons, whereas in the latter this is achieved by the space-charge force of the (highly relativistic and therefore stiff) electron beam. The plasma electrons are predominantly blown out radially, but because of the space-charge attraction of the plasma ions, they are attracted back towards the rear of the laser (or the particle) beam where they overshoot the beam axis and set up a wakefield oscillation. In a 1D picture the wake resembles a series of capacitors where the mostly transverse electric field of the laser (particle) beam has been transformed into a longitudinal electric field of the wake. Charged particles in an appropriately phased trailing pulse can then extract energy from the wakefield (figure 1a).
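
The scale of the fields available from such wakes follows from textbook cold-plasma relations. The helper below evaluates the plasma wavelength and the non-relativistic wave-breaking field, E_wb = mₑcω_p/e, for a given density; it is a generic estimate, not a description of any particular experiment:

```python
import math

# Cold-plasma relations: plasma frequency, plasma wavelength and the
# non-relativistic wave-breaking field (~96 V/m x sqrt(n [cm^-3])).
E = 1.602e-19     # electron charge, C
M_E = 9.109e-31   # electron mass, kg
EPS0 = 8.854e-12  # vacuum permittivity, F/m
C = 2.998e8       # speed of light, m/s

def wake_numbers(n_cm3):
    n_m3 = n_cm3 * 1e6
    omega_p = math.sqrt(n_m3 * E ** 2 / (EPS0 * M_E))   # rad/s
    lambda_p = 2 * math.pi * C / omega_p                # metres
    e_wb = M_E * C * omega_p / E                        # V/m
    return lambda_p, e_wb

lam, e_wb = wake_numbers(1e17)   # a density mentioned later in the article
print(f"lambda_p ~ {lam * 1e6:.0f} um, E_wb ~ {e_wb / 1e9:.0f} GV/m")
# ~106 um and ~30 GV/m; denser plasmas give shorter, stronger wakes.
```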

The mixture of physics disciplines involved meant that even proof-of-concept experiments on plasma accelerators required expertise in plasma physics, lasers and beam physics. Since such expertise resided in universities, most of the early work was carried out by small university groups. By the 1990s many teams around the world had confirmed that plasma wakes did indeed have accelerating gradients of the order of 100 GeV/m and could accelerate electrons, often trapped from the plasma itself, with a continuous energy spectrum up to 100 MeV. However, two important goals remained if the proponents of plasma-based accelerators were to provide beams of interest to the end user of this technology – the high-energy physics community. They needed to show that plasma accelerators could produce a “monoenergetic beam” of electrons and that the high-gradient acceleration could be maintained over scales of a metre. There has been significant progress towards both of these goals in the past couple of years.

The “plasma bubble” accelerator

Most laser-driven and particle-driven plasma-wakefield accelerators now operate in the “bubble regime”. Here the drive pulse is so intense that it expels all of the plasma electrons, whose subsequent trajectories enclose a “bubble” of ions. The resulting wakefield structure is 3D and the longitudinal wakefield is highly nonlinear (figure 1b). The phase velocity of the wakefield is tied to the group velocity of the drive beam, which is approximately the velocity of light, c.

In present laser-wakefield accelerator experiments, even though the phase velocity is relativistic, the accelerating particles eventually outrun the wave in a relatively short distance, of the order of a few millimetres to a centimetre – this is called the dephasing limit. While this dephasing limits the maximum energy gain, it has the benefit of generating a monoenergetic electron beam. How does this happen? First, as the radially blown-out plasma electrons rush back toward the axis, a significant number of them are trapped by the longitudinal field of the wake. Second, this self-trapping is severe enough to load the wake with so many electrons that the energy they extract reduces its amplitude, thereby turning off any further trapping – an effect known as beam loading. As the trapped electrons are accelerated their energy initially increases monotonically. However, eventually the electrons in the front dephase and begin to lose energy, while the electrons behind them continue to gain energy (phase-space rotation). This produces a quasi-monoenergetic bunch.

Research groups have now seen such monoenergetic bunches in at least half a dozen laser-wakefield accelerator experiments around the world. Recently the group at Lawrence Berkeley National Laboratory, in collaboration with Oxford University, has used a plasma discharge in a 3.3 cm long capillary tube to produce a hydrogen plasma channel. When the team guided a 40 TW laser pulse through this channel, they produced a monoenergetic beam with an energy up to 1 GeV (Leemans et al. 2006). To go to higher particle energies, laser pulses of even higher power need to be propagated over longer distances in plasma channels. In the next few years we will see if 100 TW class pulses can be guided through plasma channels 10–30 cm long, with a plasma density in the range of 10¹⁷ cm⁻³, to produce 10 GeV pulses of high beam quality.

Plasma-wakefield accelerator

There are fewer particle-beam-driven plasma acceleration experiments compared with laser-accelerator experiments. This is because there are fewer suitable beam facilities in the world compared with facilities that can deliver ultra-short laser pulses. The first beam-driven plasma-wakefield experiments were carried out at the Argonne Wakefield Accelerator Facility in the 1980s. Now however, a series of elegant experiments done at SLAC by the UCLA/USC/SLAC collaboration has mapped the physics of electron and positron beam-driven wakes and shown acceleration gradients of 40 GeV/m using electron beams with metre-scale plasmas.


In the SLAC experiments only one electron pulse was used to excite the wakefield (Blumenfeld et al. 2007). Since the energy of the drive pulse is nominally 42 GeV, both the electrons and the wake are moving at a velocity close to c, so there is no relative motion between the electrons and the wakefield. Most of the electrons in the drive pulse lose energy in exciting the wake, but some electrons in the back of the same pulse can gain energy from the wakefield as the wakefield changes its sign.

When the 42 GeV SLAC electron beam passed through a column of lithium vapour 85 cm long, the head of the beam created a fully ionized plasma and the remainder of the beam excited a strong wakefield. Figure 2a shows the energy spectrum of the beam measured after the plasma. The electrons in the bulk of the pulse that lost energy in driving the wake are mostly dispersed out of the field of view of the spectrometer camera and so are not seen in the spectrum. However, electrons in the back of the same pulse are accelerated and reach energies up to 85 GeV. The measured spectrum of the accelerated particles was in good agreement with the spectrum obtained from computer simulations of the experiment, as figure 2b shows. This is a remarkable result when one realises that while it takes the full 3 km length of the SLAC linac to accelerate electrons to 42 GeV, some of these electrons can be made to double their energy in less than a metre.

Where next?


Over the past 25 years, a relatively small number of dedicated researchers have solved many technical problems to reach a point where plasma-based accelerators are producing energy gains of interest to high-energy physics, but there are still many challenges ahead of us. The one that is often brought up is the energy spread and emittance of the accelerated electrons. Laser experiments have already shown self-trapped electron beams with an energy spread of a few per cent. In a beam-driven plasma accelerator a different plasma-electron trapping mechanism, called ionization trapping, could generate a perfectly phased sub-micrometre beam suitable for multi-stage acceleration, with an extremely low emittance and a narrow energy spread. Then there is the issue of the possible degradation of the beam quality because of collisions and, possibly, ion motion. If these are shown to be important effects then, like beam “hosing” and beam “head erosion”, they will represent a design constraint on a plasma accelerator (Blumenfeld et al. 2007).

The next key challenge for plasma-based acceleration is to realise high-gradient acceleration of positrons. Positron acceleration is different from electron acceleration in the sense that the focusing forces of positron pulse-generated wakes have nonlinear longitudinal and transverse variation. It may be worthwhile accelerating positrons in linear plasma wakes generated by an electron pulse or by wakefields induced in a hollow channel, but this needs to be demonstrated.


Once electron and positron acceleration issues, including energy spread and emittance, have been addressed, the next key development is the “staging” of two plasma-accelerator modules. Again, for high-energy physics applications each module should be designed to add of the order of 100 GeV to the energy of the accelerating beam. Given the microscopic size of the accelerating structure (the wavelength is about 100 μm), it is probably wise to minimize the number of plasma acceleration stages. In fact, in the proposed energy doubler for the SLAC linac, only a single plasma-wakefield accelerator module was deemed necessary (Lee et al. 2002). In scaling this concept to 1 TeV centre-of-mass energy, one can envision a superconducting linac producing a train of five 100 GeV drive pulses, separated by about 1 μs, but containing three times the charge of the beam pulse that is being accelerated (figure 4). The drive pulses are first separated from one another and subsequently brought back to be collinear with the accelerating beam. Each pulse drives one stage of the plasma-wakefield accelerator, from which the accelerating beam gains 100 GeV (Yakimenko and Ischebeck 2006). Both electrons and positrons can be accelerated in this manner. Alternatively, one can imagine an e⁻e⁻ or a γ–γ collider instead of an e⁺e⁻ collider, which could greatly reduce the cost of such a machine.

Key challenges

I have described the many fine accomplishments of the advanced acceleration-research community by using the example of plasma-based accelerators. How will this and other concepts for advanced acceleration progress in the next decade? Will they continue to make progress to stay on track for a prototype demonstration of a new accelerator technology in the early 21st century? The answer depends on the availability of one or more suitable experimental facilities to do the next phase of research that I have outlined.


There are several 100 TW class laser facilities in Europe, the US and Asia that should advance the laser-wakefield accelerator to give multi-giga-electron-volt beams. To go beyond this, a high repetition rate, 10 PW class laser facility is needed to demonstrate a 100 GeV prototype of a laser-driven plasma accelerator.

All advanced acceleration schemes will eventually have to face positron acceleration. How and where will experiments on high-gradient positron acceleration be done? The plasma-wakefield accelerator experiments that led to the energy doubling of 42 GeV electrons were carried out at the Final Focus Test Beam (FFTB) at SLAC, which has been recently decommissioned to make way for the Linac Coherent Light Source. SLAC has proposed a “replacement FFTB” beam line called SABER, which will provide experimenters with 30 GeV electron and positron beams. If adequately supported, SABER could become the premier facility not only for plasma-acceleration research but also for other advanced acceleration concepts.

There are about 40 groups worldwide working in plasma-based acceleration with a critical mass of trained scientists and students who are attracted to the field because it offers many chances to make unexpected discoveries. The time is now ripe to invest in appropriate facilities to take this field to the next level. It could be the critical factor that makes the difference to the future of high-energy physics in the 21st century.

Symmetry breaking on a supercomputer

The Japan Lattice QCD Collaboration has used numerical simulations to reproduce spontaneous chiral symmetry breaking (SCSB) in quantum chromodynamics (QCD). This idea underlies the widely accepted explanation for the masses of particles made from the lighter quarks, but it has not yet been proven theoretically starting from QCD. Now, using a new supercomputer and an appropriate formulation of lattice QCD, Shoji Hashimoto of KEK and colleagues have realized an exact chiral symmetry on the lattice and observed the effects of symmetry breaking.

Chiral symmetry distinguishes right-handed spinning quarks from left-handed ones, and is exact only if the quarks are massless and therefore move at the speed of light. In 1961 Yoichiro Nambu and Giovanni Jona-Lasinio proposed the idea of SCSB, inspired by the Bardeen–Cooper–Schrieffer mechanism of superconductivity, in which spin-up and spin-down electrons pair up and condense into a lower energy state. In QCD a quark and an antiquark pair up, leading to a vacuum full of condensed quark–antiquark pairs. The result is that chiral symmetry is broken, so that the quarks – and the particles they form – acquire masses.

In their simulation the group employed the overlap-fermion formulation for quarks on the lattice, proposed by Herbert Neuberger in 1998. While this is theoretically an ideal formulation, it is numerically difficult to implement, requiring more than 100 times the computer power of other fermion formulations. However, the group used the new IBM System BlueGene Solution supercomputer installed at KEK in March 2006, as well as steady improvements in numerical algorithms.


The group’s simulation included extremely light quarks, allowing the low-lying eigenvalues of the lattice Dirac operator to be computed. The results reproduce theoretical predictions (see figure), indicating that chiral symmetry breaking gives rise to light pions that behave as expected.
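
The standard link between those low-lying eigenvalues and the symmetry-breaking condensate is the Banks–Casher relation – a well-known result quoted here for context, not a claim specific to this work:

```latex
% Banks-Casher relation: the chiral condensate is fixed by the density
% rho(lambda) of Dirac-operator eigenvalues at the origin,
\langle \bar{q}\,q \rangle \;=\; -\,\pi\,\rho(0) ,
% so an accumulation of near-zero eigenvalues in a lattice simulation is
% a direct signature of spontaneous chiral symmetry breaking.
```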

Bent silicon crystal deflects 400 GeV proton beam at the Super Proton Synchrotron

A team working at CERN has detected the phenomenon of volume reflection using bent silicon crystals with a 400 GeV proton beam at the Super Proton Synchrotron. The efficiency achieved was greater than 95%, over a much wider angular acceptance than is possible with particle channelling in bent crystals. This effect could prove valuable in manipulating beams at the next generation of high-energy particle accelerators.

CCnew9_06_07

Using the ordered structure of a crystal lattice to guide high-energy particle beams is already finding applications through the effect of particle channelling. In channelling, a charged particle becomes confined in the potential well between planes of the crystal lattice; if the crystal is bent, the effect can be used to change the particle’s direction (figure 1). However, to be channelled in this way the particle must have a transverse energy smaller than the depth of the confining potential well. In a bent crystal, a particle with higher transverse energy may also change direction: it may lose some transverse energy and then become captured between the planes, or it may have its transverse momentum reversed in an elastic interaction with the potential barrier. This latter process, which changes the particle’s direction, is known as volume reflection – and it is this effect that dominates, and therefore becomes more interesting, at higher energies.
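The angular scales involved are easy to estimate. For planar channelling the critical angle is θ_c ≈ √(2U₀/pv), where U₀ is the depth of the planar potential well and pv ≈ 400 GeV for the SPS beam; volume reflection deflects particles by roughly 1.4 θ_c. A back-of-the-envelope check in Python, taking the textbook well depth of about 22.7 eV for silicon (110) – a standard value, not one quoted from this experiment:

```python
import math

# Rough estimate of channelling and volume-reflection angles for 400 GeV
# protons in silicon (110). U0 is an assumed textbook value.
U0 = 22.7        # eV, depth of the Si (110) planar potential well
pv = 400e9       # eV, momentum times velocity for 400 GeV protons (v ~ c)

theta_c = math.sqrt(2 * U0 / pv)      # critical channelling angle
theta_vr = 1.4 * theta_c              # typical volume-reflection scale

print(f"theta_c  = {theta_c*1e6:.1f} urad")   # ~10.7 urad
print(f"theta_vr = {theta_vr*1e6:.1f} urad")  # ~15 urad, cf. measured 13.9 urad
```

These microradian-scale angles explain why channelling demands such precise crystal alignment, and why the much wider angular acceptance of volume reflection is attractive.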

In the research at CERN, a team from institutes in Italy, Russia and the US mounted a silicon-strip crystal on a high-precision goniometer. A specially designed holder kept the (110) crystal planes bent at an angle of 162 μrad along the crystal’s 3 mm length in the beam direction. Various detectors mapped the trajectory of the particles along the beam line and measured their fluxes.

CCnew10_06_07

Figure 2 shows the horizontal deflection of particles, as measured 64.8 m downstream, for a range of crystal orientations. The effect of channelling is clearly visible when the crystal orientation is about 0.06 mrad, giving a deflection of 165 μrad, which corresponds to the bending angle of the crystal; here about 55% of the particles were deflected. At larger orientations this effect disappears, as the beam can no longer enter the silicon aligned with the crystal planes. Instead a smaller deflection of the beam, in the opposite direction, is seen. Here the measured deflection angle of 13.9 ± 0.2 (stat.) ± 1.5 (syst.) μrad agrees well with the calculated prediction for volume reflection of 14.5 μrad. This deflection occurs over a wide range of crystal orientations, corresponding to the bending angle of the crystal; beyond this the crystal appears amorphous and the beam no longer “sees” the (110) planes.
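The quoted angles translate into millimetre-scale displacements at the downstream detectors, which is what was actually measured: a deflection θ observed at a distance L corresponds to a transverse offset Δx ≈ θL. Using the numbers above:

```python
# Transverse offsets at the downstream measurement plane, Δx = θ · L.
L = 64.8   # m, distance from the crystal to the measurement
for name, theta in [("channelling", 165e-6), ("volume reflection", 13.9e-6)]:
    print(f"{name}: {theta * L * 1000:.1f} mm offset at {L} m")
# channelling: ~10.7 mm; volume reflection: ~0.9 mm
```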

A preliminary analysis indicates an efficiency greater than 95% for volume reflection, which occurs over a far greater range of angles than channelling. This, the team says, opens new perspectives for the manipulation of high-energy beams, for example for collimation and extraction in high-energy hadron colliders such as the LHC. A short bent crystal could serve as a “smart” deflector to aid halo collimation in a high-intensity hadron collider, or as a device to separate low-angle scattering events in diffractive physics close to the beam line.

MiniBooNE solves neutrino mystery

Phototubes at MiniBooNE

The MiniBooNE Collaboration at Fermilab has revealed its first findings. The results, announced on 11 April, resolve questions raised in the 1990s by observations of the LSND experiment at Los Alamos, which appeared to contradict the findings of other neutrino experiments. MiniBooNE now shows conclusively that the LSND results could not be due to simple neutrino oscillation.

The observations made by LSND suggested the presence of neutrino oscillation, but in a region of neutrino mass vastly different from that indicated by other experiments. Reconciling the LSND observations with the other oscillation results would have required the presence of a fourth, or “sterile”, type of neutrino, with properties different from those of the three standard neutrinos. The existence of sterile neutrinos would indicate physics beyond the Standard Model, so independent verification of the LSND results became crucial.

The MiniBooNE experiment took data for this analysis from 2002 until the end of 2005, using muon neutrinos produced by the Booster accelerator at Fermilab. The detector consists of a 250,000-gallon tank filled with ultrapure mineral oil, located about 500 m from the point at which the muon neutrinos were produced. A layer of 1280 light-sensitive photomultiplier tubes, mounted inside the tank, detects collisions between neutrinos and carbon nuclei in the oil.

Data from MiniBooNE

For this analysis the collaboration looked for electron neutrinos created by the muon neutrinos in the region indicated by the LSND observations, using a blind-experiment technique to ensure the credibility of their analysis and the results. While collecting the data, the researchers did not permit themselves access to data in the region, or “box,” where they would expect to see the same signature of oscillations as LSND. When the team opened the box and “unblinded” its data, the telltale oscillation signature was absent.
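In outline, a blind “box” analysis works as sketched below: the signal region is masked while selection cuts and background estimates are developed on the remaining data, and is opened only once the analysis is frozen. The energy bounds used for the box here are illustrative stand-ins, not MiniBooNE’s exact definition:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy event sample: reconstructed neutrino energies in GeV (simulated stand-in).
energies = rng.uniform(0.2, 3.0, size=100_000)

# Hypothetical signal "box": where an LSND-like oscillation excess would appear.
BOX_LO, BOX_HI = 0.475, 1.25          # GeV, illustrative bounds only
in_box = (energies >= BOX_LO) & (energies <= BOX_HI)

# Blind phase: cuts and background estimates are tuned using only events
# outside the box, so the analysis cannot be biased toward (or against) a signal.
sidebands = energies[~in_box]
print(f"sideband events available for tuning: {sidebands.size}")

# Unblinding: the box is opened only after the analysis is frozen.
opened = energies[in_box]
print(f"events in the box after unblinding: {opened.size}")
```

The credibility of the null result rests on this discipline: by the time the box was opened, no analysis choice could have been influenced by what lay inside it.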

Although this work has decisively ruled out the interpretation of the LSND results as being due to oscillation between two types of neutrinos, the collaboration has more work ahead. Since January 2006, the MiniBooNE experiment has been collecting data using beams of antineutrinos instead of neutrinos and expects further results from these new data.

Future studies also include a detailed analysis of an apparent discrepancy in data observed at low energy, for which the source is currently unknown, together with investigations of more exotic neutrino-oscillation models.

Galaxy centre may harbour super accelerator

Although the super-massive black hole at the centre of our galaxy seems very quiet compared with those seen as quasars in remote galaxies, it might be a giant proton accelerator more powerful than CERN’s Large Hadron Collider. This at least is what a group of theorists at the University of Arizona suggests to explain the very high-energy gamma-ray source at the centre of the Milky Way.

CCast1_04_07

The galactic centre is a complex region with a large density of both compact and diffuse energetic sources. At the very heart of the galaxy individual stars have been observed to orbit an invisible object with an inferred mass about 3 million times that of the Sun. There is almost no doubt now that this object is a super-massive black hole. It remains a mystery, however, why the output of this black hole is so dim compared with the tremendous energy released by black holes of comparable mass in active galactic nuclei.

Another puzzle is that this quiet object is apparently a strong source of gamma rays at tera-electron-volt energies. The HESS (High-Energy Stereoscopic System) array of Cherenkov telescopes in Namibia finds the position of the gamma-ray source to be coincident with that of the black hole to a relatively high accuracy. If the gamma rays – as suggested by the data – do indeed originate from the black hole rather than from the nearby supernova remnant Sagittarius A East, understanding their production mechanism is a theoretical challenge.

Direct generation of tera-electron-volt photons around the black hole seems unrealistic, so theorists have explored indirect processes. The most likely scenario is that relativistic protons are accelerated in the vicinity of the black hole, diffuse along magnetic field lines and eventually collide with ambient hydrogen nuclei. Such proton–proton scatterings would produce pions, which would rapidly decay into pairs of photons. Several recent studies attempted to explain how these protons could be accelerated to energies of up to hundreds of tera-electron-volts close to the black hole’s event horizon.

To address this question further, one group of theorists has now tried instead to determine whether such relativistic protons could be at the origin of the gamma-ray emission observed by HESS. David Ballantyne and colleagues from the University of Arizona, Los Alamos National Laboratory and the University of Adelaide model the diffusion of relativistic protons in a cube with sides 20 light-years long, centred on the Milky Way’s super-massive black hole. Using a realistic density distribution, they study the random-walk trajectories of 222,000 simulated protons as they interact with the turbulent magnetic field in the volume.
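Conceptually the calculation is a Monte Carlo random walk: each proton takes scattering-dominated steps through the volume, with a chance of a proton–proton collision at each step that grows with the local gas density. A heavily simplified sketch follows; the density profile, step length and interaction probability are invented for illustration and are not the paper’s values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simplified random-walk model of relativistic protons in a cube of side
# 20 light-years centred on the black hole. All numbers are illustrative.
HALF = 10.0          # ly, half-side of the cube
STEP = 0.1           # ly, effective scattering step in the turbulent field
TORUS_R = 3.0        # ly, outer radius of the circumnuclear torus
P0 = 1e-3            # interaction probability per step at unit density

def density(r):
    """Toy gas density: a dense torus a few light-years out, thin gas elsewhere."""
    return 50.0 if 1.0 < r < TORUS_R else 1.0

n_protons, hit_torus, escaped = 2_000, 0, 0
for _ in range(n_protons):
    pos = np.zeros(3)
    while True:
        step = rng.normal(size=3)                 # isotropic random direction
        pos += STEP * step / np.linalg.norm(step)
        if np.abs(pos).max() > HALF:              # proton leaves the volume:
            escaped += 1                          # it goes on to light up the ridge
            break
        r = np.linalg.norm(pos)
        if rng.random() < P0 * density(r):        # pp collision -> pions -> gammas
            hit_torus += r < TORUS_R
            break

print(f"interacted in torus: {hit_torus/n_protons:.2f}, "
      f"escaped the cube: {escaped/n_protons:.2f}")
```

In this picture the protons stopped in the dense torus account for the point-like source, while those that escape the central region seed the diffuse emission further out.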

Assuming the magnetic-field intensity to be proportional to the gas density, the team finds that about a third of the protons will produce gamma rays in the circumnuclear torus around the black hole. These scatterings, at only several light-years from the galactic centre, could be responsible for the point-like gamma-ray source found by HESS, but only if the initial proton spectrum is very hard, with a power-law index of 0.75. The majority of relativistic protons would travel much longer distances before interacting with interstellar gas, and could be responsible for the diffuse glow of the central galactic ridge that HESS also sees. That these two sources of tera-electron-volt photons, with very different spatial distributions, could have the same origin lends strength to the model.

Further reading

D R Ballantyne et al. 2007 Astrophys. J. 657 L13.
