INTEGRAL prepares for lift-off

The launch of the International Gamma-Ray Astrophysics Laboratory (INTEGRAL) is scheduled for 17 October. Gamma-ray astronomy explores the most energetic phenomena in the universe and addresses some of the most fundamental problems in physics and astrophysics. INTEGRAL will study hard X-ray and gamma-ray sources in the energy range of 15 keV to 10 MeV.

The launch is awaited with much anticipation. Many of the sources discovered by INTEGRAL’s predecessor, the Compton Gamma-Ray Observatory, remain unidentified. The new instruments on board INTEGRAL will provide a big leap forward in both fine spectroscopy and imaging, enabling these sources to be positioned accurately.

In particular, INTEGRAL will be used to study the radiation from compact objects such as neutron stars and black holes, and to help pinpoint the origins of gamma-ray bursts. These bursts are by far the most powerful events known to have occurred since the Big Bang itself, yet the mechanisms fuelling them are still unknown.

Indeed, some of the universe’s most energetic processes are the least understood, another example being the highly relativistic jets seen streaming from active galactic nuclei. Gamma-ray observations are essential for understanding the particle interactions and the acceleration processes taking place.

INTEGRAL will also map the diffuse gamma-ray background and, on a smaller scale, galactic structure and the elements making up the interstellar medium. The spectrometer will be used to study the production of elements by stellar nucleosynthesis by observing, among other things, the radioactive elements ejected into space by supernovae.

Of course, astronomers also hope for many unexpected discoveries, mirroring the huge leap forward made since the launch of the new X-ray satellites Chandra and XMM-Newton.

The countries participating in INTEGRAL are Switzerland, Germany, Denmark, France, Italy, Ireland, Poland and the US. The science data centre is located in Versoix, Switzerland.

Looking further into the future, the Gamma-ray Large Area Space Telescope (GLAST) is due for launch in 2006. Funded by the US, France, Germany, Italy, Japan and Sweden, its energy range will be from 10 keV to 300 GeV. Several ground-based gamma-ray facilities are also under construction.

Physicists and statisticians get technical in Durham

Durham University’s Institute for Particle Physics Phenomenology (IPPP) hosted a conference on advanced statistical techniques in particle physics on 18-22 March this year. Building on the success of workshops held at CERN and Fermilab in early 2000 covering the extraction of limits from the non-observation of sought-for signals, the meeting covered a wider range of statistical issues relevant to analysing data and extracting results in particle physics. Astroparticle physics was also included, since many of the analysis problems encountered in this emerging field are similar to those in traditional accelerator experiments.

The IPPP provided an excellent venue for both formal sessions and animated informal discussions, and the only complaint seemed to be that no time was set aside for the participants to visit Durham’s impressive cathedral. Almost 100 physicists attended the conference, joined by two professional statisticians whose presence was invaluable, both in terms of the talks that they gave, and for their incisive comments and advice.

The meeting began with a morning of introductory lectures by Fred James of CERN. Although these lectures were aimed primarily at those who felt the need to be reminded of some statistical principles before the conference proper began, they were attended and enjoyed by most of the participants. James emphasized the five separate statistical activities employed by physicists analysing data: estimating the best value of a parameter; interval estimation; hypothesis testing; goodness of fit; and decision-making. He stressed the importance of knowing which of these activities one is engaged in at any given time.

James also discussed the two different philosophies of statistics – Bayesianism and frequentism. Bayesians are prepared to ascribe a probability distribution to the different possible values of a physical parameter, such as the mass of the muon neutrino. To a frequentist this is anathema, since the mass presumably has a particular value, even if not much is currently known about it. The frequentist would therefore argue that it is meaningless to talk about the probability that it lies in a specified range. Instead, a frequentist would be prepared to use probabilities only for obtaining different experimental results, for any particular value of the parameter of interest. The frequentist restricts himself to the probability of data, given the value of the parameter, while the Bayesian also discusses the probability of parameter values, given the data. Arguments about the relative merits of the two approaches tend to be vigorous.
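
In symbols, Bayes’ theorem links the two pictures. The Bayesian posterior for a parameter θ given data x is

p(θ | x) = p(x | θ) p(θ) / p(x),

where the likelihood p(x | θ) is the ingredient both camps use, while the prior p(θ) – a probability distribution over the parameter itself – is precisely what the frequentist declines to introduce.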

Michael Goldstein, a statistician from Durham, delivered the first talk of the main conference. On the last day, he also gave his impressions of the meeting. He is a Bayesian, and described particle physics as the last bastion of out-and-out frequentism.

Durham is one of the world’s major centres for the study of parton distributions (describing the way that the momentum of a fast-moving nucleon is shared among its various constituents). Because of this, special attention was given to the statistical problems involved in analysing data to extract these distributions, and to the errors to be assigned to the results. There were talks by Robert Thorne, a phenomenologist from Cambridge, and Mandy Cooper-Sarkar, an Oxford experimentalist working on DESY’s ZEUS experiment. This was followed by a full-day parallel session in which the parton experts continued their detailed discussions. Finally there was an evening gathering over wine and cheese, at which Thorne summarized the various approaches adopted, including the different methods the analyses used for incorporating systematic errors.

Confidence limits

Confidence limits – the subject of the earlier meetings in the series – of course came up again. Alex Read of Oslo University had some beautiful comparisons of the CLs method and the Feldman-Cousins unified technique. CLs is the method used by the LEP experiments at CERN to set exclusion limits on the mass of the postulated Higgs particle that might have been produced at LEP if it had been light enough. The special feature of CLs is that it provides protection against the possibility, arising from a statistical fluctuation of the background, of excluding Higgs masses that are so large that the experiments would be insensitive to them. The main features of the Feldman-Cousins technique are that it reduces the possibility in the standard frequentist approach of ending up with an interval of zero length for the parameter of interest, and it provides a smooth transition between an upper limit on a production rate when the supposed signal is absent or weak, to a two-sided range when it is stronger. Read’s conclusion was that CLs is preferable for exclusion regions, and Feldman-Cousins is better for estimating two-sided intervals.
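
To illustrate (schematically) how the CLs construction provides this protection, consider the simplest possible case: a one-bin counting experiment in which the event count itself is the test statistic. The few lines of Python below assume scipy is available; the real LEP Higgs analyses are of course far more elaborate.

from scipy.stats import poisson

def cls(n_obs, s, b):
    # CL_sb: probability of a result as signal-poor as the one observed, given signal s plus background b
    cl_sb = poisson.cdf(n_obs, s + b)
    # CL_b: the same probability under the background-only hypothesis
    cl_b = poisson.cdf(n_obs, b)
    return cl_sb / cl_b      # a signal s is excluded at 95% confidence if CLs < 0.05

# With b = 6 expected background events and a downward fluctuation to n_obs = 0,
# CL_sb alone would "exclude" arbitrarily small signals, to which the experiment has no sensitivity;
# CLs = exp(-s) instead only excludes signals above about three events.
print(cls(0, 0.0, 6.0), cls(0, 3.0, 6.0))   # -> 1.0 and roughly 0.05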

Staying with limits, Rajendran Raja of Fermilab thought it was important not only to set the magnitude of a limit from a given experiment, but also to give an idea of what the uncertainty in the limit was. Dean Karlen of Ottawa’s Carleton University suggested that as well as specifying the frequentist confidence level at which an interval or limit was calculated, one should also inform the reader of its Bayesian credibility level. This again was an attempt to downplay the emphasis on very small frequentist intervals, which can occur when expected background rates are larger than the observed rate. Another contribution by Carlo Giunti of Turin emphasized that it may be worth using a biased method for obtaining limits, as this could give better power for excluding alternative parameter values.

Discovery significance

Hopefully, not all experiments searching for new effects will obtain null results, and simply set limits. When an effect appears, it is important to assess its significance. This was the subject of a talk by Pekka Sinervo of Toronto, which included several examples from the recent past, such as the discoveries of the top quark and of oscillations in neutral mesons containing bottom quarks.

Several talks dealt with the subject of systematic effects. Roger Barlow of Manchester gave a general review. One particularly tricky subject is how to incorporate systematic effects in the calculation of limits. For example, in an experiment with no expected background and with detection efficiency ε, the 95% confidence level upper limit on the signal rate is 3.0/ε, from both frequentist and Bayesian approaches. However, what happens when the efficiency has an uncertainty? From an ideological point of view, it is desirable to use the same type of method (Bayesian or frequentist) both for the incorporation of the systematic and for the evaluation of the limit. Some interesting problems with the Bayesian approach were discussed by Luc Demortier of Rockefeller University, while Giovanni Punzi and Giovanni Signorelli, both of Pisa, spoke about features of the frequentist method, with illustrations from a neutrino oscillation experiment.
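
A minimal numerical sketch of the point (an illustrative calculation, not taken from any of the talks): with zero observed events and no background, the probability of seeing nothing is exp(-sε), so the 95% limit solves exp(-sε) = 0.05, giving s = 3.0/ε. One simple, Bayesian-flavoured way of folding in an efficiency uncertainty is to average that zero-event probability over a distribution for ε before solving.

import numpy as np

def upper_limit(eps_mean, eps_sigma=0.0, cl=0.95):
    # Average the zero-event probability exp(-s*eps) over a (truncated) Gaussian efficiency,
    # then return the smallest signal rate s whose averaged probability falls below 1 - cl.
    if eps_sigma > 0:
        eps = np.linspace(max(eps_mean - 5 * eps_sigma, 1e-6), eps_mean + 5 * eps_sigma, 201)
        w = np.exp(-0.5 * ((eps - eps_mean) / eps_sigma) ** 2)
        w /= w.sum()
    else:
        eps, w = np.array([eps_mean]), np.array([1.0])
    s_grid = np.linspace(0.0, 20.0 / eps_mean, 4000)
    p_zero = np.exp(-np.outer(s_grid, eps)) @ w
    return s_grid[p_zero <= 1.0 - cl][0]

print(upper_limit(0.5))         # about 6.0, i.e. 3.0/eps for a perfectly known eps = 0.5
print(upper_limit(0.5, 0.05))   # somewhat larger once a 10% relative uncertainty is folded in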

The question of how to separate signal from background was another popular topic. It was reviewed by Harrison Prosper of Florida State University, who is active in Fermilab’s Run 2 advanced analysis group, where the topic is actively studied. His talk was complemented by that of Rudy Bock of CERN, who compared the performance of various techniques used for separating cosmic-ray air showers initiated by photons or by hadrons. The relatively new method of support vector machines was described by Tony Vaiciulis of Rochester, while Sherry Towers of Stony Brook made the point that including useless variables could well have the effect of degrading the performance of the signal-to-background separation technique. She also spoke about using kernel methods for turning multidimensional Monte Carlo distributions into probability density estimates. Monte Carlo methods were the subject of two talks by CERN’s Niels Kjaer. With the need for large Monte Carlo samples, it is important to understand how to use the generators efficiently.
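
As an entirely schematic illustration of the kernel idea mentioned above: two toy Monte Carlo samples are turned into smooth probability densities with a Gaussian kernel estimator, and their ratio serves as a signal/background discriminant. The samples and names below are invented for the sketch, and scipy is assumed to be available.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
signal_mc = rng.normal(1.0, 0.5, (2, 5000))      # toy two-variable signal Monte Carlo
background_mc = rng.normal(0.0, 1.0, (2, 5000))  # toy two-variable background Monte Carlo

pdf_s = gaussian_kde(signal_mc)                  # smooth density estimates built from the samples
pdf_b = gaussian_kde(background_mc)

def discriminant(x):
    # likelihood ratio p_s / (p_s + p_b); values near 1 are signal-like
    ps, pb = pdf_s(x), pdf_b(x)
    return ps / (ps + pb)

print(discriminant(np.array([[1.0], [1.0]])))    # a signal-like point
print(discriminant(np.array([[-1.0], [-1.0]])))  # a background-like point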

In searching for new or rare effects, it is important not to use the data to tune the analysis procedure so as to maximize (or minimize) the sought-for effect. One way of avoiding this is to perform a blind analysis. Paul Harrison of Queen Mary, University of London, described the psychological, sociological and physics issues in using this increasingly popular procedure.

Whatever the method used to extract a parameter from the data, it is important to check whether the data is consistent with the assumed model. When there is enough data, the well known χ² method can be employed, but this is less useful with sparse data, especially in many dimensions. James of CERN and Berkan Aslan of Siegen both spoke about this topic, with the latter showing interesting comparisons of the performance of a variety of methods.

A mundane but very important topic is the understanding of the alignment of the different components of one’s detector. Hamburg’s Volker Blobel explained how this can be done using real tracks in the detector, without the need to invert enormous matrices. In a separate talk, also involving the clever use of matrices, he described his method for unfolding the smearing effects of a detector. This enables one to reconstruct a good approximation to the true distribution from the observed smeared one, without encountering numerical instabilities. Glen Cowan of Royal Holloway, University of London, gave a more general review of deconvolution techniques.

Although textbook statistics can give neat solutions to data analysis problems, real-life situations often involve many complications and require semi-arbitrary decisions. There were several contributions on work at the pit-face. Chris Parkes of Glasgow described how the LEP experiments combined information on W bosons. Their mass is fairly precisely determined, and the well known best linear unbiased estimate (BLUE) method has been used for combining the different results. Nevertheless, problems do arise from different experiments making different assumptions, for example about systematic effects. These difficulties multiply for determining such quantities as triple gauge couplings where the errors are large, where likelihood functions are non-Gaussian and can have more than one maximum, and where the experiments use different analysis procedures. Here again, sociology plays an important role.

Nigel Smith of the UK Rutherford Appleton Laboratory and Daniel Tovey of Sheffield University presented interesting contributions on dark matter searches. Fabrizio Parodi of Genoa talked about Bs oscillations, and a whole variety of results from the Belle experiment at Japan’s KEK laboratory were also discussed. An interesting point from the last talk, by Bruce Yabsley of Virginia Tech, related to the determination of two parameters (Aππ and Sππ) in an analysis of B decays to π⁺π⁻, to look for CP violation. Physically the parameters are forced to lie within the unit circle. In the absence of CP violation, they are both zero, while if there is no direct CP violation, Aππ is zero. The Belle estimate (figure 1) lies outside the unit circle. This sounds like a case for taking the physical region into account from first principles (as in Feldman-Cousins), but there are complicated details that make this difficult to implement.

Many discussions started at the conference have continued, and some issues will undoubtedly resurface at the next meeting, scheduled to be held at SLAC in California, US, on 8-12 September 2003.

University hosts premier rare-isotope facility

The National Superconducting Cyclotron Laboratory (NSCL) is a rare-isotope facility at Michigan State University (MSU) in the US. Since it started operating as a user facility in the late 1980s, the NSCL has established a successful history of research in nuclear physics. The facility has been extended and upgraded several times including a major upgrade from 1999 to 2001. This has dramatically improved the number and intensity of rare-isotope beams that the facility can provide, and has made the NSCL the premier rare-isotope user facility in the US.

The upgraded NSCL facility is based on two coupled superconducting cyclotrons and can produce intense energetic beams of primary heavy ions from hydrogen to uranium. A high-acceptance fragment separator allows efficient production and the in-flight separation of a broad range of secondary rare-isotope beams produced by projectile fragmentation or fission reactions. These beams are sent to various experimental devices that serve a community of researchers from the US and abroad.

Research at the NSCL is devoted to experimental and theoretical nuclear physics, nuclear astrophysics, accelerator physics and the development of related instrumentation. A key activity is investigating the properties of rare isotopes that are far away from stability. This includes measuring the structural and decay properties of nuclei near the drip lines; determining astrophysically important data on neutron- and proton-rich nuclei that participate in the r and rp (rapid neutron and proton capture) nucleosynthesis processes; and making precise mass measurements. Furthermore, beams at the NSCL allow the creation of nuclei at temperatures and densities commensurate with the liquid-gas phase transition in the phase diagram of nuclear matter. Here, the NSCL’s research addresses questions concerning the thermodynamics of strongly interacting quantum many-body systems, and especially the determination of the equation of state of asymmetric nuclear matter.

Technical innovation

The NSCL has a history of technical innovation. It has been a pioneer in applying superconducting magnet technology to the design and construction of cyclotrons, spectrographs and beam-transport systems. The NSCL designed, built and commissioned the first superconducting cyclotron for cancer therapy by neutron irradiation at Harper Hospital in Detroit. Today, the NSCL’s conceptual design for a superconducting 250 MeV proton cyclotron is the basis of the proton-therapy facility at the Paul Scherrer Institut, which is being built by ACCEL with technical assistance from the NSCL. Perhaps the most important new initiative is the development of superconducting radiofrequency structures for use in linear accelerators. These structures may be used in the next-generation rare-isotope beam facility, the Rare Isotope Accelerator (RIA). RIA is now the highest new-construction priority of the US nuclear-physics community. The NSCL played a key role in developing the RIA concept, and MSU is naturally one of the prime candidate sites.

Rare-isotope production at the NSCL starts with a set of electron cyclotron resonance (ECR) ion sources (figure 1) that can ionize essentially any chemical element. Multiply-charged ions are injected into the first of NSCL’s two cyclotrons, the K500. Here the ions are accelerated to an energy of about 10 MeV per nucleon and sent to the K1200 cyclotron. Inside the K1200 the ions pass a stripper foil that removes most – and, in the case of light elements, all – electrons. In this way maximum beam energies of 200 MeV per nucleon for lighter elements and 90 MeV per nucleon for uranium are achieved after final acceleration. These energetic primary beams can be used directly in experiments, or they can be converted into a broad range of radioactive ions by impinging them onto a thin target, where the choice of material and thickness optimizes the production of the desired isotopes.

To become useful for experiments the rare isotopes produced by projectile fragmentation or fission reactions have to be mass separated. This happens in the A1900 fragment separator/beam analysis system. The techniques used in the A1900 are so sensitive that one nucleus out of 10¹⁸ can be selected and studied. The A1900 can also be used as a monochromator to define the energy and emittance of the primary beam. Downstream from the A1900 is a beam switchyard that allows all the radioactive ion beams to be transported to any experimental station at the NSCL.

cernprofile3_10-02

The facility’s scientific reach is determined by the predicted intensities after separation in flight with the A1900 (figure 2). Far away from the valley of stability such predictions can be very uncertain. Nevertheless, the figure gives an idea of what will be in reach. For orientation, the approximate paths of nucleosynthesis via the astrophysical rapid proton (rp) and rapid neutron (r) processes are indicated. With beams from the Coupled Cyclotron Facility (CCF) it may be possible to extend our knowledge of the neutron drip line (nuclei with a large number of neutrons) from oxygen to silicon or even sulphur. It will be possible to study a large number of rp-process nuclei and r-process nuclei up to A = 140, and at the same time to explore how nuclear structure evolves as one approaches the limits of nuclear stability.

Experimental devices

The largest experimental device connected to the NSCL beam-line system is the S800 (figure 3). This is a superconducting magnetic spectrograph well matched to reaction studies with rare isotopes. It offers high energy resolution (E/ΔE = 10⁴), large momentum acceptance (Δp/p = 5%) and a large solid angle (ΔΩ = 20 msr). The S800 enables various types of reaction studies to be performed with high resolution and high sensitivity. It is a key instrument for a large part of the experimental programme carried out at the NSCL.

The 4π array is a low-threshold “logarithmic” 4π detector that has been, and will continue to be, used for intermediate-energy heavy-ion collision experiments. It consists of 32 position-sensitive, parallel-plate, multiwire detectors, backed by segmented Bragg ionization chambers, backed in turn by an array of 170 so-called phoswich detectors, each consisting of a fast-slow plastic scintillator combination. Upgraded with higher-granularity forward arrays it will allow studies of collisions of very heavy systems at the higher energies now available (for example, Au + Au at E/A ≈ 40 MeV).

In addition to this fixed major equipment, a number of special-purpose detector arrays exist for the coincident detection of γ-rays, neutrons and charged particles. An important new instrument is a set of 18 segmented germanium detectors. This system has been designed for the efficient high-resolution detection of γ-rays emitted in flight from fast rare isotopes, but can also be used for online decay studies. For example, it will be used in intermediate-energy Coulomb-excitation studies, which investigate the collective properties of very-neutron-rich isotopes.

The lab has a pair of large area neutron detectors, or “neutron walls”, and a third, MONA, is being constructed by a collaboration between several universities and undergraduate colleges. MONA is a modular system with a total area of 4 m² and an efficiency of about 70% for neutron energies above 50 MeV. It will be used with a large-gap superconducting “sweeper” magnet (4 Tm) that is being constructed at Florida State University’s High Magnetic Field Laboratory. The magnet will serve as a high-acceptance magnetic spectrometer and, combined with neutron detectors, will be used for neutron time-of-flight spectroscopy.

A number of experiments require high-quality low-energy beams, such as those available at CERN’s ISOLDE facility. This type of beam is not usually available at projectile-fragmentation facilities, which on the other hand do provide beams of all elements and have advantages in the production of the most short-lived isotopes. Providing such low-energy beams and making them available for experiments is the task of the Low Energy Beam and Ion Trap (LEBIT) facility (figure 4).

The key element of the LEBIT facility is a high-pressure (up to 1 bar) helium gas cell for slowing down and collecting energetic rare isotopes from the A1900 fragment separator. Ions slowed down in the gas cell remain singly charged and can be extracted with high efficiency. Electric fields guide the ions from the stopping volume through an exit nozzle into a radiofrequency quadrupole (RFQ) system that directs the ions through a differential pumping system. The continuous ion beam is then transported into a linear RFQ ion trap, which acts as a beam accumulator, cooler and buncher. The energy of the extracted ion bunches can be varied between 5 and 60 keV by means of a pulsed drift tube to satisfy the requirements of a variety of experiments. The system is in an advanced stage of construction and gas-stopping tests are under way. Experience gained in gas stopping and beam manipulation in LEBIT will provide valuable insight for the design and construction of a similar facility at RIA.

The first experiment to be set up is a Penning-trap system. A superconducting solenoid magnet with a field of 9.4 T will allow the extension of high-precision Penning-trap mass measurements to isotopes with half-lives as short as 10 ms. Later there are plans to add laser spectroscopy for isotopic-shift and nuclear-moment measurements, and atom-trap experiments for fundamental tests of weak interactions. Finally, beams from the LEBIT facility are well suited to post-acceleration to medium energies, as featured at CERN’s REX-ISOLDE. A design study for such a scenario has already been carried out.

PAMELA set to take particle physics into orbit

The PAMELA experiment (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics) will lift off aboard a Soyuz TM2 rocket in 2003, hitching a ride on the Russian Resurs-DK1 earth-observation satellite. While one end of the satellite will look down towards the earth, PAMELA will enjoy a clear view into space from the other. Data taking is expected to last for three years, and will result in better understanding of the antimatter component of the cosmic radiation.

The primary objective of PAMELA is to measure the energy spectrum of antiprotons and positrons in the cosmic radiation. At least 10⁵ positrons and 10⁴ antiprotons are expected per year. All existing antiproton measurements originate from balloon-borne experiments operating at altitudes around 40 km for approximately 24 h. There is still a residual amount of the earth’s atmosphere above the detecting apparatus at this altitude, with which cosmic rays can interact. A satellite-borne experiment benefits from a lack of atmospheric interactions and a much longer data-taking time. In the figures above, the PAMELA expectation after three years of data taking is shown. These data sets exceed what is available today by several orders of magnitude, and will allow significant comparisons between competing models of antimatter production in our galaxy. Distortions to the energy spectra could originate from exotic sources, such as the annihilation of supersymmetric neutralino particles – candidates for the dark matter in the universe. Sensitivity to the low-energy part of the spectrum is a unique capability of PAMELA, and arises because the semi-polar Resurs-DK1 orbit overcomes the earth’s geomagnetic cut-off. Another PAMELA goal is to measure the antihelium to helium ratio with a sensitivity of the order of 10⁻⁸ – a 50-fold improvement on the current limits. An observation of antihelium would be a significant discovery, as it would be the first sign of primordial antimatter left over from the Big Bang.

WiZard collaboration

PAMELA is being constructed by the WiZard collaboration, which was originally formed around Robert Golden, who first observed antiprotons in space. There are now 14 institutions involved. Italian INFN groups in Bari, Florence, Frascati, Naples, Rome and Trieste, and groups from CNR, Florence and the Moscow Engineering and Physics Institute form the core. They are joined by groups from The Royal Institute of Technology (KTH) in Sweden, Siegen University in Germany, Russian groups from the Lebedev Institute, Moscow, and the Ioffe Institute, St Petersburg, and American groups from New Mexico State University and NASA’s Goddard Spaceflight Centre.

The WiZard collaboration has a long history of performing cosmic-ray experiments. It ran six balloon flights between 1989 and 1998 using instrumentation novel for space, such as multisense drift chambers in the strong magnetic field of a superconducting magnet; imaging streamer tubes and silicon-tungsten calorimeters; a transition radiation detector (TRD); and solid and gas ring-imaging Cerenkov detectors. Many important results were obtained during studies of antiprotons, positrons and light nuclei. In particular, the last balloon flight experiment of the WiZard collaboration, CAPRICE98, was the first to mass-resolve high-energy (above 20 GeV) antiprotons in cosmic rays. A subset of the collaboration has also built several small space experiments: the NINA-1 and NINA-2 satellite experiments (silicon detector systems used to investigate cosmic-ray nuclei); and SILEYE-1, -2 and -3 (silicon sensor telescopes used to study the radiation environment inside the MIR and the ISS space stations). These experiments were also used to study the nature of particles producing the light flashes seen by astronauts.

PAMELA is built around a 0.48 T permanent magnet spectrometer tracker equipped with double-sided silicon detectors, which will be used to measure the sign, absolute value of charge and momentum of particles. The tracker is surrounded by a scintillator veto shield (anticounters) that will reject particles that do not pass cleanly through the acceptance of the tracker. Above the tracker is a TRD based around proportional straw tubes and carbon fibre radiators. This will allow electron-hadron separation through threshold velocity measurements. Mounted below the tracker is a very compact and deep silicon-tungsten calorimeter, to measure the energies of incident electrons and allow topological discrimination between electromagnetic and hadronic showers (or non-interacting particles). A scintillator telescope system will provide the primary experimental trigger and time-of-flight particle identification. A scintillator mounted beneath the calorimeter will provide an additional trigger for high-energy electrons. This is followed by a neutron detection system (³He-filled tubes within a polyethylene moderator) for the selection of very high-energy electrons and positrons (up to 3 TeV), which will shower in the calorimeter but will not necessarily pass through the spectrometer.

Final versions of the anticounters, calorimeter and tracker and a final prototype of the TRD were successfully tested with proton and electron beams at CERN in June. Integration and final tests of the other subdetectors continue in Rome and will be completed by the end of the year. PAMELA will then be shipped to Samara in Russia for integration with the Resurs-DK1 satellite. After this, the satellite will move to the Baikonur cosmodrome in Kazakhstan for launch preparations.

Accelerators for nano- and biosciences

From a historical perspective, large particle-accelerator facilities entered the scientific arena as grand instruments that enabled us to understand the fundamental workings at the heart of matter. Ever since Ernest Orlando Lawrence’s invention of the cyclotron in 1930, we have witnessed the scientists’ obsession with increasingly higher-energy particle beams to probe deeper into the nucleus, the nucleons and the elementary particles to understand the fundamental forces and processes at work. The result has been a scientific culture and sociology that defined the so-called “big science”, with numerous spin-off benefits to society at large (such as large international collaborations and information networking via the creation of the World Wide Web). On the flip side, however, the economics, sociology and politics relevant to the envisioned next big accelerator facility addressing the frontier of particle physics are daunting to the point of paralyzing the field and driving its artisans – especially accelerator scientists – to extinction.

The value of accelerator science and technology is not limited to high-energy physics; witness the flourishing of accelerator-based synchrotron radiation sources worldwide that serve a much broader scientific community. While the energy of speed-of-light particles is an important parameter that determines the resolution with which we can see things in the microscopic world (whether using the particles directly or using the synchrotron radiation they generate when bent in a magnetic field), the intrinsic value of a particle beam goes far beyond its mere energy. It provides bursts of energy in suitably packaged pulses in space and time that have critical applications in today’s emerging sciences of the nano- and bioworld. Such critical characteristics as the brightness, time structure, spatial dimensions, polarization, coherence, simultaneous and concurrent use of synchronized multiple light and particle beams are all important factors that can be tailored to address many relevant fundamental scientific issues of our times. And a careful examination shows that indeed it is possible to conceive affordable mezzo-scale unique accelerator facilities that can produce creative space-time patterns of particle and/or wave energy to address specific issues that cannot be tackled in any other way.

Different worlds

What are some of the critical issues in nano- and biosciences today? The nanoworld is concerned with designing microscopic structures on a nanometre scale atom-by-atom and understanding the properties of these intermediate structures – made naturally or in the laboratory – which exhibit classical and quantum behaviour in a special and peculiar way. The relevant space dimensions are micrometres to nanometres, and the timescales for fundamental processes in the nanoworld range from picoseconds for vibrational electronic phenomena to femtoseconds for collective surface atomic nucleus motion and attoseconds for truly quantum single atomic phenomena. The bioworld is concerned with larger biomolecules where the energy transfer and topological deformations within the longer, functional biomolecules, such as proteins, demand suitable bursts of energy to initiate the energy-transfer mechanisms and ultrashort pulses to probe and image the molecules while still in a functional state, before being destroyed by the pulsed energy of the beam. An electron beam of up to a few giga-electron-volts in particle energy can be manipulated to produce pulses of electromagnetic waves from a picosecond to an attosecond in duration, and focused from a few micrometres to a few nanometres with wavelengths that can probe atomic motion. Such modest practical accelerator facilities are clearly possible for accelerator science and technology today. As an example I can only point to the exciting possibilities now opening up with various energy-recovered linear-accelerator concepts, short wavelength, high-power and high-brightness free-electron lasers, and various ultrashort/ultrafast pulse-production and slicing techniques actively pursued at major laboratories (such as DESY, PSI, Daresbury, Spring-8, Jefferson Lab, Cornell, Berkeley, SLAC, BNL and ANL).

Today’s scientists working with light and speed-of-light particles grapple with classical power electromagnetics; microwave superconductivity; surface physics of metals and dielectrics; laser physics and technology; atomic physics of semiconductors; atomic and surface phenomena under extreme high fields (1-100 GV/m); precise detection of near-field and far-field radiation; nonlinear phenomena; studies of controlled high-density plasma waves; and the whole spectrum of space-time phenomena ranging from milliseconds to attoseconds and centimetres to nanometres. The transition from electronics (GHz) to optronics (THz) to photonics (PHz) is visible on the horizon – it is no longer only the domain of traditional physics and/or electrical engineering. We need to recognize this situation and seek to engage experts from all these disciplines to make a difference in the world. Let’s extend our vision outwards from the world at femtometres to embrace the nano- and bioworld, where we have much to contribute. The accelerator community should take an active role in understanding the needs of the nano- and biosciences, and in educating the scientific community, government agencies and society via proper articulation of the tremendous hidden potential for bringing these capabilities to fruition.

Meeting heralds global GW detector network

As the curtain is being raised on today’s generation of gravitational wave (GW) detectors, the individual project teams met to consider future detectors and the goal of running the global ensemble of instruments as a single array. For this purpose, the annual Aspen Meeting on Advanced Gravitational Wave Detectors descended from its traditional Colorado, US, venue and was held at La Biodola on the Island of Elba, close to the Virgo interferometer site. The Gravitational Wave International Committee sponsors the Aspen meetings, which are usually organized by the US LIGO observatory. On this occasion the meeting moved to Europe to acknowledge the growing international collaboration between all the individual efforts. The meeting drew about 100 scientists from all continents, including representatives from the Germany-UK GEO project, and strong participation from Japan and Australia.

The theme of the meeting was operating the interferometers as a single machine, echoing an idea from Adalberto Giazotto, one of the fathers of the GW interferometric detector field. The interferometer groups are looking to the more mature GW bar-detector community, which has already coordinated its data-taking and observations. Bar-community participants at the meeting offered concrete examples of how to build a global collaboration that will include interferometers and bars. No less important was the participation of the teams developing the nascent space-based interferometric detectors for ultra-low-frequency gravitational waves.

Although the main emphasis was on future developments, the meeting took place as all of the world’s present-generation GW interferometers are reaching maturity. This was an occasion to review the extraordinary recent advances in interferometer commissioning by LIGO, GEO, Virgo and Japan’s TAMA, and early coincident operation by LIGO, GEO and the Allegro bar in Louisiana. The rapid commissioning and initial data-taking of the new interferometers lead to the challenge of effectively networking them in a single global data acquisition and analysis system.

TAMA, so far the groundbreaker, reported on a coincidence run between itself and LISM, a pilot underground interferometer in the Kamioka mine, the future site of the projected LCGT kilometre-class cryogenic interferometer. TAMA also announced coincidence data collection with LIGO and GEO that will occur this summer as those two instruments perform their first scientific observation periods. Japanese teams also presented impressive advances on cryogenic techniques for third-generation GW interferometers.

LIGO reported on its successful commissioning and rapid progression in sensitivity. The three LIGO interferometers, already exercised as an integrated network, are now, together with GEO, at the TAMA sensitivity. This will make the forthcoming GEO-LIGO-TAMA common data-taking even more interesting. GEO presented its advances and reported on the installation of futuristic all-fused silica and low-thermal-noise mirror suspensions.

Virgo, just finishing the construction of its 3 km vacuum arms, reported on the successes of its Central Interferometer with its advanced low-frequency seismic attenuation chains and its hierarchical mirror control system. Virgo plans to commission its long arms as early as the end of this year. Once Virgo operates as a complete detector it will lead in sensitivity below 50 Hz. Virgo and LIGO are already exchanging environmental data and preparing to integrate the Virgo data in the global network as soon as the complete Virgo is operational.

All the groups are gearing up to handle the data now starting to flow from the interferometers, and all are attaching growing importance to simulations for understanding the instruments and the data. Several challenges ahead were discussed, ranging from the development of advanced suspension and seismic isolation systems, and sensors for Newtonian noise estimation, to theoretical thermal-noise issues. Everybody left Elba feeling they had participated in an extremely productive event.

DESY turns storage ring into light source

Hamburg’s DESY laboratory is to convert its PETRA storage ring into a third-generation synchrotron radiation source following a € 1.4 million grant from the German Federal Ministry of Education and Research to cover the design phase. A formal proposal will be submitted in 2004, allowing reconstruction to begin in January 2007. The new light source, PETRA III, will run at 6 GeV with a current of more than 100 mA. DESY expects the 13-15 planned undulator beam lines to provide the highest brilliance of any storage ring-based source at start-up. PETRA was used for particle physics research from 1978 to 1986. Since then, as PETRA II, it has formed part of the injector chain for DESY’s HERA collider.

Polarized photocathodes make the grade

A polarized electron source for future electron-positron linear colliders must have at least 80% polarization and high operational efficiency. The source must also meet the collider pulse profile requirements (charge, charge distribution and repetition rate). Recent results from the Stanford Linear Accelerator Center (SLAC) have demonstrated for the first time that the profile required for a high-polarization beam can be produced.

Since the introduction in 1978 of semiconductor photocathodes for accelerator applications, there has been significant progress in improving their performance. Currently, all polarized electron sources used for accelerated beams share several common design features – the use of negative-electron-affinity semiconductor photocathodes excited by a laser matched to the semiconductor band gap, the cathode biased at between -60 and -120 kV DC, and a carefully designed vacuum system. While the earliest polarizations achieved were much less than 50%, several accelerator centres, including Jefferson Lab, MIT Bates and SLAC in the US, along with Bonn and Mainz in Germany, now routinely achieve polarizations of around 80%. Source efficiencies have shown similar dramatic improvement. The Stanford Linear Collider (SLC) achieved more than 95% overall availability of the polarized beam across nearly seven years of continuous operation. These achievements clearly point to the viability of polarized beams for future colliders.

Peak currents of up to 10 A were routinely produced in 1991 in the SLC Gun Test Laboratory by using the 2 ns pulse from a doubled Nd:YAG laser to fully illuminate the 14 mm diameter active area of a GaAs photocathode. However, when the photocathode gun was moved to the linac injector, where a high-peak-energy pulsed laser was available that could be tuned to the band-gap energy as required for high polarization, the current extracted from the cathode was found to saturate at much less than 5 A unless the cathode quantum efficiency (QE) was very high.

The SLC required a pulse structure of about 8 nC in each of two bunches separated by some 60 ns at a maximum repetition rate of 120 Hz. These requirements were met by doubling the cathode area and by using a vacuum load-lock to ensure a high QE when installing newly activated cathodes. In contrast, designs for the Next Linear Collider and Japan Linear Collider, being pursued by SLAC and the KEK laboratory in Japan, call for a train of 190 microbunches separated by 1.4 ns, with each bunch having a 2.2 nC charge at the source, for a total of 420 nC for the 266 ns macropulse. This is about 25 times the SLC maximum charge. Both the macrobunch and microbunch current requirements for CERN’s CLIC concept are somewhat higher, while the 337 ns spacing between microbunches ensures that charge will not be a limitation for the TESLA collider being spearheaded by Germany’s DESY laboratory.
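
(For reference, the quoted NLC/JLC totals follow directly from the bunch-train numbers: 190 microbunches × 2.2 nC ≈ 420 nC of charge, delivered within a train spanning roughly 190 × 1.4 ns ≈ 266 ns.)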

The limitation in peak current density, which has become known as the surface charge limit (SCL), proved difficult to overcome. Simply designing a semiconductor structure with a high quantum yield was not a solution because the polarization tended to vary inversely with the maximum yield.

Gradient doping

As early as 1992, a group from KEK, Nagoya University and the NEC company designed a GaAs-AlGaAs superlattice with a thin, very-highly-doped surface layer and a lower density doping in the remaining active layer – a technique called gradient doping. The very high doping aids the recombination of the minority carriers trapped at the surface that increase the surface barrier in proportion to the arrival rate of photoexcited conduction band (CB) electrons. Because CB electrons depolarize as they diffuse to the surface of heavily doped materials, the highly doped layer must be very thin, typically no more than a few nanometres. When tested at Nagoya and SLAC, this cathode design yielded promising results in which a charge of 32 nC in a 2 ns bunch was extracted from a 14 mm diameter area, limited by the space charge limit of the 120 kV gun at SLAC.

In 1998 a group from KEK, Nagoya, NEC and Osaka University applied the gradient-doping technique to a strained InGaAs-AlGaAs superlattice structure. They retained 73% polarization while demonstrating the absence of the SCL in a string of four 12 ns microbunches, spaced 25 ns apart, up to the 20 nC space charge limit of the 70 kV gun. In a more recent experiment using a gradient-doped GaAs-GaAsP superlattice, they extracted 1 nC for each of a pair of 0.7 ns bunches separated by 2.8 ns without any sign of the SCL, before reaching the space charge limit of the 50 kV gun. The polarization and QE were 80% and 0.4%, respectively. Other groups, notably at Stanford University, St Petersburg Technical University and the Institute for Semiconductor Physics at Novosibirsk, have also made significant contributions to solving the SCL problem.

A group at SLAC has recently applied the gradient-doping technique to a single strained-layer GaAs-GaAsP structure with results that substantially exceed current collider requirements. These results both complement and extend the 1998 Japanese results. The highly doped surface layer was estimated to be 10 nm thick. To compensate for an increase in the band gap that resulted from the increased dopant concentration, 5% phosphorus (P) was added to the active layer and the percentage of P in the base layer was increased to maintain the desired degree of lattice strain at the interface. Adding P in the active layer shifts the bandgap by about 50 meV towards the blue, reaching 1.55 eV (800 nm). In combination with the reduction of the surface barrier, this ensured a high QE of about 0.3% at the polarization peak. This is similar to the QE of the standard SLC strained GaAs-GaAsP cathodes.
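
(For reference, the photon energy and wavelength quoted here are related by λ[nm] ≈ 1240/E[eV], so a band gap of 1.55 eV corresponds to light of about 800 nm.)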

Two laser systems were used to determine the peak charge. A flashlamp-pumped Ti:sapphire (flash-Ti) system provided flat pulses up to several hundred nanoseconds long with a maximum energy of about 2 mJ/ns. In addition, up to 20 mJ in a 4 ns pulse was available from a Q-switched, cavity-dumped, YAG-pumped Ti:sapphire (YAG-Ti) laser. With the flash-Ti alone, the charge increased linearly with laser energy up to the maximum available laser energy. Because of the finite relaxation time of the SCL, a flat pulse is a more stringent test of the SCL than if it contained a microstructure. The peak charge per unit time (see graph) is only slightly lower than the NLC requirement for each microbunch when assuming a 0.5 ns full bunch-width. By extending the laser pulse to 370 ns, a charge of 1280 nC was extracted, far exceeding the NLC macropulse requirement.

To determine if the peak charge required for a microbunch would be charge-limited, the YAG-Ti laser pulse was superimposed on the flash-Ti pulse. The resulting charge increment was consistent with the charge obtained using the YAG-Ti alone. The charge increment was independent of the relative temporal positions of the two laser pulses, indicating that the massive total charge of an NLC, JLC or CLIC macropulse will not inhibit the peak charge required for each microbunch. The maximum charge produced by the YAG-Ti alone was 37 nC, which is more than 15 times the NLC requirement for a single microbunch.

To increase the charge density the laser spot on the cathode was reduced to 14 mm, below which the bunch is space-charge-limited for the maximum laser energy. Again, the charge increased linearly with the laser energy. The linearity remained when the quantum yield was allowed to decrease although, of course, the maximum charge also decreased. Thus it is clear that if sufficient laser energy is available, the linearity of the charge increase will be maintained for total charge and peak charge per unit time when using the new SLAC cathode design and will exceed NLC, JLC and CLIC requirements.

The new SLAC cathode was used in the polarized source for a recent high-energy physics experiment requiring 80 nC at the source in a 300 ns pulse. The improved charge performance provided the headroom necessary for temporal shaping of the laser pulse to allow adequate compensation for energy beam loading effects in the 50 GeV linac. The polarization measured at 50 GeV confirmed the greater than 80% polarization measured in the source development laboratory at 120 keV.

The international effort to improve polarized photocathodes will continue. For instance, tests for the surface charge limit at the very high current densities required by low-emittance guns have yet to be performed. On a broader front, the superlattice structure – in part because of the large number of parameters that the designer can vary – appears to be the best candidate for achieving a significantly higher polarization while maintaining a QE above 0.1%.

MiniBooNE detector is complete

The MiniBooNE experiment at the US Fermilab achieved two major milestones recently, as the tank was filled with the last drops of mineral oil and the first trickle of beam was delivered to the temporary absorber. The MiniBooNE experiment is designed to be a definitive investigation of the Los Alamos LSND experiment’s evidence for anti-muon-neutrino to anti-electron-neutrino oscillations, which is the first accelerator-based evidence for oscillations. The detector consists of a 12 m diameter spherical tank covered on the inside by 1280 phototubes (each 20 cm in diameter) in the detector region and by 240 phototubes in the veto region. The tank is filled with 800 tonnes of pure mineral oil, giving a fiducial volume mass of 440 tonnes. The detector is located 500 m downstream of a new neutrino source that is fed by Fermilab’s 8 GeV proton Booster. A 50 m decay pipe following the beryllium target and magnetic focusing horn allows secondary pions to decay into muon-neutrinos with an average energy of about 1 GeV. By switching the horn polarity, a predominately anti-muon-neutrino beam can be produced. An intermediate absorber can be moved into and out of the beam at a distance of 25 m, allowing a systematic check of the neutrino backgrounds.

The MiniBooNE detector oil fill finished on 3 May, and the detector is now complete and taking data with cosmic rays and laser calibration flasks. The beamline commissioning is under way, with the first beam delivered to the temporary absorber; data-taking with neutrinos will begin this summer after the magnetic focusing horn is installed and the neutrino beamline is completed. With 5 × 10²⁰ protons on target (about a year at design intensity), MiniBooNE will be able to cover the entire LSND allowed region with high sensitivity (>5 sigma). If the LSND oscillation signal is verified, then a second detector (BooNE) will be proposed to be built at the appropriate distance from the neutrino source, which will allow a precision measurement of the oscillation parameters and a search for CP and CPT violation.

Daresbury’s proposed 4G light source moves forward

Following extensive peer review, the fourth-generation light source (4GLS) proposed for the Daresbury laboratory, UK, has been given the green light to proceed to the next stage of planning. In a further development, the US Jefferson Laboratory (JLab) has shown its support for the project by making available key equipment and technical advice on basic concepts and techniques pioneered at JLab.

The 4GLS team will now prepare a detailed business case and undertake initial design and feasibility studies aimed at full exploitation of the energy recovery linac and free-electron laser (FEL) technical capabilities central to the 4GLS project.

These are both techniques in which JLab has considerable expertise, making an international collaboration a logical step forward. A wiggler magnet array from JLab will form the central part of a test facility to be set up at Daresbury during the initial phase of the project; this will be the first FEL facility at a UK national laboratory. A formal agreement between the two laboratories lays the ground for future scientific and technical exchanges.
