New double-ring design for Chinese machine will be more competitive

A second ring is now being planned for the Beijing Electron-Positron Collider (BEPC) at the Institute of High Energy Physics (IHEP). Precision measurements, the successful completion of a run that collected 50 million J/psi particles at BEPC, and the planned upgrade to BEPC II have attracted a lot of attention.

The earlier upgrade design was based on multiple bunches and bunch trains in a “pretzel” orbit at the existing BEPC storage ring, which would have increased the luminosity ten-fold. (The existing BEPC contains adjacent counter-rotating electron and positron beams in a single ring, which is the standard approach.)

In parallel, an upgrade of the Beijing Spectrometer, BES III, was designed to handle high event rates and to reduce systematic errors. The Chinese funding agency approved these upgrades.

Many recent physics results have underlined the importance of high-precision measurements. In particular, precision measurements in the tau-charm region have a unique advantage for many interesting physics studies, such as searches for glueballs (particles without quarks) and quark-gluon hybrids, light hadron spectroscopy, the J/psi family, and excited baryons.

Special workshops were held recently at SLAC and Cornell in the US to discuss this physics, and there are proposals to build a new machine at SLAC (PEP-N) and to lower the beam energy of Cornell’s CESR ring to run in this energy region. The interest of a number of other laboratories in this physics underlines its importance.

To extend its physics potential and to be more competitive, IHEP recently modified the BEPC II design to a double ring. This significantly improves the expected performance, with a calculated luminosity of 10³³ cm⁻² s⁻¹ at a beam energy of 1.55 GeV.

An international review of the feasibility of the new design was held on 2-6 April in Beijing. It was conducted in two partially overlapping segments, the first dealing with the accelerator and collider programme and the second with the detector for the upgraded facility. Two separate reports were prepared under the chairmanship of Alex Chao of SLAC and Michel Davier of Orsay. Former SLAC director W K H Panofsky summarized: “After much excellent work leading to the feasibility study report, there is no basic reason why a luminosity greater than 3 × 10³² cm⁻² s⁻¹ or even 10³³ could not be reached. The review committee recommended strongly the double-ring option.”

The double-ring design requires a new storage ring of slightly smaller radius to be built adjacent to the existing storage ring. The two halves of the new ring and of the old ring will be linked at two interaction points to form two identical rings, each made up of one half of the old ring and one half of the new. Each ring can be filled with up to 93 bunches, with a maximum beam current of 1.1 A at a beam energy of 1.55 GeV.

The beams will collide at the south interaction point with a horizontal crossing angle of 11 mrad. To reduce the beam size at the collision point, superconducting micro-beta quadrupoles will be installed near the interaction region. Superconducting cavities with a 499.8 MHz radiofrequency system will reduce the bunch length and also provide much higher power. Low-impedance vacuum chambers will be used. The upgrade of the linac injector allows full-energy injection up to 1.89 GeV and a positron injection rate of 50 mA/min. The instrumentation and control system will also be upgraded. The calculated luminosity at a beam energy of 1.55 GeV is 10³³ cm⁻² s⁻¹, an improvement of two orders of magnitude.
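
A luminosity figure translates directly into an event rate through R = Lσ, the product of luminosity and cross-section. The following minimal Python sketch illustrates the scale; the effective J/psi cross-section used is an assumed, illustrative number, not one quoted in this article:

    # Event rate from luminosity: R = L * sigma.
    # The cross-section is an assumed, illustrative value only.
    L_lumi = 1e33                  # BEPC II design luminosity, cm^-2 s^-1
    sigma_nb = 3000.0              # assumed effective J/psi cross-section, nb
    sigma_cm2 = sigma_nb * 1e-33   # 1 nb = 1e-33 cm^2
    print(f"{L_lumi * sigma_cm2:.0f} events per second")   # ~3000 at these assumptions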

To retain dedicated synchrotron radiation running in the existing outer ring, a bridge will connect the two half outer rings at the north interaction point. At the collision point of the south interaction region, special dipole coils will be installed in the superconducting quadrupoles to keep the beam in the outer ring during dedicated synchrotron radiation running. The beam current during dedicated synchrotron radiation running could be higher than 150 mA at 2.8 GeV.

Compared with the pretzel design using a single ring, the double-ring design is much more competitive in performance, has fewer technical risks, and costs only about 50% more.

The proposed design of the upgraded Beijing Spectrometer (BES III) has also been improved significantly to match the high performance of the double-ring design. Starting from the interaction point, the BES III detector consists of a scintillating fibre detector; a main drift chamber; time-of-flight counters; a barrel electromagnetic calorimeter; an end-cap electromagnetic calorimeter; a superconducting magnet; and a muon detector. The superconducting magnet, providing a field of 1.2 T, has a length of 3.2 m, an inner radius of 1.05 m and an outer radius of 1.45 m.

The scintillating fibre detector provides trigger signals and reduces cosmic-ray background. It consists of two superlayers, with two layers of scintillating fibres in each, that are read out at both ends by avalanche photodiodes via clear fibres. The position resolution per superlayer is expected to be 80 µm radially and 1 mm along the beam axis.

The main drift chamber (length 1906 mm, inner radius 70 mm, outer radius 660 mm) consists of 36 layers with small cells, aluminium field wires and a helium-based gas to reduce multiple scattering. The single-wire resolution is expected to be better than 130 µm. The solid-angle coverage for tracks going through all layers is 93%. To provide space for the superconducting micro-beta quadrupoles and to reduce the background, the end plates of the inner part will be step-shaped. The expected energy-loss resolution is about 7%.

The time-of-flight counters consist of two layers of plastic scintillator with 72 pieces in azimuth per layer. The time resolution should be better than 65 ps. This will provide good kaon/pion differentiation up to 1.1 GeV. The barrel electromagnetic calorimeter is made of crystals with an energy resolution better than 2.5% at 1 GeV. The endcaps of the electromagnetic calorimeter are made of lead-scintillating fibres with an energy resolution of about 6% at 1 GeV.
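
The quoted kaon/pion separation follows from the flight-time difference between the two species at equal momentum. A rough Python check of the numbers; the 1 m flight path is an assumption for illustration (the article does not quote one):

    # Time-of-flight difference between a kaon and a pion of equal momentum.
    # The flight path length is an assumed, illustrative value.
    import math

    def tof_ns(p, m, path):
        """Flight time in ns for momentum p (GeV/c), mass m (GeV/c^2), path (m)."""
        beta = p / math.sqrt(p * p + m * m)
        return path / (beta * 0.299792458)      # c = 0.299792458 m/ns

    p, path = 1.1, 1.0
    dt_ps = (tof_ns(p, 0.4937, path) - tof_ns(p, 0.1396, path)) * 1000
    print(f"K-pi difference at {p} GeV/c: {dt_ps:.0f} ps")   # ~290 ps, over 4 sigma at 65 ps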

An international workshop on the BES III detector on 13-15 October at IHEP, Beijing, will discuss the design and possible collaboration. New design ideas are welcome, as are international participation and contributions to the new detector.

The feasibility study report on BEPC II has been submitted to the Chinese funding agency. R&D work on key technologies is in progress. The design report should be finished by next spring, and construction should begin soon afterwards. BEPC will continue to run until spring 2005, after which there will be a long shutdown (about 9-10 months) for installation. The tuning of the new machine should start by spring 2006, and physics running should begin by the end of 2006.

RHIC collider running at full collision energy

The 4 km circumference Relativistic Heavy Ion Collider (RHIC) at the Brookhaven Laboratory is now running at its full design nucleon collision energy of 200 GeV and physics expectations are high.

The machine began running for physics last year at a modest energy but soon ramped up to a nucleon collision energy of 130 GeV.

In addition to running with heavy ions at full energy, the latest RHIC run will explore collisions of spin-oriented (polarized) protons.

Earlier this year the first physics results from the RHIC 2000 run were announced. These results showed that nucleus-nucleus collisions have attained a region where the distributions of the numbers of produced particles display new behaviour.

The objective is to produce a “Little Bang”, recreating the quark-gluon plasma – the primeval soup that existed from about 1 µs after the Big Bang of creation, and then survived for about a ten-thousandth of a second until quarks had a chance to cool down sufficiently to group together and form the nucleons that we now know.

The new RHIC run will also push for high luminosity, which is a measure of the nuclear collision rate.

CERN sells its internal transaction management software to UK firm

CERN has sold its Internal Transaction Management system to UK internal transaction management concern Transacsys for 1 million Swiss francs (EURO 660,000). The system, which has been cited by software giant Oracle as the blueprint for building large-scale e-business systems, is being launched commercially. Transacsys is co-operating with Oracle in the marketing of the software.

Internal transactions are the actions that people take and the processes that they use in the course of their job. Internal transactions need managing because organizations need to know and to control how people commit and expend corporate resources.

The CERN software on the one hand empowers individuals to transact and on the other hand controls such transactions in accordance with corporate rules. It has been designed to be totally flexible so that users themselves can create new processes, and implement and change them at will, with no programming required.

CERN, with an annual budget of more than EURO 600 million and more than 6000 regular users working in 500 institutes in 50 different countries, can support the software internally using just two people.

Permissioning is the name that Transacsys has given to this enterprise-wide process, which enables people to have speedy authorization to execute tasks and organizations to control these processes without the need for extensive administrative resources.

In 1990 CERN developed the World Wide Web to help to empower its user community of more than 6000 physicists around the world to share information across remote locations. Soon after, when an advanced informatics support project was launched, CERN began to develop what became the Permissioning system. In this project the system, known at CERN as Electronic Document Handling (EDH), was to provide an electronic replacement for a rickety collection of paper administration forms that had accumulated over the years. Functionality has been progressively extended over more than eight years of constant development.

Transacsys and CERN have formed a long-term joint steering group to co-operate on further development of the system. CERN will, of course, continue to use the system and it will be free for use by other particle physics laboratories associated with CERN.

Take a deep breath of nuclear spin

Lungs are full of gas, which is normally invisible, making lung ventilation examinations difficult. While X-ray and other techniques can reveal anatomical anomalies, it is difficult to follow directly the actual functioning of the lung.

Some 30 years ago, in the so-called “golden age” of optical pumping, physicists at Mainz University started to develop techniques for polarizing nuclear spins for nuclear physics studies and experiments at CERN’s Isotope Separator Online (ISOLDE). These experiments revealed interesting insights into the behaviour of exotic isotopes.

For helium-3, optical pumping showed its potential for magnetic resonance tomography once powerful lasers in the near-infrared region became available to produce large quantities of high-grade, spin-polarized gas.

Inhaled helium-3 can be visualized on a magnetic resonance tomogram that gives unprecedentedly detailed images of a breathing lung.

The team that developed the technique has been awarded several prestigious prizes, including the Körber prize for European science and a nomination for the German president’s “Future” prize.

Helium-3: a fascinating tool

Although the helium-3 isotope is extremely rare in nature, it has become more widely available via the beta-decay of synthetic tritium. Since the early 1970s, helium-3 in its superfluid state has become a fascinating laboratory tool for the study of quantum mechanical phenomena that turned out to be an ideal testing ground for fundamental concepts of modern theoretical physics.

In nuclear physics, experiments involving the neutron’s spin are hampered by the fact that a target of free, polarized neutrons is not available.

However, helium-3 is a good approximation for a target of polarized neutrons, because its nuclear spin of 1/2 is due to its unpaired neutron (the two protons have opposing spins).

To force helium-3 nuclear spins to align in one direction, the gas is exposed to a resonant, circularly polarized laser beam directed along the axis of an external magnetic field. By means of the absorption and spontaneous emission of fluorescent light, the spin of the light quanta can be transferred to the atomic electrons that in turn transmit their spin direction to the nucleus via magnetic coupling. Repeated absorption and emission accumulate the nuclei in a specific spin state. The Mainz laser pumping techniques were developed in collaboration with the specialist team of Michele Leduc at the Ecole Normale Superieure in Paris.
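
To get a feel for the dynamics, the build-up of nuclear polarization under continuous pumping can be caricatured by a simple rate model, dP/dt = (P_eq − P)/τ_pump − P/T1, balancing the pumping rate against relaxation. All constants in this Python sketch are assumed toy values, not figures from the article:

    # Toy rate model for polarization build-up under optical pumping:
    # dP/dt = (P_eq - P)/tau_pump - P/T1. All constants are assumed values.
    P_eq, tau_pump, T1 = 0.8, 50.0, 3600.0   # pumping-limit polarization; times in s
    P, dt = 0.0, 1.0                         # start unpolarized; 1 s time step
    for _ in range(600):                     # ten minutes of pumping
        P += dt * ((P_eq - P) / tau_pump - P / T1)
    print(f"polarization after 10 min: {P:.2f}")   # approaches ~0.79 here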

Scattering a polarized, high-energy electron beam off the neutron in polarized helium-3 allows the contribution due to the electromagnetic interaction of the probe with the internal charge distribution of the neutron – the effect of interest – to be separated off and even enhanced. The first precise values can now provide a test for different theoretical approaches to this fundamental property.

Lung tomography

In 1994 Will Happer’s group at Princeton, in collaboration with magnetic resonance imaging specialists from Duke University, Durham, North Carolina, demonstrated in a seminal paper how highly polarized xenon gas could be used to examine the lungs of a guinea pig.

On learning of this, Mainz physicists W Heil and E W Otten, together with their colleague M Thelen from the department of radiology, saw the potential usefulness of their well established helium-3 physics-laboratory techniques for human lung imaging, and they soon carried out very satisfactory trials, beginning in 1995.

Optical pumping of metastable helium-3 atoms, the method they use, can supply relatively large amounts of gas with relatively high polarization – up to 50%. The resulting magnetic signals are a thousand times as large as those normally encountered in magnetic resonance imaging. Under these conditions, lung imaging becomes straightforward. Patients simply inhale a whiff of gas and the whole procedure is carried out at room temperature.

One obstacle, however, was the storage and transport of the carefully prepared polarized gas, which has to be taken from the laboratory to the clinics. Collisions with the walls of a normal container would quickly destroy the spin orientation. This is overcome by storing polarized gas in glass vessels, the inner surfaces of which are coated with a few monolayers of caesium. In this way, the polarized gas can be stored at pressures of up to 10 bar and kept ready for use for more than 100 h.

Rather than just giving a single lung image while the patients hold their breath, helium-3 imaging can provide ultrafast sequences with a time resolution of less than a tenth of a second – a “movie” of lung ventilation during the breathing cycle.

There is another advantage: helium-3 in contact with paramagnetic oxygen soon loses its polarization. The rate of depolarization is related to the oxygen partial pressure in regions of interest of the lung and, moreover, enables the oxygen uptake in the blood to be measured. For the first time, normal lung functioning can be quantified, and disorders in respiratory distribution can be recognized before any signs or symptoms have become manifest.
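
As a sketch of the principle: if the depolarization rate is taken to be proportional to the oxygen partial pressure, 1/T1 = κ·pO2, then two signal amplitudes measured a known time apart give pO2 directly. The proportionality constant in this Python fragment is an assumed, illustrative number, not a value from this article:

    # Estimate oxygen partial pressure from helium-3 depolarization,
    # assuming 1/T1 = kappa * pO2 with an illustrative kappa.
    import math

    kappa = 0.40                      # assumed relaxation constant, 1/(s*bar)

    def p_o2(s0, s1, dt):
        """pO2 in bar from signal amplitudes s0, s1 measured dt seconds apart."""
        return math.log(s0 / s1) / (dt * kappa)

    print(f"pO2 = {p_o2(1.00, 0.95, 1.0):.2f} bar")   # ~0.13 bar for these inputs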

The technique is still undergoing trials at Mainz University Hospital and selected European clinics, but helium-3 tomography appears to have a bright future for visualizing and assessing pulmonary ventilation. Only a few accessory tools are needed to perform helium-3 imaging with standard magnetic resonance imaging equipment, so the technique could become widely available within a relatively short time.

Close encounters with clusters of computers

Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of off-the-shelf commodity computers to address problems that were previously the domain of tightly coupled multiprocessor computers. Near-term projects within high-energy physics and other computing communities will deploy clusters of some thousands of processors serving hundreds or even thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current, successful production facilities.

A Large-Scale Cluster Computing Workshop held at Fermilab earlier this year examined these issues. The goals of the workshop were:

  • to determine what tools exist that can scale up to the cluster sizes foreseen for the next generation of high energy physics experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed;
  • to compare and record experiences gained with such tools;
  • to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HEP;
  • to identify and connect groups with similar interest within HEP and the larger clustering community.

Thousands of nodes

Computing experts with responsibility for and/or experience of such large clusters were invited. The clusters of interest were those equipping centres of the size of Tier 0 (thousands of nodes) for CERN’s LHC project, or Tier 1 (at least 200-1000 nodes), as described by MONARC (Models of Networked Analysis at Regional Centres for LHC Experiments). The attendees came not only from various particle physics sites worldwide but also from other branches of science, including biomedicine and various Grid computing projects, as well as from industry.

The attendees shared freely their experiences and ideas, and proceedings are currently being edited from material collected by the convenors and offered by attendees. In addition the convenors, again with the help of material offered by the attendees, are in the process of producing a guide to building and operating a large cluster. This is intended to describe all phases in the life of a cluster and the tools used or planned to be used. This guide should then be publicized (made available on the Web and presented at appropriate meetings and conferences) and regularly kept up to date as more experience is gained. It is planned to hold a similar workshop in 18-24 months to update the guide. All of the workshop material is available via http://conferences.fnal.gov/lccws.

The meeting began with an overview of the challenge facing high-energy physics. Matthias Kasemann, head of Fermilab’s Computing Division, described the laboratory’s current and near-term scientific programme, including participation in CERN’s future LHC programme, notably in the CMS experiment. He described Fermilab’s current and future computing needs for its Tevatron collider Run II experiments, pointing out where clusters, or computing “farms” as they are sometimes known, are used already. He noted that the overwhelming importance of data in current and future generations of high-energy physics experiments had prompted the interest in Data Grids. He posed some questions for the workshop to consider:

  • Should or could a cluster emulate a mainframe?
  • How much could particle physics computer models be adjusted to make most efficient use of clusters?
  • Where do clusters not make sense?
  • What is the real total cost of ownership of clusters?
  • Could we harness the unused power of desktops?
  • How can we use clusters for high I/O applications?
  • How can we design clusters for high availability?

LHC computing needs

Wolfgang von Rueden, head of the Physics Data Processing group in CERN’s Information Technology Division, presented the LHC computing needs. He described CERN’s role in the project, displayed the relative event sizes and data rates expected from Fermilab Run II and from LHC experiments, and presented a table of their main characteristics, pointing out in particular the huge increases in data expected and consequently the huge increase in computing power that must be installed and operated.

The other problem posed by modern experiments is their geographical spread, with collaborators throughout the world requiring access to data and computer power. Von Rueden noted that typical particle physics computing is more appropriately characterized as high throughput computing as opposed to high performance computing.

The need to exploit national resources and to reduce the dependence on links to CERN has produced the MONARC multilayered model. This is based on a large central site that collects and stores the raw data (Tier 0, at CERN), followed by successive tiers – for example, national computing centres (Tier 1), such as Fermilab for the US part of the CMS experiment at the LHC and Brookhaven for the US part of the ATLAS experiment – down to individual users’ desks (Tier 4). Each tier holds data extracts and/or data copies, and each performs different stages of the physics analysis.

Von Rueden showed where Grid Computing will be applied. He ended by expressing the hope that the workshop could provide answers to a number of topical problem questions, such as cluster scaling and making efficient use of resources, and some good ideas to make progress in the domain of the management of large clusters.

The remainder of the meeting was given over to some formal presentations of clustering as seen by some large sites (CERN, Fermilab and SLAC) and also from small sites without on-site accelerators of their own (NIKHEF in Amsterdam and CCIN2P3 in Lyon). However, the largest part of the workshop was a series of interactive panel sessions, each seeded with questions and topics to discuss, and each introduced by a few short talks. Full details of these and most of the overheads presented during the workshop can be seen on the workshop Web site.

Many tools were highlighted: some commercial, some developed locally and some adopted from the open source community. In choosing whether to use commercial tools or develop one’s own, it should be noted that so-called “enterprise packages” are typically priced for commercial sites where downtime is expensive and has quantifiable cost. They usually have considerable initial installation and integration costs. However, one must not forget the often high ongoing costs for home-built tools as well as vulnerability to personnel loss/reallocation.

Discussing the G word

There were discussions on how various institutes and groups performed monitoring, resource allocation, system upgrades, problem debugging and all of the other tasks associated with running clusters. Some highlighted lessons learned and how to improve a given procedure next time. According to Chuck Boeheim of SLAC, “A cluster is a very good error amplifier.”

Different sites described their methods for installing, operating and administering their clusters. The G word (for Grid) cropped up often, but everyone agreed that it was not a magic word and that it would need lots of work to implement something of general use. One of the panels described the three Grid projects of most relevance to high-energy physics, namely the European DataGrid project and two US projects – PPDG (Particle Physics Data Grid) and GriPhyN (Grid Physics Network).

A number of sites described how they access data. Within an individual experiment, a number of collaborations have worldwide “pseudo-grids” operational today. In this context, Kors Bos of NIKHEF, Amsterdam, referred to the existing SAM database for the D0 experiment at Fermilab as an “early-generation Grid”. These already point toward issues of reliability, allocation, scalability and optimization for the more general Grid.

Delegates agreed that the meeting had been useful and that it should be repeated in approximately 18 months. No summary was made of the Large-Scale Cluster Computing Workshop, the primary goal being to share experiences, but returning to the questions posed at the start by Matthias Kasemann, it is clear that clusters have replaced mainframes in virtually all of the high-energy physics world, but that administering them is far from simple and poses increasing problems as cluster sizes grow. In-house support costs must be balanced against bought-in solutions, not only for hardware and software but also for operations and management. Finally, delegates attending the workshop agreed that there are several solutions for, and a number of practical examples of, the use of desktop machines to increase the overall computing power available.

The Grid: crossing borders and boundaries

The World Wide Web was invented at CERN to exchange information among particle physicists, but particle physics experiments now generate more data than the Web can handle. So physicists often put data on tapes and ship the tapes from one place to another – an anachronism in the Internet era. However, that is changing, and the US Department of Energy’s new Scientific discovery through advanced computing program (SciDAC) will accelerate the change.

Fermilab is receiving additional funds through SciDAC, some of which will be channelled into Fermilab contributions to the Compact Muon Solenoid (CMS) detector being built for CERN. A major element in this is the formulation of a distributed computing system for widespread access to data when CERN’s Large Hadron Collider (LHC) begins operation in 2006. Fermilab’s D0 experiment has established its own computing grid, called SAM, which is used to offer access to experiment collaborators at six sites in Europe.

With SciDAC support, the nine-institution Particle Physics DataGrid collaboration (Fermilab, SLAC, Lawrence Berkeley, Argonne, Brookhaven, Jefferson, CalTech, Wisconsin and UC San Diego) will develop the distributed computing concept for particle physics experiments at the major US high-energy physics research facilities. Both D0 and US participation in the CMS experiment for the LHC are member experiments. The goal is to offer access to the worldwide research community, developing “middleware” to make maximum use of the bandwidths available on the network.

The DataGrid collaboration will serve high-energy physics experiments with large-scale computing needs, such as D0 at Fermilab, BaBar at SLAC and the CMS experiment, now under construction to operate at CERN, by making the experiments’ data available to scientists at widespread locations.

Astronomers benefit from particle physics detectors

In astronomy there are basically four types of observation that can be made using electromagnetic radiation (photons): measuring the photon direction (imaging), measuring their energy and frequency (spectroscopy), measuring their polarization (polarimetry) and counting the numbers of photons (photometry). These techniques provide complementary information and so are vital for exploring different wavelengths of radiation.

So far, astronomers have not been able to detect efficiently the polarization of photons at X-ray wavelengths, but this should change with a new polarimeter for space-borne observations that has been developed at CNR in Rome and INFN in Pisa by two teams led by Enrico Costa and Ronaldo Bellazzini respectively (Cash 2001).

X-ray astronomy has revealed some of the most violent and compact spots in the universe, such as the surfaces of pulsars, close orbits around giant black holes and the blast waves of supernova explosions. The current flagship of X-ray astronomy is NASA’s Chandra observatory (see “http://chandra.harvard.edu”).

By making efficient use of the few photons emitted by discs around black holes and other objects, X-ray astronomers have successfully applied photometry, imaging and spectroscopy to these hot, energetic and often variable sources. Polarimetry has been largely ignored at X-ray wavelengths because of the inefficiency of existing instruments. Yet such a technique could provide a direct picture of the state of matter in extreme magnetic and gravitational fields, and it has the potential to resolve the internal structures of compact sources that would otherwise remain inaccessible. The new X-ray polarimeter, which was developed by the teams in Rome and Pisa, promises to revolutionize space-based observations.

Classic measurements

The first and only generally accepted measurement of polarized X-rays from an astronomical object dates from more than a quarter of a century ago, when a Bragg crystal polarimeter in orbit around the Earth was used to observe the Crab nebula (Weisskopf et al. 1976). This is a remnant of a supernova and is unusual in having a bright pulsar – a spinning neutron star – at its core. Rotating at about 30 times per second, the star’s intense magnetic field leaves an indelible imprint on high-energy particles and X-rays.

High-energy electrons that are forced to follow a curved path by a magnetic field emit synchrotron radiation as they change direction. The vibrating electric and magnetic fields of the X-rays are characterized by a polarization angle, which describes the extent to which the fields in the individual photons line up. Synchrotron emission is the source of the polarization detected in the Crab nebula. Being able to measure the polarization as a function of position across the Crab nebula would reveal a much better picture of the geometry of the magnetic field in the nebula and its central pulsar.

There has been no unambiguous detection of X-ray polarization from a celestial source since the Crab discovery – all other sources are too faint and/or not sufficiently polarized. To capture the polarization of these faint sources requires a device capable of measuring the polarization angle of every photon collected by an X-ray telescope.

Italian device

The new instrument developed by the Italian teams functions mainly as a photon-counting detector, its overall architecture being similar to that of a Geiger counter or the proportional counters used in particle physics. The main difference is that traditional position-sensitive X-ray gas detectors typically see only the centroid of the charge cloud produced by the photoelectron, the extent of which sets the ultimate limit of the spatial resolution – a sort of noise to be kept as small as possible. The new concept reverses this approach, trying to resolve the track in order to measure the interaction point and the initial direction of the photoelectron.

The direction of emission of the photoelectron is a very sensitive indicator of the polarization of the parent photon. The motion of the electron is driven by the direction of the electromagnetic field in the original photon, thereby recording the linear polarization of the X-ray. The polarimeter then has to measure not only the presence of the electron, but also the microscopic path that it has taken.

For the first time, photoelectrons of only a few kilo-electronvolts (2-10 keV) are reconstructed not as an indistinct blob of charge but as real tracks. From a detailed study of these tracks it is possible to determine with high efficiency the direction of emission of the photoelectron, which retains the “memory” of the polarization of the incident photon.

When enough events have been detected, the polarization of the emitting source will have been measured. This device makes maximal use of the available information and, when placed at the focus of a large X-ray telescope in orbit, will be able to detect as little as 1% polarization in sources a thousandth of the intensity of the Crab nebula. This will open a new window on the geometry of X-ray-emitting sources.
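
In practice, the polarization is extracted from the azimuthal distribution of the reconstructed photoelectron directions, which is modulated as 1 + a·cos 2(φ − φ0), the amplitude a being proportional to the source polarization. The Python sketch below runs this analysis on toy data; the detector modulation factor and the source parameters are assumptions chosen for illustration:

    # Recover polarization from the azimuthal photoelectron distribution,
    # N(phi) ~ 1 + mu*P*cos(2*(phi - phi0)). All parameters are toy values.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, P_true, phi0 = 0.4, 0.5, 0.3   # assumed modulation factor, polarization, angle

    # Accept-reject sampling of photoelectron angles from the modulated density.
    phi = rng.uniform(0, np.pi, 200_000)
    keep = rng.uniform(0, 1 + mu, phi.size) < 1 + mu * P_true * np.cos(2 * (phi - phi0))
    phi = phi[keep]

    # Fourier estimators of the modulation amplitude and phase.
    c, s = 2 * np.cos(2 * phi).mean(), 2 * np.sin(2 * phi).mean()
    print(f"P = {np.hypot(c, s) / mu:.2f}, phi0 = {0.5 * np.arctan2(s, c):.2f} rad")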

The instrument used to measure the photoelectron distribution uses micropattern electrode structures of the type developed for particle physics studies. Specifically, the techniques used are the gas electron multiplier (GEM) introduced by Fabio Sauli at CERN in 1996, in which micropores in a thin foil provide a strong field for electron amplification, and a pixel read-out structure using advanced multilayer PCB technology as a collecting anode. The new instrument, being truly two-dimensional (pixel), reveals any polarization direction without having to rotate the detector. The thin GEM foils and the thick film pixel boards used are manufactured at CERN.

Polarizing mechanisms

Synchrotron radiation is not the only polarizing phenomenon in space; equally important is X-ray scattering. When an X-ray bounces off an electron, as well as losing (or gaining) energy and thereby changing its wavelength, the emerging radiation is also linearly polarized, with its electric field perpendicular to the plane that contains both the incident and the scattered photons. The degree of polarization depends on the angle through which the X-ray is scattered, reaching 100% for a deflection of 90°.
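
In the low-energy (Thomson) limit this angular dependence takes a simple closed form: for initially unpolarized radiation scattered through an angle θ, the degree of linear polarization is P(θ) = (1 − cos²θ)/(1 + cos²θ), which vanishes in the forward direction and, as stated, reaches 100% at θ = 90°.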

This effect can be used to determine the basic geometry of a number of compact X-ray sources.

For a start, the X-ray-emitting regions of active galactic nuclei and quasars demand a closer look. The hot gas swirling into the giant black hole that powers these sources emits copious X-rays through both thermal and non-thermal processes. Narrow jets of particles travelling very close to the speed of light can form by means that are only partially understood. The X-ray spectrum that emerges is complex, featuring emission from several parts of the source (Mushotzky, Done & Pounds 1993). Radiation passing close to the black hole will have its path bent and its polarization direction twisted. Furthermore, radiation from one part of the source can scatter off another, creating more features in the spectrum.

The measurement of polarization as a function of the energy of the X-rays could reveal the history of the scattered radiation and provide a unique test of the physics of these fascinating objects. Significant improvements in observing power usually lead to important new discoveries.

These new precision polarimeters could detect polarization in unexpected places – maybe from the surfaces of thermal neutron stars, or even from interstellar shocks that arise when high-speed plasma collides with quieter regions of cooler gas.

Gamma-ray telescopes take shape

Now looming on the Namibian skyline is the High Energy Stereoscopic System (HESS) – a next-generation system of imaging atmospheric Cherenkov telescopes that is aimed at the study of cosmic gamma rays in the energy range from about 100 GeV to several TeV, with the goal of identifying the sources of the cosmic rays in our galaxy in particular, and of studying non-thermal particle populations in the universe in general.

Cosmic rays played an important role in early particle physics, and they continue to provide the highest-energy particles available to physicists. Even after decades of cosmic-ray research, the sources and acceleration mechanisms of cosmic rays are still the subject of intense discussion. The dominant component of the cosmic radiation – charged atomic nuclei – cannot be used to pinpoint the accelerator sites directly because, except at ultrahigh energies, the nuclei are deflected by interstellar and intergalactic magnetic fields and their propagation resembles diffusion.

However, in almost all scenarios for cosmic-ray acceleration, reactions at or near the source result in the production of neutral secondary particles – gamma rays and neutrinos – which can be used to generate a genuine image of the sky at highest energies, and which can also be used to study the propagation of cosmic rays.

A brief history of gamma-ray astronomy

High-energy gamma-ray astronomy from space has a long history, from NASA’s SAS 2 satellite, launched in November 1972, through the European COS-B satellite, launched in August 1975, culminating in results such as the gamma-ray sky map (see above) produced by the EGRET instrument on NASA’s large Compton Gamma Ray Observatory.

This map illustrates the key features mentioned above – the bright gamma-ray continuum tracing the Milky Way results from cosmic rays interacting with interstellar gas. As the distribution of gamma rays follows the column density of gas closely, one concludes that cosmic rays pervade the Milky Way more or less uniformly.

Superimposed on the continuum are well over 200 point sources – indicative of cosmic particle accelerators. About a third of these sources can be identified with known galactic and extragalactic objects, such as pulsars or quasars; the nature of the remainder is open.

Owing to their small (less than a square metre) detection areas, combined with the steeply falling energy spectrum of gamma rays, satellite instruments are, however, limited in their energy range and cannot reach the TeV (1012 eV) or PeV (1015 eV) energy range relevant for the study of very-high-energy cosmic-ray sources. Only ground-based instruments can currently provide the required large detection areas.

Ground-based detectors

Many attempts to detect high-energy gamma rays with air shower arrays failed to provide convincing results. The breakthrough finally came with the imaging atmospheric Cherenkov telescopes, which were pioneered with the Whipple telescope in Arizona. These instruments see the air showers initiated by high-energy gamma rays using the Cherenkov radiation generated by the shower particles in the air. The intensity of the shower image provides a measure of the energy of the primary; the orientation of the image is used to determine the direction of the primary; and the shape of the image can be used to separate gamma-induced showers from those generated by nucleonic cosmic rays.
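
The shape analysis is traditionally based on the second moments of the recorded light distribution (the “Hillas parameters” of the image): the principal axes of the image ellipse give its length and width. A minimal Python sketch on invented toy pixel data:

    # Hillas-style image moments: the length and width of a Cherenkov image
    # are the principal second moments of the amplitude-weighted pixels.
    # Pixel data below are toy values for illustration.
    import numpy as np

    x = np.array([0.00, 0.10, 0.20, 0.30, 0.40])   # pixel x positions, degrees
    y = np.array([0.00, 0.02, 0.05, 0.04, 0.08])   # pixel y positions, degrees
    a = np.array([20.0, 55.0, 80.0, 60.0, 25.0])   # amplitudes, photoelectrons

    cov = np.cov(np.vstack([x, y]), aweights=a, ddof=0)   # weighted 2x2 covariance
    width, length = np.sqrt(np.linalg.eigvalsh(cov))      # eigenvalues, ascending order
    print(f"length = {length:.3f} deg, width = {width:.3f} deg")

Gamma-ray images tend to be narrow and regular, while nucleonic showers produce broader, patchier images, so cuts on the width and related moments separate the two populations.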

In 1989 the Whipple group succeeded in establishing the Crab nebula as a strong galactic source of TeV gamma rays, and then discovered two extragalactic sources – the active galactic nuclei Markarian 421 and 501. More recently, high-energy gamma-ray astronomy has attracted additional interest because it both permits novel investigations in observational cosmology and provides a means with which to search, via their annihilation radiation, for neutralino dark matter candidates accumulating in the centres of galaxies.

Imaging techniques

The imaging atmospheric Cherenkov technique has progressed significantly since the days of the first Whipple telescope.

Stereoscopic observation of air showers by multiple telescopes, as was implemented in the five-telescope HEGRA system (the High Energy Gamma Ray Astronomy telescope in La Palma built by a German-Spanish-Armenian collaboration) and the fine-grained imaging achieved by the CAT telescope camera (Cherenkov Array at Themis, in the French Pyrenees), provide improved shower reconstruction and sensitivity.

The next-generation HESS telescopes build on these developments. They are designed and constructed by an international collaboration of Armenian, Czech, English, French, German, Irish, Namibian and South African researchers. The telescope system combines the stereoscopic imaging of air showers pioneered by the HEGRA telescope system, the fast trigger electronics used in the CAT telescope, and monitoring techniques developed for the Durham and Potchefstroom telescopes to provide an overall ten-fold increase in sensitivity over current instruments.

In Namibia

The HESS telescopes are located in the Khomas Highland of Namibia, near the Gamsberg, which was once considered as a site for the ESO optical telescopes and is renowned among astronomers for its excellent observing conditions. The location near the tropic of Capricorn provides an optimal view of sources in the central part of our galaxy and of the galactic centre region, in addition to the many extragalactic sources that should be discovered.

The mild climate allows the telescopes to be operated without protective enclosures. Some 100 km from Windhoek, the capital of Namibia, the site is easy to reach, and its vast plain provides ample space for the future expansion of the system.

In its first stage, HESS will comprise four telescopes of the 12 m diameter class. These have a mirror area of 105 m² and a focal length of 15 m. The mirror is composed of 382 round mirror elements of 60 cm diameter. An alt-az (horizon-based co-ordinate system) mount allows objects to be tracked across the sky. Cherenkov shower images are viewed by a camera of 960 photomultiplier tubes, each subtending an angle of 0.16°.
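
As a consistency check on these numbers: a pixel subtending 0.16° at a focal length of 15 m covers about 15 m × 2.8 × 10⁻³ rad ≈ 42 mm at the focal plane, and 382 round elements of 60 cm diameter have a combined area of 382 × π × (0.3 m)² ≈ 108 m², close to the quoted mirror area of 105 m².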

The large field of view of the camera (5°) is optimal for the investigation of extended sources such as supernova remnants, and it allows extensive sky surveys. The photomultiplier signals are sampled at a rate of 1 GHz, and they are digitized and read out if a minimum number of photomultipliers (three to five) show coincident signals of several photoelectrons.

The effective detection area of the HESS instrument is determined by the diameter of the Cherenkov light pool, and it varies from 70 000 m² at 100 GeV up to almost 300 000 m² at TeV energies, compared with less than a square metre for satellite instruments. The directions of individual gamma rays can be reconstructed with 0.1° precision, and strong gamma-ray point sources are expected to be located with an error of a few arc-seconds.

Construction on the site began in August 2000 with the telescope foundations. The steel structures for the telescopes are built in Namibia. The mount of the first telescope was assembled in May, and work on the next telescope is in progress. In Europe, the cameras and the mirrors for the telescopes are being prepared for installation. The first telescope is expected to go into operation late in 2001, with the remaining three following by 2003.

Together with major new Cherenkov telescope projects under construction in the US (VERITAS), in Australia (CANGAROO) and on the Canary Island of La Palma (MAGIC), HESS will provide excellent sky coverage at TeV energies.

It is a fitting acronym – Viktor Hess received the 1936 Nobel Prize for his discovery of cosmic rays.

STACEE in New Mexico

A novel gamma-ray telescope has recently been commissioned and is observing powerful sources of GeV/TeV gamma rays, such as active galactic nuclei. The Solar Tower Atmospheric Cherenkov Effect Experiment (STACEE) uses an array of heliostats (solar mirrors) at the National Solar Thermal Test Facility of Sandia National Laboratories in Albuquerque, New Mexico.

The heliostats are used to collect Cherenkov light from the shower of secondary particles produced by a high-energy gamma ray of galactic or extragalactic origin as it enters the atmosphere. The Cherenkov light collected by the heliostats is concentrated onto an array of photomultiplier tubes. STACEE uses many experimental techniques first developed for subatomic physics experiments. Observations in the gamma-ray energy range from 50 to 250 GeV are important for understanding many high-energy astrophysical objects, especially pulsars, supernova remnants and gamma-ray bursts.

STACEE is designed to study astrophysical sources of gamma radiation in this energy range, which has not yet been explored by previous imaging Cherenkov telescopes on the ground or by previous satellite experiments in space.

The STACEE collaboration consists of 16 scientists from seven institutes in Canada and the US.

The experiment has been built in stages, with each stage using more heliostats and improved optics and electronics. In 1999, with half of the 64 heliostats and two cameras, STACEE observed the Crab nebula at an energy threshold unmatched by other sampling detectors.

This year, with two-thirds of the heliostats, STACEE pointed to two BL Lacertae blazars, Markarian 421 and 501. During the period January to May, enhanced X-ray and TeV gamma-ray emission from Markarian 421 was reported. A significant signal from Markarian 421 has been observed and analysis of the Markarian 501 data is ongoing. The low-energy threshold allows extension of the gamma-ray spectrum of Markarian 421, thereby placing additional constraints on emission models during outburst.

Japan’s KEKB offers unprecedented luminosity

The KEKB Japanese B-factory collider is delivering unprecedented luminosity (a measure of the machine’s electron-positron collision rate) to the international collaboration running the Belle experiment. Since the commissioning of the machine in November 1998, the KEK machine team has solved many difficulties and has recently made major progress – it has achieved the highest luminosity in collider history: 4.49 × 10³³ cm⁻² s⁻¹.

Integrated luminosities (a measure of the collision “dose” administered) are 232 pb⁻¹ per day, 1.50 fb⁻¹ per week and 4.83 fb⁻¹ per month. These are all numbers recorded by the Belle detector. The total data sample collected by Belle had reached 33.1 fb⁻¹ by mid-July.
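
These figures can be cross-checked against the peak luminosity with a simple unit conversion (1 pb⁻¹ = 10³⁶ cm⁻²). In the Python fragment below, the “duty factor” is simply the ratio of recorded to ideal integrated luminosity, lumping together downtime, luminosity decay and injection:

    # Ideal daily integrated luminosity at peak versus the recorded figure.
    peak = 4.49e33                     # peak luminosity, cm^-2 s^-1
    ideal_pb = peak * 86400 / 1e36     # 1 pb^-1 = 1e36 cm^-2
    recorded_pb = 232.0                # pb^-1 per day, recorded by Belle
    print(f"ideal {ideal_pb:.0f} pb^-1/day, duty factor {recorded_pb / ideal_pb:.0%}")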

KEKB’s luminosity has nearly doubled this year, as seen in the figure above. This was brought about by several machine improvements. First, 1300 of the 1800 m of field-free region in the arcs of the low-energy ring (LER) have been covered with solenoid windings. This suppressed the vertical blow-up of the beam due to the photoelectron cloud at currents up to about 900 mA. The second improvement came from the installation of new movable masks in the high-energy ring (HER). This replacement, already verified in the LER the previous year, has raised the HER stored-current limit from 580 to 770 mA.

Third, a state-of-the-art setting was achieved in the betatron tunes – very close to the half-integer resonance. The vertical tunes were raised beyond half-integer resonance lines in both rings to gain stability of the orbits as well as a wider high-luminosity area in the tune spaces. The horizontal tunes, especially in LER, were set even closer to the half-integer resonance to gain the dynamic focusing effect of the beam-beam interaction without sacrificing the machine aperture. Other notable improvements are in the orbit control, betatron tune monitor and control, beam-size control, the beam-abort system, the logging system and the injectors.

Though the peak luminosity and the integrated luminosity per month are a little higher than those of PEP-II/BaBar at SLAC, Stanford, KEKB needs further improvements to remain competitive with its rival in the long run (even assuming the present luminosity of PEP-II). KEKB faces significant obstacles to running more than nine months a year: the periodic inspection of the refrigeration system required by law, expensive summer electricity, a cooling system too weak to handle the summer heat, and so on.

Several improvements are planned during this summer’s shutdown. More solenoid windings are planned in the LER. Currently the electron-cloud effect still looks to be the dominant restriction on the number of bunches circulating in the LER.

Currently, the collision is carried out with four-bucket spacing. A shorter spacing is essential to achieve higher luminosity, because the bunch current is limited in both rings for various reasons.

The replacement of the vacuum chamber is planned at the interaction region (IR), where a tentative limit of the total current is given by the heating of the IR chambers. A new chamber with a taller aperture will be installed in the HER downstream of the IR, as well as additional cooling systems in LER upstream.

The current movable masks, of absorber type, are to be replaced with spoiler-type masks to reduce damage from beam loss. Such damage has been a serious obstacle in beam operation; thinner masks should suffer significantly less heating damage.

Other related improvements planned for this summer are a mask protection system with the beam-loss monitor and the addition of a few more radiofrequency cavities to give a margin for high-current operation.

The Belle detector at the Japanese KEKB is studying the decays of B-mesons (particles containing the fifth “b” quark), in particular the delicate violation of charge-parity (CP) symmetry (see above).

The impact of the machine’s performance on this measurement was eagerly awaited.

CP-violation parameter

In time for the summer conference season, both of the big experiments measuring the charge-parity (CP) asymmetry in the decays of B-mesons reported impressive results.

The BaBar experiment at the PEP-II electron-positron collider at SLAC, Stanford, based on a sample of 32 million B pairs, reported a value for the sin 2β CP-violation parameter of 0.59 ± 0.14. The Belle experiment at the KEKB electron-positron collider at the Japanese KEK laboratory, with 31.3 million B pairs, measures the parameter as 0.99 ± 0.14 ± 0.06.

These are the most precise measurements of this parameter so far. For statistical sticklers, the results are now clearly non-zero – physicists can say with confidence that CP violation happens in B decays. Using earlier measurements, the world average becomes 0.79 ± 0.12 compared to the theoretically predicted value of 0.70 ± 0.12.
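
The combination behind such numbers is the standard inverse-variance weighted average. The Python sketch below combines just the two results quoted above, adding Belle’s statistical and systematic errors in quadrature; the published world average of 0.79 also folds in earlier, less precise measurements:

    # Inverse-variance weighted average of the two sin(2 beta) results above.
    import math

    vals = [0.59, 0.99]                      # BaBar, Belle central values
    errs = [0.14, math.hypot(0.14, 0.06)]    # Belle errors combined in quadrature
    w = [1.0 / e ** 2 for e in errs]
    mean = sum(wi * vi for wi, vi in zip(w, vals)) / sum(w)
    print(f"sin(2 beta) = {mean:.2f} +/- {1.0 / math.sqrt(sum(w)):.2f}")   # ~0.77 +/- 0.10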

New CMS visitor centre proves a star attraction

The Compact Muon Solenoid (CMS) experiment, which is being prepared for CERN’s forthcoming Large Hadron Collider (LHC), became one of the laboratory’s star visitor attractions with the inauguration of a new visitor centre on 14 June.

Until recently, CERN’s 20 000 annual visitors were taken on a guided tour of one of the experiments at the Large Electron-Positron collider (LEP), the laboratory’s flagship research facility. With the closure of LEP last year, underground visits are no longer possible, and a new series of itineraries has been put in place, including preparations for LHC experiments.

The CMS experiment is particularly well suited for visits, because it will be constructed almost entirely on the surface.

Information about CMS, including animations and live Web-cams, can be found at “http://cmsinfo.cern.ch”.

Last LHC magnets from Siberia reach CERN

The delivery of Russian magnets to equip transfer lines to feed CERN’s new LHC collider is now complete.

Over the past two years, magnets have been steadily arriving at CERN from Novosibirsk’s Budker Institute.

Some 360 dipoles, each 6 m long, and 180 quadrupoles, each 1.4 m long, now safely at CERN, will be installed in two new underground transfer tunnels, each about 3 km long, connecting the SPS and LHC/LEP tunnels. One of these tunnels was recently linked to the 27 km LHC ring.

Each month some 10 magnet consignments travelled the 6000 km from Siberia, each bearing two dipoles and a quadrupole. Unlike the LHC’s main magnets, these are not superconducting. The Budker Institute supplies them under the 1993 Co-operation Agreement covering Russian participation in the LHC. Preliminary work on dipole elements is handled by the Efremov Institute, St Petersburg, and on quadrupole elements by the ZVI factory in Moscow. Additional manufacture and the final assembly of the magnets are done at Novosibirsk.
