Model suggests dark energy is an illusion

Arguably the most fascinating question in modern cosmology is why the universe is expanding at an accelerating rate. An original solution to this puzzle has been put forward by four theoretical physicists: Edward Kolb of Fermilab, Sabino Matarrese of the University of Padova, Alessio Notari of the University of Montreal, and Antonio Riotto of the Italian National Institute for Research in Nuclear and Subnuclear Physics (INFN)/Padova. Their study has been submitted to the journal Physical Review Letters.

In 1998, observations of distant supernovae provided detailed information about the expansion rate of the universe, demonstrating that it is accelerating. This can be interpreted as evidence of “dark energy”, a new component of the universe representing some 70% of its total energy density. (Of the rest, about 25% appears to be another mysterious component, dark matter, while only about 5% consists of ordinary “baryonic” matter.) Other explanations include a modification of gravity at large distances, and more exotic ideas such as the presence of a dynamic scalar field referred to as “quintessence”.

Although the hypothesis of dark energy is fascinating and more appealing than the other explanations, it faces a serious problem. Attempts to calculate the amount of dark energy give answers much larger than its measured magnitude: more than 100 orders of magnitude larger, in fact.

Kolb and colleagues offer an alternative explanation, which they say is rather conservative. They propose no new ingredient for the universe; instead, their explanation is firmly rooted in inflation, an essential concept of modern cosmology, according to which the universe experienced an incredibly rapid expansion at a very early stage.

The new explanation, which the researchers refer to as the Super-Hubble Cold Dark Matter (SHCDM) model, considers what would happen if there were cosmological perturbations with very long wavelengths (“super-Hubble”) larger than the size of the observable universe. They show that a local observer would infer an expansion history of the universe that would depend on the time evolution of the perturbations, which in certain cases would lead to the observation of accelerated expansion. The origin of the long-wavelength perturbations is inflation, as, effectively, the visible universe is only a tiny part of the pre-inflation-era universe. The accelerating universe is therefore simply an impression due to our inability to see the full picture.

Of course, observation is the ultimate arbiter between theories. The SHCDM model predicts a different relationship between luminosity-distance and redshift than the dark-energy models do. While the two models are indistinguishable within current experimental precision, more precise cosmological observations in the future should be able to distinguish between them.
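
The kind of comparison involved can be made concrete with the standard luminosity-distance formula for a flat Friedmann cosmology. The sketch below uses textbook parameter values purely for illustration; the SHCDM prediction itself requires the full perturbation calculation and is not reproduced here.

```python
import math

def luminosity_distance(z, omega_m=0.3, omega_l=0.7, h0=70.0, steps=10000):
    """Luminosity distance (Mpc) in a flat FRW cosmology:
    d_L = (1 + z) * (c/H0) * integral_0^z dz' / E(z'),
    with E(z) = sqrt(omega_m*(1+z)^3 + omega_l).  Midpoint-rule integration."""
    c = 299792.458  # speed of light, km/s
    dz = z / steps
    integral = sum(dz / math.sqrt(omega_m * (1 + (i + 0.5) * dz) ** 3 + omega_l)
                   for i in range(steps))
    return (1 + z) * (c / h0) * integral

# A supernova at z = 0.5 appears farther away (hence fainter) in an
# accelerating dark-energy universe than in a matter-only one:
d_lcdm = luminosity_distance(0.5)                              # ~2830 Mpc
d_matter = luminosity_distance(0.5, omega_m=1.0, omega_l=0.0)  # ~2360 Mpc
print(d_lcdm, d_matter)
```

Distinguishing the SHCDM model from dark energy amounts to measuring this distance-redshift relation precisely enough to tell two such curves apart.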

RHIC groups serve up ‘perfect’ liquid

The four detector groups conducting research at the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory have announced results indicating that they have observed a state of hot, dense matter that is more remarkable than had been predicted. In papers summarizing the first three years of RHIC findings, to be published simultaneously by the journal Nuclear Physics A, the four collaborations (BRAHMS, PHENIX, PHOBOS and STAR) say that instead of behaving like a gas of free quarks and gluons, as was expected, the matter created in RHIC’s heavy-ion collisions appears to be more like a liquid.

The evidence comes from measurements of unexpected patterns in the trajectories of the thousands of particles produced in individual collisions. The primordial particles produced tend to move collectively in response to variations of pressure across the volume formed by the colliding nuclei – an effect known as “flow”, since it is analogous to the properties of fluid motion.

However, unlike ordinary liquids, in which individual molecules move about randomly, the hot matter at RHIC seems to move in a pattern exhibiting a high degree of coordination among the particles.

This flow is consistent with that of a theoretically “perfect” fluid with extremely low viscosity and the ability to reach thermal equilibrium very rapidly because of the high degree of interaction among the particles. The physicists at RHIC do not have a direct measure of the viscosity, but they can infer from the flow pattern that, qualitatively, the viscosity is very low, approaching the quantum mechanical limit.

Belle discovers yet more new particles

The record performance of the KEK B-factory is currently supplying Belle with about 1 million B B̄ meson pairs per day. While data analyses on charge-parity (CP) violation and searches for new physics beyond the Standard Model continue, the vast amounts of accumulated data have helped another important aspect of Belle’s physics programme: the discovery of new particle states in the charm sector.

Recent additions to Belle’s new-particle list are the Y(3940) and the charmed baryon Σc(2800), to be added to the ηc(2S), the D0*(2308), the D1(2427) and the X(3872) already discovered. This brings Belle’s total of new particles discovered to six.

Now it seems, however, that Belle’s new-particle tally may be seven. Last summer the collaboration reported strong evidence for a mass peak, close in mass to the Y(3940), in the spectrum of particles recoiling against a J/Ψ in electron-positron collisions. Although the mass of this new peak is the same as that of the Y(3940) within errors (measurement errors are about 10 MeV for both observations), the Belle team is not yet convinced that the two states are the same and, for the time being at least, is referring to the new object as the X(3940).

The X(3940) decays mainly into D plus anti-D* mesons, as expected for charmonium states at this mass. The Y(3940), on the other hand, does not seem to follow this pattern, and its preference for decaying into an ω and a J/Ψ is difficult to understand in the context of the heavy-quark potential models that have explained the charmonium spectrum until now. The Y(3940) may not therefore be an ordinary quark-antiquark meson, but rather a “hybrid state” – a meson comprising a charmed quark, an anti-charmed quark and a gluon.

Belle’s particle hunters have their work cut out as they try to pin down the identity of the new particles they have already observed, while more data – and opportunities for more discoveries – pour in faster and faster.

TWIST tests the Standard Model

Normal muon decay (μ+ → e+ νe ν̄μ) is an ideal process for investigating the electroweak interaction in the Standard Model. The reaction involves only leptons, obviating the need for uncertain strong-interaction corrections and making it a clean probe of the theory’s purely left-handed (V-A) structure. A high-precision determination of the parameters describing the muon-decay spectral shape explores physics possibilities beyond the Standard Model, for example involving right-handed interactions. The world’s most precise determination of these parameters has been the goal of the TRIUMF Weak Interaction Symmetry Test (TWIST) experiment. The collaboration has recently completed its first phase by directly measuring the muon-decay parameters ρ and δ, improving on the Particle Data Group (PDG) values by factors of 2.5 and 2.9 respectively.

The distribution in energy and angle of positrons from polarized muon decay is described by the four “Michel parameters”. The spectrum’s isotropic part has a momentum dependence determined by ρ, plus an additional small term proportional to a second parameter, η. The asymmetry is proportional to a third parameter, ξ, multiplied by the muon polarization, Pμ, while a fourth parameter, δ, determines its momentum dependence. Within the Standard Model these parameters are predicted to be ρ = ¾, δ = ¾, ξ = 1 and η = 0.
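
For reference, the spectral shape these parameters describe can be written down explicitly. The sketch below is the standard Michel spectrum, neglecting radiative corrections; note that the sign convention for cos θ varies between references.

```python
def michel_spectrum(x, cos_theta, rho=0.75, eta=0.0, xi=1.0, delta=0.75,
                    p_mu=1.0, x0=0.00967):
    """Muon-decay positron spectrum (arbitrary normalization) versus
    reduced energy x = E/E_max and the angle theta between the muon
    polarization and the positron momentum.  x0 = m_e/E_max ~ 0.0097."""
    isotropic = (3.0 * (1.0 - x) + (2.0 * rho / 3.0) * (4.0 * x - 3.0)
                 + 3.0 * eta * x0 * (1.0 - x) / x)
    anisotropic = xi * p_mu * cos_theta * (
        (1.0 - x) + (2.0 * delta / 3.0) * (4.0 * x - 3.0))
    return x * x * (isotropic + anisotropic)

# With the Standard Model values (rho = delta = 3/4, xi = 1, eta = 0),
# the endpoint x = 1 vanishes for backward positrons and peaks forward:
print(michel_spectrum(1.0, -1.0))  # 0.0
print(michel_spectrum(1.0, +1.0))  # 1.0
```

Fitting a measured (x, cos θ) distribution with this shape is, in essence, how ρ and δ are extracted.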
TWIST uses beams of positive muons as they can be produced with high polarization. The high-intensity TRIUMF proton beam produces π+, some of which decay at rest at the surface of a production target to create a highly polarized “surface” muon beam with momentum 29.6 MeV/c, which is subsequently transported into a 2 T superconducting solenoid.

Most of the muon beam stops in a thin target located at the centre of a symmetric array of 56 low-mass, high-precision planar drift chambers. Since the statistical precision is very high, the final errors are dominated by systematic effects. The measured momentum and angular distributions of the decay positrons are shown in the figure. The drop in acceptance near |cos(θ)| = 0 is due to the poor reconstruction efficiency in that region. To extract the muon-decay parameters, a 2D fit is made to a fiducial region where the detector acceptance is uniform, using a blind-analysis technique. The results are based on 6 × 10⁹ muon decays, spread over 16 data sets. Four sets were analysed for both ρ and δ. A fifth set of low-polarization muons from pion decays in flight was also analysed for ρ. The remaining data sets, combined with further Monte Carlo simulations, were used to estimate the sensitivities to various systematic effects.

TWIST’s new measurement of ρ = 0.75080 ± 0.00032 (stat.) ± 0.00097 (syst.) ± 0.00023 (the last uncertainty due to the current PDG error in η) sets an upper limit on the mixing angle of a possible heavier right-handed partner to the W boson, |ζ| < 0.03 at 90% confidence level (c.l.). Combining ρ with the new measurement of δ = 0.74964 ± 0.00066 (stat.) ± 0.00112 (syst.), and the PDG value of Pμξδ/ρ, an indirect limit is set on Pμξ: 0.996 < Pμξ < 1.004 at 90% c.l. The lower limit slightly improves the limit on the mass of the possible right-handed boson, WR ≥ 420 GeV/c². Finally, an upper limit is found for the muon right-handed coupling probability, QμR < 0.00184 at 90% c.l.

Muon decay, combined with measurements from experiments at higher energies and in nuclear beta decay, helps our understanding of the asymmetry in the weak interaction’s handedness. In future phases of the experiment, TWIST aims to produce a direct measurement of Pμξ with a precision of a few parts in 10⁴ and to increase its sensitivity to ρ and δ by approximately another factor of five.

Electrons reveal secrets of neutrinos

The US Department of Energy’s Thomas Jefferson National Accelerator Facility (JLab) is well known for its Continuous Electron Beam Accelerator Facility (CEBAF), where experiments with a 6 GeV electron beam probe nuclear structure. Now it turns out the same beam may also be helpful for neutrino research. Physicists from several neutrino projects around the world recently visited JLab to take electron-scattering data on carbon, hydrogen, deuterium and iron targets.

Precise knowledge of neutrino beams and of neutrino interactions with atomic nuclei helps neutrino researchers analyse the results of their experiments. They gather this knowledge by participating in nuclear and high-energy physics experiments, a practice known as “neutrino engineering”. Examples are the HARP experiment at CERN, which measures pion-production cross-sections of protons on nuclear targets, and experiment E04-001 at JLab, which measures electron-nucleus cross-sections.

Electrons at CEBAF energies interact with nuclei predominantly via the electromagnetic force, while neutrinos interact via the weak force. However, precise information about the electron interaction provides information about the neutrino interaction, since the two forces are actually different aspects of the electroweak force. Electrons probe the vector structure of the nucleon, whereas neutrinos probe both the vector and axial-vector structure. So both probes are needed to understand the full electroweak structure of the nucleon and the nucleus.

The nuclear targets studied in the JLab experiment are the same as, or closely resemble, the production targets and detectors commonly used in neutrino experiments. Thus electron-scattering studies with nucleons and nuclei at low momentum-transfer-squared, Q², such as the data taken at JLab in the Q² range of 0.01-2 (GeV/c)², can provide information about how neutrinos interact in neutrino experiments. For example, since experiments such as K2K (KEK to Kamioka) in Japan and MiniBooNE at Fermilab use 1 GeV neutrino beams to study neutrino oscillations, electroweak analysis of 1 GeV electron-scattering data from E04-001 can be used as a first step to provide constraints on the neutrino cross-sections needed in the study of neutrino oscillations.

In the long term, neutrino physicists and JLab’s own researchers are collaborating on a future experiment, MINERvA (Main Injector Experiment ν-A), which is dedicated to measuring neutrino cross-sections in Fermilab’s NuMI (Neutrinos at the Main Injector) beam line. Combining the high-precision electron cross-section data from E04-001 with precise data on neutrino cross-sections from MINERvA should allow the axial structure of the nucleon to be extracted.

HESS detects mysterious high-energy sources in the Milky Way

The first detailed image of the central part of our galaxy at very-high-energy gamma rays shows several sources. Surprisingly, some of them do not have a known counterpart at radio, optical or X-ray wavelengths, so their nature is a complete mystery.

Gamma rays at tera-electron-volt energies are detected using the Earth’s atmosphere as the detection medium. The passage of such a photon through the upper atmosphere triggers a shower of relativistic electrons and positrons moving faster than the speed of light in air, and thus emitting Cherenkov radiation. This faint, bluish light flash can be detected at night by dedicated ground-based telescopes.
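
The condition for this emission is simply v > c/n in the medium. A small sketch, assuming a refractive index of about 1.0003 for air (an illustrative sea-level value; it falls with altitude):

```python
import math

def cherenkov_threshold_gamma(n):
    """Minimum Lorentz factor for Cherenkov emission: beta > 1/n."""
    return 1.0 / math.sqrt(1.0 - 1.0 / (n * n))

def cherenkov_angle_deg(n, beta=1.0):
    """Cherenkov emission angle, cos(theta_c) = 1/(beta*n)."""
    return math.degrees(math.acos(1.0 / (beta * n)))

M_E = 0.511   # electron mass, MeV/c^2
n_air = 1.0003  # assumed refractive index of air

print(cherenkov_threshold_gamma(n_air) * M_E)  # ~21 MeV electron threshold
print(cherenkov_angle_deg(n_air))              # ~1.4 degree emission angle
```

The small emission angle is what lets arrays of telescopes reconstruct the shower direction, and hence the gamma-ray arrival direction, to arc-minute precision.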

Currently, the most sensitive Cherenkov telescope array is the European-African High Energy Stereoscopic System (HESS) located in the Namibian desert. It consists of four mirror telescopes 13 m in diameter placed at the corners of a square of side 120 m. Its image resolution of a few arc-minutes has enabled for the first time a map to be made at tera-electron-volt energies of the central part of our galaxy, the Milky Way.

The image published in the journal Science by Felix Aharonian and an international team of scientists reveals eight new sources of very-high-energy gamma rays in the central 60° of the disc of our galaxy. This essentially doubles the number of sources known at these energies. Three of the newly discovered sources could be associated with supernova remnants, two with giga-electron-volt gamma-ray sources discovered by the Energetic Gamma-Ray Experiment Telescope (EGRET) aboard the Compton Gamma-Ray Observatory, and in three cases an association with pulsar-powered nebulae such as the Crab Nebula is not excluded.

However, at least two of the sources discovered by HESS are not at a position where there is a possible counterpart. These could be members of a new class of “dark” particle accelerators.

Cosmic particle accelerators are believed to accelerate charged particles in strong shockwaves such as those produced when the gas expelled from a supernova hits the ambient interstellar medium. High-energy gamma rays are secondary products, which have probably been boosted to tera-electron-volt energies by ultra-relativistic electrons through the inverse Compton process. Gamma rays are easier to detect because they travel in straight lines from their source – unlike charged particles, which are deflected by magnetic fields in the galaxy. The discovery of new sources in the HESS survey of the galaxy therefore helps to solve the long-standing question of the origin of cosmic rays.

Further reading

F Aharonian et al. 2005 Science 307 1938.

The ice cube at the end of the world

The IceCube observatory is being built to detect extraterrestrial neutrinos with energies above 100 GeV. Neutrinos are attractive for high-energy astronomy because, unlike other probe particles, they are not absorbed in dense sources and they travel in straight lines from their source. Protons and nuclear cosmic rays, by contrast, are bent by interstellar magnetic fields, and while photons fly straight, at energies above 10 TeV their interactions (by e⁺e⁻ pair creation) with interstellar background photons limit their range.

The interaction cross-sections of neutrinos are tiny, so a huge detector is required. Calculations of neutrino production in many different types of sources show that a 1 km³ (1 Gt) detector is required to observe astrophysical signals. IceCube will observe neutrinos that interact in Antarctic ice at the South Pole, producing muons, electrons or tau particles. These leptons interact with the ice (and the tau also decays), producing additional charged particles. High-energy (peta-electron-volt) muons travel many kilometres in the ice, and IceCube will observe muons that traverse the detector. The charged particles emit optical Cherenkov radiation, which can travel hundreds of metres before being detected by IceCube’s digital optical modules (DOMs). The type of neutrino, its direction and its energy can then be reconstructed by measuring the intensity and arrival time of the light at many DOMs.

Work at the South Pole

The South Pole might seem like an unusual place to build a huge detector, but the Antarctic ice is very clear and very stable. Deep below the surface, the light-absorption length can exceed 250 m. Compared with seawater, which is another active medium, the ice has much lower levels of background radiation and a longer attenuation length, but more light is scattered.

Using the US South Pole station as a base for operations, deployment of the IceCube detector in the ice began on 15 January, when the first hole was started. A jet of water heated to 90 °C was used to melt the hole. The drill pumped 750 l/min of water from a 5 MW heater to reach a drilling speed of slightly over 1 m/min. Drilling this first hole, 2450 m deep and 60 cm in diameter, took about 52 h. Once the drilling was complete, a string of 60 DOMs was lowered into the hole, which took another 20 h. The DOMs are attached to the string every 17 m, between depths of 1450 and 2450 m. The water that remained in the hole took about two weeks to freeze.

The South Pole is very different from an accelerator laboratory, and logistics is a key issue for IceCube. Environmental conditions are rough, and manpower and working time are limited, so everything must be carefully engineered and tested before being shipped to the Pole. All equipment must be flown in from the Antarctic coast on ski-equipped LC-130 turboprop aeroplanes. The drilling rig alone filled 30 flights, about an eighth of the annual capacity; fuel for the drill required another 25 flights. Many of the components were transported to the Pole in pieces. The reassembly time limited this inaugural drilling campaign to about 10 days.

The image below shows an early result of this hard work, a cosmic-ray muon in IceCube, in coincidence with an air shower observed by eight surface tanks that form part of the IceTop array above IceCube. These data were taken less than two weeks after deployment, showing that everything works “right out of the box”. At the time, many of the DOMs were not yet turned on. More recent tests have verified that all 60 DOMs are working.

This success owed much to the Antarctic Muon and Neutrino Detector Array (AMANDA), which preceded IceCube and had more than 650 modules. The AMANDA optical modules contained only a photomultiplier tube (PMT) and analogue signals were transmitted to the surface on the power cable; later versions used fibre-optic cables to transmit analogue signals. These schemes worked in AMANDA, which observed several thousand atmospheric neutrinos, but the approach required manpower-intensive calibrations and could not be scaled up for the much larger IceCube. The solution was “String 18”, a string of DOMs that, in addition to the AMANDA fibre readout, included electronics for locally digitizing the signals, and sending digital signals to the surface. The digital readout worked, and the DOM approach was adopted by IceCube. This advance was a key to reaching the 1 km3 scale.

Each DOM functions independently. Data collection starts when a photon is detected. The PMT output is collected with a custom waveform-digitizer chip, which samples the signal 128 times at 300 megasamples per second. Three parallel 10 bit digitizers combine to provide a dynamic range of 16 bits. Late-arriving light is recorded with a 40 MHz, 10 bit analogue-to-digital converter, which stores 256 samples (6.4 μs). These waveforms enable IceCube to reconstruct the arrival time of most detected photons. A large field-programmable gate array with an embedded processor controls the system, compresses the data and forms it into packets.
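
The capture windows implied by these figures are easy to check; the sketch below is simple arithmetic on the numbers quoted above.

```python
# Capture windows of the two DOM digitizers described in the text.
fast_samples, fast_rate = 128, 300e6  # custom waveform-digitizer chip
slow_samples, slow_rate = 256, 40e6   # ADC for late-arriving light

fast_window_ns = fast_samples / fast_rate * 1e9
slow_window_us = slow_samples / slow_rate * 1e6

print(fast_window_ns)  # ~427 ns of fine-grained waveform
print(slow_window_us)  # 6.4 us for late light, as quoted
```

The two windows are complementary: a short, finely sampled record of the prompt pulse, and a long, coarser record that catches scattered, late-arriving photons.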

The entire DOM uses only 5 W of power. Adjacent DOMs communicate via local-coincidence cables, allowing for possible coincidence triggers. Data are transmitted to the surface over the DOM power cables, and at the surface trigger conditions for the strings and array-wide are applied. The data are stored locally and selected samples transmitted to the Northern Hemisphere via satellite link.

A surface cosmic-ray air-shower detector array, IceTop, forms part of IceCube. IceTop will eventually consist of 160 ice-filled tanks 2 m in diameter, distributed over 1 km2. The tanks are similar to the water tanks used in other air-shower arrays – such as at Haverah Park in the UK, the Milagro Gamma Ray Observatory in the US and the Pierre Auger Observatory in Argentina. Each tank contains two DOMs frozen in the ice. DOMs in each pair of tanks are connected via local coincidence signals, providing a simple local trigger. Part of IceTop was the first piece of IceCube to be deployed, with eight tanks installed in December 2004.

IceTop will serve several functions: tagging IceCube events that are accompanied by cosmic-ray air showers; studying the cosmic-ray composition up to around 1018 eV (correlating IceTop showers with IceCube muons); and as a calibration source to tag directionally the cosmic-ray muons that reach IceCube.

One big problem in large arrays is measuring the relative timing between separated detector elements. IceCube solved this with “RapCal”, a timing calibration whereby signals are sent down the cables from the surface and then retransmitted back up. Accuracy is maintained by using identical electronics at the two ends of the cables. In IceCube, laboratory measurements and early data show that the local DOM clocks are kept calibrated to about 2 ns.
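
The underlying idea is standard two-way time transfer: with identical electronics at both ends, the cable delay can be taken as symmetric, and the clock offset drops out of a round-trip measurement. The sketch below illustrates the principle only; it is not IceCube’s actual RapCal implementation.

```python
def clock_offset_and_delay(t1, t2, t3, t4):
    """Two-way time transfer: surface sends at t1 (surface clock), remote
    receives at t2 (remote clock), replies at t3, surface receives at t4.
    Assuming a symmetric one-way delay, recover offset and delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t4 - t1) - (t3 - t2)) / 2.0
    return offset, delay

# Simulated example (nanoseconds): one-way cable delay 12000 ns, remote
# clock running 500 ns ahead, 1000 ns turnaround at the remote end.
t1 = 0.0
t2 = t1 + 12000 + 500          # remote timestamp includes its clock offset
t3 = t2 + 1000
t4 = t1 + 12000 + 1000 + 12000  # true round-trip arrival at the surface
print(clock_offset_and_delay(t1, t2, t3, t4))  # (500.0, 12000.0)
```

Any asymmetry between the up and down paths feeds directly into the offset estimate, which is why identical electronics at both cable ends matter.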

The environment at the South Pole motivated extensive reliability engineering and pre-deployment testing. The extended temperature range – down to -55 °C – was a challenge for the selection of parts and for design verification. Indeed, IceTop may reach temperatures below -55 °C, beyond the design range of any electronic components. Reliability estimates were also challenging. Conventional models predict that the failure rate halves for each 10 °C drop in temperature; according to these models, IceCube will last forever.
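
The rule of thumb quoted above is easy to apply. A sketch, taking the factor-of-two-per-10 °C model as given (actual behaviour at such low temperatures is outside the models’ validated range, which is the point of the remark):

```python
def failure_rate(rate_at_ref, temp_c, ref_temp_c=25.0):
    """Failure rate under the 'halves for every 10 degC cooler' rule of
    thumb, relative to a reference rate at ref_temp_c (illustrative)."""
    return rate_at_ref * 2.0 ** ((temp_c - ref_temp_c) / 10.0)

# Operating at -55 degC instead of a nominal 25 degC is eight 10-degree
# halvings: a factor 2**8 = 256 reduction in the predicted failure rate.
print(failure_rate(1.0, -55.0))  # 0.00390625
```

Extrapolated naively over 80 °C, the model predicts essentially negligible failure rates, hence the quip that IceCube will last forever.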

The physics of IceCube

IceCube will study many physics topics, but the major objective is high-energy neutrino astronomy. Any source that accelerates protons or heavier ions (cosmic rays) also produces neutrinos. The accelerated particles will collide with other nuclei, producing hadronic showers. Pions and kaons in the shower will decay, emitting neutrinos. Cosmic rays have been observed with energies up to 3 × 10²⁰ eV, and the neutrino spectrum should extend to a few per cent of this energy. The neutrino flux is model-dependent, but most calculations predict that a 1 km³ detector should see at least a handful of events each year.

There are several likely astrophysical sources of neutrinos. These include active galactic nuclei (AGNs), gamma-ray bursters (GRBs) and supernova remnants. AGNs are galaxies with massive black holes at their centres, which can power a jet of relativistic particles. Calculations based on the observed flux of photons at energies of tera-electron-volts show that IceCube should observe neutrinos from AGNs. GRBs are mysterious objects that produce bursts of high-energy gamma rays. They are associated with objects in galaxies, including hypernovae (very large supernovae) and colliding neutron stars. Some calculations suggest that IceCube should see a handful of neutrinos from a single GRB, which would be a striking result. Supernova remnants such as the Crab Nebula are the likely source of most cosmic rays of moderate energy in our galaxy. If this is correct, they must also produce neutrinos.

Neutrinos also probe cosmic rays more directly. Ultra-high-energy (above 5 × 10¹⁹ eV) protons interact with relic microwave photons from the big bang. These protons are excited into Δ resonances, which decay to lower-energy nucleons and pions. Subsequent pion and neutron decays then produce neutrinos. The proton energy loss limits the range of very energetic protons; this is known as the Greisen-Zatsepin-Kuzmin (GZK) cutoff. Photo-dissociation plays a similar role for heavier ions, limiting their range and producing neutrinos. By measuring the ultra-high-energy neutrino spectrum, IceCube will probe the cosmic-ray composition and the possible evolution (with redshift) of energetic cosmic-ray sources.
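
The threshold energy behind this cutoff follows from requiring the proton-photon centre-of-mass energy to reach the Δ mass. A sketch for a head-on collision with a typical microwave-background photon (illustrative values; averaging over photon energies and collision angles brings the effective cutoff down towards the 5 × 10¹⁹ eV quoted above):

```python
M_P = 0.938272e9   # proton mass, eV
M_DELTA = 1.232e9  # Delta(1232) mass, eV
E_CMB = 6.3e-4     # typical CMB photon energy, eV (~2.7 kT at 2.725 K)

# Head-on threshold for p + gamma -> Delta with an ultra-relativistic
# proton: s = m_p^2 + 4*E_p*E_gamma = m_Delta^2, hence
e_threshold = (M_DELTA ** 2 - M_P ** 2) / (4.0 * E_CMB)
print(e_threshold)  # ~2.5e20 eV
```

Every proton degraded through this process sheds pions and neutrons whose decays feed the ultra-high-energy neutrino flux IceCube will look for.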

Besides being two orders of magnitude larger, IceCube has several advantages over experiments such as AMANDA and the array in Lake Baikal. IceCube is optimized for higher-energy neutrinos (especially above 1 TeV), where the atmospheric neutrino background is lower. The high detector granularity will allow IceCube to study electron-neutrinos and tau-neutrinos as well as muon-neutrinos. The electron-neutrinos produce blob-like electromagnetic showers, which contrast strongly with long muon tracks; the latter can extend for many kilometres. Above 10¹⁵ eV, tau-neutrinos are identifiable through their distinctive “double-bang” signature – an initial shower from the tau-neutrino interaction and the single track of a tau particle, which eventually decays, producing a second shower.

IceCube will study many other physics topics. Over a decade, it will observe about 1 million atmospheric neutrinos, enough to search for deviations from the standard three-flavour scenario for neutrino oscillations. The IceCube collaboration will also look for violations of the Lorentz and equivalence principles, and will search for neutrinos produced by the annihilation of weakly interacting massive particles that have been gravitationally captured by the Earth or the Sun. Because of the very low dark-noise rates (about 800 Hz per DOM), IceCube can detect bursts of low-energy neutrinos from collapsing supernovae. The detector will also contribute to glaciology, studying the dust layers that record the Earth’s weather over the past 200,000 years.

• The IceCube collaboration consists of more than 150 scientists, engineers and computer scientists from the US, Belgium, Germany, Japan, the Netherlands, New Zealand, Sweden and the UK. IceCube is funded by a $242 million Major Research Equipment Grant from the US National Science Foundation, plus approximately $30 million from European funding agencies.

Further reading

J Ahrens et al. 2004 Astropart. Phys. 20 507.
E Andres et al. 2001 Nature 410 6827.
See also www.icecube.wisc.edu.

CERN’s innovations mean real benefits for industry

Technology transfer promotes the injection of science into all levels of daily life in many different ways. For example, nobody would ever have thought that a phenomenon based on quantum theory – quantum entanglement – would find practical applications in cryptography, computing and teleportation, and lead to the creation of companies to safeguard the sharing of information. High-energy particle physics stimulates innovative technological developments. In the quest to find out what matter is made of and how its different components interact, high-energy physics needs highly sophisticated instruments in which the technology and required performance often exceed the available industrial know-how. Thanks to the technologies developed for its research activities, CERN has produced improvements in a variety of fields, many of which are described in a new publication that illustrates the effectiveness of technology transfer between the organization and industry.

Since its creation in 1954 CERN has had a tradition of partnership with industry and making its technologies available to third parties. Many of CERN’s users come from distant locations and would like as much as possible to analyse data from their experiments in their home institutions. This led to the development of data networks between CERN and these institutes. As a result CERN became one of the major hubs of the European scientific data network, and with hindsight it is in a way natural that it was the birthplace of the World Wide Web. Furthermore, the major technology conferences and exhibitions that CERN has often organized – the first took place in 1974 – have been important occasions for establishing relationships between CERN and industry. However, up to the 1980s, except for the protection of computer software through a copyright statement, there was no structure in the laboratory to support an innovation policy.

During the first 30 years of its life, CERN did not use intellectual-property protection, such as patents. Its policy was “publish or perish”, rather than “protect, publish and flourish”. Furthermore, the conventional model of technology transfer was via purchasing contracts, which required frequent interaction between industry and CERN owing to the highly innovative equipment concerned. The contracts and the financial rules required competitive bidding, with the award going to the lowest offer – a process that is not well adapted to collaborative agreements aimed at technology transfer. Then in 1984, when planning for the Large Hadron Collider (LHC) began, CERN recognized the need for strong involvement of industry even at the initial R&D stage, given the magnitude and technical complexity of the project.

In 1986 the relations between CERN and industry were analysed, and two years later the member states encouraged the organization to take a more proactive attitude towards technology transfer. This was formalized with the establishment of the Industrial Technology Liaison Office – the beginning of a technology-transfer strategy at CERN. The call for technology for the development of the LHC detectors, launched in 1991, was another opportunity to reinforce the relationships between CERN and industry. At the same time, the laboratory began to place value on the protection of intellectual property generated by its activities, an approach endorsed by the creation of a Technology Transfer Group.

This means that CERN now has another way to fuel technical innovations in the industries of its member states, beyond the conventional method of procurement. The proactive model, facilitated by the endorsement of a technology-transfer policy in 2000, enables CERN to identify, protect, promote, transfer and disseminate innovative technologies in the European scientific and industrial environment. Once the technology and intellectual property have been properly identified and adequately channelled (that is to say, protected by the appropriate means), they enter a promotional step intended to attract external interest and to prepare the ground for targeted dissemination and implementation.

The dissemination and exploitation of CERN’s technologies are at the heart of the technology-transfer process. In addition to the conventional licensing model for transferring technology, there is a policy of R&D partnership, which aims to promote CERN’s technology more quickly and to further its dissemination outside particle physics. This type of transfer requires a large investment in the development of a specific product, and tangible financial results are therefore uncertain.

VENUS reveals the future of heavy-ion sources

A key technology for the next generation of heavy-ion accelerators will be a powerful, high-charge-state heavy-ion injector that provides an ion-beam intensity an order of magnitude higher than is currently achievable. In addition, future facilities – such as the proposed Rare Isotope Accelerator (RIA) in the US, the Radioactive Ion Beam Factory at RIKEN in Japan, and the project to upgrade the facility at the Gesellschaft für Schwerionenforschung (GSI) in Germany – will demand a high flexibility in the species of ions available for experiments that may last several weeks. High-performance electron cyclotron resonance (ECR) ion sources routinely produce beams of ions ranging from hydrogen to uranium, thereby providing the necessary reliability and flexibility. However, to meet the requirements for high currents, a new generation of ECR ion source will be needed.

CCEven1_05-05

The Versatile ECR Ion Source for Nuclear Science (VENUS), designed and built at the Lawrence Berkeley National Laboratory (LBNL), is the most advanced superconducting ECR ion source and the first “next-generation” source in operation (figure 1). It is the first fully superconducting ECR ion source that reaches magnetic-confinement fields sufficient for optimum operation at 28 GHz. Recently the project passed a major milestone with the successful coupling of 28 GHz microwaves into the plasma of the ion source. Preliminary tests at this frequency have already resulted in record intensities for beams of medium and highly charged ions. The results indicate for the first time that the high demands of the next generation of heavy-ion accelerators can be met.

The development of ECR ion sources has its roots in fusion plasma research in the late 1960s. The principle is to use magnetic confinement and ECR heating to produce a plasma made up of energetic electrons and relatively cold ions. Figure 2 shows the main ingredients of an ECR ion source: magnets for plasma confinement, microwaves for ECR heating, and gas to create and sustain the plasma. For high-charge-state sources the magnetic confinement consists of an axial magnetic-mirror field superimposed by a radially increasing sextupole (also called hexapole) or other multipole magnetic field. The combination of the axial mirror field and the radial multipole field produces a “minimum-B” configuration, in which the magnetic field is at a minimum at the centre of the device and increases in every direction away from the centre, providing stable plasma confinement.
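The resonance at the heart of ECR heating ties the microwave frequency directly to the magnetic field: electrons gyrate at the cyclotron frequency f_ce = eB/(2πm_e), which works out to roughly 28 GHz per tesla. A minimal sketch of this relation (the function names are illustrative, not from the article):

```python
from math import pi

E_CHARGE = 1.602176634e-19     # electron charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def cyclotron_frequency(b_field_tesla):
    """Electron cyclotron frequency f_ce = eB / (2*pi*m_e), in Hz."""
    return E_CHARGE * b_field_tesla / (2 * pi * M_ELECTRON)

def resonance_field(freq_hz):
    """Magnetic field at which electrons resonate with microwaves of freq_hz."""
    return 2 * pi * M_ELECTRON * freq_hz / E_CHARGE

# Heating at 28 GHz requires |B| of about 1 T somewhere in the plasma chamber;
# in a minimum-B trap the resonance is a closed surface surrounding the field
# minimum, so the confinement fields must exceed this value everywhere outside it.
print(f"f_ce at 1 T: {cyclotron_frequency(1.0) / 1e9:.1f} GHz")
print(f"resonance field for 28 GHz: {resonance_field(28e9):.2f} T")
```

Seen this way, VENUS’s record 4 T axial injection field is roughly four times the 28 GHz resonance field of about 1 T, which is why such strong magnets were needed for optimum operation at that frequency.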

CCEven2_05-05

The electrons, which are heated resonantly by microwaves, produce the high-charge-state ions primarily by sequential impact ionization of atoms and ions in the plasma. The ions and electrons must be confined for long enough for this sequential ionization to take place. In a typical ECR ion source, ions need to be confined for about 10⁻² s to produce high-charge-state ions. The ionization rate depends on the plasma density, which typically ranges from about 10¹¹ cm⁻³ for low-frequency sources to more than 10¹² cm⁻³ for the highest-frequency sources. Charge exchange with neutral atoms must be minimized, so operating pressures are typically 10⁻⁶ mbar or less. The plasma chamber is biased positively so that the ions can be extracted from the plasma and accelerated into the beam-transport system.

The first sources using ECR heating to produce multiply charged ions were reported in 1972 in France by Richard Geller. Since then the development and refinement of ECR ion sources have improved performance dramatically. For example, in 1980 the Micromafios source at the Centre d’Etudes Nucléaires de Grenoble produced 20 eμA of Ar⁸⁺ and 10 eμA of Ar⁹⁺; by 2003, the 18 GHz source at RIKEN was producing 2000 eμA of Ar⁸⁺ and 1000 eμA of Ar⁹⁺.

CCEven3_05-05

The main drivers for improving the performance of ECR ion sources were formulated in Geller’s famous ECR scaling laws, which predicted that higher magnetic fields and higher heating frequencies would increase both the plasma density and the ion-confinement time, and hence improve performance. Following these guidelines and using other experimental data, the ambitious design for the VENUS ECR ion source was developed with magnetic-confinement fields much greater than those of previous sources. The strong forces between the superconducting sextupole and the solenoid coils were the main technical challenge in building such a source. Indeed, VENUS was the first source to achieve such a strong confinement structure, and it holds the world record for the highest ECR confinement field: an axial field of 4 T at injection, 3 T at extraction and a radial field at the plasma-chamber wall of 2.4 T.

The technology that made this high field-strength possible was the careful design of the magnet-clamping structure, utilizing bladders filled with liquid metal and a split-pole structure made from iron and aluminium for the sextupole coils. The iron increases the radial field-strength by about 10%, and the aluminium pieces were used to match the thermal expansion of the superconducting wire and the pole. Figure 3 shows the conceptual design of the magnet structure.

Originally VENUS was designed to produce high-current, high-charge-state ions for the 88-inch cyclotron at LBNL, but it has evolved to serve also as the prototype injector ion source for the driver linac of the proposed RIA facility. In the latter application VENUS has become a highly visible project. Similar injector sources are proposed or under construction at RIKEN, GSI and the Laboratori Nazionali del Sud, Catania, in Italy.

Testing, testing…

The operational experience with VENUS has been excellent in terms of stability, reproducibility and reliability during the commissioning period with power at 18 and 28 GHz. During initial operation at 28 GHz, record intensities of medium-charge-state beams such as 245 eμA of Bi²⁹⁺, and high-charge-state beams such as 16 eμA of Bi⁴¹⁺, were extracted easily. The testing programme has initially focused on bismuth, since its mass is close to that of uranium, which will be the most challenging ion beam for RIA and also for the radioactive ion-beam factory at RIKEN. Bismuth is an ideal test beam since it is less reactive than uranium, not radioactive and evaporates at modest temperatures. However, the processes of extraction and ion-beam formation, as well as the transport characteristics, are very similar to those for uranium. Moreover, bismuth is also very similar to lead, so the results could also be of interest for a future intensity upgrade of the Large Hadron Collider at CERN.

The preliminary performance data measured at 28 GHz in 2004 with VENUS confirmed the scaling laws for intensity, and were the first evidence that meeting the high-intensity requirements is feasible. Nevertheless, these high-intensity beams present a new challenge for the beam-transport lines of ECR ion sources, which are traditionally designed for low-current ion beams. In addition, the high magnetic field in the extraction region strongly affects ion-beam formation and quality (i.e. emittance). This might appear to rule out further increases in confinement field and heating frequency. However, experiments have found that higher-charge-state beams have much better beam quality than lower-charge-state beams.

This can be explained by a model where the high-charge-state ions are extracted closer to the magnetic-field axis than the low-charge-state ions, leading to less angular momentum and a smaller transverse beam emittance. VENUS’s widely variable magnetic field at extraction will enable us to explore this model experimentally. Later this year the VENUS source will be tested with uranium ions – one of the key ion beams for RIA and for the RIKEN radioactive ion-beam factory – which will be a major milestone for the project.

RICH meeting provides a wealth of information

The well established ring imaging Cherenkov (RICH) technique measures the Cherenkov angle via direct imaging of photons emitted through the Cherenkov effect. It is mainly used in high-energy and astroparticle physics experiments to identify charged particles over an impressive range in momentum, from a few hundred mega-electron-volts/c up to several hundred giga-electron-volts/c. The performance of the technique has yet to be matched by competing technologies, especially when the physics objectives require excellent particle identification.
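The Cherenkov angle carries the particle-identification information: cos θ_c = 1/(nβ), so for a track of known momentum, different mass hypotheses predict measurably different ring radii. A minimal sketch of the kinematics (the refractive index of 1.0014 and the 10 GeV/c momentum are illustrative values, not taken from the article):

```python
from math import acos, sqrt

def beta(p_gev, m_gev):
    """Relativistic beta = p / E for momentum p and mass m (natural units, GeV)."""
    return p_gev / sqrt(p_gev**2 + m_gev**2)

def cherenkov_angle(p_gev, m_gev, n):
    """Cherenkov angle in radians, or None below threshold (n * beta <= 1)."""
    b = beta(p_gev, m_gev)
    if n * b <= 1.0:
        return None  # particle slower than light in the medium: no radiation
    return acos(1.0 / (n * b))

M_PION, M_KAON = 0.13957, 0.49368  # particle masses, GeV
N_GAS = 1.0014                     # illustrative gaseous-radiator index

# At 10 GeV/c a pion and a kaon give rings of clearly different opening angle.
theta_pi = cherenkov_angle(10.0, M_PION, N_GAS)
theta_k = cherenkov_angle(10.0, M_KAON, N_GAS)
print(f"pion: {1e3 * theta_pi:.1f} mrad, kaon: {1e3 * theta_k:.1f} mrad")
```

In a real RICH the measured angle is compared with such hypotheses track by track; at lower momenta the kaon falls below threshold while the pion still radiates, which is how the technique spans such a wide momentum range.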

CCEric1_05-05

In 1993, Eugenio Nappi of Bari and Tom Ypsilantis of Collège de France began a series of international workshops to provide a forum for reviewing the most significant developments and new perspectives on this powerful technique. From 30 November to 5 December 2004, the beautiful resort of Playa del Carmen on the Yucatan Peninsula in Mexico hosted the fifth in the series. Following on from the first workshop in Bari, Italy, and subsequent meetings in Uppsala, Ein Gedi and Pylos, this was the first foray to the other side of the Atlantic.

RICH2004 was dedicated to the centenary of the birth, in July 1904, of Pavel Cherenkov, who discovered the effect through which charged particles travelling faster than light in a medium emit a characteristic radiation. To honour him, the local organizing committee – headed by two seasoned RICH practitioners in Mexico, Jurgen Engelfried from the University in San Luis Potosi (UASLP) and Guy Paic from the Instituto de Ciencias Nucleares of the National University in Mexico City (UNAM) – invited Cherenkov’s daughter, Elena Cherenkova, to the meeting. A physicist herself, she captured the attention of the audience by recollecting her father through unpublished photographs and anecdotes. Boris Govorkov, a long-time collaborator of Cherenkov, was also invited to talk about the history of the discovery of Cherenkov radiation.

CCEric2_05-05

The workshop itself consisted of topical sessions on Cherenkov-light imaging applications and related technological issues. It attracted some 100 participants from around the world and the large number of abstracts received proved that this field is still very fertile and open to innovative ideas, about both the basic configuration of detectors and the technology used in their construction. While all the submitted contributions were very interesting, the organizers had to make a selection to allow time for extensive discussions between talks. In addition to the 10 invited talks, 55 other papers were accepted for oral presentations, while the remainder were conveyed in the poster session.

The main advances of the past few years played a central role in the workshop. These include the imaging of Cherenkov photons totally reflected in quartz bars (the basic principle of the Detection of Internally Reflected Cherenkov light [DIRC] technique adopted in the BaBar experiment at the Stanford Linear Accelerator Center [SLAC]); the development and applications of photocathodes made of thin films of caesium iodide (CsI); and the current availability of multi-anode photomultipliers (MAPMTs) and large-area hybrid photon detectors (HPDs).

Jochen Schwiening of SLAC gave an overview of the current DIRC layout for BaBar, while in a contribution to the poster session Jerry Va’vra, also from SLAC, described the possibility of enhancing the detector’s performance by adding a focusing system. The design and construction of gaseous photodetectors based on large-area CsI photocathodes, which work in reflective mode with electron extraction in CH4 at atmospheric pressure, have been mastered. This was shown by Abraham Gallas of CERN, Silvia Dalla Torre of Trieste and Mauro Iodice of Rome, who reported on applications in the ALICE and COMPASS detectors at CERN and in experiments in Hall A at the Jefferson Laboratory, respectively. Herbert Hoedlmoser of CERN also reviewed preliminary results from irradiation tests on CsI photocathodes.

Although gaseous photon detectors remain the most effective solution for large detector areas in relatively low-rate experiments, the improvements in the technology of multichannel vacuum-based photon detectors have created the possibility of using the Cherenkov-light imaging technique in applications that were unthinkable only a few years ago. One example is measuring how long Cherenkov photons take to propagate in long quartz bars (the time-of-propagation or TOP counter), as Toru Iijima from Nagoya discussed. The two RICHs being constructed for the LHCb experiment at CERN are the most exacting examples of this “new generation” and several talks covered their challenging design.

In parallel with the industrial production of HPDs and MAPMTs, the development of custom designs has recently evolved considerably. A partnership with one major industrial manufacturer has been established to develop multi-anode hybrid avalanche photodiodes and photodevices based on the combination of a micro-channel plate and micromegas, as reported by Takayuki Sumiyoshi of KEK and Va’vra, respectively.

Besides the CsI-based RICHs mentioned above, Vladimir Peskov from Paris also discussed novel gaseous detectors. In the same vein, Fabio Sauli of CERN reviewed advances in the gas electron-multiplier (GEM) technique, which enables high-performance detectors to be built that are essentially discharge-free and have very high gains. Amos Breskin of the Weizmann Institute reviewed the important results obtained in detecting single photons with a multi-GEM counter, filled with CF4 or Ar/CH4, which operates smoothly up to a gain as high as 10⁵. These studies were the basis for the development of the “hadron-blind” Cherenkov detector under construction for the Pioneering High-Energy Nuclear Interaction Experiment (PHENIX) at the Brookhaven National Laboratory (BNL), as reported by Itzak Tserruya, also of the Weizmann Institute.

The trend towards operating RICH detectors in the visible range improves performance, because chromatic aberration is smaller than for detectors working in the far-ultraviolet region; it also allows a larger choice of radiator materials, such as silica aerogel. Several speakers discussed the outstanding improvement in the optical characteristics of this remarkable material, made possible by the work of the group of Alexander Danyliuk and Alexei Onuchin in Novosibirsk and of the Japanese company Matsushita.

The high transparency now achievable in aerogel, and the possibility of producing tiles made of layers with different refractive indices, enable more compact detector designs based on a proximity-focusing geometry. Such a design is envisaged for the upgrade of the Belle experiment at KEK and, in threshold mode, for heavy-ion experiments, as reported by Peter Krizan of Ljubljana and Paic, respectively. On the technological side, Veljko Radeka of BNL reported on perspectives for front-end and read-out electronics, and Olav Ullaland of CERN discussed the design of fluid systems.

A discussion about the performance of operating detection systems included overviews of the RICH for the HERA-B experiment at DESY, from Marko Staric of Ljubljana; the triethylamine RICH in the CLEO III experiment at Cornell, from Radia Sia of Syracuse; and the dual-radiator RICH of the HERMES detector at DESY, from Harold Jackson of Argonne. Forthcoming RICH detectors in fixed-target and collider experiments took centre stage halfway through the workshop with reports on RICH2 for COMPASS at CERN from Fulvio Tessarotto of Trieste, and the RICHs for the Charged Kaons at the Main Injector (CKM) and B Physics at the Tevatron (BTeV) experiments at Fermilab from Peter Cooper of Fermilab and Tomasz Skwarnicki of Syracuse, respectively. The impressive results obtained with RICH detectors, especially in charge-parity violation in B-physics experiments, were reviewed by Blair Ratcliff of SLAC.

A full day of the workshop was devoted to astroparticle physics applications, beginning with overviews from Greg Hallewell of Marseille and Alan Watson of Leeds. They made it clear that the new generation of experiments under construction in astrophysics will be the most challenging designs ever attempted.

The consensus of the workshop was that nowadays, with the exception of the next generation of experiments at linear colliders, all experiments need and plan for particle identification at ever-higher momenta and therefore, for the most part, rely on RICH detectors. This was the key message of the talk by Nappi at the end of the meeting. The high quality of the talks and the enjoyable location made this event a great success, and RICH practitioners are very much looking forward to the sixth in the series, which will be held in Trieste in autumn 2007.

• RICH2004 was sponsored by several Mexican institutions including the Centro de Investigación y de Estudios Avanzados (CINVESTAV), the Consejo Nacional de Ciencia y Tecnología (CONACyT), UNAM and UASLP; CERN; the National Science Foundation (NSF); the Centro LatinoAmericano de Fisica (CLAF); SLAC; and Fermilab.
