EU decides on the future of research

On 20 April Europe’s seven major intergovernmental research organizations, working together in the EIROforum partnership, presented their comprehensive paper on science policy, “Towards a Europe of Knowledge and Innovation”.

Five years ago, at the meeting of the European Council in Lisbon, the creation of a European Research Area (ERA) was proposed as a means to achieve the ambitious targets necessary to develop a leading, knowledge-based economy in Europe.

Two years later the EIROforum partnership was created between seven of Europe’s major intergovernmental research organizations, the oldest of which is CERN. These organizations operate some of the largest research infrastructures in the world, with a combined budget comparable to that of the current Sixth Framework Programme (FP6) of the European Union (EU).

The EIROforum paper describes the partnership’s collective vision for the future of European scientific research necessary to support the Lisbon Process by working for the implementation of the ERA. The partners support the creation of a climate in Europe in which competitive research is undertaken in an efficient, cost-effective and successful manner. The aim is to be able to recruit and retain world-leading scientists in Europe, and at the same time help European industry by promoting joint front-line research that can generate important spin-offs. The paper presents many concrete ways in which the EIROforum organizations can participate effectively in the consolidation of the ERA.

A couple of weeks earlier, the European Commission adopted the proposal for the seventh Framework Programme (FP7). FPs are the EU’s main instrument for funding research in Europe. They cover a period of five years with the last year of one FP and the first year of the following FP overlapping. FP6 has been operational since 2003 with a total budget of €17.5 billion. FP7 will cover the period 2007-2013 with a budget of €72.7 billion and a time span of seven instead of five years. The ambitious proposal calls for improved efficiencies and aims to build on the achievements of previous programmes.

A new element is the establishment of a “European Research Council”, an independent, science-driven body that will fund European frontier research projects and ensure that European research is competitive at a global level. It will implement the peer review and selection process and will ensure the financial and scientific management of the grants. The EIROforum paper also supports this proposal.

In a third European initiative, on 8 April the European Strategy Forum on Research Infrastructures (ESFRI) presented the EU Commission with its paper “Towards New Research Infrastructures for Europe – the ESFRI ‘List of Opportunities’”. The forum was launched in April 2002 to support a coherent approach to policy-making on research infrastructures in Europe. Its horizon is the next 10-20 years.

The projects chosen had to be of pan-European interest, of international relevance and in an advanced state of maturity, so that they could receive funds under FP7. The forum wanted a “balanced” list that best corresponds to the major needs of Europe’s scientific community. Of the 23 opportunities in total, four projects were in physics and astronomy, four concerned multidisciplinary facilities and one was in computing.

Of the physics and astronomy projects, two are in nuclear physics, one in astronomy and one in neutrino physics (KM3NeT, a future deep underwater experiment in the Mediterranean). Multidisciplinary facilities include a European X-ray free-electron laser (XFEL) facility. The report also mentions, without specific details, five global projects with strong European participation, including the International Space Station (ISS) and the International Linear Collider (ILC).

• The seven EIROforum members are the European Organization for Nuclear Research (CERN), the European Fusion Development Agreement (EFDA), the European Molecular Biology Laboratory (EMBL), the European Space Agency (ESA), the European Southern Observatory (ESO), the European Synchrotron Radiation Facility (ESRF) and the Institut Laue-Langevin (ILL).

Neutral Atom Trap at TRIUMF places best limits on scalar bosons

“Table-top” experiments can still probe physics complementary to particle searches at high-energy accelerators. A beta-neutrino correlation experiment using TRIUMF’s Neutral Atom Trap (TRINAT) has now set the best limits on general scalar interactions contributing to nuclear beta decay.

TRINAT uses the radiation pressure of laser light to capture radioactive atoms in a 1 mm-sized cloud. Laser light of a frequency slightly below an atomic resonance is shone from all sides of the trap. An atom moving away from the trap centre then “sees” the light propagating against its motion blueshifted closer to resonance, while the light propagating along its motion appears redshifted further from resonance. The net effect is a radiation pressure opposite to the direction of motion, because the atom absorbs more of the light that is closer to its resonance.
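
The velocity dependence of this trapping force is just the textbook two-level scattering force. The short sketch below is purely illustrative – the linewidth, wavelength, detuning and saturation parameter are assumed round numbers, not TRINAT’s potassium-trap values – but it shows how two counter-propagating, red-detuned beams always push against the atom’s motion.

```python
import numpy as np

# Illustrative Doppler-cooling sketch; all parameters are assumed values,
# not those of the actual TRINAT potassium trap.
# Scattering force of a single beam on a two-level atom:
#   F = hbar*k * (Gamma/2) * s / (1 + s + (2*delta_eff/Gamma)**2),
# where delta_eff = delta - k*v is the detuning seen by the moving atom.

hbar = 1.054e-34            # J s
Gamma = 2 * np.pi * 6e6     # natural linewidth (rad/s), assumed
k = 2 * np.pi / 780e-9      # laser wavenumber (1/m), assumed 780 nm light
delta = -Gamma / 2          # red detuning: laser tuned below resonance
s = 1.0                     # saturation parameter, assumed

def beam_force(v, direction):
    """Radiation-pressure force of one beam (direction = +1 or -1) on an atom with velocity v."""
    delta_eff = delta - direction * k * v          # Doppler-shifted detuning
    rate = (Gamma / 2) * s / (1 + s + (2 * delta_eff / Gamma) ** 2)
    return direction * hbar * k * rate

for v in (-5.0, 0.0, 5.0):                         # atom velocity in m/s
    net = beam_force(v, +1) + beam_force(v, -1)
    print(f"v = {v:+.1f} m/s -> net force = {net:+.3e} N")
# The net force opposes the motion: the red-detuned beams act as a
# velocity-dependent friction ("optical molasses").
```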

The trapped atomic nuclei undergo beta decay, which produces three decay products: a positron (β+), a neutrino (ν) and the recoiling daughter nucleus. The daughter nucleus has a kinetic energy of 0-430 eV; while it would stop in 1 nm of material, it can escape the trap. By measuring the momentum of the nucleus in coincidence with that of the β+, the TRINAT team can deduce the momentum of the neutrino more accurately than in previous experiments (which did not measure the recoil energy).
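
Because the trapped atom is essentially at rest and decays in vacuum, the neutrino momentum follows event by event from momentum conservation; schematically,

```latex
\vec{p}_{\nu} = -\left(\vec{p}_{\beta^{+}} + \vec{p}_{\mathrm{recoil}}\right),
```

so measuring the positron and recoil momenta in coincidence fixes both the direction and the magnitude of the neutrino momentum.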

These techniques have been pioneered at TRIUMF using potassium isotopes with 1 s half-lives produced at the Isotope Separation and Acceleration (ISAC) facility with the main TRIUMF cyclotron – this “table-top” experiment admittedly is driven by the world’s largest cyclotron. Results are also becoming available from other experiments based on neutral-atom traps at Berkeley and Los Alamos.

In the Standard Model the weak interaction is mediated by spin-1 vector bosons, the W+, W− and Z. Measurements of the β-ν angular distribution in the decay 38mK → 38Ar + β+ + ν, where both parent and daughter have no nuclear spin, allow a search for contributions from hypothetical spin-0 scalar bosons. The TRINAT result for the β-ν correlation parameter a is 0.9981 ± 0.0030 ± 0.0037, consistent with the Standard Model value a = 1.
The previous best result, by a Seattle-Notre Dame collaboration using beta-delayed proton emission of 32Ar produced at the ISOLDE facility at CERN, is in the process of being re-evaluated after new measurements of the mass of parent and daughter. Such results constrain the existence of spin-0 bosons with mass-to-coupling ratios as great as four times the W+ mass, and are complementary to other measurements.
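
For reference, the correlation parameter a enters the decay-rate distribution in the standard form (written here schematically for an unpolarized parent, neglecting the Fierz interference term):

```latex
\frac{d\Gamma}{d\Omega_{\beta}\, d\Omega_{\nu}} \;\propto\;
1 + a\,\frac{\vec{p}_{\beta}\cdot\vec{p}_{\nu}}{E_{\beta}E_{\nu}} .
```

For a pure Fermi (0+ → 0+) transition the Standard Model vector interaction gives a = +1, while a scalar coupling pulls a below unity, roughly as a ≈ (1 − |C_S/C_V|²)/(1 + |C_S/C_V|²) in the simplest case; this is why a measurement of a at the sub-percent level translates into tight limits on scalar couplings.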

TRINAT can determine detector response functions in situ from the data themselves, something that is routinely done in high-energy experiments but had never before been achieved for low-energy beta decay. The experiment has also used the equivalent of the missing-mass construction in high-energy physics to constrain the admixture of possible sterile neutrinos of million-electron-volt mass with the electron-neutrino.

TRINAT is also investigating other physics topics. These include measuring the neutrino asymmetry from polarized nuclei to search for evidence of non-Standard Model right-handed neutrinos (using a measurement complementary to the purely leptonic muon decay studied at TRIUMF and PSI); measuring the spin asymmetry of the daughter nuclei in pure Gamow-Teller decays; and testing hints of a nonzero tensor interaction reported in π → eνγ by the PIBETA collaboration at PSI.

Giant flare illuminates the Earth

On 27 December 2004, one day after the devastating tsunami in the Indian Ocean, the Earth was illuminated by the biggest splash of light ever recorded from outside the solar system. For 0.2 s, the flare released as much energy as has been radiated by the Sun in 250,000 years. Five papers recently published in Nature describe this event.

The source of this giant flare was identified as the soft gamma repeater SGR 1806-20, located some 50,000 light-years away on the opposite side of the galaxy. Soft gamma repeaters flare up randomly and release gamma rays with a slightly softer spectrum than usual gamma-ray bursts. Only four such objects are known, and a giant flare has now been detected from three of them.

The 2004 event is, however, more than an order of magnitude brighter than those recorded previously, on 5 March 1979 (SGR 0525-66) and 27 August 1998 (SGR 1900+14). Soft gamma repeaters are thought to be “magnetars” – isolated neutron stars with an extreme magnetic field that reaches 100 billion T at the surface of the star.

The most likely interpretation of this dramatic outburst is a magnetic reconnection, similar to – but much more powerful than – solar flares. Its unusual strength may be related to a quake in the crust at the surface of the neutron star. According to Kevin Hurley and collaborators, the outward opening of the magnetic field lines launched a hot fireball: a thermal pair plasma emitting the quasi-blackbody spectrum, with a kT value of around 200 keV, observed during the initial gamma-ray spike (Hurley et al.).

This prompt emission, first reported by ESA’s INTEGRAL satellite, was followed by an exponential decay lasting about 400 s. On top of the general trend, very clear oscillations have been recorded with a period of 7.56 s, the previously known spin period of the magnetar SGR 1806-20.

On 3 January 2005, the Very Large Array (VLA) in New Mexico detected a radio source at the position of the giant flare (B M Gaensler et al.). Further observations over the following weeks showed that the radio-emitting fireball was expanding at roughly one-third the speed of light. Polarization measurements suggest that the source is not spherical; indeed, it appears elongated, with its shape changing from one observation to the next.

The extraordinary luminosity of the flare of December 2004 suggests that similar events could have been seen in nearby galaxies. Such an event would look like one of the many short gamma-ray bursts (GRBs) detected by the Burst And Transient Source Experiment (BATSE) in the 1990s. Hurley and collaborators therefore speculate that about 40% of the short GRBs detected by BATSE could be due to such giant flares from magnetars. However, the suggestion by P Cameron and colleagues that SGR 1806-20 is at only about half the distance assumed for this estimate makes it less likely that such events could explain a significant fraction of the still mysterious GRBs (Cameron et al.).

Further reading

P B Cameron et al. 2005 Nature 434 1112.
B M Gaensler et al. 2005 Nature 434 1104.
K Hurley et al. 2005 Nature 434 1098.
D M Palmer et al. 2005 Nature 434 1107.
T Terasawa et al. 2005 Nature 434 1110.

PET and CT: a perfect fit

[Image: video capture of David Townsend]

David Townsend is a professor in the Department of Medicine, University of Tennessee Medical Center in Knoxville, Tennessee (TN). The winner of the 2004 Clinical Scientist of the Year Award from the Academy of Molecular Imaging, he is an internationally renowned researcher with 30 years’ experience as a physicist working in the field of positron emission tomography (PET). Townsend began his eight years at CERN in 1970. While working at the Cantonal Hospital in Geneva from 1979 to 1993, he recognised the importance of combining the functionality of PET with that of computed tomography (CT). During that same period, Townsend also worked with Georges Charpak, CERN physicist and 1992 Nobel laureate in physics, on medical applications of Charpak’s multi-wire chambers.

After Townsend moved to Pittsburgh in 1993, his group in the US helped to develop the first combined PET/CT scanner; more than 1000 are now used worldwide to image human cancer. In 1999, Townsend received the Image of the Year Award from the Society of Nuclear Medicine in the US, for an image he produced using the first prototype scanner combining state-of-the-art PET with true diagnostic-quality CT.

Current research objectives in instrumentation for PET include advances in PET/CT methodology and the assessment of the role of combined PET/CT imaging for a range of different cancers. The PET/CT combination, pioneered by Townsend and Ron Nutt, CEO and president of CTI Molecular Imaging in Knoxville, TN, is a milestone in these developments, revealing in particular the role of the physicist and engineer in bringing such developments into clinical practice and exploring how they affect patient care.

The past 20 years have seen significant advances in the development of imaging instrumentation for PET. Current high-performance clinical PET scanners comprise more than 20,000 individual detector elements, with an axial coverage of 16 cm and around 15% energy resolution. Can you identify the most important factors that have contributed to this remarkable development in PET?

This impressive progress is due essentially to developments in detector construction, new scintillators, better scanner designs, improved reconstruction algorithms, high-performance electronics and, of course, the vast increase in computer power, all of which have been achieved without an appreciable increase in the selling price of the scanners.

The PET/CT image is one of the most exciting developments in nuclear medicine and radiology, its significance being the merging not simply of images but of the imaging technology. Why is the recent appearance of combined PET and CT scanners that can simultaneously image both anatomy and function of particular importance?

[Image: the first mouse image, taken in 1977]

Initial diagnosis and staging of tumours are commonly based on morphological changes seen on CT scans. However, PET can differentiate malignant tissue from benign tissue and is a more effective tool than CT in the search for metastases. Clearly, valuable information can be found in both, and by merging the two it is possible now to view morphological and physiological information in one fused image. To acquire the PET/CT image, a patient passes through the CT portion of the scanner first and then through the PET scanner where the metabolic information is acquired. When the patient has passed through both portions, a merged or fused image can be created.
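
In software terms, once the CT and PET volumes share the scanner’s common coordinate frame, the fusion step itself can be as simple as overlaying the normalized PET values on the CT values. The toy sketch below uses synthetic arrays and an arbitrary blending weight; it illustrates the idea only and is not a clinical registration or reconstruction pipeline.

```python
import numpy as np

# Toy illustration of PET/CT image fusion on already co-registered 2D slices.
# Both arrays are synthetic and in arbitrary units; a real pipeline would also
# resample the PET volume onto the CT grid and apply separate colour maps.

rng = np.random.default_rng(0)
ct = rng.uniform(0.0, 1.0, size=(128, 128))   # stand-in anatomical slice
pet = np.zeros((128, 128))
pet[60:70, 60:70] = 1.0                       # stand-in "hot" lesion

def fuse(ct_slice, pet_slice, alpha=0.4):
    """Alpha-blend a normalized PET slice onto a normalized CT slice."""
    ct_n = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) or 1.0)
    pet_n = (pet_slice - pet_slice.min()) / (np.ptp(pet_slice) or 1.0)
    return (1.0 - alpha) * ct_n + alpha * pet_n

fused = fuse(ct, pet)
print(fused.shape, float(fused.min()), float(fused.max()))  # values stay in [0, 1]
```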

Let’s take a step back. The history of PET is rich, dynamic and marked by many significant technological achievements. Volumes would be required to record the history of PET developments, and its birth remains quite controversial. Could you identify the most important events that have shaped modern PET?

You are indeed correct that the birth of PET is somewhat controversial. One of the first suggestions to use positron-emitting tracers for medical applications was made in 1951 by W H Sweet and G Brownell at Massachusetts General Hospital, and exploratory work with such tracers continued through the 1950s. During the late 1950s and 1960s, attempts were made to build a positron scanner, although these attempts were not very successful. After the invention of the CT scanner in 1972, tomography in nuclear medicine received more attention, and during the 1970s a number of different groups attempted to design and construct a positron scanner.

S Rankowitz and J S Robertson of Brookhaven National Laboratory built the first ring tomograph in 1962. In 1975, M Ter-Pogossian, M E Phelps and E Hoffman at Washington University in St Louis presented their first PET tomograph, known as the Positron Emission Transaxial Tomograph I (PETT I). Later the name was changed to PET, because the transaxial plane was not the only plane in which images could be reconstructed. In 1979, G N Hounsfield and A M Cormack were awarded the Nobel Prize in Physiology or Medicine in recognition of their development of X-ray CT.

Since the very early development of nuclear-medicine instrumentation, scintillators such as sodium iodide (NaI) have formed the basis for the detector systems. The detector material used in PET is the determining factor in the sensitivity, the image resolution and the count-rate capability.

In the mid-1970s the only real choice of detector was thallium-activated NaI – NaI(Tl) – which requires care during manufacture because of its hygroscopic nature. More importantly, it also has a low density and a low effective atomic number, which limit its stopping power and its efficiency for detecting the 511 keV gamma rays from positron annihilation. Which other scintillators have contributed to modern PET tomography?

Thanks to its characteristics, bismuth germanate, or BGO, is the crystal that has served the PET community well since the late 1970s, and it has been used in the fabrication of most PET tomographs for the past two decades. The first tomograph to employ BGO was designed and built by Chris Thompson and co-workers at the Montreal Neurological Institute in 1978.

Although the characteristics of BGO are good, a new scintillator, lutetium oxyorthosilicate (LSO) (discovered by C Melcher, now at CTI Molecular Imaging in Knoxville, TN), is a significant advance for PET imaging. BGO is very dense but has only 15% of the light output of NaI(Tl). LSO has a slightly greater density and a slightly lower effective atomic number, but has five times more light output and is seven times faster than BGO. The first LSO PET tomograph, the MicroPET for small animal imaging, was designed at the University of California in Los Angeles (UCLA) by Simon Cherry and co-workers. The first human LSO tomograph, designed for high-resolution brain imaging, was built by CPS Innovations in Knoxville, TN, and delivered to the Max Planck Institute in February 1999.

What were your key achievements in PET during your career at CERN? Did CERN play a role in its birth?

[Image: a PET/CT scan revealing malignancy in two pericaval nodes]

In 1975, I was working at CERN when Alan Jeavons, a CERN physicist, asked me to look at the problem of reconstructing images from PET data acquired on the small high-density avalanche chambers (HIDACs) he had built for another application with the University of Geneva. We got the idea of using the HIDACs for PET because a group at Berkeley and the University of California, San Francisco (UCSF) was using wire chambers for PET. I developed some software to reconstruct the data from Jeavons’ detectors, and we took the first mouse image, with the participation of radiobiologist Marilena Streit-Bianchi, in 1977 at CERN.

The reconstruction methods I developed at CERN were further extended mathematically by Benno Schorr (a CERN mathematician), Rolf Clackdoyle and myself from 1980 to 1982. We used those, and other algorithms developed by Michel Defrise in Brussels and Paul Kinahan in Vancouver, in 1987 and 1988 to reconstruct PET data from the first CTI [Computer Technology and Imaging Inc, renamed CTI Molecular Imaging in June 2002] multi-ring PET scanner installed in London at Hammersmith Hospital. PET was not invented at CERN, but some essential and early work at CERN contributed significantly to the development of 3D PET, and then to a new scanner design, the Advanced Rotating Tomograph (ART).

The prototype of the ART scanner, the Partial Ring Tomograph (PRT), was developed at CERN from 1989 to 1990 by Martin Wensveen, Henri Tochon-Danguy and myself, and evaluated clinically at the Cantonal Hospital within the Department of Nuclear Medicine under Alfred Donath. The ART was a forerunner of the PET part of the combined PET/CT scanner, which has now had a major impact on medical imaging.

What has to happen for us to reach a more highly performing PET/CT combination?

The sensitivity of the PET components must be improved in order to acquire more photons in a given time. That is still a challenge, because the axial coverage of current scanners is only 16 cm, whereas after injection of the radiopharmaceutical, radiation is emitted from everywhere in the patient’s body where the radiopharmaceutical localizes. So, if the detector covered the whole body, the patient could be imaged in one step. However, building such a system would be very expensive.

Do you think it is still possible to have other combinations with other imaging techniques?

Yes, absolutely, but only if there is a medical reason to do it – such a development won’t be driven by advances in technology alone. When we looked at building a PET/CT scanner, we found that most whole-body anatomical imaging for oncology is still performed with CT, whereas in brain and spinal malignancies, anatomical imaging is performed with magnetic resonance (MR).

PET/CT is less technologically challenging than combining PET with MR. PET and CT modalities basically do not interfere with each other, except maybe when they are operated simultaneously within the same gantry. The combined PET/CT scanner provides physicians with a highly powerful tool to diagnose and stage disease, monitor the effects of treatment, and potentially design much better, patient-specific therapies.

What is the actual cost of a PET/CT scanner?

The cost of the highest-performing system is about $2.5 million [€1.98 million], but it may be significantly less if a lower-performance design is adequate for the envisaged application.

  • This article was adapted from text in CERN Courier vol. 45, June 2005, pp23–25

Energy-recovering linacs begin maturing

In March, 159 scientists from around the world gathered at the US Department of Energy’s (DOE’s) Jefferson Lab (JLab) in Newport News, Virginia, for ERL2005, the first international workshop dedicated to energy-recovering linear accelerators (ERLs). The workshop was conceived during accelerator discussions preceding the 2003 publication of the DOE’s Facilities for the Future of Science: A Twenty-Year Outlook.

Those discussions initially focused on the need to develop high-brightness, high-current injectors, but soon expanded to include the ERLs then beginning to be implemented on three continents. Planning ensued for ERL2005, which was approved by the International Committee for Future Accelerators (ICFA) as an Advanced ICFA Beam Dynamics Workshop, and interest quickly grew. Other sponsors included three institutions building or planning to build superconducting radio-frequency (SRF) ERLs: Cornell University and Brookhaven National Laboratory in the US and the Council for the Central Laboratory of the Research Councils’ (CCLRC’s) Daresbury Laboratory in the UK.

The growth of ERLs

ERLs began to come of age in 1999 at a light source at JLab – the SRF ERL-driven free-electron laser (FEL). At present, several ERL projects around the world are under design or construction, and test facilities at several laboratories have been funded. Unlike the recycling of electrons in a synchrotron or a storage ring, an ERL uses a conceptually simple phasing technique to recycle the electrons’ energy. On a path measuring exactly an integer multiple of the linac RF wavelength plus a half-wavelength, an ERL’s accelerated beam travels through an experiment and re-enters the linac to yield back its energy, via the RF field, to the beam being accelerated. The decelerated beam is then dumped at low energy.
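
In other words, the recirculation path length is chosen so that the returning bunches arrive half an RF period out of phase with the accelerating field; a schematic statement of the condition is

```latex
L_{\mathrm{return}} = \left(n + \tfrac{1}{2}\right)\lambda_{\mathrm{RF}},
\qquad n \in \mathbb{Z},
```

so a bunch that was accelerated on the crest of the RF wave re-enters the linac on the decelerating crest and hands its energy back to the field for the bunches behind it.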

An obvious advantage of ERLs is economic. Consider, for example, the ERL-driven 4th Generation Light Source (4GLS) facility planned for Daresbury, where a prototype ERL is under construction. In its May 2003 issue (p7), Physics World reported that without energy recovery, “4GLS would consume roughly the output of a large commercial power station”. Energy recovery also simplifies spent-beam disposal.

The overall promise of ERLs has been distilled in a paper by JLab’s Lia Merminga, who chaired ERL2005 with Swapan Chattopadhyay, also from JLab. Together with co-authors D R Douglas and G A Krafft, Merminga wrote: “At the most fundamental level, beam-energy recovery allows the construction of electron linear accelerators that can accelerate average beam currents similar to those provided by storage rings, but with the superior beam quality typical of linacs. Such an ability to simultaneously provide both high current and high beam quality can be broadly utilized in, for example, high-average-power free-electron laser sources designed to yield unprecedented optical beam power; light sources extending the available photon brilliance beyond the limits imposed by present-day synchrotron light sources; electron cooling devices which would benefit from both high average current and good beam quality to ensure a high cooling rate of the circulating particles in a storage ring collider; or, possibly, as the electron accelerator in an electron-ion collider intended to achieve operating luminosity beyond that provided by existing, storage-ring-based colliders” (Merminga et al. 2003).

Realizing these prospects will require overcoming the technical challenges that the workshop was convened to discuss. These include polarized and unpolarized photoinjectors with high average current and low emittance; optimized lattice design and longitudinal gymnastics; beam stability and multibunch/multipass instabilities; beam-halo formation and control of beam loss; SRF optimization for continuous-wave, high-current applications; higher-order-mode (HOM) damping and efficient extraction of HOM power; RF control and stability; synchronization; and high-current diagnostics and instrumentation.

Neither the energy-recovery idea nor its close association with SRF is new. In 1965, Cornell’s Maury Tigner suggested a possible collider combining the then-novel concept of the superconducting linear accelerator with what he called “energy recovery” – an “artifice”, he wrote, that “might also be useful in experiments other than the clashing-beam type” (Tigner 1965). Energy recovery was demonstrated as early as the mid-1970s, but the first ERL with high average current drove the first kilowatt-scale FEL from 1999 to 2001 at JLab.

That FEL, which was later substantially upgraded, gave users infrared light at 3-6 μm for 1800 hours – the most achievable with the available funding – and led to publications by some 30 groups. Research topics included nanotube production, hydrogen-defect dynamics in silicon and protein energy transport. The experimentation influenced thinking about linear and nonlinear dynamical processes. Moreover, the ERL itself directly produced broadband light in the terahertz region between electronics and photonics, at an average power more than four orders of magnitude higher than achieved anywhere before. In Nature, Mark Sherwin of the University of California, Santa Barbara (UCSB) predicted “new investigations and applications in a wide range of disciplines” (Sherwin 2002).

At 5 mA and 42 MeV, JLab’s original SRF ERL was a small but much-higher-current cousin of CEBAF, the five-pass, 6 GeV recirculating linac that enables the laboratory’s main mission of research in nuclear physics. The ERL/FEL has now been upgraded to produce light at 10 kW in the infrared, with a 1 kW capability imminent in the ultraviolet (figure 1). For infrared operation, the average beam current has been doubled to 10 mA. In the further evolution of ERLs, high average current will be crucial. Optimal performance, in fact, is a trade-off between that and beam degradation. Envisaged ERL projects involve average currents about an order of magnitude higher than those demonstrated so far.

In his plenary speech at the workshop, Todd I Smith of Stanford University summarized the status and outlook for ERL-based FELs. After mentioning electrostatic machines at UCSB, the College of Judea and Samaria in Israel, the Korea Atomic Energy Research Institute (KAERI) in South Korea, and FOM Nieuwegein in the Netherlands, he moved on to JLab and the other two operational RF-linac FELs – an SRF machine at the Japan Atomic Energy Research Institute (JAERI) and a room-temperature ERL at the Budker Institute for Nuclear Physics (BINP), Novosibirsk. Smith said that energy-recovering RF-linac-based FELs are proliferating at a rate both “astonishing” and “satisfying”. Among those being planned are machines at KAERI, at Saclay in France and 4GLS at Daresbury. In Florida, in partnership with JLab and UCSB, the National High Magnetic Field Laboratory has proposed initial steps toward a 60 MeV SRF ERL to drive a kilowatt FEL spanning a wavelength range of 2-1000 μm.

Let there be light

All existing hard X-ray synchrotron radiation facilities are based on storage rings. A half-century ago, first-generation synchrotron-light devices tapped particle accelerators parasitically. Then came a second generation of light sources based on dedicated storage rings, followed, in the 1990s, by third-generation machines with high brightness. Third-generation facilities include short-wavelength hard X-ray sources (such as the European Synchrotron Radiation Facility in Grenoble, the Advanced Photon Source at Argonne and SPring-8 in Japan) and long-wavelength soft X-ray sources (such as the Advanced Light Source at Berkeley, Sincrotrone Trieste in Italy, the Synchrotron Radiation Research Center in Taiwan and the Pohang Light Source in South Korea). Fourth-generation X-ray light sources based on FELs driven by linacs are under development at DESY, SLAC and RIKEN’s Harima Institute in Japan. The idea of an X-ray synchrotron light source based on ERLs was advocated in 1998 by G Kulipanov, N Vinokurov and A N Skrinsky at BINP, with their pioneering MARS proposal, and later by JLab’s Geoffrey Krafft.

Serious pursuit of a design for an ERL light source by Cornell has recently yielded funding from the US National Science Foundation (NSF) to begin developing a major ERL-based upgrade of the Cornell High Energy Synchrotron Source at the Cornell Electron Storage Ring. ERLs also constitute “a natural and cost-effective upgrade path” for storage-ring light sources, according to Charles K Sinclair of Cornell. At the workshop, Sinclair characterized the potential improvements in ERLs in brightness, coherence and pulse brevity as “transformational”. In one of his examples of applications, he noted that on the timescale of hundreds of femtoseconds, an ERL can enable experimenters to follow the structure of ultrafast chemical reactions. With the NSF funding, Cornell is developing an injector to deliver low-emittance beams at 100 mA.

At JLab – Cornell’s partner in preparing the NSF proposal – collaborative experiments are being conducted concerning other issues in ERL development: beam break-up in the ERL/FEL and RF control in both the ERL/FEL and CEBAF. To complement the FEL’s demonstration of high average current, CEBAF was specially configured briefly during 2003 for a single-pass proof-of-principle study of energy recovery at the giga-electron-volt scale. JLab’s assets for developing SRF-driven ERLs also include the Center for Advanced Studies of Accelerators (CASA) and the Institute for SRF Science and Technology, housed in a test laboratory with a substantial complement of SRF R&D facilities.

As a first step in the 4GLS project, Daresbury is building a 50 MeV prototype ERL that will supply electron beams to a test FEL using an infrared wiggler on loan from JLab. Eventually, with a 600 MeV ERL, 4GLS would complement the UK’s higher-energy X-ray light source, Diamond, which is under construction at the CCLRC’s Rutherford Appleton Laboratory. The 4GLS facility is planned to exploit the sub-picosecond regime and to combine exceptionally high transverse and longitudinal brightness. Central to the plan are a variety of opportunities for pump-probe experiments and the combining of spontaneous and stimulated sources at a single centre. Two photocathode guns are planned, one for high average current, the other for high peak current (figure 2).

For physics research conducted at colliders, ERLs offer the promise of providing electron cooling for hadron storage rings and high-current, low-emittance electron beams for high-luminosity electron-ion colliders. In the cooling process, which brings higher luminosity to ion-beam collisions, an ion beam and an electron beam are merged. The electron beam’s energy is chosen to match the ion beam’s velocity, enabling the electron beam to remove thermal energy from the ion beam. An R&D ERL designed for 0.5 A average current is under construction at Brookhaven. It serves as a prototype for the electron cooler designed for RHIC II, the proposed upgrade that could increase the luminosity of the Relativistic Heavy Ion Collider (RHIC) by an order of magnitude. It is also a prototype for an envisaged RHIC upgrade called eRHIC, in which an ERL would provide electron beams for electron-ion collisions. A similar concept, ELIC, envisages the upgrade of CEBAF at JLab for energy-recovering acceleration of electrons for use in collisions with light ions from an electron-cooled ion-storage ring.
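
Matching the electron velocity to the ion velocity fixes the required electron energy through a common Lorentz factor. As a rough illustrative number (not a quoted design value),

```latex
\gamma_{e} = \gamma_{\mathrm{ion}} \;\Rightarrow\; E_{e} \simeq \gamma_{\mathrm{ion}}\, m_{e}c^{2},
```

so cooling gold ions at about 100 GeV per nucleon (γ ≈ 107) calls for electrons of only about 55 MeV, which is why a modest-energy, high-current ERL is an attractive driver for an electron cooler.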

A new ERL/FEL concept known as the “push-pull FEL” was presented at ERL2005 by Andrew Hutton of JLab. This proposal, which in some ways resembles a high-energy collider configuration mentioned in Tigner’s 1965 paper, calls for two sets of superconducting cavities with two identical electron beams travelling in opposite directions. Each set of cavities accelerates one electron beam and decelerates the other. This arrangement allows the energy used to accelerate one beam to be recovered and used again for the other. The difference compared with other energy-recovery proposals is that each electron beam is decelerated by a different structure from the one that accelerated it, so this is energy exchange rather than energy recovery. The push-pull approach can lead to a compact layout (figure 3).

The continued success of ERLs would most likely accelerate interest in Chattopadhyay’s call for “practical, affordable yet unique and exciting new accelerator facilities” at the “mezzo scale”. Such successes would also, as Merminga and colleagues concluded, “set the stage for high-energy machines at the gigawatt scale, providing intense, high-quality beams for investigation of fundamental processes as well as the generation of photon beams at wavelengths spanning large portions of the electromagnetic spectrum”. Toward such ends, said Chattopadhyay, “Jefferson Lab is advancing the ERL field at the fastest pace possible and is committed to working in partnership with the international community to promote the development of ERLs further as the next-generation instrument of science wherever it is feasible”. He added that “the successful emergence of the Cornell and Daresbury facilities, both collaborators with Jefferson Lab, signals a bright future ahead”.

HERA and LHC workshops help prepare for the future

After the major luminosity upgrade of DESY’s electron-proton collider HERA in 2001, experiments at the accelerator are now producing data for Run II, which will last until the end of HERA operation in 2007. The results obtained by the two collider experiments, H1 and ZEUS, will have a profound impact on the physics to be explored at CERN’s Large Hadron Collider (LHC). Since March 2004, members of the communities working at HERA and preparing for the LHC have been meeting regularly at CERN and DESY in a series of workshops intended to promote co-operation between the two communities. The aim of the six “HERA and the LHC” workshops, the last of which was held at DESY during the week before Easter 2005, was to investigate the implications of HERA results for physics at the LHC.

The goals of the series of workshops, which had more than 200 registered participants, were as follows:
• to identify and prioritize those measurements to be made at HERA that have an impact on the physics reach of the LHC;
• to encourage and stimulate the transfer of knowledge between both communities and establish an ongoing interaction;
• to encourage and stimulate theoretical and phenomenological efforts;
• to examine and improve theoretical and experimental tools;
• to increase the quantitative understanding of the implications of HERA measurements for LHC physics.

At the final meeting of the series, the speakers summarized the results and presented the conclusions from studies and discussions carried out during the past year by working groups on parton density functions, multijet final states and energy flows, heavy quarks, diffraction and Monte Carlo tools. In general it was made very clear that there is strong interest from the LHC physics community in detailed studies at HERA. Several general talks on physics at the LHC and HERA outlined the importance of the results obtained at HERA, with special emphasis on the measurements that have still to be done and that will have a significant impact on the physics reach of the LHC. “Clearly, to calculate properly the production rates of Higgs and supersymmetry we absolutely need to understand quantum chromodynamics [QCD] as well as possible,” said John Ellis from CERN. It also became evident that many more theoretical, phenomenological and experimental investigations would be desirable, and to this end several projects were launched during the workshop.

Speakers repeatedly stressed the importance for LHC physics of precise measurements of the parton densities, i.e. the densities of the various types of quarks and the gluons within the proton. In particular, the whole issue of parton density functions (PDFs), from the standard integrated ones to unintegrated and generalized PDFs and eventually to diffractive PDFs, is a rich field for theoretical and experimental studies. These include not only a precise experimental determination of the PDFs, but also address the more fundamental question of the universality of the PDFs and in particular whether those obtained at HERA can be applied to the LHC without further modification beyond evolution effects in QCD.
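
The evolution effects referred to here are governed by the DGLAP equations, which transport parton densities determined at HERA scales up to the much harder scales probed at the LHC; at leading order, schematically,

```latex
\frac{\partial f_{i}(x,\mu^{2})}{\partial \ln \mu^{2}}
  = \sum_{j} \frac{\alpha_{s}(\mu^{2})}{2\pi}
    \int_{x}^{1} \frac{dz}{z}\, P_{ij}(z)\, f_{j}\!\left(\frac{x}{z},\mu^{2}\right),
```

where the f_i are the parton densities and the P_ij the splitting functions. The universality question is whether the densities extracted from HERA data, evolved in this way, are the same objects that enter LHC cross-section predictions.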

In the multijet working group, one of the main topics was the issue of multiple scatterings and underlying events. The understanding of these effects has an impact on, for example, the Higgs cross-section measurements in the boson-fusion channel at the LHC. A major step towards a deeper understanding of multiple scatterings is their relation to diffractive scattering: they are simply different facets of the large density of partons at high energies. The dynamics of these high-density systems require extensions of the concept of parton densities from transverse-momentum dependent (unintegrated) to generalized and diffractive parton densities, which can be measured precisely at HERA. These parton densities will be essential for analysing diffractive Higgs production at the LHC, a very clean and promising channel. However, to study this process and also problems of parton dynamics at low x that are still unsolved, the forward region of the LHC detectors needs further instrumentation. This is a task for which the experiments at HERA have accumulated both technical and physics experience over recent years.

Heavy quark production at the LHC is also interesting in terms of QCD. The densities of heavy quarks will play an important role at the LHC, for example in Higgs production channels, and they will be accurately measured at HERA in the high-luminosity programme. In the forward production of heavy quarks, as will be the case in the LHCb experiment, effects coming from high parton densities and the saturation of the cross-section might be observed directly.

All of these studies require adequate tools and simulation programs. The working groups made measurements from HERA, the Tevatron at Fermilab and the SPS at CERN available in the form of easy-to-use computer codes. These will be useful for any tuning of Monte Carlo generators. New concepts were also investigated and user-friendly interfaces to simulation programs were developed.

A unique machine

During the year of the workshops, co-operation between experimenters at HERA and the LHC and the interest from the theoretical and phenomenological side have continuously increased. It has become clear that not only can the LHC profit from HERA (i.e. from exact measurements of parton densities), but also that HERA will profit from investigations carried out for the LHC, such as the application of next-to-leading-order (NLO) calculations in Monte Carlo event generators (MC@NLO).

HERA is a unique machine; it is the only high-energy electron-proton collider in the world. During the workshop meetings, it became obvious that for many topics, it is the only place today where many of the necessary measurements and studies can be performed. HERA is a machine for precision QCD measurements, just as the Large Electron-Positron collider was for the electroweak sector, with the difference that QCD is richer but also more difficult. Many questions are still unanswered, for example those concerned with the understanding of diffraction and issues in parton evolution with all its consequences for the LHC.

The workshops have critically assessed the physics programme of HERA and made suggestions for further measurements and investigations, in particular those that will be important for the physics reach of the LHC and that cannot be performed anywhere other than at HERA. One example is the precise measurement of the gluon density using the longitudinal structure function FL, which is important for clarifying uncertainties in the present knowledge of the gluon density and the formulation of QCD at high parton densities.

In view of the prospects for further progress emerging from the high-statistics HERA Run II data, a continuation of the workshop series is now planned on an annual basis. The next meeting will be held at CERN in March 2006.

Records fall at Cornell

Improvements in our understanding of the mechanisms that limit accelerating gradients or electric fields, together with technological advances from worldwide R&D, have steadily increased the performance of superconducting cavities over the past decade.

The TESLA collaboration is now achieving accelerating gradients of 35 MV/m in 1 m-long superconducting structures suitable for the proposed 500 GeV International Linear Collider. The best single-cell cavities at many laboratories reach 40-42 MV/m. At these gradients, energy losses from the superconducting microwave cavity resonators are still minuscule, with “intrinsic Q” values exceeding 10¹⁰, i.e. it takes 10¹⁰ oscillations for the stored energy in the resonator to die out. If Galileo’s original pendulum oscillator had possessed a similar Q value, it would still be oscillating now, 400 years later.
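
For orientation, the intrinsic quality factor relates the stored energy to the power dissipated in the cavity walls, and hence to how slowly the stored energy rings down:

```latex
Q_{0} = \frac{\omega_{0} U}{P_{\mathrm{diss}}}, \qquad
U(t) = U_{0}\, e^{-\omega_{0} t / Q_{0}},
```

so at an intrinsic Q of 10¹⁰ the stored energy falls by a factor 1/e only after about Q/2π ≈ 1.6 × 10⁹ oscillations – the sense in which the resonator takes of order 10¹⁰ oscillations to die out.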

One goal of future R&D programmes is to push accelerating gradients and Q values even higher either to reach tera-electron-volt energies or to save on costs. However, above 40 MV/m the magnetic field at the surface of the resonator approaches the fundamental limit where superconductivity breaks down. One way to circumvent this limit is to modify the shape of the cavity to reduce the ratio between the peak magnetic field and the accelerating field.

About two years ago, Valery Shemelin, Rongli Geng and Hasan Padamsee at the Cornell University Laboratory for Elementary-Particle Physics (LEPP) introduced a “re-entrant” shape, which lowers the surface magnetic field by 10%. Figure 1 compares the re-entrant cavity shape and the shape of the TESLA cavity. The downside of the new shape is the higher accompanying surface electric fields, which enhance “field emission” of electrons from the regions of high electric field. Field emission does not present a “brick wall” limit, however, because techniques such as high-pressure rinsing with high-purity water at pressures of about 100 bar eliminate the microparticle contaminants that cause field emission.
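
The reasoning behind the shape change can be compressed into one relation: the ultimate gradient is the critical surface magnetic field of niobium divided by the geometric ratio of peak surface field to accelerating field. The numbers below are typical textbook values, quoted only for illustration:

```latex
E_{\mathrm{acc}}^{\max} \simeq \frac{B_{\mathrm{crit}}}{B_{\mathrm{pk}}/E_{\mathrm{acc}}},
\qquad \text{e.g.}\;\;
\frac{\sim 180\ \mathrm{mT}}{\sim 4.2\ \mathrm{mT/(MV/m)}} \approx 43\ \mathrm{MV/m},
```

so shaving 10% off the ratio of peak surface field to accelerating field buys roughly 10% in ultimate gradient, which is exactly what the re-entrant geometry aims to do.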

Another important aspect of cavity shape is beam aperture. When a bunch of charge passes through an accelerating cavity it leaves behind a wakefield, which disrupts oncoming bunches. Smaller apertures produce stronger wakefields. The re-entrant shape has the same aperture as the TESLA shape; nevertheless, reducing the aperture, say from 70 to 60 mm, would yield higher accelerating gradients because it would allow a surface magnetic field 16% lower. Further studies are in progress to evaluate the trade-off between higher wakefields and higher potential accelerating gradients.

New ideas are usually proved in single-cell cavities before the technical challenges of multi-cell accelerating units are addressed. The first 70 mm-aperture re-entrant single-cell cavity fabricated at Cornell reached a world record accelerating field of 46 MV/m at a Q value of 10¹⁰, and 47 MV/m in the pulsed mode, which is suitable for a linear collider. Figure 2 shows how Q varies with accelerating field for the cavity. To reach these record performance levels, the cavity was made from high-purity, high-thermal-conductivity niobium (with residual resistivity ratio of 500) to avoid thermal breakdown of the superconductivity. Electropolishing provided an ultra-smooth surface.

High-pressure rinsing at 100 bar thoroughly scrubbed the surface free of the microparticles that cause field emission. Final assembly took place in a Class-100 clean-room environment. All these are now standard techniques for the best superconducting cavity preparation. In addition, baking at 100 °C for 50 h promoted a redistribution of the oxygen in the radio-frequency (RF) layer, which is known to avoid premature RF losses.

Record operating Qs

When operating an accelerating cavity with beam, another important Q value is the “operating” or “loaded” Q. This is determined by the power lost to the beam, whereas the “intrinsic Q” is determined by the ohmic power loss in the cavity walls. Intrinsic Q values are 10¹⁰ or higher as discussed above. For applications with minimal beam loading, the closer loaded Q is to intrinsic Q, the smaller the overall RF power investment and operating costs.

The state of the art for structures designed to accelerate velocity-of-light particles is operation at a loaded Q of 2 × 10⁷. Higher loaded Qs are extremely challenging because the resulting bandwidth of the cavity resonance is only a few hertz (out of a typical 1.5 GHz), making the field in the cavity extremely sensitive to any perturbation of the resonance frequency due to microphonics or Lorentz-force detuning. However, Qs above 10⁸ are highly desirable for future applications, in particular for energy-recovery linacs (ERLs) for future high-flux, high-brilliance light sources. These are being pursued by many laboratories around the world, including Cornell. No control system had previously met the amplitude and phase-stability requirements of the RF field at a loaded Q of 10⁸.
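
The scale of the challenge follows directly from the resonance bandwidth; as a quick illustrative estimate,

```latex
\Delta f_{\mathrm{FWHM}} = \frac{f_{0}}{Q_{L}}, \qquad
\frac{1.5\ \mathrm{GHz}}{10^{8}} \approx 15\ \mathrm{Hz},
```

so at a loaded Q of 10⁸ the cavity must be held within a few hertz of its resonance frequency, which is why hertz-level microphonic detuning becomes the dominant disturbance.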

Building on techniques developed at DESY for the TESLA Test Facility, researchers at Cornell, under the direction of Matthias Liepe, have developed a new digital RF control system that provides great flexibility, high computational power and low latency for a wide range of control and data-acquisition applications. Recently Cornell tested this system in two extreme regimes of loaded Q. First, in the Cornell Electron Storage Ring (CESR), the system stabilized the vector-sum field of two of the ring’s superconducting 500 MHz cavities at a loaded Q of 2 × 10⁵ with a beam current of several hundred milliamps. Several months of continuous operation proved the system’s high reliability, and the field stability surpassed design requirements.

In a more crucial and demanding test, a team from Cornell and Jefferson Laboratory (JLab) connected the system to a cavity with a loaded Q greater than 10⁸ at JLab’s infrared free-electron laser and tested it with beam in the energy-recovery mode, in which the effective beam current is practically zero. In continuous operation, excellent field stability – about 2 × 10⁻⁴ rms in relative amplitude and 0.03 degrees rms in phase – was achieved at a loaded Q of 1.4 × 10⁸ in full energy-recovery mode. This sets a new record for loaded-Q operation of linac cavities. At the highest loaded Q, less than 500 W of klystron power was required to operate the cavity at a field of 12 MV/m in energy-recovery mode with a beam current of 5 mA. At the more usual loaded Q of 2 × 10⁷, about 2 kW is required.

The control system used includes digital and RF hardware developed in-house; very fast feedback and feed-forward controls; automatic start-up and trip recovery; continuous and pulsed-mode operation; fast quench-detection; and cavity-frequency control. The cavity-frequency control relied on a fast tuner based on a piezoelectric tuning element, which proved effective in keeping the cavity on resonance. As an added bonus, the ramp-up time to high gradients was less than 1 s, instead of the more usual minutes.
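
As an illustration of what such a feedback loop does, consider the generic sketch below. The proportional-integral controller, its gains and the crude first-order cavity model are all invented for the example; they are not Cornell’s actual algorithm, parameters or hardware, where the correction is applied digitally to the cavity field at much higher update rates and with feed-forward on top.

```python
import numpy as np

# Generic digital amplitude loop: a first-order cavity model driven by a
# proportional-integral (PI) controller, with a slow "microphonic" disturbance.
# All gains, time constants and amplitudes are invented for illustration.

dt = 1e-6                 # controller update period (s), assumed
tau = 1e-3                # cavity fill time constant (s), assumed
setpoint = 1.0            # desired field amplitude (arbitrary units)
kp, ki = 5.0, 2e3         # PI gains, invented

field = 0.0
integral = 0.0
for step in range(20000):                                      # simulate 20 ms
    disturbance = 0.02 * np.sin(2 * np.pi * 60.0 * step * dt)  # 60 Hz wobble
    error = setpoint - field
    integral += error * dt
    drive = kp * error + ki * integral                         # PI correction
    # First-order response of the cavity field to drive plus disturbance:
    field += (dt / tau) * (drive + disturbance - field)

print(f"final field = {field:.4f} (setpoint {setpoint})")
print(f"residual error = {abs(setpoint - field):.2e}")
```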

Hard probes conference finds success in Portugal

The town of Ericeira on the Atlantic coast faces Cabo da Roca, the western limit of the European continent. It proved an inspiring setting for Hard Probes 2004, the first International Conference on Hard and Electromagnetic Probes of High Energy Nuclear Collisions.

The conference grew out of a series of Hard Probe Café meetings, the first of which was held in 1994 at CERN. The idea then was to form a collaboration of theorists and experimentalists interested in the interface between hard perturbative quantum chromodynamics (QCD) and relativistic heavy-ion physics. CERN’s Super Proton Synchrotron (SPS), with a beam energy of up to 200 GeV/nucleon, was the highest-energy heavy-ion facility at the time and hard processes were rare. But it was becoming clear that the use of penetrating hard probes – for example, high-mass lepton pairs and high-momentum photons – held promise for understanding the strongly interacting hot medium formed in heavy-ion collisions.

Subsequent experimental results from the SPS, and the commissioning of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, put hard processes in the focus of physicists’ attention. After meetings in Europe and the US, where the first published proceedings helped in planning experiments at RHIC and the Large Hadron Collider (LHC) at CERN, the Hard Probe Café could no longer accommodate all the enthusiasts. So Hard Probes 2004 was born, organized by Carlos Lourenço, Helmut Satz, João Seixas and Jorge Dias de Deus, and held on 3-10 November 2004 in the beautiful resort of Ericeira. The 120 or so participants did not have much free time to enjoy the sea breeze; the programme was intense as well as interesting, and local maritime advice underlined the importance of keeping the aim in mind (figure 1).

After a first day of lectures that were more pedagogically oriented, Krishna Rajagopal of MIT opened the conference by surveying what is known about the QCD phase diagram and its new states of matter, from quark-gluon plasma (QGP) to colour superconductors. Jochen Bartels of DESY recalled the parton formulation of high-energy interactions, addressing parton evolution and saturation. These aspects have led to major progress in the understanding of the initial conditions in heavy-ion collisions, forming a new approach to the physics of high-energy hadron and nuclear collisions: the colour glass condensate, reviewed by Edmond Iancu of Saclay and Raju Venugopalan of Brookhaven. Related percolation studies were presented by Carlos Pajares of Santiago de Compostela. It is becoming evident that QCD at high parton density can provide a common framework for describing different high-energy interactions, from deep inelastic scattering to relativistic nuclear collisions.

Probes with charm

One of the main topics discussed at the Hard Probe Café was the fate of heavy quarkonia – bound states of heavy quarks and anti-quarks – in hot quark-gluon matter. Around 20 years ago, Tetsuo Matsui and Helmut Satz predicted that at sufficiently high temperatures Debye screening in the quark-gluon plasma would lead to the dissociation of quarkonia. At the conference, Frithjof Karsch of Bielefeld surveyed the status of theoretical quarkonium studies; our understanding of the topic has progressed significantly following recent lattice QCD calculations, which were discussed by Tetsuo Hatsuda of Tsukuba, Peter Petreczky of Brookhaven and others.
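
The Matsui-Satz argument can be stated compactly: in the deconfined medium the colour potential between a heavy quark and antiquark is screened, schematically

```latex
V(r) \;\sim\; -\frac{\alpha_{\mathrm{eff}}}{r}\, e^{-r/r_{D}(T)},
```

and once the Debye radius r_D(T) drops below the radius of a given quarkonium state, that state can no longer bind. Because the states have different sizes, they melt at different temperatures, which is what makes the suppression pattern a thermometer for the medium.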

The different binding energies and bound-state radii of the various quarkonia lead to different dissociation temperatures; while the higher excited charmonium states melt near the deconfinement point, the J/ψ (the cc̄ ground state) can survive up to higher temperatures. Such behaviour had previously been obtained from potential-model studies, which showed that the in-medium dissociation pattern of quarkonia constitutes a very effective tool for the study of the quark-gluon plasma. It can now provide a direct way to relate QCD calculations to data collected from heavy-ion collisions.

The use of heavy quarks for the diagnostics of QCD matter depends of course on reliable computations of their yields in perturbative QCD; the status of these calculations was reviewed by Stefano Frixione of Genova and Ramona Vogt of Lawrence Berkeley National Laboratory (LBNL). The increase of heavy-quark production at high energies could in fact even lead to enhanced quarkonium yields, as Ralf Rapp of Texas A&M and Bob Thews of Arizona showed for different recombination and coalescence models. A further issue to be resolved is the possibility of initial state quarkonium dissociation by parton percolation, which was reviewed by Marzia Nardi of Torino.

The suppression of charmonium production in nuclear collisions was indeed observed at the SPS (figure 2). Louis Kluberg of CERN and the Laboratoire Leprince-Ringuet reviewed the 20-year evolution and the final results of the pioneering NA38 and NA50 experiments. Further studies are being pursued at CERN by NA60, with improved detector capabilities, and at RHIC by PHENIX, where the much lower integrated luminosities have so far limited the usefulness of the higher collider energies. The HERA-B collaboration presented recent results on χc production in proton-nucleus collisions at HERA, while Mike Leitch of Los Alamos reviewed several issues in quarkonium production. It is particularly puzzling that the ground-state resonances J/ψ and Υ show a complete absence of polarization, contrary to the expectations of non-relativistic QCD, while the excited states Υ(2S) and Υ(3S) show maximum transverse polarization.

The meeting also discussed measurements of heavy-flavour production. The STAR collaboration reported on open-charm measurements made by reconstructing the D0 → K−π+ hadronic decay in d-Au collisions at RHIC. The reconstruction of such hadronic decay modes is difficult to perform in heavy-ion collisions, owing to the high particle multiplicities. The single-electron transverse-momentum spectrum provides an alternative, albeit indirect, measurement of charm production at RHIC energies. The charm production cross-sections currently derived from the PHENIX and STAR data differ by a factor of two. Effects that might cause this discrepancy are being investigated and improved results should be available soon.

Another promising direction is the use of electromagnetic probes – leptons and photons; their production has for a long time been considered one of the basic pieces of evidence for the formation of a quark-gluon plasma. There is great interest in these probes because they escape from the medium almost without any interactions, and thus carry valuable information about the early stages in the evolution of dense matter. Moreover, their emission rates can be calculated in lattice QCD as well as in perturbation theory, as discussed by Jean-Paul Blaizot of Saclay, Charles Gale of McGill and others. Rolf Baier of Bielefeld showed that parton-saturation effects also play a crucial role here.

New experimental information on dilepton production was presented by the NA60 experiment at CERN, which took proton-nucleus and In-In data in 2002 and 2003, respectively, with better statistics and mass resolution than previous measurements. Such “second-generation” data should answer some of the questions raised by results previously obtained by CERES at the SPS and lower-energy experiments (such as DLS at LBNL’s BEVALAC and KEK’s E235), reviewed by Itzhak Tserruya of the Weizmann Institute. Currently, PHENIX at RHIC cannot explore the physics of the low-mass dilepton continuum, given the overwhelming combinatorial background levels. This should be solved by a “hadron blind detector”, based on a proximity focus Cherenkov detector, soon to be added to PHENIX.

Jets are another classic hard probe. Colliding beams of protons or heavy nuclei produce jets when partons from the incoming projectiles undergo hard scattering off each other and emerge from the reaction at large angles. In the early 1980s, James Bjorken proposed that jets would interact with the material generated by high-energy nuclear collisions in a way analogous to the more familiar interaction of charged particles in detector material. He suggested that this interaction would lead to energy loss in a quark-gluon plasma (jet quenching). Further theoretical analysis showed that gluon bremsstrahlung is an efficient way of dissipating jet energy to the medium, generating large and potentially observable differences between hot and cold strongly interacting matter.
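
The size of this radiative loss is commonly characterized in the theoretical literature by a single transport coefficient; as a rough guide (an estimate quoted here for orientation, not a number presented at the meeting), the mean medium-induced energy loss grows quadratically with the path length L traversed in the medium,

    \[
    \langle \Delta E \rangle \simeq \frac{\alpha_s\, C_R}{4}\, \hat{q}\, L^2 ,
    \]

where \hat{q} measures the scattering power of the medium and C_R is the colour charge of the parton. The much larger \hat{q} expected for a quark-gluon plasma than for cold nuclear matter is what makes the difference between hot and cold strongly interacting matter potentially observable.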

Jets and RHIC

Jets are the hard probe par excellence at RHIC, where the collision energy is high enough to produce them in vast numbers. The first runs with gold beams at RHIC did indeed reveal strong modifications to jet structure, agreeing with the predictions of jet quenching in matter many times denser than cold nuclear matter. Heavy-ion physicists are now looking more deeply into jet-related measurements and interesting nuclear effects continue to emerge. The diversity and quality of the high-momentum-transfer data from the four RHIC experiments justified eight detailed talks. Data were presented from pp, d-Au and Au-Au collisions at the top RHIC centre-of-mass energy of 200 GeV, together with Au-Au measurements at 62.4 GeV, chosen to match the energy of CERN’s Intersecting Storage Rings (for which extensive pp collision data are available for comparison).

One of the key pieces of evidence for jet quenching is the strong suppression of high-momentum inclusive pion and charged-particle production in the most central nuclear collisions, seen by all RHIC experiments and now provided by PHENIX for transverse momenta up to 14 GeV/c. It is crucial to cross-check such measurements and theoretical calculations in simpler systems. Inclusive particle spectra at high transverse momentum in pp collisions are described well by perturbative QCD calculations, so that the reference spectra for measuring nuclear effects are well understood. Jets and hard photons at high momentum are generated by similar mechanisms, but direct photons should not lose energy in the nuclear medium, since they have no colour charge. Klaus Reygers of Münster showed that, at RHIC, direct photons are indeed produced at the rate expected from QCD calculations, while high-momentum pions are suppressed by a factor of five (figure 3).
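
Such suppression is conventionally quantified through the nuclear modification factor (the standard definition, given here for reference),

    \[
    R_{AA}(p_T) = \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}N_{pp}/\mathrm{d}p_T} ,
    \]

where N_coll is the average number of binary nucleon-nucleon collisions. A value of one means the nucleus-nucleus collision behaves like an incoherent superposition of nucleon-nucleon collisions; suppression by a factor of five corresponds to a modification factor of about 0.2 for pions, while the direct-photon data are consistent with a value of one.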

On the theory side, Xin-Nian Wang of LBNL and Urs Wiedemann of CERN discussed partonic energy loss in matter, and showed that perturbative QCD calculations incorporating medium-induced bremsstrahlung can describe the main jet-related measurements. A key test is the variation of energy loss with collision energy. Jet quenching generates strong effects at the top RHIC energy of 200 GeV, but does it diminish at lower collision energy? Recently analysed Au-Au data from RHIC at 62.4 GeV show hadron suppression similar to that at 200 GeV. Model calculations of jet quenching had predicted this, as the result of a smaller overall energy loss convoluted with a softer underlying initial partonic spectrum.

It is therefore natural to look at the extensive data amassed by the fixed-target experiments at the SPS, with centre-of-mass energies of 17-20 GeV. Though jet production at SPS energies is rare, high-statistics data sets can probe the lower reaches of the hard-scattering regime. Until recently it was thought that in Pb-Pb collisions at the SPS, production of hadrons with high transverse momentum was enhanced, not suppressed. David D’Enterria of Columbia has re-examined the pp reference data used to measure hadron production at the SPS, and concludes that their uncertainties were previously underestimated and that signs of jet quenching may indeed also be present at the SPS. This has spurred the SPS heavy-ion collaborations to re-analyse their old data, with more news expected by the summer of 2005.

Future prospects

Most of the Au-Au data from RHIC presented at the conference are from the 2002 run, with an integrated luminosity of 250 μb⁻¹. The RHIC collaborations are still analysing the 2004 data set, with a much higher integrated luminosity (3.7 nb⁻¹), and new results on jet physics and other rare probes are expected within a few months.
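
The step in statistics is substantial: taking the quoted figures at face value,

    \[
    \frac{L_{2004}}{L_{2002}} = \frac{3.7\ \mathrm{nb}^{-1}}{250\ \mu\mathrm{b}^{-1}} = \frac{3700\ \mu\mathrm{b}^{-1}}{250\ \mu\mathrm{b}^{-1}} \approx 15 ,
    \]

roughly a fifteen-fold increase in the Au-Au sample, which is what brings jets and other rare probes within comfortable statistical reach.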

John Harris of Yale discussed the long-term future of RHIC, including upgrades to the major detectors and the addition of electron cooling to the accelerator, which will increase its luminosity for Au-Au by a factor of 10 (RHIC II). The theoretical interest in low-x forward physics was emphasized by Al Mueller of Columbia; this topic should also be high on RHIC’s agenda, as noted by Les Bland of Brookhaven. In 2008, the high-energy frontier in heavy-ion collisions will move to the LHC; Bolek Wyslouch of MIT, Andreas Morsch of CERN and Philippe Crochet of Clermont-Ferrand previewed the possibilities this will open up.

Further heavy-ion runs at the SPS could occur in parallel with operation of the LHC, as advocated by Hans Specht of Heidelberg, to profit from what seems to be an ideal collision energy for studies of the transition to the quark-gluon plasma phase, combined with the high luminosities offered by fixed-target running. High-precision data from such runs could be available several years before the start of GSI’s Facility for Antiproton and Ion Research (FAIR), where heavy-ion collisions will be studied at up to 35 GeV/nucleon (data sets have been taken at the SPS at 20-200 GeV/nucleon).

The wealth of information presented at the meeting was summarized in three talks: on quarkonia and heavy flavours, by Enrico Scomparin of Torino; on jets and high-transverse-momentum physics, by Peter Jacobs of LBNL; and on electromagnetic probes, by Axel Drees of Stony Brook. Dmitri Kharzeev of Brookhaven summarized the theory presentations at the meeting, inspired by the venue’s history. When the Pope divided the unknown world between Portugal and Spain in the 1494 treaty of Tordesillas, he drew a line in what he thought was an empty ocean; 10 years later, South America had been discovered and was being explored. Similarly, the boundaries of 10 years ago, between the old hadronic and the new partonic worlds in the phase diagram of strongly interacting matter, are now more complex and less sharp, thanks to impressive recent progress.

The focused programme, good attendance, spectacular location and extracurricular activities (including a concert of 18th-century popular music in the majestic Convento de Mafra) made this a memorable and successful meeting – the first in a new conference series. The second will be held in the spring of 2006 in the San Francisco Bay Area, convened by physicists from Berkeley and Brookhaven. A third is already on the horizon, as Santiago de Compostela in northern Spain would like to welcome a pilgrimage of hard-probe physicists.

Join the open-access revolution

There is a quiet revolution under way in academic publishing that will change how we publish and access scientific knowledge. “Open access”, made possible by new electronic tools, will give enormous benefits to all readers by providing free access to research results.

The scientific articles published in journals under the traditional publishing paradigm are paid for through subscriptions by libraries and individuals, creating barriers for those unable to pay. The ever-increasing cost of the traditional publishing methods means that many libraries in Europe and the US – even the CERN Library, which is supposed to serve international researchers at a centre of excellence – are unable to offer complete coverage of their core subjects.

In 2003 the Berlin Declaration on open access to knowledge in the sciences and the humanities was launched at a meeting organized by the Max Planck Society. Six months later, the first practical actions towards implementing the recommendations of the declaration on an international level were formulated at a meeting held at CERN in May 2004. So far the declaration has been signed by 61 organizations throughout the world, which are now taking concrete measures for its implementation.

An obvious prerequisite for open access is that institutions implement a policy requiring their researchers to deposit a copy of all their published works in an open-access repository. In the UK, the library committee of the Council for the Central Laboratory of the Research Councils (CCLRC) sponsored such a project, ePubs, with the aim of achieving an archive of the scientific output of CCLRC in the form of journal articles, conference papers, technical reports, e-prints, theses and books, containing the full text where possible.

The feasibility study, carried out from January to March 2003, demonstrated the business need for this service within the organization. The data, going back to the mid-1960s, can be retrieved using the search interface or the many browse indices, which include year, author and journal title. In addition the ePubs system is today indexed by Google and Google Scholar. The scientific content of the system has further led Thomson ISI (the provider of information resources including Web of Knowledge and Science Citation Index) to classify ePubs as a high-quality resource.

The next step is to encourage the researchers – while of course fully respecting their academic freedom – to publish their research articles in open-access journals where a suitable journal exists. In recent years new journals applying alternative publishing models have appeared in the arena. The problem so far is that none of these journals has a long-term business model. They are sponsored either by a research organization or by other titles in the publisher’s portfolio, or enjoy sponsorship that will not last forever.

Scientific publishing has a price and will continue to have a price, currently mainly covered by academic libraries through subscriptions. Moving to an open-access publishing model should dramatically reduce the global cost for the whole of the academic community. The publication costs should be considered a part of the research cost and the research administrators should budget for these when the research budgets are allocated. However, a change must not take place without safeguarding the peer-review system, which is the guarantor of scientific quality and integrity.

Outside biology and medicine, few journals that support open access are given the same academic credit as the traditional journals. This situation is further reinforced if there is a direct coupling between research funding and the “impact factors” of the journals where results are published. However, if authors take the risk of publishing important work in new journals that implement the open-access paradigm, the impact factors of those journals will rise accordingly.

The example of the Journal of High Energy Physics (JHEP) is striking. This relatively new journal was launched by the International School for Advanced Studies (SISSA) in Trieste in 1997. Today some studies give it an impact factor close to that of Physical Review Letters in publishing papers on high-energy physics. JHEP was launched ahead of its time and was forced, because of the lack of financial support, to become a subscription journal. However, with the support of the main physics laboratories, it would be possible in the present climate for this successful journal to enter the open-access arena once again.

If a change is wanted, it is up to us. Particle physics cannot change the world alone, but a clear position taken by our authors and our members of editorial boards will create a strong synergy with colleagues pulling in the same direction in other fields.

MICE project gets the green light

On 21 March, the UK’s science and innovation minister announced the approval and funding of the Muon Ionisation Cooling Experiment, MICE, at the Rutherford Appleton Laboratory (RAL). MICE will use a new, dedicated muon beam line at the laboratory’s pulsed neutron and muon source, ISIS.

MICE is an essential step in accelerator R&D towards the realization of a neutrino factory, in which an intense neutrino beam is obtained from the decay of muons in a storage ring. The unique feature of such a facility is that it can produce intense and well-defined beams of electron-(anti)neutrinos at high energies, well above the production threshold for tau particles. This should allow measurements of the “appearance” of both muon- and tau-neutrinos from electron-neutrinos. Neutrino factories are therefore the ultimate tool for precision studies of neutrino oscillations and of leptonic charge-parity (CP) violation, a measurement that might prove decisive in understanding the matter-antimatter asymmetry of the universe.
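
The flavour composition of the beam follows directly from the three-body decay of the muon (standard kinematics, recalled here for context):

    \[
    \mu^- \to e^- \, \bar{\nu}_e \, \nu_\mu , \qquad \mu^+ \to e^+ \, \nu_e \, \bar{\nu}_\mu ,
    \]

so each sign of stored muon delivers exactly one electron-type and one muon-type (anti)neutrino per decay, with energy spectra calculable from the muon energy and polarization. This is the origin of the well-defined beams referred to above.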

The greatest novelty of a neutrino factory in terms of accelerator physics is probably muon ionization cooling, which improves performance by a factor of four to ten, depending on the design; it also represents a large fraction of the neutrino factory’s estimated cost. Although proposed more than 20 years ago and generally considered sound, the ionization cooling of muons has never been demonstrated.

Muons are born in a rather undisciplined state at a few hundred million electron-volts from interactions of proton beams, and need to be cooled before they can be accelerated – to about 20 GeV – and stored to produce neutrinos. Known beam-cooling techniques (electron, stochastic or laser cooling) are much too slow, considering that muons live only a few microseconds before they decay.
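
A rough estimate (illustrative numbers, not from the experiment) shows how tight the time budget is: at a few hundred MeV the Lorentz factor of a muon is only of order two to three, so time dilation stretches the 2.2 μs proper lifetime to merely

    \[
    \tau_{\mathrm{lab}} = \gamma \, \tau_\mu \approx 3 \times 2.2\ \mu\mathrm{s} \approx 7\ \mu\mathrm{s} ,
    \]

whereas electron, stochastic and laser cooling typically act on timescales that are many orders of magnitude longer.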

A method that is expected to work instead is to cool the transverse phase-space of the beam by passing it through energy-absorbing material and accelerating structures embedded within a focusing magnetic lattice. The muons lose energy in both the transverse and longitudinal directions when they pass through the absorbers, while the acceleration increases only their longitudinal momentum. This technique, based on a principle first described by the Russian pioneers Gersh Budker and Alexander Skrinsky in the early 1970s, is known as ionization cooling.
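
The balance between ionization energy loss and the heating caused by multiple scattering in the absorber is usually summarized by the ionization-cooling equation for the normalized transverse emittance, quoted here in its commonly used approximate form:

    \[
    \frac{\mathrm{d}\varepsilon_N}{\mathrm{d}s} \simeq
    -\,\frac{1}{\beta^2} \left| \frac{\mathrm{d}E_\mu}{\mathrm{d}s} \right| \frac{\varepsilon_N}{E_\mu}
    + \frac{\beta_\perp \, (13.6\ \mathrm{MeV})^2}{2\, \beta^3 \, E_\mu \, m_\mu c^2 \, X_0} ,
    \]

where the first (cooling) term comes from the energy loss in the absorber and the second (heating) term from multiple Coulomb scattering, with β⊥ the betatron function at the absorber and X0 the absorber’s radiation length. The equation makes explicit why liquid hydrogen, with its large radiation length relative to its stopping power, and strong focusing, which keeps β⊥ small, are the ingredients of choice.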

Unfortunately, although its mathematics is simple on paper, ionization cooling is in practice a delicate mix of technologies involving liquid hydrogen (the best absorber material), strong radio-frequency (RF) electric fields (to re-accelerate the muons in an orderly fashion) and magnetic fields for containment. This combination is extremely challenging. The windows of the liquid-hydrogen vessel need to be as thin as possible to limit multiple scattering, while ensuring safety in a confined space containing potential ignition sources for the highly flammable hydrogen. The operation of RF cavities at high gradient in strong magnetic fields is still unproven. Finally, the precise study of cooling requires measuring the beam properties with unprecedented accuracy; each muon will be measured individually using techniques from high-energy physics rather than standard beam diagnostics.

The size and complexity of this undertaking require the close collaboration of the accelerator and experimental particle-physics communities. MICE comprises some 140 physicists and engineers from Belgium, Italy, the Netherlands, Japan, Russia, Switzerland, the US and the UK. The proposed schedule for MICE envisages that the technical feasibility of muon ionization cooling will be established by 2008/9. The path will then be clear for a detailed proposal for a neutrino factory.
