
Herschel favours quiet galaxy build-up theory

Did most galaxies form their stars through violent and tumultuous merging events or via more steady and gentle processes? A study of deep-field observations by the Herschel Space Observatory favours a quiet build-up for most galaxies, contrary to long-held preconceptions.

The Herschel and Planck missions of the European Space Agency were launched together by an Ariane 5 rocket on 14 May 2009 (CERN Courier July/August 2009 p12). While Planck is scanning the whole sky with the prime objective of studying the cosmic microwave background, Herschel is performing accurate, deep observations of specific regions of the sky. Named after William Herschel, who discovered infrared radiation in 1800, it observes a variety of far-infrared sources – from the formation of stars and planets in the Milky Way to faint galaxies in the distant universe (CERN Courier September 2010 p11).

There are basically two routes to star formation: one involves a rather slow and steady process in spiral galaxies; the other is via short-lived starbursts that are thought to be driven by the interactions and mergers of galaxies. Starburst galaxies are known to be more frequent in the early universe and – with their stronger star formation rate (SFR) – they are thought to be prime contributors to the build-up of the stellar content of galaxies. The Herschel mission offers an unrivalled opportunity to test this hypothesis by determining the relative contribution of the two modes to the global SFR of the universe.

To have a statistically significant sample of galaxies over a broad range of distances, Herschel turned its 3.5-m mirror – the largest ever flown on a space telescope – to fields already observed by the Hubble Space Telescope and other facilities, namely the northern and southern fields of the Great Observatories Origins Deep Survey (GOODS) and the wider COSMOS (Cosmic Evolution Survey) field. The GOODS-North and GOODS-South fields each cover a region of about 10 × 15 arcminutes (around a quarter of the size of the full moon), whereas the COSMOS field of 2 square degrees is some 50 times larger.

A first study by David Elbaz from CEA Saclay and colleagues from Europe and America has investigated the SFR in more than 2000 galaxies that were identified in the GOODS fields. These were found to have surprisingly similar infrared properties despite spanning the past 11 billion years, or 80% of the history of the universe. Starburst galaxies can be identified as outliers with a relative deficit of emission at a wavelength of about 8 μm. The origin of the difference would lie in the destruction of 8-μm emitting molecules – known as polycyclic aromatic hydrocarbons – by UV radiation in galaxies with compact starburst regions.

Elbaz and colleagues find that starburst galaxies contribute only about 20% to star formation in the universe. The result is corroborated by a second study led by Giulia Rodighiero of the University of Padova and European colleagues, which concentrated on the cosmic peak of the star-formation activity about 10 billion years ago – at a redshift of around z = 2 – using the GOODS-South and the wide COSMOS fields.

This evidence that most galaxies in the universe formed their stars in a gentle and steady fashion triggers a new question: how could such a process be more efficient in the early universe than now? A possible solution is that early galaxies are fed by rapid, narrow streams of cold gas, providing them with continuous flows of raw material for star formation. This scenario has not yet been observed but is suggested by computer simulations in which massive galaxies form in the knots of the cosmic web of dark matter and gas filaments that pervades the universe (CERN Courier September 2007 p11).

The discovery of type II superconductors

Regular readers of CERN Courier are well aware that the LHC depends on some 10,000 magnets made of type II superconductor, which remains superconductive in high magnetic fields. Many will also recall that the era of superconducting magnets began 50 years ago when John “Gene” Kunzler and colleagues at Bell Telephone Laboratories showed that a primitive Nb3Sn wire could carry more than 1000 A/mm² in a field of 8.8 T. What is much less well known is that the path to type II superconductors had already been demonstrated a quarter of a century earlier by Lev Shubnikov, Vladimir Khotkevich, Georgy Shepelev and Yuri Rjabinin in Kharkov (Shubnikov et al. 1936a, 1936b and 1937). So how was it that this understanding was lost for 25 years and rediscovered only by accident in 1961?

From the beginning

The huge value of superconducting wires for high-field magnets was clearly understood by Heike Kamerlingh Onnes, the discoverer of superconductivity. In his report to the 3rd International Congress of Refrigeration in Chicago in 1913, he described his design for a 10 T superconducting solenoid. He had recently passed almost 500 A/mm² through a lead wire, although his first attempt at a silk-insulated lead wire was not so successful, no doubt because of some “bad places in the wire” (Kamerlingh Onnes 1913). Sadly, just one year later, he found that pure-metal superconductors lose their superconductivity at a critical magnetic field, Hc, that is much less than 0.1 T. His interest then languished when the First World War intervened.

Work restarted in the 1920s, by which time laboratories in Leiden, Toronto, Oxford and Kharkov all had liquid helium, and work on superconductivity in metal alloys was taken up again. The initial results were complex because the loss of diamagnetism occurred at fields much lower than those at which resistance was restored. In the best cases, traces of superconductivity were seen in transport measurements at almost 2 T. Kurt Mendelssohn in Oxford put forward the “sponge hypothesis”, which proposed that the small supercurrent densities observed at high fields were associated with a fine, filamentary network of tiny relative volume (Mendelssohn 1935). Because most samples had poorly controlled homogeneity and cold-work state, metallurgical inhomogeneity was, indeed, a contributor to the large variation in properties.

CCtyp2_09_11

This plausible but fundamentally and decisively wrong hypothesis soon received its rebuttal from the experiments on lead–indium and lead–thallium alloys made by Shubnikov’s group in Kharkov. This seminal work of 1936 was characterized by its use of well annealed single crystals, which in principle completely invalidated the premise of the “sponge model”. The experiments showed three main features:

1. There is a critical alloy concentration, xc, below which alloys behave as pure superconductors with a full Meissner effect and abrupt loss of superconductivity at a critical field, Hc (figure 1a).

2. Increasing the alloy concentration beyond xc, for example from 0.8 to 2.5% by weight of Tl in figure 1b, drastically changes the equilibrium magnetic properties, separating the loss of superconductivity, which occurs at an increasingly higher critical field Hc2, from the onset of flux penetration at the lower critical field Hc1.

3. With increasing concentration x, Hc1 becomes smaller, while Hc2 grows larger (figure 2). Shubnikov realized, however, that the energy of the superconducting state in his well annealed, almost reversible (i.e. low current density) single crystals was almost independent of alloy content.

In all normal circumstances, the high quality of the Kharkov crystals, their evident homogeneity and above all the finding that their superconductivity must have been a bulk effect incapable of being explained by a small filament network should have undercut the sponge hypothesis and instigated much greater attention to the thermodynamic properties of the new type II superconducting state.

Regrettably, this discovery occurred against a backdrop of bitter conflict and human tragedy. Shubnikov’s friend Lev Landau, who was “held captive” by the “Mendelssohn sponge”, did not recognize this discovery either in 1936 or in 1950, when he and Vitaly Ginzburg created the phenomenological theory of superconductivity that, as Alexey Abrikosov later found, provided a beautiful description not just of the type I superconductors that they considered but also of the type II superconductivity discovered by Shubnikov. It is clear that their parameter κ describes perfectly the transition from type I to type II behaviour at the critical value 1/√2. However, Landau still did not recognize the discovery by Shubnikov’s group, even though their results and the published paper were presented by Martin Ruhemann at the 6th International Congress of Refrigeration in The Hague in 1936. For reasons that appear quite mystifying in 2011, none of the scientists present either supported or continued the Kharkov work, even though a number of contemporary references cite it.
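
For orientation, the later Ginzburg–Landau–Abrikosov picture can be summarized in standard textbook results (quoted here as context; they postdate the Kharkov experiments):

$$\kappa = \frac{\lambda}{\xi}, \qquad H_{c2} = \sqrt{2}\,\kappa H_c, \qquad H_{c1} \simeq \frac{H_c \ln \kappa}{\sqrt{2}\,\kappa} \quad (\kappa \gg 1),$$

where λ is the magnetic penetration depth and ξ the coherence length. Alloying shortens the electron mean free path and hence ξ, driving κ above the critical value 1/√2: Hc2 then rises and Hc1 falls with increasing κ, while the thermodynamic field Hc – a measure of the condensation energy – stays almost unchanged, precisely the pattern in the three Kharkov observations above.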

The real tragedy occurred a year later. Shubnikov, the director of the Low Temperature Laboratory in Kharkov, who had come under suspicion and been confined to the Soviet Union in 1936, was arrested in 1937 on charges of spying (the laboratory was well connected to Western laboratories, Shubnikov having spent several years in Leiden). He was summarily shot without any legal process worthy of the name. The following year, Landau was arrested and held in prison for a year, being released only on the advocacy of Pyotr Kapitza. These were the darkest years of the Soviet purges.

The dormant period

The results of Shubnikov and his co-workers remained generally unknown for another 25 years, even though Abrikosov drew attention to the work in the 1950s when he predicted the vortex state in high κ (>> 1/√2) superconductors. In his 2003 Nobel address describing his work explaining type II superconductivity, Abrikosov said: “I compared the theoretical predictions about the magnetization curves with the experimental results obtained by Lev Shubnikov and his associates on Pb–Tl alloys in 1937, and there was a very good fit” (Abrikosov 2004). However, as he has pointed out, his paper came out just as the Bardeen–Cooper–Schrieffer theory of superconductivity was published and all interest became focused on the superconducting mechanism, rather than on what some regarded as an esoteric vortex state. So work on high-field magnets lay dormant until the totally unexpected discovery by Kunzler’s group, which connected all of the disconnected sightings of high-field and high-current-density superconductivity that had been impossible to explain by the Mendelssohn sponge. Ted Berlincourt has written fine recollections of this fertile period in the 1950s when finally things began to gel (Berlincourt 1987).

After 1961, Shubnikov’s seminal role in this extraordinary advance in science and technology was finally recognized, in particular at the International Conference on the Science of Superconductivity held in Hamilton, New York, in 1963, where several speakers praised the research. The conference chair, John Bardeen, and the secretary, Roland Schmitt, stated formally in the proceedings: “It should be noted that our theoretical understanding of type II superconductors is due mainly to Landau, Ginsburg, Abrikosov and Gor’kov, and that the first definitive experiments were carried out as early as 1937 by Shubnikov” (Bardeen and Schmitt 1964). Soon after, the future Nobel laureate Pierre-Gilles de Gennes introduced the designation “Shubnikov phase” for the mixed-vortex state that is stable between Hc1 and Hc2 (de Gennes 1966). It is also the case that the first doctoral dissertation on type II superconductors was written by G D Shepelev under Shubnikov’s guidance.

Finally, we may note that the long, 25-year period of 1936–1961, in which the sponge hypothesis held sway, was also a period in which many new superconductors – such as NbN and Nb3Sn – were discovered. Like the more recent discoveries of cuprates, organics, MgB2 and the new iron-based systems, all are type II superconductors. What might have been if only the poignant politics of the Soviet 1930s had not so tragically entwined the studies and fate of Shubnikov’s group in Kharkov?

Farewell to the Tevatron

On 30 September 2011, Helen Edwards aborted the beam and ramped the machine down for the last time on what has for the past 28 years been one of the most productive physics machines in the world. The world’s first superconducting particle accelerator represented a major advance in both technology and physics reach. The Tevatron’s place in history is secure. During its life it provided fixed-target beams as well as colliding beams that resulted in numerous discoveries, including the first observations of the τ neutrino and the top quark.

The concept of a superconducting accelerator predates the establishment of the National Accelerator Laboratory (NAL), which was renamed Fermi National Accelerator Laboratory in 1974. In 1967 NAL’s first director, Robert R Wilson, discussed the possibility of using superconducting technology soon after the new laboratory moved into temporary offices in Oak Brook, Illinois. He recognized that it was premature to begin developing the concept of a new machine before construction of the planned 500 GeV accelerator at NAL had even begun. Nevertheless, superconducting technology held the promise of higher energies and lower operating costs. Not only would a superconducting accelerator in the Main Ring tunnel double the energy of the fixed-target beams, it would also enable collisions between beams. By the early 1970s the Intersecting Storage Rings at CERN had proved the feasibility of colliding proton beams at a centre-of-mass energy of 62 GeV in two conventional storage rings. It would be a huge leap to go from conventional accelerator technology with one beam at NAL to a superconducting accelerator with colliding beams, but the thought was too tempting to dismiss completely.

The superconducting challenge

The Main Ring was commissioned in 1972. It was completed under budget and on schedule even though many difficult problems were encountered – and then resolved – during construction. The laboratory’s staff had demonstrated a desire to persevere and clearly had the talent to succeed in the face of tight budgets and enormous technical challenges. The Main Ring extended the energy reach by more than a factor of five over existing accelerators. The first 200 GeV beam to the fixed-target programme was a major accomplishment. Eventually, beams at 400 GeV with 3 × 10¹³ protons per pulse were delivered and split between up to 15 experiments, yielding many physics results, including the discovery of the Υ in 1977.

Once the Main Ring was commissioned the laboratory answered the call of the superconducting machine, initially known as the Energy Doubler/Saver because Wilson’s vision was to reach an energy of 1000 GeV while also saving the cost of acceleration to lower energies. In 1973 work began in earnest to develop a superconducting accelerator magnet. Superconducting magnets had been built and used since the early 1960s – their primary use in particle physics being in bubble chambers. However, a new accelerator in the Main Ring tunnel would require approximately 1000 high-quality dipoles and quadrupoles: a reproducible magnet of accelerator quality would prove to be a major challenge.

Alvin Tollestrup played a key role in the effort to design such a magnet. After testing short magnets with monolithic superconductor, a design was chosen based on a warm-iron, collared coil of the niobium-titanium multifilament-strand cable developed at the Rutherford Laboratory in the UK. The first 20-ft (6.1-m) magnet was ready for tests in 1974 and by 1977 full-sized magnets were being produced and tested. However, many of these would be relegated to beam lines because further design improvements were still being implemented while magnet testing continued on test stands and in the beam lines.

An active quench-protection system had been developed and was exercised extensively during the early magnet testing phase – in which the people conducting the tests were ensconced behind the “dewar deflector”. This experience led to a robust system that has worked well over the years.
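
The bookkeeping behind an active protection system can be illustrated with a toy calculation. The sketch below is only a schematic of the standard approach – comparing the integral of I²dt accumulated during detection and current extraction against what the conductor can tolerate adiabatically – and every parameter value is an illustrative assumption, not a Tevatron number.

```python
# Toy model of active quench protection: estimate the "MIITs"
# (integral of I^2 dt) absorbed by the conductor between quench
# onset and the end of the current decay. All values below are
# assumed, illustrative numbers only.

I0 = 4000.0       # operating current, A (assumed)
L = 0.05          # magnet inductance, H (assumed)
R_dump = 0.1      # external dump resistance, ohm (assumed)
t_detect = 0.05   # quench detection + validation delay, s (assumed)

# Once the dump switch opens, the current decays as I(t) = I0 * exp(-t/tau).
tau = L / R_dump

# MIITs accumulated while the current still sits at flat-top ...
miits_flat = I0**2 * t_detect
# ... plus the contribution of the exponential decay:
# integral of I0^2 * exp(-2t/tau) dt from 0 to infinity = I0^2 * tau / 2.
miits_decay = I0**2 * tau / 2.0

total = miits_flat + miits_decay
print(f"decay constant tau = {tau:.2f} s")
print(f"MIITs = {total / 1e6:.2f} MA^2*s")
# A designer would compare this figure with the conductor's tabulated
# MIITs-versus-temperature curve to keep the hot spot safely cool.
```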

Towards construction

In 1979, energy-deposition studies were carried out to measure the quench behaviour of two Energy Doubler dipoles in 350 GeV and 400 GeV beams extracted from the Main Ring. These measurements provided an early opportunity to use the MARS Monte Carlo shower-simulation software that Nikolai Mokhov had written at the Institute for High Energy Physics, Protvino, in 1974 and which is now widely used for many accelerator and beam-related applications. Mokhov began visiting Fermilab with MARS in 1979. He helped to collect the data from the tests and used his software to determine that a superconducting collider should be feasible. A fixed-target machine was more uncertain; its extraction system would have to have better loss properties than that of the Main Ring. Helen Edwards and Mike Harrison came to the rescue with a modified design that moved the electrostatic extraction septa halfway round the ring from the extraction point, while Curtis Crawford developed a way to make the wire planes in the electrostatic septa straighter, so as to reduce losses.

Construction of the superconducting ring was authorized that same year and a final design for the magnets was in place by 1980. Because Wilson had anticipated building a second accelerator in the Main Ring tunnel, he left space underneath the Main Ring and designed its magnet stands to allow the magnets of a new machine to slip through them. The first step was to install magnets in one sector for a test in 1982. Concurrently, a large cryogenic refrigeration system was being built to provide the necessary cooling for the new accelerator. The cryogenic plant included 24 satellite refrigerators located in the service buildings that were spaced around the Main Ring tunnel. A large helium-liquefaction plant fed helium to the satellite refrigerators.

The completed accelerator was ready to be commissioned in 1983. It was a hectic and exciting time. Many challenges had been encountered and overcome, but many of those working on the project were still sceptical that it would succeed. Nevertheless, they made an incredible effort that brought the first superconducting synchrotron to life.

Beam was injected for the first time on 2 June 1983. It took less than a day to make the first turn all of the way round. On 3 July the Energy Doubler reached 512 GeV. Resonant extraction was established in August and the fixed-target programme at 400 GeV was underway in the autumn. By 1984 the energy had reached 800 GeV and the Energy Doubler was renamed the Tevatron. Five experiments took beam during the initial 400 GeV fixed-target run.

Construction of an antiproton source began in 1981, led by John Peoples. Antiprotons were stochastically cooled in the source using the technique that Simon van der Meer had first proposed at CERN. Work also began to construct a collision hall in the BØ straight section that would accommodate the proposed CDF detector. The antiproton source was completed in 1985 and in October the first proton–antiproton collisions were observed in a partially complete CDF detector. The first collider-physics run began in 1987 using only the CDF detector. DØ came online in 1992 with a detector in the DØ straight section.

The Main Injector ring

The Main Ring was still being used as an injector during the early collider runs, so it had to be accommodated in the collision halls. The CDF experiment had a Main Ring bypass that passed over the top of the detector, while the DØ collaboration had to learn to live with a Main Ring beam that went through the detector. In 1999 a new 150 GeV synchrotron, the Main Injector, was completed that replaced the Main Ring and provided more protons for both the collider and antiproton production. Built in a separate enclosure, it remedied the bypass problem. It would eventually enable simultaneous fixed-target and collider running, which had alternated until 1999.

In 1989 US President George H W Bush awarded the National Medal of Technology to Helen Edwards, Rich Orr, Dick Lundy and Alvin Tollestrup for their work in building the Tevatron. Not only were they instrumental in solving the technical problems associated with building this forefront machine, they had also succeeded in maintaining an enthusiastic technical team in the face of problems that often seemed insurmountable.

High luminosities

The design luminosity for the early running of the collider programme was 1 × 10³⁰ cm⁻²s⁻¹ at 1800 GeV. During the first physics run in 1988 and 1989, 1.6 × 10³⁰ cm⁻²s⁻¹ was achieved. By the end of Run I in 1996, initial luminosities were typically 1.6 × 10³¹ cm⁻²s⁻¹ – a factor of 16 higher than the initial design luminosity. By this time a total integrated luminosity of 180 pb⁻¹ had been delivered to the two detectors – and the top quark had been discovered.

By 2001, when Run II began, many improvements to the accelerator complex had been made, including the addition of electrostatic separators to create helical orbits that prevented collisions at locations other than BØ and DØ, where the two detectors were situated. Antiproton cooling systems were also improved and the linac was upgraded from 200 MeV to 400 MeV to improve injection into the 8 GeV booster. Cold compressors were also added to the satellite refrigerators in 1993 to lower the operating temperature by 0.5 K, making it possible to raise the beam energy to 980 GeV. However, the new compressors were not used until the beginning of Run II in 2001.

The Main Injector had a larger aperture and could deliver more protons with higher efficiency. When Run II began, this enabled the delivery of more protons to the antiproton target and better transfer efficiencies for protons and antiprotons. There were also improvements to the Antiproton Source and the incorporation of a new permanent magnet ring, the Recycler, in the Main Injector tunnel. Initially meant to recycle antiprotons, it was never used for this purpose; instead it was used to stash and cool antiprotons delivered from the antiproton source.

Initial luminosities at the beginning of Run II were in the region of 2 × 10³¹ cm⁻²s⁻¹. A luminosity improvement “campaign” was initiated and implemented concurrently with the physics programme. Improvements continued to be made over most of the Run II period. Significant improvements were made to the antiproton source, resulting in an increase in the stacking rate from 7 × 10¹⁰ to 26 × 10¹⁰ antiprotons per hour. The Tevatron lattice was improved and magnets were reshimmed to correct problems with the “smart bolts”. Slip stacking was developed in the Main Injector, which resulted in more protons on the antiproton target.

However, the largest single improvement made during Run II was the development and implementation of electron cooling in the Recycler. This effort, led by Sergei Nagaitsev, was commissioned in 2005 and resulted in smaller longitudinal emittances. Using the Recycler to stash and cool also increased the stacking rate in the antiproton source, because antiprotons could be off-loaded to the Recycler often, making the cooling more efficient. The net increase from electron cooling was more than a factor of two. Other improvements included a reduction of the β* in the two interaction regions and there was a vigorous programme to improve the reliability of the entire complex. Altogether the improvements resulted in initial luminosities a factor of 350 better than the original design.

During Run II, the Fermilab accelerator complex consisted of seven accelerators that together delivered beam for the collider programme, two neutrino beams and one test beam. It has performed magnificently over the years. All but the Tevatron will now continue operating to carry Fermilab into the future. Nevertheless, the Tevatron defined the laboratory for 30 years. It has been an incredible experience for those of us fortunate enough to work on it.

Advances in acceleration: the superconducting way

In a seminal paper published in June 1961, A P Banford and G H Stafford described how a future superconducting proton linear accelerator could run continuously, instead of at the 1% duty cycle of the 50 MeV proton accelerator then operating at the Rutherford High Energy Laboratory in the UK. The basic argument was that, because ohmic losses in the accelerating cavity walls increase as the square of the accelerating voltage, copper cavities become uneconomical when the demand for high continuous-wave (CW) voltage grows with particle energy. It is here that superconductivity comes to the rescue.

The RF surface resistance of a superconductor is five orders of magnitude less than that of copper. The quality factor (Q0) of a superconducting resonator is typically in the billions (i.e. a billion oscillations before the resonator energy dissipates). Even after accounting for the power needed by the refrigerator, the net gain in overall power efficiency remains a factor of several hundred. It also became clear that higher-voltage, shorter superconducting structures reduce the disruptive effect that accelerating cavities have on the beam, resulting in better beam quality, higher maximum current and less beam halo (less activation). By virtue of the low losses in the walls, a superconducting RF (SRF) cavity design can also afford a large beam aperture, which further reduces beam disruption and beam halo.
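
The scaling argument can be made explicit. In the usual accelerator convention, the power dissipated in the cavity walls for an accelerating voltage V_acc is

$$P_{\text{diss}} = \frac{V_{\text{acc}}^2}{(R/Q)\,Q_0},$$

where R/Q is a purely geometrical factor (of order 10² Ω for a single cell). Raising Q0 from the ~10⁴–10⁵ typical of copper to the ~10⁹–10¹⁰ of niobium cuts the wall losses by roughly five orders of magnitude; even after paying the several hundred watts of mains power that a refrigerator needs to remove each watt at 2 K, a net factor of several hundred survives – the figure quoted above.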

It took nearly 40 years for the early dream of that high duty-factor, high-intensity proton linear accelerator to be fulfilled. Today, the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory runs at 6% duty cycle with 88 m of superconducting cavities providing 1 MW of beam power with a 1 GeV, 10 mA beam. The success of the SNS has stimulated the construction of the European Spallation Source (ESS), with 5 MW of beam power, to be completed in 2016.

Pioneering work

In the early 1960s, Stanford University, under the leadership of William Fairbank, pioneered the development of superconducting cavities for electron accelerators. By 1968 the group had achieved a Q value of more than 10¹⁰ at 1.7 K for an 8.5 GHz TM010-mode single-cell pill-box resonator built of solid niobium. The first niobium cavity also demonstrated the exciting prospect of gradients of more than 30 MV/m.

However, with the more practical, lower frequency (1.3 GHz) accelerator structures that were built in the 1970s, the performance level fell to 2–4 MV/m. The primary roadblock was multipacting – the spontaneous resonant production of electrons. By the mid-1980s, the physics of multipacting was understood. It turned out that the limiting field-levels scale with the RF frequency, so the high-frequency cavities of the 1960s had been fortuitously exempt.

The next three decades saw several layers of gradient problems being uncovered, the underlying physics understood and solutions developed. Cavity performance then ratcheted up at a steady pace, as did accelerator applications. The development of the anti-multipacting, spherical (and elliptical) cavities was a breakthrough moment. With multipacting overcome, thermal breakdown of superconductivity became the next limiting mechanism, at 4–6 MV/m. Local heating at surface imperfections led to thermal runaway and a quench of superconductivity. The cure was to switch to niobium of high-purity – high residual-resistance ratio (RRR) niobium. With the co-operation of industry, RRR improved by an order of magnitude and cavity gradients rose on average by a factor of three. Another cure for thermal breakdown was to sputter a micron-thin film of niobium onto a copper cavity-substrate, which also had the benefit of reduced material costs – especially for the cavernous, low-frequency (0.35 GHz) cavities.

With the corresponding rise in surface electric fields, electron emission became the next limit to gradients, at 10–15 MV/m. Global R&D revealed microparticle contamination to be the dominant source of field emission, so the solution demanded better preparation techniques, such as powerful surface scrubbing with high-pressure (100 atm) water and assembly in Class 100 clean rooms. With these advances, cavity gradients climbed to 20 MV/m.

Above 20 MV/m, however, RF losses began mysteriously to rise exponentially with the field. The physics of such losses is still under investigation but pragmatic countermeasures are already in place. Electro-polishing has replaced the standard chemical etching to obtain a smoother surface, followed by mild baking at 120 °C for two days. There is now excellent prognosis for reaching 35–40 MV/m. Many nine-cell, 1 m-long niobium structures have demonstrated performance above 40 MV/m in qualification tests, while basic research continues to push towards the theoretical limit of 55 MV/m.

SRF takes off

As gradients improved steadily from the mid-1980s, RF superconductivity grew into a key technology for accelerators at the energy and luminosity frontiers, as well as at the cutting edge of low- and medium-energy nuclear physics, nuclear astrophysics and basic materials science. SRF cavities are now routinely accelerating electron, proton and heavy-ion beams in a variety of frontier accelerators.

It was in the early 1990s that SRF took off to push the energy frontier in storage rings, with TRISTAN at KEK and HERA at DESY. In the late 1990s the energy of the Large Electron–Positron collider at CERN doubled, with 500 m of superconducting cavities built by sputtering niobium on copper. Nb-Cu superconducting cavities now meet the voltage and high current demands of the LHC at CERN. At the luminosity frontier, high-current, high-luminosity electron–positron storage rings have operated and continue to operate with SRF cavities for copious production of c and b quarks at the Cornell Electron Storage Ring in the US, the KEKB facility in Japan and the Beijing Electron Positron Collider in China.

At the cutting edge of nuclear physics, Jefferson Lab has installed a 1 GeV superconducting linac to achieve 6.5 GeV beam by re-circulation. The laboratory’s Continuous Electron Beam Accelerator Facility (CEBAF) has been operating for 15 years with more than 150 m of SRF cavities, the largest number in operation at one facility. Looking ahead, Jefferson Lab has also developed 20 MV/m cavities to upgrade CEBAF’s energy from 6.5 GeV to 12 GeV.

For heavy ions, the CW superconducting accelerator provides an optimized array of independently phased resonators, to accelerate a variety of ion species with different velocities and charge states. The Argonne Tandem Linac Accelerator System at Argonne National Laboratory and the ALPI machine at the Legnaro National Laboratory have been operating for several decades. TRIUMF has expanded its radioactive-beam facility ISAC by adding a superconducting heavy-ion linac to supply more than 40 MV. Heavy-ion linacs in New Delhi and Mumbai have also come online. More than 250 superconducting resonators are currently operating around the world. New radioisotope beam (RIB) facilities are under construction with the SPIRAL2 project at the GANIL laboratory, HIE-ISOLDE at CERN and the ReA3 re-accelerator at Michigan State University (MSU).

Electron storage rings working as light sources are having an enormous impact on materials and biological science. SRF accelerating systems have been used in upgrading storage-ring light sources, such as the Cornell High Energy Synchrotron Source and the Taiwan Light Source. The Canadian Light Source, DIAMOND in the UK, the Shanghai Light Source in China and SOLEIL in France also operate with SRF; the National Synchrotron Light Source II at Brookhaven and the Pohang Light Source in Korea are planning to use SRF cavities. The Swiss Light Source at PSI and ELETTRA in Trieste have both installed third-harmonic superconducting cavities to improve beam lifetime and stability.

Free-electron lasers (FELs) based on SRF linacs provide tunable, coherent radiation over a wide range of wavelengths. The Jefferson Lab FEL generates 14 kW of CW laser power in the infrared, with energy recovery by recirculating nearly 1 MW of beam power. This is an important milestone toward the use of energy-recovery linacs (ERLs) for future light sources and electron-cooling applications. SRF-based FELs have operated at the Japan Atomic Energy Research Institute and at the ELBE project in Germany. FLASH at DESY is a short-wavelength FEL based on the self-amplified spontaneous emission (SASE) principle, delivering 6 nm-wavelength light. Its SRF linac uses more than 60 cavities, each 1 m long, to accelerate a 1 GeV electron beam. A variety of innovative linac-based light sources are also under study for FELs and ERLs to deliver orders of magnitude higher brightness and optical beam quality. High-intensity beams for ERLs have spurred explorations for electron-cooling applications and for electron–ion colliders, for example to upgrade the Relativistic Heavy Ion Collider at Brookhaven.

With many exciting prospects on the horizon, the world SRF community has expanded to include many new laboratories where extensive SRF facilities have been installed. In all, more than 1 km of superconducting cavities have been installed worldwide to provide more than 7 GeV of acceleration. The next big jump of 16 GeV is already under construction, with the largest SRF application underway on a superconducting linac for the European XFEL at DESY. It will be based on nearly 700 niobium cavities operating at a gradient of more than 22 MV/m. When completed in 2016 it will provide X-ray beams of unprecedented brilliance at sub-nanometre (Ångström) wavelengths.

A new Facility for Rare Isotope Beams (FRIB) is underway at MSU to allow the study of exotic isotopes related to stellar evolution and the formation of elements in the cosmos. FRIB will be based on more than 330 low-velocity resonators, doubling the number currently in operation.

The most ambitious future application under study is for the International Linear Collider (ILC), a 500 GeV superconducting linear accelerator. It will require 16 km of superconducting cavities operating at gradients of 31.5 MV/m. Intense research is underway to reach a high yield for high gradients: 30–40 MV/m. New vendors for niobium, for cavities and for associated components are being developed around the world. Improved techniques for performance reliability and cost reduction are emerging. New assembly and test facilities are coming together at DESY, Saclay, KEK and Fermilab; the experience of the DESY XFEL will be a key stepping stone. Future ILC energy upgrades toward 1 TeV would benefit from even higher gradients that would push niobium towards its ultimate potential of 55 MV/m and thus open the door for new materials with gradients of 100 MV/m. Nb3Sn is the most promising candidate offering the prospect of 100 MV/m gradients, but substantial research is needed to verify this potential and guide the development necessary to harness it.
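
As a quick consistency check on the numbers as quoted:

$$31.5\ \text{MV/m} \times 16\,000\ \text{m} \approx 504\ \text{GV},$$

matching the two 250 GeV linacs of the baseline 500 GeV machine.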

With the success of the SNS and the upcoming ESS, high-intensity proton linacs are likely to fulfil future needs in a variety of arenas: upgrading the injector chains of the proton accelerators at Fermilab (Project X) and CERN (the SPL), transmutation applications for the treatment of radioactive nuclear waste, nuclear-energy production using thorium fuel, high-intensity neutrino beamlines, high-intensity muon sources for neutrino factories based on muon storage rings and, eventually, a muon collider at the multi-tera-electron-volt energy scale. All of these far-future prospects will of course depend on the success of ongoing efforts.

The 2011 International SRF conference in Chicago hosted more than 350 SRF enthusiasts. We can remain confident that the RF superconductivity community has both the creativity and determination to face the upcoming challenges and successfully bring these exciting prospects to fruition.

Progress in applied superconductivity at KEK

The Japanese High-Energy Accelerator Research Organization, or KEK, was established (originally as the National Laboratory for High Energy Physics) in Tsukuba in 1971, around the same time that superconductivity – discovered 60 years earlier – was just beginning to find large-scale applications in physics. The laboratory became involved in superconducting technology almost from the start and KEK has continued to push frontiers in the field as its research programme has evolved. Two pioneering scientists, the late Hiromi Hirabayashi and Yuzo Kojima, deserve particular mention for their leading roles in starting research and development at KEK in the mid-1970s – on superconducting magnets and RF superconductivity for accelerator science, respectively.

Superconducting-magnet technology was first put to practical use at KEK in a secondary-particle beamline at the 12 GeV proton synchrotron. Two cosθ dipole magnets and one superconducting septum magnet formed major components in the beamline, while a large-aperture “window-frame” superconducting spectrometer-magnet was built for one of the physics experiments. Hirabayashi not only took the lead in this milestone project, he also used it to train the next generation of magnet scientists and engineers. They would take forward the various superconducting-magnet projects that were subsequently carried out at KEK and in collaborative international programmes, including R&D on the Superconducting Super Collider in the US and LHC project at CERN.

Frontier projects with superconducting magnets

The frontier project for the 1980s was an electron–positron collider, TRISTAN, which had a maximum beam energy of 30 GeV and operated between 1987 and 1995. KEK successfully developed large-aperture insertion-quadrupole magnets for the four interaction regions, to bring high-brightness beams into collision in the physics experiments.

Following on from TRISTAN, KEK constructed the accelerator for the B-factory, KEKB – an energy-asymmetric electron–positron collider with two rings handling 3.5 GeV positrons and 8 GeV electrons – built in the TRISTAN tunnel. Superconducting interaction-region quadrupole (IRQ) magnets were again developed. Based on a sophisticated coil design, with corrector-coils in additional coil layers, they were very closely integrated with the collider detector, BELLE (figure 1). The IRQs contributed to the highest beam luminosity ever achieved, as described later, enabling the KEKB accelerator and the BELLE experiment to help in establishing the Kobayashi-Maskawa theory for which the Nobel prize was awarded in 2008. A further sophisticated multiple-magnet system is now being developed for the interaction region at Super-KEKB, the upgraded B-factory, which was approved in 2010.

The experience acquired in these projects was to allow KEK to make important contributions to the LHC, in particular in a fundamental study of high-field dipoles to reach 10 T and in the construction of insertion quadrupoles with a design field gradient of 215 T/m at a coil aperture of 70 mm (figure 2). The quadrupole magnets were developed and supplied in collaboration with Fermilab.

More recently, KEK developed a primary proton-transport line at the Japan Proton Accelerator Research Complex (J-PARC) in Tokai, in a collaboration between KEK and the Japan Atomic Energy Agency (JAEA). To create and direct a neutrino beam towards the Kamioka neutrino observatory nearly 300 km away, an internally extracted proton beam from J-PARC has to bend through around 90°, with a much smaller bending radius than that of the main-ring accelerator. This requirement has been achieved using a series of uniquely fashioned superconducting magnets with combined-function coils having dipole and quadrupole field components within a single-layer coil (figure 3). The experience accumulated in the earlier projects contributed to achieving this distinctive superconducting-magnet design, which also involved important co-operation with Brookhaven National Laboratory. At J-PARC superconductivity has taken an essential role in providing high-intensity pulsed muon beams in the meson-science laboratory, as well as the superconducting solenoid beamlines for muon science and a superconducting magnetic spectrometer for particle physics.

For the future, KEK intends to contribute to upgrade programmes for the LHC, to the application of advanced high-field superconductors in co-operation with the National Institute of Materials Science and to high-temperature superconductors in co-operation with other laboratories and industry. Fundamental research on the effect of stress-strain on superconductor performance is crucially important for high-field superconducting magnets. Experimental studies of structural and stress analysis are in progress using neutron-diffraction techniques at the J-PARC neutron-beam facility in co-operation with JAEA.

KEK has also applied superconducting-magnet technology to particle-detector magnets. The TRISTAN collider’s three major particle detectors – TOPAZ, VENUS and AMY – and the BELLE detector at KEKB were based on superconducting solenoid magnets to provide the magnetic fields for momentum analysis in particle spectroscopy. In particular, these involved a great deal of development work on aluminium-stabilized superconductor technology.

The key feature of this technology is that it allows the maximum magnetic field for the minimum material – an important step towards the physicists’ dream of having only a magnetic field, without additional material, in an experiment. It therefore leads to the possibility of “thin-walled” superconducting coils that are in effect transparent to particles passing through. The use of aluminium stabilizer instead of ordinary copper stabilizer gives low density and low resistance, but requires sufficiently high strength. It has become a fundamental technology in the construction of magnets for large-scale particle detectors, including – most recently – the magnet systems of the ATLAS and CMS experiments at the LHC. KEK provided the ATLAS central solenoid magnet, which had to meet the extremely demanding requirement of installation in a common cryostat with the liquid-argon calorimeter system, in addition to employing advanced high-strength aluminium-stabilized superconductor so that the magnet would be as transparent as possible to particles.
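
The gain can be put into rough numbers using the standard radiation lengths of the two stabilizer metals (Particle Data Group values):

$$X_0(\text{Al}) \approx 8.9\ \text{cm}, \qquad X_0(\text{Cu}) \approx 1.4\ \text{cm},$$

so for the same stabilizer thickness an aluminium-stabilized coil presents roughly six times less material, measured in radiation lengths, to particles traversing it.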

KEK has also applied this technology in a variety of global collaborations, including the muon g-2 parameter measurement experiment (E-821) at Brookhaven National Laboratory and the WASA experiment at Uppsala University (now transferred to the Cooler Synchrotron (COSY) ring at the Forschungszentrum Jülich). A more extreme application is in the field of astroparticle physics. The Balloon-borne Experiment with a Superconducting Spectrometer (BESS) has successfully flown twice over Antarctica to search for primordial antiparticles in the universe, in collaboration with NASA in the framework of Japan–US co-operation in space science.

Superconducting acceleration

Turning now to RF superconductivity, TRISTAN was the first high-energy particle accelerator in the world to use superconducting RF cavities as the main acceleration components with a frequency of 500 MHz in the routine operation of the accelerator (figure 4). This is where Kojima took the lead and established a milestone by using superconducting RF to provide a high continuous-wave accelerating gradient in storage rings. He also trained many next-generation scientists in RF superconductivity, who have since extended the application in a variety of subsequent projects and global collaborations.

The technology pioneered at TRISTAN was extended for the KEKB accelerator, which was commissioned in 1998 with superconducting RF cavities as a major accelerating component. Eight single-cell cavities with sufficiently damped higher-order modes (HOMs) accelerated an electron beam of 1.4 A, delivering RF power of 350 kW per cavity. This technology was also applied to the Beijing Electron–Positron Collider II in co-operation with the Institute for High-Energy Physics in Beijing. Furthermore, collaboration with the National Synchrotron Radiation Research Center is under way to apply superconducting RF technology to its new synchrotron-light source, the Taiwan Photon Source. At the same time, a unique superconducting RF cavity, called the “crab cavity”, was successfully developed as a key component to maximize the peak luminosity of KEKB (figure 5). It was designed to reach the optimum beam-interaction efficiency by tilting the bunches so as to compensate for the crossing angle. Once installed at KEKB, the crab cavity contributed to the facility’s world-record luminosity of 2.11 × 10³⁴ cm⁻²s⁻¹, achieved in 2009. KEKB shut down in June 2010 to be upgraded to Super-KEKB, so as to allow operation with a peak luminosity of 8 × 10³⁵ cm⁻²s⁻¹.

Looking to future applications of RF superconductivity in accelerator science, KEK is now undertaking research and development in two major directions. Energy-recovery linacs (ERLs), which in effect recycle energy from the beam, will inevitably be required for efficient acceleration, especially in applications of intense electron beams and in photon science. KEK is building a compact ERL facility as a prototype for a potential future ERL accelerator.

Aiming towards the high-energy frontier, research and development for the International Linear Collider (ILC) is being carried out in a global co-operation led by the Global Design Effort (GDE). The design, based on RF superconductivity, foresees more than 16,000 superconducting 9-cell 1.3 GHz cavities in series, operating with an average field gradient of 31.5 MV/m, to achieve a linear electron–positron collider based on two 250 GeV linear accelerators.

KEK is contributing to the development of advanced superconducting RF cavity technology for the ILC within the global collaboration. There has been successful progress towards demonstrating a field gradient of more than 40 MV/m in 9-cell cavities, based on long-term fundamental research and development. In a unique global effort, KEK has hosted a cavity-string test (the so-called S1-Global), with a cavity string and a cryomodule system jointly contributed by DESY, Fermilab, INFN, SLAC and KEK (figure 6). The test facility has demonstrated how international collaboration can work in providing a plug-compatible cavity-string assembly, which will inevitably be required in constructing the ILC accelerator.

Applied superconductivity has been an essential and fundamental technology in all of the major experimental facilities for accelerator science and for physics programmes that have and will be carried out at KEK, as well as for international co-operation programmes, including the LHC and the ILC. The hope is that KEK will continue both to play an important role in contributing to advanced technology and to be a centre of excellence in applied superconductivity for fundamental physics and accelerator science.

PET and MRI: providing the full picture

Modern medical imaging of the human body often provides not only anatomical detail but also functional information or the biochemical status of a particular region of the body. The first example of the combined use of anatomical and functional imaging, now known as “hybrid imaging”, put positron emission tomography (PET) together with computed tomography (CT). David Townsend, a former CERN scientist working at the University Hospital in Geneva, first thought of incorporating an X-ray-based CT scanner in the same instrument as a PET camera in 1991. The first such instrument was in operation by the end of the 1990s and now all PET cameras that are commercially available from the major international companies are combined PET/CT scanners.

Although PET/CT has proved its value in oncology, CT still has some serious limitations related to soft-tissue contrast, which often means that the patient needs additional injections of contrast agents. In CT imaging, high levels of radiation exposure are also a concern, particularly in paediatrics, in repeated scanning for therapy monitoring and in other non-oncology pathologies. An alternative approach for hybrid imaging has arisen recently with the emergence of systems that combine PET with magnetic-resonance imaging (MRI). The advantages of a PET/MRI scanner are evident from the table below, which illustrates the merits of the different medical-imaging techniques.

Unlike CT, MRI provides good contrast in soft tissue. This technique involves aligning the magnetic moments of hydrogen nuclei in a strong magnetic field and then using a temporary RF field to flip the spin of some of them. When these nuclei revert to their former state, they radiate at the same radio frequency. The key to providing an image is to apply an additional magnetic-field gradient so that the resonant frequency varies with position. A typical MRI scanner comprises a strong magnet to produce a static, homogeneous, longitudinal magnetic field (B0), three “gradient” coils that can be switched on and off, an RF transmitter, RF receiver coils and a computer-control and data-acquisition system (figure 1).
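
The spatial encoding rests on the Larmor relation: with a gradient G superimposed on the static field, protons at position x resonate at

$$f(x) = \frac{\gamma}{2\pi}\,(B_0 + G\,x), \qquad \frac{\gamma}{2\pi} \approx 42.6\ \text{MHz/T},$$

so at 3 T the resonance sits near 128 MHz, and a known gradient maps the received frequency spectrum directly onto position.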

Superconducting magnets offer the optimum way to provide the necessary field strength over the volume required in a whole-body scanner, and the commercial development of these magnets from the 1970s onwards has led to the wide medical use of MRI. Modern systems have fields of 1.5–3 T, although some with fields up to 7 T already exist. The gradient coils generate magnetic-field gradients in the x, y and z directions and are used to encode position, while the RF-transmitter coil is used to excite nuclei by tipping the longitudinal magnetization away from the B0 direction through a predefined angle. Several RF-receiver coils are used – one is integrated into the MRI scanner and several “surface” coils can be placed closer to the patient to improve signal-to-noise ratios. These coils receive the RF signals emitted by the nuclei as they lose their excitation and re-align with B0. Recently, the RF receiver–transmitter system has evolved to accommodate two parallel transmitters that improve the spatial homogeneity of the acquired signals – this is important in 3 T systems for whole-body imaging.

A PET camera, by contrast, is used to detect and measure the distribution in the body of radioisotopes decaying via positron emission. For every positron annihilation event in tissue, an almost collinear, back-to-back pair of 511 keV gamma-ray photons is produced. A PET detector consists of a pixelated scintillator ring connected to banks of photomultiplier tubes (PMTs) via optical guides. The PMTs convert the flashes of visible light from the scintillators into voltage pulses, and the relative output voltages of pairs of PMTs determine the position of the photon pairs at the detector surface. To identify photon pairs, the PMTs operate in coincidence using a timing window of a few nanoseconds. The detected coincident pair defines a line-of-response (LOR), somewhere along which a positron annihilation event happened. Detectors with fast timing resolution, typically less than 600 ps, can localize the annihilation along the LOR using time-of-flight (TOF) technology.
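
The time-of-flight gain follows from simple kinematics: a timing difference Δt between the two photons places the annihilation a distance

$$\Delta x = \frac{c\,\Delta t}{2}$$

from the mid-point of the LOR, so a 600 ps coincidence resolution localizes the event to within about 9 cm – not enough for direct imaging, but enough to suppress noise markedly in the reconstruction.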

PET meets MRI

PMTs, however, are inherently unable to operate inside a magnetic field. Early attempts to develop small-animal PET/MRI scanners produced several innovative prototypes using different approaches to overcome the cross-talk between the PET and MRI systems. In 2008, both the Philips and Siemens medical companies developed their first PET/MRI prototypes for humans. The Philips system had two independent scanners and additional shielding to contain the magnetic field from its 3 T magnet, with a coaxial distance between the PET detector and the MRI scanner of 4.2 m (figure 2). Furthermore, each PMT was individually shielded and its photocathode aligned along the flux lines of the magnetic field. Apart from this change the PET detector was the same as the commercially available TOF scanner, and this PET/MRI system was capable of acquiring whole-body images in a sequential fashion. The Siemens prototype had a PET scanner integrated into a 3 T MRI system. A retractable PET detector used avalanche photodiodes (APDs) – solid-state photon detectors – coupled to lutetium oxyorthosilicate (LSO) scintillator crystals. The system was designed to acquire simultaneous PET and MRI pictures of the brain.

Today, three large imaging companies have PET/MRI whole-body scanners in their portfolios, although all three are significantly different. In 2010, Siemens announced a whole-body simultaneous PET/MRI scanner based on its original technology and this has already received medical-device registration (the CE mark for Europe and 510(k) for the US). This latest model comprises a 70 cm diameter 3 T magnet with an integrated PET detector ring 60 cm in diameter. The PET detector consists of pixelated scintillators coupled via optical guides to an array of APDs (figure 3). APDs are insensitive to magnetic fields and have high gain (10²–10³) and timing resolution of the order of 1 ns (Lewellen 2008). The APD arrays are connected to front-end electronics for pre-amplification and digitization, and have a cooling circuit to maintain constant temperature because their gain is temperature sensitive. Meanwhile, Philips has commercialized its sequential whole-body TOF-PET/MRI system (CE mark already received and 510(k) approval pending). A third company, General Electric, is proposing an arrangement where an MRI scanner is placed in a room adjacent to a PET/CT scanner and the patient is transferred from one system to the other on a shuttle couch.

Some concerns remain about the integration of MRI and PET. Photons travelling through a patient’s body may be absorbed or scattered and so go unregistered. In PET/CT systems, this is compensated for by using a low-dose CT scan, which provides an accurate attenuation map of the object being imaged; this CT attenuation map can then be used for attenuation correction of the PET image. This is not possible in PET/MRI systems, and various methods for estimating attenuation coefficients are still under development. Another problem is cross-talk between PET and MRI, because RF pulses from the MRI may cause the PET electronics to lose counts during transmission. Nevertheless, the latest commercial systems seem to have overcome most of the problems and fine-tuning of the designs continues. Industry and the medical-imaging community are now actively collaborating to use and improve this new medical technology, as well as to demonstrate a true clinical utility for PET/MRI scanners. This has already resulted in a multitude of scientific publications on these topics in journals and at conferences on PET and MRI.
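
The reason a CT map works so cleanly is worth spelling out: because both photons must escape, the survival probability for a coincidence event factorizes into a single line integral along the LOR,

$$P_{\text{survive}} = e^{-\int_0^{d_1}\mu\,dl}\; e^{-\int_0^{d_2}\mu\,dl} = e^{-\int_{\text{LOR}}\mu\,dl},$$

independent of where on the line the annihilation occurred. A CT scan measures essentially this integral of the attenuation coefficient μ, whereas MRI signal intensity reflects proton density and relaxation times rather than μ – which is why MRI-based attenuation correction has to be estimated indirectly.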

There is a remarkable similarity in design between these integrated PET/MRI clinical scanners and the large, general-purpose detector systems developed in particle physics. For example, the CMS experiment at the LHC at CERN has a central detector of 4 m × 15 m within an axial magnetic field of 4 T (figure 4). This can be compared with a commercial whole-body PET/MRI scanner, with a field of view of 0.6 m × 0.26 m and magnetic fields of up to 3 T. It is therefore reasonable to expect that the latest technologies now being used in particle-physics detectors – e.g. silicon photomultipliers – will soon be incorporated by industry into newer and more sensitive combined PET/MRI scanners, with timing-resolution capabilities superior even to those of today’s state-of-the-art PET/CT scanners based on PMTs.

Accelerator-driven systems for nuclear energy

Some years after Ernest Rutherford invented nuclear physics, he expressed a wish for “a copious supply of atoms and electrons which have an individual energy far transcending that of the α and β particles” available from natural sources so as to “open up an extraordinarily interesting field of investigation”. He was calling for the invention of the particle accelerator, but probably had no idea that by 2011 – the centenary of his famous publication on the nuclear atom – some 30,000 of them would operate worldwide, mostly for applications outside discovery science. And given that he famously dismissed as “moonshine” all talk about someday extracting useful energy from atoms, he surely did not foresee what might conceivably become one of the most important practical applications of accelerators: accelerator-driven systems, or ADS, for transmuting nuclear waste and generating electricity.

How would an accelerator replace a nuclear reactor? Today’s reactors have a core in which the composition and configuration of the nuclear fuel provide enough neutrons to maintain a self-sustaining fission chain reaction. An ADS instead uses a subcritical fuel configuration, in which the extra neutrons needed to sustain fission are produced by spallation from a target bombarded by an accelerator beam. Because these neutrons are supplied externally to the core, an ADS reactor has a great deal of flexibility in the elements and isotopes that can be fissioned in its core.
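
A textbook reactor-physics relation – not quoted in the article, but standard – makes the subcriticality quantitative. In a core with effective multiplication factor keff < 1, each spallation neutron injected from outside gives rise on average to

    M = \frac{1}{1 - k_{\mathrm{eff}}}

neutrons in total, so for a typical design value of keff ≈ 0.97 every source neutron is multiplied roughly 30-fold. The fission power then scales linearly with the beam current, which is why switching off the accelerator stops the chain.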

Among the by-products of irradiating uranium fuel are the minor actinides (mainly neptunium, americium and curium), built up by successive neutron captures and radioactive with extremely long half-lives. Their presence in nuclear waste drives the storage requirements for spent fuel: some 100,000 years for the radioactivity to return to its initial level. These isotopes could be fissioned (burnt) in a conventional reactor, but given the characteristics of their delayed neutrons there is a maximum concentration of these materials that can be consumed in existing reactors. With ADS, however, the minor actinides can form a much larger fraction of the fuel, because an ADS core is subcritical and the neutrons sustaining fission are produced by the accelerator externally to the core. Core kinetics and stability therefore do not come into play as much as they would in a conventional reactor. So an ADS can burn a much greater quantity of the minor actinides than a typical commercial reactor can, and can bring the radioactivity down to its initial level in “only” 300 years.
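
As a rough illustration of why half-lives dominate these timescales – my example, not the article’s, and a single-isotope simplification of what is really a full decay chain – the time for an activity to fall by a given factor is just the half-life multiplied by the base-2 logarithm of that factor:

    # Sketch: single-isotope decay times; activity falls as 2**(-t/T_half).
    import math

    def years_to_reduce(half_life_years, factor):
        return half_life_years * math.log2(factor)

    # A thousand-fold reduction for americium-241 (half-life ~432 years)...
    print(years_to_reduce(432, 1000))      # ~4300 years
    # ...versus neptunium-237 (half-life ~2.14 million years):
    print(years_to_reduce(2.14e6, 1000))   # ~21 million years

Burning the longest-lived isotopes in an ADS is what collapses the storage requirement from geological to historical timescales.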

Nobel laureate Carlo Rubbia and others have been advocating ADS for two decades – and for good reason. By efficiently burning the minor actinides, ADS could conceivably transform the landscape of the waste-disposal and storage problem. And to paraphrase a US white paper from September 2010, additional advantages are flexibility of fuel composition and potentially enhanced safety. With ADS, non-fissile fuels such as thorium can be used without incorporating uranium or plutonium into fresh fuel. An ADS can be shut down simply by switching off the accelerator; with a large enough margin to criticality, reactivity-induced transients cannot cause a supercritical accident, and controlling the power via the beam current allows compensation for fuel burn-up. However, as we have learnt from Fukushima, it remains necessary to address the long-term removal of the residual decay heat left in the fuel once fission has been shut down.

The overall potential of ADS has been understood for two decades, but technological evolution during that time has improved the outlook for actual implementation. As early as 2002, a European study concluded that “beam powers of up to 10 MW for cyclotrons and 100 MW for linacs now appear to be feasible”.

It is also important to highlight the prospects of ADS for power production using thorium-based fuel. Even though thorium has no current market value, it is known to be plentiful. Thorium-232 can absorb a neutron and, after two beta decays, become 233U, which is fissile. The potential benefits of thorium for nuclear energy are proliferation resistance, minimized production of radiotoxic transuranics, avoidance of the need to incorporate fissile material in fresh fuel and the potential to operate nearly indefinitely in a closed fuel cycle.
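
For reference, the breeding chain – standard nuclear data, not spelt out in the article – is

    {}^{232}\mathrm{Th}\,(n,\gamma)\,{}^{233}\mathrm{Th} \;\xrightarrow{\beta^-,\ \sim 22\ \mathrm{min}}\; {}^{233}\mathrm{Pa} \;\xrightarrow{\beta^-,\ \sim 27\ \mathrm{d}}\; {}^{233}\mathrm{U},

with the 27-day protactinium step setting the pace at which fresh fissile material becomes available in the core.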

Interest has been growing worldwide. Thorium particularly interests India, Norway and China, all of which have programmes investigating the 233U–thorium fuel cycle. India, which has little uranium but much thorium, sees ADS as part of its energy future. China is rapidly building reactors but, not yet having identified a stable geological waste repository, is investigating ADS for the transmutation of minor actinides. Perhaps most notably, Belgium is planning MYRRHA, an 85 MW ADS prototype to be built at the Belgian Nuclear Research Centre, SCK.CEN. The projected total capital cost is €950 million, with construction set to start in 2015.

Some 200 of us are gathering on 11–14 December in Mumbai for the 2nd International Workshop on Accelerator-Driven Sub-Critical Systems & Thorium Utilization. The first workshop was held last year at Virginia Tech, in Blacksburg, Virginia. Considerable effort has gone into assembling a world-class International Advisory Committee, which includes Rubbia as well as Srikumar Banerjee, chair of the Atomic Energy Commission, India, and Hamid Aït Abderrahim, director of MYRRHA. For more about the conference, see www.ivsnet.org/ADS/ADS2011.

Daya Bay experiment begins taking data


The Daya Bay Reactor Neutrino Experiment has begun its quest to answer some of the puzzling questions that still remain about neutrinos. The experiment’s first completed set of twin detectors is now recording interactions of antineutrinos as they travel away from the powerful reactors of the China Guangdong Nuclear Power Group, in southern China.

The start-up of the Daya Bay experiment marks the first step in the international effort of the Daya Bay collaboration to measure a crucial quantity related to the third type of oscillation, in which the electron-neutrinos morph into the other two flavours of neutrino. This transformation occurs through the least known neutrino-mixing angle, θ13, and could reveal clues leading to an understanding of why matter predominates over antimatter in the universe.

The experiment is well positioned for a precise measurement of the poorly known value of θ13 because it is close to some of the world’s most powerful nuclear reactors – the Daya Bay and Ling Ao nuclear power reactors, located 55 km from Hong Kong – and it will take data from a total of eight large, virtually identical detectors in three experimental halls deep under the adjacent mountains. Experimental Hall 1, a third of a kilometre from the twin Daya Bay reactors, is the first to start operating. Hall 2, about a half kilometre from the Ling Ao reactors, will come online in the autumn. Hall 3, the furthest hall, about 2 km from the reactors, will be ready to take data in the summer of 2012.

The Daya Bay experiment is a “disappearance” experiment. The detectors in the two closest halls will measure the flux of electron-antineutrinos from the reactors; the detectors at the far hall will look for a depletion in the expected antineutrino flux. The cylindrical antineutrino detectors are filled with liquid scintillator, while sensitive photomultiplier tubes line the detector walls, ready to amplify and record the telltale flashes of light produced by the rare antineutrino interactions. As a result of the large flux of antineutrinos from the reactors, the twin detectors in each hall will capture more than 1000 interactions a day, while at their greater distance the four detectors in the far hall will measure only a few hundred interactions a day. To measure θ13, the experiment records the precise difference in flux and energy distribution between the near and far detectors.
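
For orientation, the standard two-flavour survival probability – not spelt out in the article – is

    P(\bar{\nu}_e \to \bar{\nu}_e) \approx 1 - \sin^2 2\theta_{13}\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{31}\,[\mathrm{eV}^2]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right),

so for reactor antineutrino energies of a few MeV and Δm²31 ≈ 2.4 × 10⁻³ eV², the deficit is largest near L ≈ 2 km – which is why the far hall sits at that distance while the near halls pin down the unoscillated flux.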

The experimental halls are deep under the mountain to shield the detectors from cosmic rays and the detectors themselves are submerged in pools of water to shield them from radioactive decays in the surrounding rock. Energetic cosmic rays that make it through the shielding are tracked by photomultiplier tubes in the walls of the water pool and muon trackers in the roof over the pool so that events of this kind can be rejected.

After two to three years of collecting data with all eight detectors, the Daya Bay Reactor Neutrino Experiment should be well placed to meet its goal of measuring the electron-neutrino oscillation amplitude – and hence sin²2θ13 – with a sensitivity of 1%.

The start-up of the experiment comes after eight years of effort – four years of planning and four years of construction – by hundreds of physicists and engineers from around the globe. China and the US lead the Daya Bay collaboration, which also includes participants from Russia, the Czech Republic, Hong Kong and Taiwan. The Chinese effort is led by project manager Yifang Wang of the Institute of High Energy Physics (IHEP), Beijing, and the US effort is led by project manager Bill Edwards of Lawrence Berkeley National Laboratory and chief scientist Steve Kettell of Brookhaven National Laboratory.

ALICE measures the shape of head-on lead–lead collisions


One of the many surprises to have emerged from studies of heavy-ion collisions at Brookhaven’s Relativistic Heavy Ion Collider (RHIC), and now at CERN’s LHC, concerns the extreme fluidity of the dense matter of the nuclear fireball produced. This has traditionally been studied experimentally by measuring the second harmonic of the azimuthal distribution of emitted particles with respect to the plane of nuclear impact. Known as v2, this observable is remarkably large, saturating expectations from hydrodynamic models and suggesting that the so-called quark–gluon plasma is one of the most perfect fluids in nature. Many assumed that the matter in the elliptical nuclear overlap region becomes smooth upon thermalization, rendering the Fourier coefficients other than v2 negligible in comparison.

However, it was recently proposed that collective flow also responds to pressure gradients arising from the “chunkiness” of the matter distributed within the initial fireball, which fluctuates randomly from event to event. These nonuniformities lead to anisotropy patterns beyond smooth ellipses: triangular, quadrangular and pentangular flow are now being studied through measurements of v3, v4, v5 and beyond at RHIC and the LHC.

The new measurements evoke comparisons with the relic cosmic microwave background (CMB) radiation, whose nonuniformities offer hints about conditions at the universe’s earliest moments. Just as the CMB anisotropy is expressed in multipole moments, the azimuthal anisotropy of correlated hadron pairs from heavy-ion collisions can be represented by a spectrum of Fourier coefficients VnΔ. In pair-correlation measurements, a “trigger” particle is paired with associated particles in the same event to form a distribution in relative azimuth Δφ. Accumulated over many events, this yields a correlation function whose peaks and valleys describe the relative probability of pair coincidence.
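
In the notation used by the experiments (a standard decomposition, though the formula is not spelt out in the article), the pair distribution is written as

    C(\Delta\varphi) \propto 1 + 2\sum_{n=1}^{\infty} V_{n\Delta}\cos(n\,\Delta\varphi),

and if the anisotropy is truly collective, each pair coefficient factorizes into single-particle flow harmonics, VnΔ ≈ vn(trigger) × vn(associated).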

The left side of the figure shows a correlation function measured by ALICE for the 2% most central (i.e. most head-on) lead–lead collisions at the LHC, where the particle pairs are separated in pseudorapidity to suppress “near-side” jet correlations near Δφ = 0. Even when this gap is imposed, a curious longitudinally extended near-side “ridge” feature remains; considerable theoretical effort has been devoted to explaining its source since its discovery at RHIC. Superimposed on the correlation function in the figure are the first five VnΔ harmonics. The right side of the figure shows the spectrum of the Fourier amplitudes. Evidently, in the most head-on collisions the dominant harmonic is not the second, elliptical term but the triangular one, V3Δ; moreover, the Fourier coefficients remain significant up to n = 5. These results corroborate the idea that initial density fluctuations are non-negligible.

The intriguing double-peak structure evident on the “away side” (i.e. opposite the trigger particle, at Δφ = π) was not observed in inclusive (i.e. not background-subtracted) correlation functions before the LHC. However, in the hope of isolating jet-like correlations, the v2 component was often subtracted as a non-jet background, leaving a residual double peak whenever the original away-side peak was broad. This led to the interpretation of the structure as a coherent shock-wave response of the nuclear matter to energetic recoil partons, akin to a Mach cone in acoustics. Now, however, higher-order anisotropic flow is gaining favour over theories that rely on conceptually independent Mach-cone and ridge explanations.

These measurements at the LHC are significant because they suggest a single consistent physical picture, vindicating relativistic viscous hydrodynamics as the most plausible explanation for the observed anisotropy. The same collective response to initial spatial anisotropy that causes elliptic flow also economically explains the puzzling “ridge” and “Mach cone” features, once event-by-event initial-state density fluctuations are considered. Moreover, measuring the higher Fourier harmonics offers tantalizing possibilities to improve understanding of the nuclear initial state and the transport properties of the nuclear matter. For example, the high-harmonic features at small angular scales are suppressed by the smoothing effects of shear viscosity. This constrains models incorporating a realistic initial state and hydrodynamic evolution, improving understanding of the deconfined phase of nuclear matter.

Lepton Photon goes to Mumbai

The global nature of modern particle physics was clearly manifest at the biennial Lepton Photon conference that took place this year in India. The Tata Institute of Fundamental Research (TIFR), Mumbai, was host to the 25th International Symposium on Lepton Photon Interactions at High Energies – Lepton Photon 2011 – on 22–27 August.

The conference opened with a welcome from Mustansir Barma, director of TIFR, and speeches by Srikumar Banerjee, chair of the Atomic Energy Commission, Shri Prithviraj Chavan, the Chief Minister of the State of Maharashtra, and Patricia McBride, chair of the C11 Committee of the International Union of Pure and Applied Physics, under whose auspices the Lepton Photon conferences take place.


New results from the LHC and the latest news on searches for the Higgs boson were among the highlights, as at the EPS-HEP 2011 meeting held in Grenoble in July. Thanks to the outstanding performance of the LHC, the experiments and the Worldwide LHC Computing Grid, some of the results were based on roughly twice the data sample presented in Grenoble. With the additional data analysed, the ATLAS and CMS experiments have now excluded the existence of a Higgs boson over most of the mass region 145–466 GeV at 95% confidence level. Moreover, the significance of earlier hints of a Higgs signal has decreased slightly; the modest excess observed could still be the effect of statistical fluctuations.

The talks covered a range of other physics being investigated by the LHC experiments. These included precision measurements in top-quark physics, for example, and in B physics, where results from the LHCb experiment on B mesons are becoming the most precise yet. There were also reports on the status and prospects of the LHC machine and, from CERN’s director-general, Rolf Heuer, on the future of colliders after the LHC.

Reports on some of the results presented at the conference follow on the next two pages.

• For the conference programme and presentations, see http://www.tifr.res.in/~lp11/.
