After some 180 days of running and 4 × 10¹⁴ proton–proton collisions, the LHC’s 2011 proton run came to an end at 5.15 p.m. on 30 October. For the second year running, the LHC team has far surpassed its operational objectives, steadily increasing the rate at which the LHC has delivered data to the experiments.
At the beginning of the year’s run, the objective was to deliver an integrated luminosity of 1 fb⁻¹ during the course of 2011. That target was reached on 17 June, setting the experiments up well for the major physics conferences of the summer and prompting the 2011 data objective to be revised upwards to 5 fb⁻¹. The new milestone was passed by 18 October; when proton running ended, the LHC had delivered around 5.6 fb⁻¹ to both the ATLAS and CMS experiments, 1.2 fb⁻¹ to LHCb and 5 pb⁻¹ to ALICE. Physics highlights for these four big experiments include closing down the space in which the long-sought Higgs boson and supersymmetric particles could be hiding, putting the Standard Model of particle physics through increasingly gruelling tests and advancing our understanding of the primordial universe.
“At the end of this year’s proton running, the LHC is reaching cruising speed,” comments CERN’s director for accelerators and technology, Steve Myers. “To put things in context, the present data-production rate is a factor of 4 million higher than in the first run in 2010 and a factor of 30 higher than at the beginning of 2011.”
Time has also been devoted to some special physics runs for the smaller TOTEM and ALFA experiments, which probe small-angle (forward) scattering, allowing them to measure the total proton–proton cross-section and the absolute luminosity calibration. In these runs, the beam is de-squeezed to a β* of 90 m in ATLAS and CMS, instead of the usual 1 m. This gives a larger beam size at the interaction points and, correspondingly, a reduced beam divergence there.
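For a beam of transverse emittance ε, the spot size and divergence at the interaction point follow from the standard linear-optics relations below (the numerical factor is simply √90 for the de-squeeze quoted above):

```latex
\sigma^{*} = \sqrt{\varepsilon\,\beta^{*}}, \qquad
\sigma'^{*} = \sqrt{\varepsilon/\beta^{*}}, \qquad
\frac{\sigma'^{*}(\beta^{*} = 1\,\mathrm{m})}{\sigma'^{*}(\beta^{*} = 90\,\mathrm{m})} = \sqrt{90} \approx 9.5 .
```

De-squeezing from 1 m to 90 m therefore reduces the angular spread of the beams at the interaction point by nearly a factor of 10, which is what allows TOTEM and ALFA to resolve protons scattered at very small angles.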
A number of factors have contributed to these impressive totals, including: the increase in the total number of bunches to 1380 during the first part of the year, the high bunch intensity and small beam sizes delivered by the LHC injectors, and the good aperture in the regions around ATLAS and CMS, which has allowed a squeeze to β* = 1 m. About 25% of the programmed physics time was spent with stable beams – not bad at this stage in the LHC’s career, given the machine’s complexity and the operation with high-intensity beams.
Following the end of proton running, a week of machine development began. An early high point was the cohabitation of protons and lead ions in the LHC – low-intensity beams of protons (clockwise) and lead ions (anti-clockwise) were successfully injected and ramped together. A first test of proton–lead collisions was scheduled to follow after commissioning for the lead-ion run and a 5-day technical stop. If successful, these tests will lead to a new strand of LHC operation, using protons to probe the internal structure of the much more massive lead ions.
As in 2010, however, the main goal before the end of the year is a four-week period of lead-ion running before the machine closes down for the winter technical stop.
On the night of 11–12 October, just a few hours after installation of its camera, the First G-APD Cherenkov Telescope (FACT) recorded flashes of Cherenkov light from air showers induced by cosmic rays. Remarkably, the shower images were recorded during a full moon – a feat that would not have been possible with a conventional air Cherenkov telescope.
FACT, installed at an altitude of 2200 m at the Roque de los Muchachos Observatory on La Palma, in the Canary Islands, uses newly developed Geiger-mode avalanche photo-diodes (G-APDs) instead of the photomultiplier tubes (PMTs) normally used in Cherenkov telescopes. These first images, taken in ambient light 100 times brighter than PMT-based telescopes could tolerate, demonstrate for the first time the use of silicon detectors capable of recording images at a rate of 10⁹ a second.
The pioneering camera was designed and built by a collaboration from the universities of Dortmund, Geneva and Würzburg as well as EPF Lausanne (EPFL), led by ETH Zurich. It consists of 1440 G-APDs, each one a square with a side of only 3 mm. To increase the active area, the collaboration developed solid light concentrators together with the University of Zurich. Each concentrator has a hexagonal entrance window 9.5 mm across and a square exit window with a side of 2.8 mm to match onto a G-APD. The result is a 10-fold concentration of the light reflected from the telescope mirror, while at the same time rejecting background light from outside the area of the mirror. There is one concentrator glued to each G-APD, providing a field of view of 0.1° per pixel and 4.5° for the full camera.
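The quoted concentration factor follows directly from the ratio of entrance to exit areas, taking the 9.5 mm as the flat-to-flat width of the hexagon:

```latex
A_{\mathrm{hex}} = \frac{\sqrt{3}}{2}\,d^{2} = \frac{\sqrt{3}}{2}\,(9.5\ \mathrm{mm})^{2} \approx 78\ \mathrm{mm}^{2},
\qquad
\frac{A_{\mathrm{hex}}}{A_{\mathrm{exit}}} = \frac{78\ \mathrm{mm}^{2}}{(2.8\ \mathrm{mm})^{2}} \approx 10 .
```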
The electronics to read each of the 1440 pixels individually is based on the DRS-4 analogue ring sampler chip operating at a frequency of 2 gigasamples/s. The complete electronics package is integrated into the camera body, and data are sent to the counting house via standard Ethernet. The complete camera weighs about 150 kg and has a power consumption of around 500 W.
The camera was assembled and tested at ETH Zurich, before being installed in the refurbished HEGRA CT3 telescope at the Roque de los Muchachos Observatory, next to the MAGIC telescopes. The telescope, which has a total mirror area of 9.5 m², was equipped with a new drive system and improved mirror facets.
The installation of the camera is the first step towards establishing a monitor telescope for variable gamma-ray sources. It has already begun to demonstrate that G-APDs are a viable alternative to PMTs in Cherenkov telescopes. Future developments with these devices promise even higher photon detection efficiencies and availability at lower costs than PMTs. Moreover, their bias voltages of about 70 V render their operation under the harsh conditions of Cherenkov telescope sites stable and robust.
The LHCb experiment has had a remarkable year, moving from first results to world-beating measurements of B-hadron properties, such as the oscillation frequency of the Bs meson, CP and forward-backward asymmetries, as well as limits on rare decays, for example Bs → μ⁺μ⁻. Even though the physics harvest is now in full flow, the collaboration is already planning for the eventual upgrade of the experiment, which is scheduled to be ready for data-taking in 2019.
The instantaneous luminosity delivered to LHCb has steadily increased throughout the year, reaching 4 × 10³² cm⁻² s⁻¹ by the end of the run, already twice the original design luminosity for the experiment. Unlike the general-purpose detectors at the LHC, ATLAS and CMS, LHCb has been specifically designed for the optimal study of B hadrons, covering an angular range of 10–300 mrad from the beam axis (the forward region). This gives it different constraints concerning the luminosity. The track density increases in this region, so detectors suffer from higher occupancy, and are potentially more prone to radiation damage. In addition, because the experiment is tuned for the precise study of B-hadron decay vertices, too many overlapping events can confuse the picture. Finally, the experiment’s trigger has the special feature that it can select fully hadronic decays, rather than only relying on electron or muon signatures, and this trigger cannot handle too high an input rate without reducing its efficiency.
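The pile-up constraint can be made quantitative: the mean number of interactions per bunch crossing, μ, scales with the luminosity per colliding bunch pair. The figures below are illustrative assumptions for the 2011 run (an inelastic cross-section of about 60 mb and roughly 1300 colliding bunch pairs), not official LHCb numbers:

```latex
\mu = \frac{L\,\sigma_{\mathrm{inel}}}{n_{b}\,f_{\mathrm{rev}}}
\approx \frac{(4\times10^{32}\ \mathrm{cm^{-2}\,s^{-1}})\,(6\times10^{-26}\ \mathrm{cm^{2}})}{1300 \times 11\,245\ \mathrm{Hz}}
\approx 1.6 ,
```

so each crossing already contains more than one visible interaction on average, and any further increase in luminosity multiplies the overlapping vertices.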
As a result, the luminosity cannot be pushed much higher in the current experiment. This has the positive aspect that next year should be one of continuous operation with the experiment in stable conditions, but eventually it means that the time taken to double the data-set will become long. The goal for this year was 1 fb⁻¹ of integrated luminosity, which (thanks to the excellent performance of the LHC) was comfortably passed with a few weeks to spare; it represents more than 30 times as much data as last year. The expectation is to at least double that sample again in 2012, but for the longer term the collaboration plans to upgrade the experiment so that it can operate at higher luminosity and accumulate an order of magnitude more data. This will allow even higher precision in the search for new physics in the flavour sector.
The key to the upgrade will be to read out the full experiment at 40 MHz, the design bunch-crossing rate of the LHC, and to perform the trigger in software in a powerful computer farm. For this to succeed, collisions will indeed have to be provided by the LHC at 40 MHz at the time of the upgrade, rather than at the current rate of 20 MHz.
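The scale of the readout challenge is easy to estimate. Assuming an average event size of order 100 kB (an illustrative figure, not an LHCb specification), a trigger-free readout at the full crossing rate implies

```latex
40 \times 10^{6}\ \mathrm{s^{-1}} \times 100\ \mathrm{kB} \approx 4\ \mathrm{TB/s}
```

of data flowing into the computing farm, which is why the readout network and the software trigger dominate the upgrade design.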
The LHCb Collaboration submitted a Letter of Intent describing the proposed upgrade to the LHC Committee (LHCC) in March, and the committee endorsed the physics programme. A review panel looked into the proposed 40 MHz readout scheme and gave a positive report, so that the LHCC has now encouraged LHCb to proceed with preparing Technical Design Reports for the upgrade components. This will ensure the future of the experiment into the next decade.
Upgrades to the B-factory experiments are under consideration in Japan and Italy on the same timescale, and have a complementary reach for this physics. While they have strong performance for neutral decay products, they cannot compete with the enormous production rate at the LHC for charged modes, and the time-dependent study of Bs states will remain the province of LHCb. The upgrade will also allow the LHCb experiment to act as a general-purpose detector in the forward region, with the ability to search for exotic particles that might give long decay lengths, or to study in detail the influence of any new physics states that might be discovered at the LHC over the same period. The collaboration is now pressing ahead with the R&D necessary to ensure the upgrade’s success.
• For more information see Letter of Intent for the LHCb Upgrade, CERN-LHCC-2011-001.
In 1946, accelerator pioneer Robert Wilson laid the foundation for hadron therapy with his article in Radiology about the therapeutic interest of protons for treating cancer (CERN Courier December 2006 p24). Sixty-five years later, proton therapy has grown into a mainstream clinical modality. More than 60,000 patients worldwide have been treated since the establishment of the first hospital-based treatment centre in Loma Linda, California, in 1990 and various companies are now offering turn-key solutions for medical centres. Moreover, encouraging studies with other types of hadrons have resulted in the creation and planning of various dedicated facilities.
Hadron therapy is the epitome of a multidisciplinary and transnational venture: its full development requires the competences of physicists, physicians, radiobiologists, engineers and IT experts, as well as collaboration between research and industrial partners. The translational aspects are extremely relevant because the communities involved are traditionally separate and they have to learn to speak the same “language”. Ions that are considered “light” by physicists, such as carbon, are “heavy” for radiobiologists – and this is just one of many examples.
Although state-of-the-art techniques borrowed from particle accelerators and detectors are increasingly being used in the medical field for the early diagnosis and treatment of tumours and other diseases, medical doctors and physicists lack occasions to get together and discuss global strategies. The first Physics for Health (PHE) workshop was organized at CERN in 2010 precisely to develop synergies between these diverse communities. Preparations are now underway for a follow-up workshop, which will join forces with the International Conference on Translational Research in Radiation Oncology (ICTR). The ICTR-PHE 2012 conference will be held in Geneva on 27 February – 2 March. The aim is to catalyse and enhance further exchanges and interactions between experts in this multidisciplinary field where medicine, biology and physics intersect.
The advantages of hadron therapy
The clinical interest in hadron therapy resides in the fact that it delivers precision treatment of tumours, exploiting the characteristic shape of the Bragg curve for hadrons, i.e. the dose deposition as a function of the depth of matter traversed. While an X-ray beam deposits its dose gradually, attenuating roughly exponentially as it penetrates tissue, hadrons deposit almost all of their energy in a sharp peak – the Bragg peak – at the very end of their path.
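The Bragg peak is a direct consequence of the velocity dependence of ionization energy loss. In the Bethe description (simplified here to its leading behaviour), the stopping power for a particle of charge z rises steeply as the particle slows down:

```latex
-\frac{dE}{dx} \;\propto\; \frac{z^{2}}{\beta^{2}}\,
\ln\!\left(\frac{2 m_{e} c^{2} \beta^{2} \gamma^{2}}{I}\right),
```

so most of the energy is deposited in the last few millimetres before the particle stops, and the depth of the peak is set by the initial beam energy.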
The Bragg peak makes it possible to target a well defined cancerous region at a depth in the body that can be tuned by adjusting the energy of the incident particle beam, with reduced damage to the surrounding healthy tissues. The dose deposition is so sharp that new techniques had to be developed to treat the whole target. These fall under the categories of passive scattering, where one or more scatterers are used to spread the beam, and spot scanning, where a thin, pencil-like beam covers the target volume in 3D under the control of sweeping magnets coupled to energy variations.
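A single pristine Bragg peak is far narrower than most tumours, so treatment superposes many peaks at stepped beam energies. The Python sketch below illustrates this energy stacking with a purely schematic depth-dose shape – a toy model for intuition, not a clinical dose engine – choosing the layer weights by a simple least-squares fit for a flat dose across an assumed 10–14 cm deep target:

```python
import numpy as np

def toy_bragg(depth, rng):
    """Toy pristine Bragg curve: a low entrance plateau plus a sharp
    peak near the particle range `rng` (shape is illustrative only)."""
    plateau = 0.3 * (depth < rng)
    peak = np.exp(-((depth - rng) ** 2) / (2 * 0.15**2))
    return np.where(depth <= rng + 0.5, plateau + peak, 0.0)

depth = np.linspace(0.0, 20.0, 400)      # water-equivalent depth (cm)
ranges = np.linspace(10.0, 14.0, 15)     # ranges set by stepped beam energies
curves = np.stack([toy_bragg(depth, r) for r in ranges], axis=1)

# Weight each energy layer so that the summed dose is as flat as possible
# across the 10-14 cm target; a real planning system would also enforce
# non-negative weights and model scattering, straggling and biology.
in_target = (depth >= 10.0) & (depth <= 14.0)
weights, *_ = np.linalg.lstsq(curves[in_target], np.ones(in_target.sum()), rcond=None)
sobp = curves @ weights                  # the spread-out Bragg peak
```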
While the advantages of protons over photons are quantitative in terms of the amount and distribution of the delivered dose, several studies show evidence that carbon ions damage cancer cells in a way that the cells cannot repair. Carbon therapy may therefore be the optimal choice for tackling radio-resistant tumours; other light ions, such as helium, are also being investigated.
Although hadron therapy has largely shown its potential scientifically, the relative complexity of the required infrastructures limits its exploitation. “Hadron therapy is not a replacement for conventional radiotherapy or surgery, but is an additional tool in the toolbox of the oncologists,” confirms Robert Miller of the Mayo Clinic in the US, which has just embarked on the construction of two proton-therapy facilities. Indeed, hadron therapy is mostly used for treating tumours that are located close to vital organs that would be unacceptably damaged by X-rays, or in paediatric oncology, where quality of life and late side effects are a major concern.
At present, the world map of hadron therapy is divided into three distinct regions: Asia (mainly Japan), the US and Europe. In addition, three proton-therapy facilities are operational in Russia and one in South Africa.
Japan is the uncontested leader in treatment and clinical studies with carbon ions (CERN Courier June 2010 p22). By the end of 2010, its two major facilities – the Heavy-Ion Medical Accelerator in Chiba (HIMAC) and the Hyogo Ion Beam Medical Center – had treated more than 90% of the roughly 6600 patients irradiated with carbon ions worldwide. Clinical experience in the Japanese centres has not only demonstrated that carbon therapy is more effective than conventional photon radiotherapy on certain types of tumours but also that, with respect to both protons and photons, a significant reduction of the overall treatment time and the number of irradiation sessions can be achieved. In addition to the existing facilities, Japan is planning the construction of two more centres for carbon-ion therapy and two more for proton therapy. Following this lead, China and other countries in Asia have constructed or are planning several carbon-ion and proton-therapy facilities.
In the US alone more than 30,000 patients have already been treated with protons over the past 20 years, half of them at Loma Linda. There are currently six active proton facilities, three more under construction and a number of centres announced or planned in the near future. When hadron therapy was still confined to facilities operating within particle-physics laboratories, the US pioneered the use not only of protons but also of other ions: between 1957 and 1992, the Bevalac in Berkeley treated about 2500 cancer patients with particles including neon, carbon, silicon and argon. Today, there is no therapy centre delivering ions other than protons in America. Plans for the future include only an R&D facility in the San Francisco Bay area called SPARC, which would be a joint effort between Stanford/SLAC and Lawrence Berkeley National Laboratory/University of California San Francisco, and a carbon and helium facility at the Mayo Clinic.
Europe has 10 active proton facilities, with five more planned or under construction. Capitalizing on the experience gained from the carbon-therapy programme at GSI in Darmstadt and at Heidelberg, Europe is now witnessing the birth of “dual” centres that are capable of delivering beams of both protons and carbon ions. Two major centres were recently completed: Heidelberg Ion Therapy Centre (HIT), which started treatments at the end of 2009 and has irradiated about 500 patients with carbon ions to date; and the Centro Nazionale di Adroterapia Oncologica (CNAO) in Pavia, which started treating the first patient with protons in September and will launch the preclinical phase with carbon ions in the coming months. The MedAustron dual facility in Wiener Neustadt is currently under construction (CERN Courier October 2011 p33) and more centres of a similar nature are at different stages in planning and implementation in France and Germany.
HIT is the first facility in the world to be equipped with a gantry for carbon ions, i.e. a structure to rotate the particle beam and guide it to the patient at a chosen angle. Using the gantry, radio-oncologists can select the optimal beam direction to minimize the amount of healthy tissue traversed by the hadrons before reaching the tumour. They can also irradiate the target from multiple angles – a technique that, thanks to the overlapping beams, delivers to the target a total dose that is much higher than in the surrounding normal tissues. CNAO relies on an accelerator design implemented by the Terapia con Radiazioni Adroniche (TERA) Foundation based on the results of the Proton-Ion Medical Machine Study hosted at CERN from 1996 to 1999. The CNAO facility will deliver horizontal and vertical beams and a gantry will be added at a later stage.
Co-ordination and training
With the blossoming of carbon therapy in Europe, the European Network for Light Ion Therapy (ENLIGHT) considered that the time was right to leverage the experience at the various facilities, as well as the wealth of advances in beam delivery for conventional radiation therapy, and improve the technology with the aim of more effective and affordable cancer treatments with particles. While developing and optimizing the next-generation facilities remains the community’s primary goal, it is also of paramount importance that the existing centres collaborate intensively and that researchers, clinicians and patients have protocols to access these structures. Within this framework, the Union of Light Ion Centres in Europe (ULICE) project was launched in September 2009, funded by the European Commission.
ULICE is a collaboration of 20 partners led by Roberto Orecchia, scientific director of CNAO. The project involves all of the existing and planned European carbon-therapy facilities, including the two leading European companies in the hadron-therapy sector, IBA and Siemens. The participation of private companies ensures that specific issues related to possible future industrial production are addressed. IBA has designed and installed the majority of clinically operating proton-therapy facilities in the world and is developing innovative and more affordable single-room proton systems, as well as superconducting cyclotron solutions for carbon. Siemens Healthcare is one of the world’s largest providers of medical solutions and was the first company outside of Asia to enter the carbon-ion therapy market. The company delivered the complete patient environment at HIT and the treatment-planning system at CNAO.
ULICE is a four-year project built around three pillars: Joint Research Activities, which focus on the development of instruments and protocols; Networking, to increase co-operation between facilities and research communities wanting to work with the research infrastructure; and Transnational Access, which aims to allow researchers to use the facilities and to enable radiobiological and physics experiments to take place.
At the recent mid-term review meeting in Marburg, Richard Pötter, a radiation oncologist at the University of Vienna and co-ordinator of the Joint Research Activities of ULICE, confirmed that the first achievements of the research work are extremely encouraging. The existing clinical-study protocols worldwide have been reviewed to start defining common guidelines for patient selection. Specific studies have focused on setting up appropriate structures for a comprehensive and prospective multicentre clinical-research programme and the development of a dosimetry protocol. Important steps forward have also been made in defining uniform methods and concepts for irradiation doses and tumour volumes in radio-oncology, to create a common language not only within the consortium but across all of the communities involved in different forms of radiotherapy. The ULICE consortium is working hard to develop new concepts for more compact and affordable gantries: the HIT gantry is a steel giant of 25 m in length and 13 m in height, and alternative designs are clearly needed.
Within the activities of Transnational Access, the ULICE partners are examining the complex task of setting up a structure to allow access to the existing European facilities for patients, clinical and experimental research, as well as for clinical training and education. Japan is once again an example to follow, with the International Open Laboratory (IOL) programme of the National Institute of Radiological Sciences launched in 2008 to grant beam time at HIMAC to external researchers. There are currently four active IOLs with Columbia University, Colorado State University, the University of Sussex, Karolinska Institutet and GSI. As of summer 2011, researchers from eligible countries can apply to take part in research activities or submit experimental proposals in the clinical, radiobiological and physical fields at the University Hospital of Heidelberg and at CNAO. In the words of Jürgen Debus, medical director of the Department of Radiation Oncology and Radiation Therapy of Heidelberg University Hospital and co-ordinator of the ULICE Transnational Access: “A technology has worth in the medical field only if it is spread and if everyone can participate in its evolution with their experience and feedback.” Applications for participation in the Transnational Access programme will be reviewed by a multicentre scientific committee and successful applicants will be granted free access thanks to the European Union Transnational Access funding. In the same framework, ULICE is also developing an international web-based documentation and data-management system, which will be an essential tool for transnational and multicentre clinical studies in particle therapy.
In the coming years, the project will focus on expanding and consolidating the transnational access and on developing innovative gantry designs. The support of ENLIGHT will be instrumental to dissemination, communication and networking, which will help it reach out to the widest possible community.
ENLIGHT also actively supports the creation of the next generation of the necessary highly specialized experts through the Particle Training Network for European Radiotherapy (PARTNER), funded by the European Commission under the Marie Curie Initial Training Network programme (CERN Courier March 2010 p27). Both ENLIGHT and PARTNER are co-ordinated by Manjit Dosanjh at CERN. PARTNER is offering research and training opportunities in leading European institutions and companies to 25 young researchers who are mostly involved in PhD studies at the same time. At the recent annual meeting in Marburg, the presentations of the individual projects displayed clearly the variety of topics being addressed and the quality of the research. PARTNER is now in its fourth and final year, and in a few months it will be time to review the results that have been achieved.
The ALICE detector is optimized to investigate collisions of heavy ions – in practice lead–lead (Pb–Pb) – in which the production of quark–gluon plasma (QGP), a new state of matter, will provide invaluable insight into the “quark–gluon coloured world”. Many aspects of this new state make particle identification a necessity, especially in the study of strangeness enhancement and heavy-flavour production. One technique developed for ALICE is based on relatively “low-tech” detectors, considering the many areas of frontier technology employed at the LHC, but its performance is proving surprisingly good.
Time-of-flight (TOF) is one of several methods that ALICE uses to identify particles. In the mid range of momenta (0.5–2.5 GeV/c) the TOF array shows an excellent performance in separating pions from kaons. The system is based on the multigap resistive plate chambers (MRPCs), first developed in 1996. When built with small gas gaps, this type of detector shows exceptionally good intrinsic time resolution, below 50 ps – and full efficiency.
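The quoted momentum range follows from simple kinematics: two particles with the same momentum but different masses arrive at the TOF wall at slightly different times. A minimal Python sketch (the 3.7 m flight path is an assumed, typical value for the ALICE barrel, used only for illustration):

```python
import math

M_PI, M_K = 0.13957, 0.49368        # pi and K masses in GeV/c^2 (PDG values)
C = 0.299792458                     # speed of light in m/ns

def flight_time(p, m, path):
    """Time of flight in ns for momentum p (GeV/c) over path (m)."""
    beta = p / math.hypot(p, m)     # beta = p/E with E = sqrt(p^2 + m^2)
    return path / (beta * C)

PATH = 3.7                          # m, assumed flight path to the TOF wall
for p in (0.5, 1.0, 2.5):
    dt_ps = (flight_time(p, M_K, PATH) - flight_time(p, M_PI, PATH)) * 1e3
    print(f"p = {p} GeV/c: pi/K arrival-time difference = {dt_ps:.0f} ps")
```

Under these assumptions the π/K separation shrinks from a few nanoseconds at 0.5 GeV/c to roughly 200 ps at 2.5 GeV/c – still more than twice the overall resolution quoted below, which is why the separation remains clean across the mid range of momenta.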
The ALICE TOF is made of 1593 MRPC strips, each 120 cm long and built as a double stack with a total of 10 gas gaps, each 250 μm wide. The unusual feature of the device, however, is that even though the time resolution is at the cutting edge, the technology itself is relatively low-tech.
The resistive plates are made out of thin (400–550 μm thick) sheets of “soda-lime” glass (window glass) and fishing line is used to create the 250 μm spacing between the sheets. The simplicity of the construction and the relatively low cost allowed the collaboration to build a very large area TOF (around 140 m²) that covers the full ALICE barrel region, with 152 928 read-out pads.
Full exploitation of the extraordinary time resolution of the MRPC requires a suitable electronics chain. For this purpose, the “NINO” chip was developed in collaboration with CERN’s microelectronics group. The chip consists of an ultrafast amplifier and discriminator, which also provide charge information (needed for time-slewing corrections) by means of the time-over-threshold technique.
In addition to its extremely precise time response, the MRPC has low noise (a singles rate of 0.06 Hz/cm²), which allows the TOF to be used as a trigger device both for cosmic rays and for collider physics. Another advantage is that all of the MRPC modules are operated at the same voltage and all of the thresholds of the front-end electronics are the same. This is in contrast to TOF arrays based on scintillators, where the high voltage of each phototube has to be carefully tuned.
At present the global time resolution achieved in Pb–Pb collisions is 86 ps, including fluctuations on the time-zero of the event and the track length (see figure). This value matches the design goals and provides a fundamental contribution to the particle identification analysis, which is the prominent feature of the ALICE experiment. The time resolution is still being improved and the collaboration is highly motivated to exploit all the possibilities of this extremely precise and stable detector.
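Schematically, the global figure combines the intrinsic MRPC timing with the other contributions in quadrature (the breakdown into terms here is illustrative):

```latex
\sigma_{\mathrm{PID}} =
\sqrt{\sigma_{\mathrm{MRPC}}^{2} + \sigma_{\mathrm{elec}}^{2} + \sigma_{t_{0}}^{2} + \sigma_{\mathrm{track}}^{2}}
\approx 86\ \mathrm{ps},
```

which is why better determination of the event time-zero and of the track length feeds directly into the particle-identification performance.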
Meanwhile, the MRPC has revolutionized TOF technology and many research laboratories and experiments have quickly followed ALICE’s lead. These include the HARP experiment at CERN, the STAR experiment at the Relativistic Heavy Ion Collider and the FOPI experiment at GSI.
The ALICE TOF was built by the University and Sezione INFN of Bologna, the University of Salerno, the Institute for Theoretical and Experimental Physics in Moscow and the Department of Physics at Kangnung National University.
Regular readers of CERN Courier are well aware that the LHC depends on some 10,000 magnets made of type II superconductor, which remains superconductive in high magnetic fields. Many will also recall that the era of superconducting magnets began 50 years ago when John “Gene” Kunzler and colleagues at Bell Telephone Laboratories showed that a primitive Nb3Sn wire could carry more than 1000 A/mm² in a field of 8.8 T. What is much less well known is that the path to type II superconductors had already been demonstrated a quarter of a century earlier by Lev Shubnikov, Vladimir Khotkevich, Georgy Shepelev and Yuri Rjabinin in Kharkov (Shubnikov et al. 1936a, 1936b and 1937). So how was it that this understanding was lost for 25 years and rediscovered only by accident in 1961?
From the beginning
The huge value of superconducting wires for high-field magnets was clearly understood by Heike Kamerlingh Onnes, the discoverer of superconductivity. In his report submitted to the 3rd International Congress of Refrigeration in Chicago in 1913, he described his design for a 10 T superconducting solenoid. He had recently passed almost 500 A/mm² through a lead wire, although his first attempt at a silk-insulated lead wire was not so successful, no doubt because of some “bad places in the wire” (Kamerlingh Onnes 1913). Sadly, just one year later, he found that pure-metal superconductors lose their superconductivity at a critical magnetic field, Hc, that is much less than 0.1 T. His interest then languished when the First World War intervened.
Work restarted in the 1920s, by which time laboratories in Leiden, Toronto, Oxford and Kharkov all had liquid helium, and work on superconductivity in metal alloys was taken up again. The initial results were complex because the loss of diamagnetism occurred at fields much lower than those at which resistance was restored. In the best cases, traces of superconductivity in transport measurements were seen at almost 2 T. Kurt Mendelssohn in Oxford put forward the “sponge hypothesis”, which proposed that the small supercurrent densities observed at high fields were associated with a fine, filamentary network of tiny relative volume (Mendelssohn 1935). Because most samples had poorly controlled homogeneity and cold-work state, metallurgical inhomogeneity was, indeed, a contributor to the large variation in properties.
This plausible but fundamentally and decisively wrong hypothesis was soon to receive its rebuttal from experiments on lead–indium and lead–thallium alloys made by Shubnikov’s group in Kharkov. This seminal work of 1936 was characterized by its use of well annealed single crystals, which in principle completely invalidated the premise of the “sponge model”. The experiments showed three main features:
1. There is a critical alloy concentration, xc, below which alloys behave as pure superconductors with a full Meissner effect and abrupt loss of superconductivity at a critical field, Hc (figure 1a).
2. Increasing the alloy concentration beyond xc, for example from 0.8 to 2.5% by weight of Tl in figure 1b, drastically changes the equilibrium magnetic properties, separating the loss of superconductivity, which occurs at an increasingly higher critical field Hc2, from the onset of flux penetration at the lower critical field Hc1.
3. With increasing alloy concentration, Hc1 becomes smaller, while Hc2 grows larger (figure 2). Shubnikov realized, however, that the energy of the superconducting state in his well annealed, almost reversible (i.e. low current density) single crystals was almost independent of alloy content.
Under normal circumstances, the high quality of the Kharkov crystals, their evident homogeneity and, above all, the finding that their superconductivity must have been a bulk effect incapable of being explained by a small filament network should have undercut the sponge hypothesis and instigated much greater attention to the thermodynamic properties of the new type II superconducting state.
Regrettably, this discovery occurred against a backdrop of bitter conflict and human tragedy. Shubnikov’s friend Lev Landau, who was “held captive” by the “Mendelssohn sponge”, did not recognize this discovery either in 1936 or in 1950, when he and Vitaly Ginzburg created the phenomenological theory of superconductivity that, as Alexey Abrikosov later found, provided a beautiful description not just of the type I superconductors that they considered but also of the type II superconductivity discovered by Shubnikov. It is clear that their parameter κ describes perfectly the transition from type I to type II behaviour at the critical value 1/√2. However, Landau still did not recognize the discovery by Shubnikov’s group, even though their results and the published paper were presented by Martin Ruhemann at the 6th International Congress of Refrigeration in The Hague in 1936. For reasons that appear quite mystifying in 2011, none of the scientists present either supported or continued the Kharkov work, even though a number of contemporary references cite it.
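In modern language, the Kharkov measurements map directly onto the Ginzburg–Landau description. Writing κ = λ/ξ for the ratio of the penetration depth to the coherence length, type II behaviour sets in for κ > 1/√2, and in the large-κ limit the two critical fields are related to the thermodynamic field Hc by:

```latex
H_{c2} = \sqrt{2}\,\kappa\,H_{c}, \qquad
H_{c1} \approx \frac{H_{c}\,\ln\kappa}{\sqrt{2}\,\kappa} \quad (\kappa \gg 1).
```

Alloying shortens the electron mean free path and hence ξ, driving κ up: Hc2 rises and Hc1 falls while Hc – and with it the condensation energy – stays almost unchanged, exactly the trends that Shubnikov’s group observed.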
The real tragedy occurred a year later. Shubnikov, the director of the Low Temperature Laboratory in Kharkov, who had come under suspicion and been confined to the Soviet Union in 1936, was arrested in 1937 on charges of spying (the laboratory was well connected to Western laboratories, Shubnikov having spent several years in Leiden). He was summarily shot dead without any legal process. The following year, Landau was arrested and held in prison for a year, being released only on the advocacy of Pyotr Kapitza. The Soviet Union was experiencing a difficult time.
The dormant period
The results of Shubnikov and his co-workers remained generally unknown for another 25 years, even though Abrikosov drew attention to the work in the 1950s when he predicted the vortex state in high κ (>> 1/√2) superconductors. In his 2003 Nobel address describing his work explaining type II superconductivity, Abrikosov said: “I compared the theoretical predictions about the magnetization curves with the experimental results obtained by Lev Shubnikov and his associates on Pb–Tl alloys in 1937, and there was a very good fit” (Abrikosov 2004). However, as he has pointed out, his paper came out just as the Bardeen–Cooper–Schrieffer theory of superconductivity was published and all interest became focused on the superconducting mechanism, rather than on what some regarded as an esoteric vortex state. So work on high-field magnets lay dormant until the totally unexpected discovery by Kunzler’s group, which connected all of the disconnected sightings of high-field and high-current-density superconductivity that had been impossible to explain by the Mendelssohn sponge. Ted Berlincourt has written fine recollections of this fertile period in the 1950s when finally things began to gel (Berlincourt 1987).
After 1961, Shubnikov’s seminal role in this extraordinary advance in science and technology was finally recognized, in particular at the International Conference on the Science of Superconductivity held in Hamilton, New York, in 1963, where several speakers praised the research. The conference chair, John Bardeen, and the secretary, Roland Schmitt, stated formally in the proceedings: “It should be noted that our theoretical understanding of type II superconductors is due mainly to Landau, Ginsburg, Abrikosov and Gor’kov, and that the first definitive experiments were carried out as early as 1937 by Shubnikov” (Bardeen and Schmitt 1964). Soon after, future Nobel laureate Pierre-Gilles de Gennes introduced the designation the “Shubnikov phase” for the mixed-vortex state that is stable between Hc1 and Hc2 (de Gennes 1966). It is also the case that the first doctoral dissertation on type II superconductors was that written by G D Shepelev under Shubnikov’s guidance.
Finally, we may note that the long, 25-year period of 1936–1961, in which the sponge hypothesis held sway, was also a period in which many new superconductors – such as NbN and Nb3Sn – were discovered. Like the more recent discoveries of cuprates, organics, MgB2 and the new iron-based systems, all are type II superconductors. What might have been if only the poignant politics of the Soviet 1930s had not so tragically entwined the studies and fate of Shubnikov’s group in Kharkov?
On 30 September 2011, Helen Edwards dumped the beam and terminated the ramp for the last time on what has for the past 28 years been one of the most productive physics machines in the world. The world’s first superconducting particle accelerator represented a major advance in both technology and physics reach. The Tevatron’s place in history is secure. During its life it provided fixed-target beams as well as colliding beams that resulted in numerous discoveries, including the first observations of the τ neutrino and top quark.
The concept of a superconducting accelerator predates the establishment of the National Accelerator Laboratory (NAL), later renamed Fermi National Accelerator Laboratory in 1974. In 1967 NAL’s first director, Robert R Wilson, discussed the possibility of using superconducting technology soon after the new laboratory moved into temporary offices in Oakbrook, Illinois. He recognized that it was premature to begin developing the concept of a new machine before construction of the planned 500 GeV accelerator at NAL had even begun. Nevertheless, superconducting technology held the promise of higher energies and lower operating costs. Not only would a superconducting accelerator in the Main Ring tunnel double the energy of the fixed-target beams, it would also enable collisions between beams. The Intersecting Storage Rings at CERN had at that stage already proved the feasibility of colliding proton beams at 62 GeV in two conventional storage rings. It would be a huge leap to go from conventional accelerator technology with one beam at NAL to a superconducting accelerator with colliding beams, but the thought was too tempting to dismiss completely.
The superconducting challenge
The Main Ring was commissioned in 1972. It was completed under budget and on schedule even though many difficult problems were encountered – and then resolved – during construction. The laboratory’s staff had demonstrated a desire to persevere and clearly had the talent to succeed in the face of tight budgets and enormous technical challenges. The Main Ring extended the energy reach by more than a factor of five over existing accelerators. The first 200 GeV beam to the fixed-target programme was a major accomplishment. Eventually, beams at 400 GeV with 3 × 10¹³ protons per pulse were delivered and split between up to 15 experiments, resulting in many physics results, including the discovery of the Υ in 1977.
Once the Main Ring was commissioned the laboratory answered the call of the superconducting machine, initially known as the Energy Doubler/Saver because Wilson’s vision was to reach an energy of 1000 GeV while also saving the cost of acceleration to lower energies. In 1973 work began in earnest to develop a superconducting accelerator magnet. Superconducting magnets had been built and used since the late 1950s and early 1960s – their primary use in particle physics being in bubble chambers. However, a new accelerator in the Main Ring tunnel would require approximately 1000 high-quality dipoles and quadrupoles: a reproducible magnet of accelerator quality would prove to be a major challenge.
Alvin Tollestrup played a key role in the effort to design such a magnet. After testing short magnets with monolithic superconductor, a design was chosen based on a warm-iron, collared coil of the niobium-titanium multifilament-strand cable developed at the Rutherford Laboratory in the UK. The first 20-ft (6.1-m) magnet was ready for tests in 1974 and by 1977 full-sized magnets were being produced and tested. However, many of these would be relegated to beam lines because further design improvements were still being implemented while magnet testing continued on test stands and in the beam lines.
An active quench-protection system had been developed and was exercised extensively during the early magnet testing phase – in which the people conducting the tests were ensconced behind the “dewar deflector”. This experience led to a robust system that has worked well over the years.
Towards construction
In 1979, energy-deposition studies were carried out to measure the quench behaviour of two Energy Doubler dipoles in 350 GeV and 400 GeV beams extracted from the Main Ring. These measurements provided an early opportunity to use the MARS Monte Carlo shower-simulation software that Nikolai Mokhov wrote at the Institute of High Energy Physics, Protvino, in 1974 and which is now widely used for many accelerator and beam-related applications. Mokhov began visiting Fermilab with MARS in 1979. He helped to collect the data from the tests and used his software to determine that a superconducting collider should be feasible. A fixed-target machine was more uncertain; the extraction system would have to have better loss properties than that of the Main Ring. Helen Edwards and Mike Harrison came to the rescue with a modified design that moved the electrostatic extraction septa halfway round the ring from the extraction point, while Curtis Crawford developed a way to make the wire planes in the electrostatic septa straighter, so as to reduce losses.
Construction of the superconducting ring was authorized that same year and a final design for the magnets was in place by 1980. Because Wilson had anticipated building a second accelerator in the Main Ring tunnel, he left space underneath the Main Ring and designed its magnet stands to allow the magnets of a new machine to slip through them. The first step was to install magnets in one sector for a test in 1982. Concurrently, a large cryogenic refrigeration system was being built to provide the necessary cooling for the new accelerator. The cryogenic plant included 24 satellite refrigerators located in the service buildings that were spaced around the Main Ring tunnel. A large helium-liquefaction plant fed helium to the satellite refrigerators.
The completed accelerator was ready to be commissioned in 1983. It was a hectic and exciting time. Many challenges had been encountered and overcome, but many of those working on the project were still sceptical that it would succeed. Nevertheless, they made an incredible effort that brought the first superconducting synchrotron to life.
Beam was injected for the first time on 2 June 1983. It took less than a day to make the first turn all of the way round. On 3 July the Energy Doubler reached 512 GeV. Resonant extraction was established in August and the fixed-target programme at 400 GeV was underway in the autumn. By 1984 the energy had reached 800 GeV and the Energy Doubler was renamed the Tevatron. Five experiments took beam during the initial 400 GeV fixed-target run.
Construction of an antiproton source began in 1981, led by John Peoples. Antiprotons were stochastically cooled in the source using the technique that Simon van der Meer had first proposed at CERN. Work also began to construct a collision hall in the BØ straight section that would accommodate the proposed CDF detector. The antiproton source was completed in 1985 and in October the first proton–antiproton collisions were observed in a partially complete CDF detector. The first collider-physics run began in 1987 using only the CDF detector. DØ came online in 1992 with a detector in the DØ straight section.
The Main Ring was still being used as an injector during the early collider runs, so it had to be accommodated in the collision halls. The CDF experiment had a Main Ring bypass that passed over the top of the detector, while the DØ collaboration had to learn to live with a Main Ring beam that went through the detector. In 1999 a new 150 GeV synchrotron, the Main Injector, was completed that replaced the Main Ring and provided more protons for both the collider and antiproton production. Built in a separate enclosure, it remedied the bypass problem. It would eventually enable simultaneous fixed-target and collider running, which had alternated until 1999.
In 1989 US President Bush awarded the National Medal of Technology to Helen Edwards, Rich Orr, Dick Lundy and Alvin Tollestrup for their work in building the Tevatron. Not only were they instrumental in solving the technical problems associated with building the forefront machine but they had also succeeded in maintaining an enthusiastic technical team in the face of problems that often seemed insurmountable.
High luminosities
The design luminosity for the early running of the collider programme was 1 × 10³⁰ cm⁻² s⁻¹ at 1800 GeV. During the first physics run in 1988 and 1989, 1.6 × 10³⁰ cm⁻² s⁻¹ was achieved. By the end of Run I in 1996, initial luminosities were typically 1.6 × 10³¹ cm⁻² s⁻¹ – a factor of 16 higher than the initial design luminosity. By this time a total integrated luminosity of 180 pb⁻¹ had been delivered to the two detectors – and the top quark had been discovered.
By 2001, when Run II began, many improvements to the accelerator complex had been made, including the addition of electrostatic separators to create helical orbits that prevented collisions at locations other than BØ and DØ, where the two detectors were situated. Antiproton cooling systems were also improved and the linac was upgraded from 200 MeV to 400 MeV to improve injection into the 8 GeV booster. Cold compressors were also added to the satellite refrigerators in 1993 to lower the operating temperature by 0.5 K, making it possible to raise the beam energy to 980 GeV. However, the new compressors were not used until the beginning of Run II in 2001.
The Main Injector had a larger aperture and could deliver more protons with higher efficiency. When Run II began, this enabled the delivery of more protons to the antiproton target and better transfer efficiencies for protons and antiprotons. There were also improvements to the Antiproton Source and the incorporation of a new permanent magnet ring, the Recycler, in the Main Injector tunnel. Initially meant to recycle antiprotons, it was never used for this purpose; instead it was used to stash and cool antiprotons delivered from the antiproton source.
Initial luminosities at the beginning of Run II were in the region of 2 × 10³¹ cm⁻² s⁻¹. A luminosity improvement “campaign” was initiated and implemented concurrently with the physics programme. Improvements continued to be made over most of the Run II period. Significant improvements were made to the antiproton source, resulting in an increase in the stacking rate from 7 × 10¹⁰ to 26 × 10¹⁰ antiprotons per hour. The Tevatron lattice was improved and magnets were reshimmed to correct problems with the “smart bolts”. Slip stacking was developed in the Main Injector, which resulted in more protons on the antiproton target.
However, the largest single improvement made during Run II was the development and implementation of electron cooling in the Recycler. This system, developed under the leadership of Sergei Nagaitsev, was commissioned in 2005 and resulted in smaller longitudinal emittances. Using the Recycler to stash and cool also increased the stacking rate in the antiproton source, because antiprotons could be off-loaded to the Recycler often, making the cooling more efficient. The net increase from electron cooling was more than a factor of two. Other improvements included a reduction of the β* in the two interaction regions and there was a vigorous programme to improve the reliability of the entire complex. Altogether the improvements resulted in initial luminosities a factor of 350 better than the original design.
During Run II, the Fermilab accelerator complex consisted of seven accelerators that together delivered beam for the collider programme, two neutrino beams and one test beam. It has performed magnificently over the years. All but the Tevatron will now continue operating to carry Fermilab into the future. Nevertheless, the Tevatron defined the laboratory for 30 years. It has been an incredible experience for those of us fortunate enough to work on it.
In a seminal paper published in June 1961 A P Banford and G H Stafford described how a future superconducting proton linear accelerator could run continuously, instead of at the 1% duty cycle of the 50 MeV proton accelerator that was operating at the time at the Rutherford High Energy Laboratory in the UK. The basic argument was that, because ohmic losses in the accelerating cavity walls increase as the square of the accelerating voltage, copper cavities become uneconomical when the demand for high continuous-wave (CW) voltage grows with particle energy. It is here that superconductivity comes to the rescue.
The RF surface resistance of a superconductor is five orders of magnitude less than that of copper. The quality factor (Q₀) of a superconducting resonator is typically in the billions (i.e. a billion oscillations before the resonator energy dissipates). Even after accounting for the refrigerator power needed, the net gain in overall power remains a factor of several hundred. It became clear that the higher-voltage, shorter superconducting structures can also reduce the disruptive effect that accelerating cavities have on the beam, resulting in better beam quality, higher maximum current and less beam halo (less activation). By virtue of the low losses in the walls, a superconducting RF (SRF) cavity design can also afford a large beam aperture, which further reduces beam disruption and beam halo.
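The economics can be sketched in one line. For a cavity voltage Vc, the wall dissipation is set by the geometry factor R/Q and the quality factor, so the saving scales directly with Q₀. Taking Q₀ ~ 10¹⁰ for niobium against ~10⁴–10⁵ for copper, and of order 10³ W of wall-plug power per watt removed at 2 K (all illustrative figures), the net advantage is:

```latex
P_{\mathrm{diss}} = \frac{V_{c}^{2}}{(R/Q)\,Q_{0}}, \qquad
\mathrm{net\ gain} \sim \frac{Q_{0}^{\mathrm{Nb}}/Q_{0}^{\mathrm{Cu}}}{10^{3}}
\sim 10^{2}\,\text{–}\,10^{3},
```

consistent with the factor of several hundred quoted above.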
It took nearly 40 years for the early dream of that high duty-factor, high-intensity proton linear accelerator to be fulfilled. Today, the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory runs at 6% duty cycle with 88 m of superconducting cavities providing 1 MW of beam power with a 1 GeV, 10 mA beam. The success of the SNS has stimulated the construction of the European Spallation Source (ESS), with 5 MW of beam power, to be completed in 2016.
Pioneering work
In the early 1960s, Stanford University, under the leadership of William Fairbank, pioneered the development of superconducting cavities for electron accelerators. By 1968 they had achieved a Q value of more than 10¹⁰ at 1.7 K for an 8.5 GHz TM₀₁₀-mode single-cell pill-box resonator built of solid niobium. The first niobium cavity also demonstrated the exciting prospect of gradients of more than 30 MV/m.
However, with the more practical, lower frequency (1.3 GHz) accelerator structures that were built in the 1970s, the performance level fell to 2–4 MV/m. The primary roadblock was multipacting – the spontaneous resonant production of electrons. By the mid-1980s, the physics of multipacting was understood. It turned out that the limiting field-levels scale with the RF frequency, so the high-frequency cavities of the 1960s had been fortuitously exempt.
The next three decades saw several layers of gradient problems being uncovered, the underlying physics understood and solutions developed. Cavity performance then ratcheted up at a steady pace, as did accelerator applications. The development of the anti-multipacting, spherical (and elliptical) cavities was a breakthrough moment. With multipacting overcome, thermal breakdown of superconductivity became the next limiting mechanism, at 4–6 MV/m. Local heating at surface imperfections led to thermal runaway and a quench of superconductivity. The cure was to switch to niobium of high-purity – high residual-resistance ratio (RRR) niobium. With the co-operation of industry, RRR improved by an order of magnitude and cavity gradients rose on average by a factor of three. Another cure for thermal breakdown was to sputter a micron-thin film of niobium onto a copper cavity-substrate, which also had the benefit of reduced material costs – especially for the cavernous, low-frequency (0.35 GHz) cavities.
With the corresponding rise in surface electric fields, electron emission became the next limit to gradients, at 10–15 MV/m. Global R&D revealed microparticle contamination to be the dominant source of field emission, so the solution demanded better preparation techniques, such as powerful surface scrubbing with high-pressure (100 atm) water and assembly in Class 100 clean rooms. With these advances, cavity gradients climbed to 20 MV/m.
Above 20 MV/m, however, RF losses began mysteriously to rise exponentially with the field. The physics of such losses is still under investigation but pragmatic countermeasures are already in place. Electro-polishing has replaced the standard chemical etching to obtain a smoother surface, followed by mild baking at 120 °C for two days. There is now excellent prognosis for reaching 35–40 MV/m. Many nine-cell, 1 m-long niobium structures have demonstrated performance above 40 MV/m in qualification tests, while basic research continues to push towards the theoretical limit of 55 MV/m.
SRF takes off
As gradients improved steadily from the mid-1980s, RF superconductivity grew into a key technology for accelerators at the energy and luminosity frontiers, as well as at the cutting edge of low- and medium-energy nuclear physics, nuclear astrophysics and basic materials science. SRF cavities are now routinely accelerating electron, proton and heavy-ion beams in a variety of frontier accelerators.
It was in the early 1990s that SRF took off to push the energy frontier in storage rings, with TRISTAN at KEK and HERA at DESY. In the late 1990s the energy of the Large Electron–Positron collider at CERN doubled, with 500 m of superconducting cavities built by sputtering niobium on copper. Nb-Cu superconducting cavities now meet the voltage and high current demands of the LHC at CERN. At the luminosity frontier, high-current, high-luminosity electron–positron storage rings have operated and continue to operate with SRF cavities for copious production of c and b quarks at the Cornell Electron Storage Ring in the US, the KEKB facility in Japan and the Beijing Electron Positron Collider in China.
At the cutting edge of nuclear physics, Jefferson Lab has installed a 1 GeV superconducting linac to achieve 6.5 GeV beam by re-circulation. The laboratory’s Continuous Electron Beam Accelerator Facility (CEBAF) has been operating for 15 years with more than 150 m of SRF cavities, the largest number in operation at one facility. Looking ahead, Jefferson Lab has also developed 20 MV/m cavities to upgrade CEBAF’s energy from 6.5 GeV to 12 GeV.
For heavy ions, the CW superconducting accelerator provides an optimized array of independently phased resonators, to accelerate a variety of ion species with different velocities and charge states. The Argonne Tandem Linac Accelerator System at Argonne National Laboratory and the ALPI machine at the Legnaro National Laboratory have been operating for several decades. TRIUMF has expanded its radioactive-beam facility ISAC by adding a superconducting heavy-ion linac to supply more than 40 MV. Heavy-ion linacs in New Delhi and Mumbai have also come online. More than 250 superconducting resonators are currently operating around the world. New radioisotope beam (RIB) facilities are under construction with the SPIRAL2 project at the GANIL laboratory, HIE-ISOLDE at CERN and the ReA3 re-accelerator at Michigan State University (MSU).
Electron storage rings working as light sources are having an enormous impact on materials and biological science. SRF accelerating systems have been used in upgrading storage-ring light sources, such as the Cornell High Energy Synchrotron Source and the Taiwan Light Source. The Canadian Light Source, DIAMOND in the UK, the Shanghai Light Source in China and SOLEIL in France also operate with SRF; the National Synchrotron Light Source II at Brookhaven and the Pohang Light Source in Korea are planning to use SRF cavities. The Swiss Light Source at PSI and ELETTRA in Trieste have both installed third-harmonic superconducting cavities to improve beam lifetime and stability.
Free-electron lasers (FELs) based on SRF linacs provide tunable, coherent radiation over a wide range of wavelengths. The Jefferson Lab FEL generates 14 kW of CW laser power in the infrared, with energy recovery by recirculating nearly 1 MW of beam power. This is an important milestone toward the use of energy-recovery linacs (ERLs) for future light sources and electron-cooling applications. SRF-based FELs have operated at the Japan Atomic Energy Research Institute and at the ELBE project in Germany. FLASH at DESY is a short-wavelength FEL based on the self-amplified stimulated emission (SASE) principle, delivering 6 nm wavelength light. Its SRF linac uses more than 60 cavities, each 1 m long to accelerate a 1 GeV electron beam. A variety of innovative linac-based light sources are also under study for FELs and ERLs to deliver orders of magnitude higher brightness and optical beam quality. High-intensity beams for ERLs have spurred explorations for electron-cooling applications and for electron-ion colliders, for example to upgrade the Relativistic Heavy Ion Collider at Brookhaven.
With many exciting prospects on the horizon, the world SRF community has expanded to include many new laboratories where extensive SRF facilities have been installed. In all, more than 1 km of superconducting cavities have been installed worldwide to provide more than 7 GeV of acceleration. The next big jump, of 16 GeV, is already under construction in the largest SRF application yet: a superconducting linac for the European XFEL at DESY. It will be based on nearly 700 niobium cavities, each about 1 m long, operating at gradients of more than 22 MV/m – which indeed adds up to roughly 16 GV of acceleration. When completed in 2016, it will provide X-ray beams of unprecedented brilliance at sub-nanometre (ångström) wavelengths.
A new Facility for Rare Isotope Beams (FRIB) is underway at MSU to allow the study of exotic isotopes related to stellar evolution and the formation of elements in the cosmos. FRIB will be based on more than 330 low-velocity resonators, doubling the number currently in operation.
The most ambitious future application under study is for the International Linear Collider (ILC), a 500 GeV superconducting linear collider. It will require 16 km of superconducting cavities operating at gradients of 31.5 MV/m, and intense research is underway to achieve a high production yield at high gradients of 30–40 MV/m. New vendors for niobium, for cavities and for associated components are being developed around the world, and improved techniques for performance reliability and cost reduction are emerging. New assembly and test facilities are coming together at DESY, Saclay, KEK and Fermilab; the experience of the European XFEL at DESY will be a key stepping stone. Future ILC energy upgrades toward 1 TeV would benefit from even higher gradients, pushing niobium towards its ultimate potential of 55 MV/m and, beyond that, opening the door to new materials with gradients of 100 MV/m. Nb₃Sn is the most promising candidate offering the prospect of such gradients, but substantial research is needed to verify this potential and to guide the development necessary to harness it.
With the success of the SNS and the upcoming ESS, high-intensity proton linacs are likely to fulfil future needs in a variety of arenas: upgrading the injector chains of Fermilab’s Tevatron (Project X) and CERN’s LHC (the SPL); transmutation for the treatment of radioactive nuclear waste; nuclear-energy production using thorium fuel; high-intensity neutrino beamlines; high-intensity muon sources for neutrino factories based on muon storage rings; and, eventually, a muon collider at the multi-tera-electron-volt energy scale. All of these far-future prospects will, of course, depend on the success of ongoing efforts.
The 2011 International SRF conference in Chicago hosted more than 350 SRF enthusiasts. We can remain confident that the RF superconductivity community has both the creativity and determination to face the upcoming challenges and successfully bring these exciting prospects to fruition.
The Japanese High-Energy Accelerator Research Organization, or KEK, was established (originally as the National Laboratory for High Energy Physics) in Tsukuba in 1971, around the same time that superconductivity – discovered 60 years earlier – was just beginning to find large-scale applications in physics. The laboratory became involved in superconducting technology almost from the start and KEK has continued to push frontiers in the field as its research programme has evolved. Two pioneering scientists, the late Hiromi Hirabayashi and Yuzo Kojima, deserve particular mention for their leading roles in starting research and development at KEK in the mid-1970s – on superconducting magnets and RF superconductivity for accelerator science, respectively.
Superconducting-magnet technology was first put to practical use at KEK in a secondary-particle beamline at the 12 GeV proton synchrotron. Two cosθ dipole magnets and one superconducting septum magnet formed major components in the beamline, while a large-aperture “window-frame” superconducting spectrometer-magnet was built for one of the physics experiments. Hirabayashi not only took the lead in this milestone project, he also used it to train the next generation of magnet scientists and engineers. They would take forward the various superconducting-magnet projects that were subsequently carried out at KEK and in collaborative international programmes, including R&D on the Superconducting Super Collider in the US and LHC project at CERN.
Frontier projects with superconducting magnets
The frontier project for the 1980s was an electron–positron collider, TRISTAN, which had a maximum beam energy of 30 GeV and operated between 1987 and 1995. KEK successfully developed large-aperture insertion-quadrupole magnets for the four interaction regions, to bring high-brightness beams into collision in the physics experiments.
Following on from TRISTAN, KEK constructed the accelerator for the B-factory, KEKB – an energy-asymmetric electron–positron collider with two rings handling 3.5 GeV positrons and 8 GeV electrons – built in the TRISTAN tunnel. Superconducting interaction-region quadrupole (IRQ) magnets were again developed. Based on a sophisticated coil design, with corrector coils in additional coil layers, they were very closely integrated with the collider detector, BELLE (figure 1). The IRQs contributed to the highest beam luminosity ever achieved, as described later, enabling the KEKB accelerator and the BELLE experiment to help establish the Kobayashi–Maskawa theory, for which the Nobel prize was awarded in 2008. A further sophisticated multiple-magnet system is now being developed for the interaction region of Super-KEKB, the upgraded B-factory, which was approved in 2010.
The experience acquired in these projects was to allow KEK to make important contributions to the LHC, in particular in a fundamental study of high-field dipoles to reach 10 T and in the construction of insertion quadrupoles with a design field gradient of 215 T/m at a coil aperture of 70 mm (figure 2). The quadrupole magnets were developed and supplied in collaboration with Fermilab.
More recently, KEK developed a primary proton-transport line at the Japan Proton Accelerator Research Complex (J-PARC) in Tokai, in collaboration with the Japan Atomic Energy Agency (JAEA). To create and direct a neutrino beam towards the Kamioka neutrino observatory nearly 300 km away, a proton beam extracted from J-PARC’s main ring has to bend through around 90°, with a much smaller bending radius than that of the accelerator itself. This requirement has been met using a series of uniquely fashioned superconducting magnets with combined-function coils, which carry both dipole and quadrupole field components within a single-layer coil (figure 3). The experience accumulated in the earlier projects contributed to this distinctive superconducting-magnet design, which also involved important co-operation with Brookhaven National Laboratory. At J-PARC, superconductivity has also taken on an essential role: superconducting solenoid beamlines provide high-intensity pulsed muon beams for muon science in the meson-science laboratory, and a superconducting magnetic spectrometer serves particle physics.
For the future, KEK intends to contribute to upgrade programmes for the LHC, to the application of advanced high-field superconductors in co-operation with the National Institute for Materials Science, and to high-temperature superconductors in co-operation with other laboratories and industry. Fundamental research on the effects of stress and strain on superconductor performance is crucially important for high-field superconducting magnets. Experimental studies of structural and stress analysis are in progress using neutron-diffraction techniques at the J-PARC neutron-beam facility, in co-operation with JAEA.
KEK has also applied superconducting-magnet technology to particle-detector magnets. The TRISTAN collider’s three major particle detectors – TOPAZ, VENUS and AMY – and the BELLE detector at KEKB were based on superconducting solenoid magnets to provide the magnetic fields for momentum analysis in particle spectroscopy. In particular, these involved a great deal of development work on aluminium-stabilized superconductor technology.
The key feature of this technology is that it provides the maximum magnetic field for the minimum material – an important step towards the physicists’ dream of having only a magnetic field, without additional material, in an experiment. It therefore opens the possibility of “thin-walled” superconducting coils that are in effect transparent to particles passing through. The use of an aluminium stabilizer instead of the usual copper stabilizer gives low density and low resistance, but requires sufficiently high mechanical strength. The approach has become a fundamental technology in the construction of magnets for large-scale particle detectors, including – most recently – the magnet systems of the ATLAS and CMS experiments at the LHC. KEK provided the ATLAS central solenoid, which faced two extremely demanding requirements: installation in a common cryostat with the liquid-argon calorimeter system, and the use of advanced high-strength aluminium-stabilized superconductor to make the magnet as transparent as possible, as the physics required.
KEK has also applied this technology in a variety of global collaborations, including the muon g–2 measurement (experiment E821) at Brookhaven National Laboratory and the WASA experiment at Uppsala University (now transferred to the Cooler Synchrotron (COSY) ring at Forschungszentrum Jülich). A more extreme application lies in astroparticle physics: the Balloon-borne Experiment with a Superconducting Spectrometer (BESS) has successfully flown twice over Antarctica to search for primordial antiparticles, in collaboration with NASA within the framework of Japan–US co-operation in space science.
Superconducting acceleration
Turning now to RF superconductivity, TRISTAN was the first high-energy accelerator in the world to use superconducting RF cavities, operating at 500 MHz, as its main accelerating components in routine operation (figure 4). This is where Kojima took the lead, establishing a milestone by using superconducting RF to provide a high continuous-wave accelerating gradient in a storage ring. He also trained many next-generation scientists in RF superconductivity, who have since extended the technology in a variety of subsequent projects and global collaborations.
The technology pioneered at TRISTAN was extended for the KEKB accelerator, which was commissioned in 1998 with superconducting RF cavities as a major accelerating component. Eight single-cell cavities with strongly damped higher-order modes (HOMs) accelerated an electron beam of 1.4 A, with each cavity delivering 350 kW of RF power. This technology was also applied to the Beijing Electron–Positron Collider II, in co-operation with the Institute of High Energy Physics in Beijing, and a collaboration with the National Synchrotron Radiation Research Center is under way to apply superconducting RF technology to its new synchrotron-light source, the Taiwan Photon Source. At the same time, a unique superconducting RF cavity, the “crab cavity”, was successfully developed as a key component to maximize the peak luminosity of KEKB (figure 5). It achieves the optimum beam-interaction efficiency by tilting the bunches so that they collide effectively head-on, compensating for the crossing angle. Once installed at KEKB, the crab cavity contributed to the facility’s world-record luminosity of 2.11 × 10³⁴ cm⁻² s⁻¹, achieved in 2009. KEKB shut down in June 2010 to be upgraded to Super-KEKB, which is designed for a peak luminosity of 8 × 10³⁵ cm⁻² s⁻¹.
Looking to future applications of RF superconductivity in accelerator science, KEK is now undertaking research and development in two major directions. Energy-recovery linacs (ERLs), which in effect recycle energy from the beam, will inevitably be required for efficient acceleration, especially in applications of intense electron beams and in photon science. KEK is building a compact ERL facility as a prototype for a potential future ERL accelerator.
Aiming towards the high-energy frontier, research and development for the International Linear Collider (ILC) is being carried out in a global co-operation led by the Global Design Effort (GDE). The design, based on RF superconductivity, foresees more than 16,000 superconducting 9-cell 1.3 GHz cavities in series, operating at an average field gradient of 31.5 MV/m, to realize an electron–positron collider from two opposing 250 GeV linear accelerators.
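As a rough consistency check – an estimate using the usual rule that each cell of a standing-wave cavity spans half an RF wavelength, not a number taken from the GDE design documents:

\[
L_{\mathrm{cavity}} \approx 9 \times \frac{c}{2f} = 9 \times \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 1.3\ \mathrm{GHz}} \approx 1.04\ \mathrm{m},
\qquad
V_{\mathrm{total}} \approx 16\,000 \times 1.04\ \mathrm{m} \times 31.5\ \mathrm{MV/m} \approx 520\ \mathrm{GV},
\]

that is, about 16.6 km of active cavity length and roughly 520 GV of accelerating voltage – in line with the 16 km of cavities and the two 250 GeV linacs quoted above.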
KEK is contributing to the development of advanced superconducting RF cavity technology for the ILC within this global collaboration. There has been successful progress towards demonstrating field gradients of more than 40 MV/m in 9-cell cavities, building on long-term fundamental research and development. In a unique global effort, KEK has hosted a cavity-string test (the so-called S1-Global), with a cavity string and cryomodule system jointly contributed by DESY, Fermilab, INFN, SLAC and KEK (figure 6). The test has demonstrated that international collaboration can deliver a plug-compatible cavity-string assembly of the kind that will inevitably be required in constructing the ILC accelerator.
Applied superconductivity has been an essential and fundamental technology in all of the major experimental facilities for accelerator science and for the physics programmes that have been and will be carried out at KEK, as well as in international co-operation programmes including the LHC and the ILC. The hope is that KEK will continue both to play an important role in contributing advanced technology and to be a centre of excellence in applied superconductivity for fundamental physics and accelerator science.
Modern medical imaging of the human body often provides not only anatomical detail but also functional information or the biochemical status of a particular region of the body. The first example of the combined use of anatomical and functional imaging, now known as “hybrid imaging”, put positron emission tomography (PET) together with computed tomography (CT). David Townsend, a former CERN scientist working at the University Hospital in Geneva, first thought of incorporating an X-ray-based CT scanner in the same instrument as a PET camera in 1991. The first such instrument was in operation by the end of the 1990s and now all PET cameras that are commercially available from the major international companies are combined PET/CT scanners.
Although PET/CT has proved its value in oncology, CT still has some serious limitations in soft-tissue contrast, which often make additional injections of contrast agents necessary for the patient. The high levels of radiation exposure in CT imaging are also a concern, particularly in paediatrics, in repeated scanning for therapy monitoring and in other non-oncology pathologies. An alternative approach to hybrid imaging has arisen recently with the emergence of systems that combine PET with magnetic-resonance imaging (MRI). The advantages of a PET/MRI scanner are evident from the table below, which compares the merits of the different medical-imaging techniques.
Unlike CT, MRI provides good contrast in soft tissue. This technique involves aligning the magnetic moments of hydrogen nuclei in a strong magnetic field and then using a temporary RF field to flip the spin of some of them. When these nuclei revert to their former state, they radiate at the same radio frequency. The key to providing an image is to apply an additional magnetic-field gradient so that the resonant frequency varies with position. A typical MRI scanner comprises a strong magnet to produce a static, homogeneous, longitudinal magnetic field (B0), three “gradient” coils that can be switched on and off, an RF transmitter, RF receiver coils and a computer-control and data-acquisition system (figure 1).
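In symbols, the position encoding rests on the Larmor relation – a textbook result quoted here for orientation, not a specification of any particular scanner:

\[
\omega(\mathbf{r}) = \gamma \left( B_0 + \mathbf{G}\cdot\mathbf{r} \right),
\]

where γ is the gyromagnetic ratio (γ/2π ≈ 42.6 MHz/T for protons) and G is the applied field gradient. At B0 = 3 T, protons resonate near 128 MHz, and the gradient turns the received frequency into a map of position along its direction.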
Superconducting magnets offer the optimum way to provide the necessary field strength over the volume required in a whole-body scanner, and the commercial development of these magnets from the 1970s onwards has led to the widespread medical use of MRI. Modern systems have fields of 1.5–3 T, although some with fields up to 7 T already exist. The gradient coils generate magnetic-field gradients in the x, y and z directions and are used to encode position, while the RF-transmitter coil excites the nuclei by tipping the longitudinal magnetization away from the B0 direction by a predefined flip angle. Several RF-receiver coils are used – one is integrated into the MRI scanner, and several “surface” coils can be placed closer to the patient to improve signal-to-noise ratios. These coils pick up the RF signals emitted by the nuclei as they lose their excitation and re-align with B0. Recently, the RF receiver–transmitter system has evolved to accommodate two parallel transmitters, which improve the spatial homogeneity of the acquired signals – important in 3 T systems for whole-body imaging.
A PET camera, by contrast, is used to detect and measure the distribution in the body of radioisotopes decaying via positron emission. Every positron annihilation event in tissue produces a pair of almost collinear 511 keV gamma-ray photons, emitted back to back. A PET detector consists of a pixelated scintillator ring connected to banks of photomultiplier tubes (PMTs) via optical guides. The PMTs convert the visible-light photons from the scintillators into electrical pulses, and the relative pulse heights of neighbouring PMTs determine the positions at which the photon pair strikes the detector surface. To identify photon pairs, the PMTs operate in coincidence using a timing window of a few nanoseconds. The detected coincident pair defines a line of response (LOR), somewhere along which the positron annihilation event happened. Detectors with fast timing resolution, typically less than 600 ps, can localize the annihilation along the LOR using time-of-flight (TOF) technology.
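To make the TOF localization concrete, here is a minimal sketch in Python – an illustration of the arithmetic only, with function names of our own invention, not any manufacturer’s reconstruction code:

```python
# Minimal sketch of the time-of-flight (TOF) arithmetic in PET.
C = 299_792_458.0  # speed of light, m/s

def tof_offset_m(t1_s: float, t2_s: float) -> float:
    """Offset of the annihilation point from the midpoint of the LOR,
    positive towards detector 1 (the earlier hit is the nearer one).
    The factor 0.5 arises because moving the event by d shortens one
    photon path by d and lengthens the other by d."""
    return 0.5 * C * (t2_s - t1_s)

def tof_localization_m(timing_resolution_s: float) -> float:
    """Spatial uncertainty along the LOR for a given coincidence
    timing resolution (same half-factor as above)."""
    return 0.5 * C * timing_resolution_s

# A photon pair arriving 300 ps apart places the event about 4.5 cm
# from the LOR midpoint, towards the earlier detector:
print(f"offset: {tof_offset_m(0.0, 300e-12) * 100:.1f} cm")
# A 600 ps timing resolution localizes events to about 9 cm:
print(f"resolution: {tof_localization_m(600e-12) * 100:.1f} cm")
```

This is why sub-nanosecond timing matters: halving the timing resolution halves the segment of the LOR over which the event must be smeared during image reconstruction.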
PET meets MRI
PMTs, however, are inherently unable to operate inside a magnetic field. Early attempts to develop smaller PET/MRI scanners for animals produced several innovative prototypes using different approaches to overcome the cross-talk between the PET and MRI systems. In 2008, both the Philips and Siemens medical companies developed their first PET/MRI prototypes for humans. The Philips system had two independent scanners and additional shielding to contain the magnetic field from its 3 T magnet, with a coaxial distance of 4.2 m between the PET detector and the MRI scanner (figure 2). Furthermore, each PMT was individually shielded and its photocathode aligned along the flux lines of the magnetic field. Apart from these changes, the PET detector was the same as the commercially available TOF scanner, and this PET/MRI system was capable of acquiring whole-body images in a sequential fashion. The Siemens prototype instead had a PET scanner integrated into a 3 T MRI system. A retractable PET detector used avalanche photodiodes (APDs) – solid-state photon detectors – coupled to lutetium oxyorthosilicate (LSO) scintillator crystals. The system was designed to acquire simultaneous PET and MRI images of the brain.
Today, three large imaging companies have PET/MRI whole-body scanners in their portfolios, although the three systems differ significantly. In 2010, Siemens announced a whole-body simultaneous PET/MRI scanner based on its original technology, and this has already received medical-device registration (the CE mark for Europe and 510(k) for the US). This latest model comprises a 70 cm-diameter 3 T magnet with an integrated PET detector ring 60 cm in diameter. The PET detector consists of pixelated scintillators coupled via optical guides to an array of APDs (figure 3). APDs are insensitive to magnetic fields and have high gain (10²–10³) and a timing resolution of the order of 1 ns (Lewellen 2008). The APD arrays are connected to front-end electronics for pre-amplification and digitization, and have a cooling circuit to maintain a constant temperature, because their gain is temperature sensitive. Meanwhile, Philips has commercialized its sequential whole-body TOF-PET/MRI system (CE mark already received and 510(k) approval pending). A third company, General Electric, is proposing an arrangement in which an MRI scanner is placed in a room adjacent to a PET/CT scanner and the patient is transferred from one system to the other on a shuttle couch.
Some concerns remain about the integration of MRI and PET. Photons travelling through a patient’s body are absorbed or attenuated and so go unregistered. In PET/CT systems this is compensated for by a low-dose CT scan, which provides an accurate attenuation map of the object being imaged; the map can then be used for attenuation correction of the PET image. This is not possible in PET/MRI systems, and various methods for estimating attenuation coefficients are still under development. Another problem is cross-talk between PET and MRI: the RF pulses from the MRI can cause the PET electronics to lose counts during transmission. Nevertheless, the latest commercial systems seem to have overcome most of the problems, and fine-tuning of the designs continues. Industry and the medical-imaging community are now actively collaborating to use and improve this new technology, as well as to demonstrate a true clinical utility for PET/MRI scanners. This has already resulted in a multitude of scientific publications on these topics, in journals and at conferences on both PET and MRI.
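To illustrate why the attenuation map matters, here is a minimal sketch – a hypothetical helper of our own, not a clinical algorithm. Because both photons must escape the body, the counts on each LOR are suppressed by the exponential of the attenuation line integral, independent of where along the LOR the annihilation occurred, so one multiplicative factor per LOR restores them:

```python
# Minimal sketch of PET attenuation correction along one LOR.
import numpy as np

def attenuation_correction_factor(mu_along_lor: np.ndarray,
                                  step_m: float) -> float:
    """mu_along_lor: linear attenuation coefficients (1/m), e.g. from a
    CT-derived map, sampled at equal intervals of step_m metres along
    one line of response. Measured counts on that LOR are multiplied
    by the returned factor."""
    line_integral = float(np.sum(mu_along_lor) * step_m)
    return float(np.exp(line_integral))

# Example: ~20 cm of soft tissue (mu ~ 9.6 /m at 511 keV, close to
# water) suppresses the photon pair by a factor of ~7, so the counts
# on this LOR are scaled up accordingly.
mu = np.full(200, 9.6)  # 200 samples of 1 mm each
print(attenuation_correction_factor(mu, 1e-3))  # ~6.8
```

The difficulty in PET/MRI is precisely that MRI signal reflects proton density and relaxation, not photon attenuation, so the μ values fed into such a correction must be inferred rather than measured directly.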
There is a remarkable similarity in design between these integrated PET/MRI clinical scanners and the large, general-purpose detector systems developed in particle physics. For example, the CMS experiment at the LHC at CERN has a central detector of 4 m × 15 m within an axial magnetic field of 4 T (figure 4). This compares with a commercial whole-body PET/MRI scanner, with a field of view of 0.6 m × 0.26 m and magnetic fields of up to 3 T. It is therefore reasonable to expect that the latest technologies now being used in particle-physics detectors – silicon photomultipliers, for example – will soon be incorporated by industry into newer and more sensitive combined PET/MRI scanners, with timing-resolution capabilities superior even to those of today’s state-of-the-art PMT-based PET/CT scanners.