The reason why some stars are born in pairs while others are born singly has long puzzled astronomers. But a new study suggests that no special conditions are required: all stars start their lives as part of a binary pair. The result has implications not only in the field of star evolution but also for studies of binary neutron-star and binary black-hole formation. It also suggests that our own Sun was born together with a companion that has since disappeared.
Stars are born in dense molecular clouds measuring light-years across, within which denser regions can collapse under their own gravity to form high-density cores that are opaque to optical radiation and appear as dark patches. When the density becomes high enough, these cores collapse further to form young stars, which later ignite hydrogen fusion. Although young stars emit radiation even before the onset of the hydrogen-burning phase, it is absorbed by the dense clouds that surround them, making star-forming regions difficult to study. Yet, since clouds that absorb optical and infrared radiation re-emit it at much longer wavelengths, it is possible to probe them using radio telescopes.
Sarah Sadavoy of the Max Planck Institute for Astronomy in Heidelberg and Steven Stahler of the University of California at Berkeley used data from the Very Large Array (VLA) radio telescopes in New Mexico, together with submillimetre-wavelength data from the James Clerk Maxwell Telescope (JCMT) in Hawaii, to study the dense gas clumps and the young stars forming in them in the Perseus molecular cloud – a star-forming region about 600 light-years away. Data from the JCMT show the location of dense cores in the gas, while the VLA provides the location of the young stars within them.
Studying the multiplicity as well as the location of the young stars inside the dense regions, the researchers found a total of 19 binary systems, 45 single-star systems and five systems with a higher multiplicity. Focusing on the binary pairs, they observed that the youngest binaries typically have a large separation of 500 astronomical units (500 times the Sun–Earth distance). Furthermore, these young stars were aligned along the long axis of the elongated cloud. Older binary systems, with ages between 500,000 and one million years, were typically found to be closer together, with their separation axes oriented randomly with respect to the cloud.
After cataloguing all the young stars, the team compared the observed star multiplicity and the features seen in the binary pairs to simulations of stars being formed either as single or binary systems. The only way the model could reproduce the data was if its starting conditions contained no single stars but only stars that started out as part of wide binaries, implying that all stars are formed as part of a binary system. After formation, the stars either move closer to one another into a close binary system or move away from each other. The latter option is likely to be what happened in the case of the Sun, its companion having drifted away long ago.
If indeed all stars are formed in pairs, it would have big implications for models of stellar birth rates in molecular clouds as well as for the formation of binary systems of compact objects. The nearby Perseus molecular cloud could, however, be a special case, and further studies of other star-forming regions are therefore required to establish whether the same conditions hold elsewhere in the universe.
Particle physicists try to understand the environment that existed fractions of a second after the Big Bang by studying the behaviour of particles at high energies. Early studies relied on cosmic rays emanating from extraterrestrial sources, but the invention of the circular accelerator by Ernest Lawrence in 1931 revolutionised the field. Further advances in accelerator technology gave physicists more control over their experiments, in particular thanks to the invention of the synchrotron and the development of storage rings. By confining particles within a ring of magnets and accelerating them with radio-frequency cavities, these facilities finally reached energies of a few hundred GeV. But storage rings are limited by the maximum magnetic field achievable with resistive magnets, which is around 2 T. To go further into the heart of matter, particle physicists required higher energies and a new technology to get them there.
The maximum field of an electromagnet is roughly determined by the amount of current in a conductor multiplied by the number of turns the conductor makes around its support structure. Over the years, the growing scale of accelerators and the large number of magnets needed to reach the highest energies demanded compact and affordable magnets. Conventional electromagnets, which are usually based on a copper conductor, are limited by two main factors: the amount of power required to operate them due to resistive losses and the size of the conductor. Typical conventional-magnet windings therefore tended to use conductors with a cross-sectional area of the order of a few square centimetres, which is not optimal for generating high magnetic fields.
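The current-times-turns scaling can be made concrete with the textbook formula for a long air-core solenoid; the turn count and current below are illustrative assumptions, not figures for any real accelerator magnet:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def solenoid_field(turns, length_m, current_a):
    """On-axis field of a long air-core solenoid: B = mu0 * (N/L) * I."""
    return MU0 * (turns / length_m) * current_a

# 100 turns over 0.5 m carrying 1000 A yield only about 0.25 T,
# which is why resistive magnets need iron yokes to approach 2 T.
field = solenoid_field(100, 0.5, 1000)
```

The sketch shows why brute force fails for resistive magnets: raising the field means raising either the current (more resistive power loss) or the number of turns (a bulkier conductor), the two limits named above.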
Superconductivity, which allows certain materials at low temperatures to carry very high currents without any resistive loss, was just the transformational technology needed. It powered the Tevatron collider at Fermilab in the US to produce the top quark, and CERN’s Large Hadron Collider (LHC) to unearth the Higgs boson. Advanced superconducting magnets are already being developed for future collider projects that will take physicists into a new phase of subatomic exploration beyond the LHC (figure 1).
Maintaining the state
Discovered in 1911, superconductivity didn’t immediately lead to broad applications, particularly not high-field accelerator magnets. As far as accelerators were concerned, the possibility of using superconducting magnets to produce higher fields started to take root in the mid-1960s. The big challenge was to maintain the superconducting state in a bulk object in which tremendous forces are at work: the slightest microscopic movement of the conductor would cause it to transition to the normal state (a “quench”) and result in burn-up, unless the fault was detected quickly and the current turned off.
Early superconductors were mostly formed into high-aspect-ratio tapes measuring a few tenths of a millimetre thick and around 10 mm wide. These are not particularly useful for making magnets because precise geometry and current distribution are necessary to achieve a good field quality. Intense studies led to the development of multi-filamentary niobium-zirconium (NbZr), niobium-titanium (Nb-Ti) and niobium-tin (Nb3Sn) wires, propelling interest in superconducting technology. In 1961, Kunzler and colleagues at Bell Labs produced a 7 T field in a solenoid, a relatively simple coil geometry compared with the dipoles or quadrupoles needed for accelerators. This swiftly led to higher-field solenoids, and a number of efforts to utilise the benefits of superconductivity for magnets began. But it was only in the early 1970s that the first prototypes of superconducting dipoles and quadrupoles demonstrated the potential of superconducting magnet technology for accelerators.
A turning point came during a six-week-long study group at Brookhaven National Laboratory (BNL) in the US in the summer of 1968, during which 200 physicists and engineers from around the world discussed the application of superconductivity to accelerators (figure 2). Considerable focus was directed towards the possibility of using superconducting beam-handling magnets (such as dipoles and quadrupoles for transporting beams from accelerators to experimental areas) for the new 200–400 GeV accelerator being constructed at Fermilab. By that time, several high-field superconducting alloys and compounds had been produced.
Hitting the mainstream
It could be argued that the unofficial kick-off for superconducting magnets in accelerators was a panel discussion at the 1971 Particle Accelerator Conference held in Chicago, although there was a clear geographical divide on key issues. The European contingent was reluctant to delve into higher-risk technology when it was clear that conventional technology could meet their needs, while the Americans argued for the substantial cost savings promised by superconducting machines: they claimed that a 100 GeV superconducting synchrotron could be built in five or six years, while the Europeans estimated a more conservative seven to 10 years.
In the US, work on furthering the development of superconducting magnets for accelerators was concentrated in a few main laboratories: Fermilab, the Lawrence Radiation Laboratory, BNL and Argonne National Laboratory. In Europe, a consortium of three laboratories – CEA Saclay in France, Rutherford Appleton Laboratory in the UK and the Nuclear Research Center at Karlsruhe – was formed to enable future conversion of the recently approved 300 GeV accelerator, to become CERN’s Super Proton Synchrotron (SPS), to higher energies using superconducting magnets. Of particular historical note, a short paper written at this time referred to a “compacted fully transposed cable” produced at the Rutherford Lab, and the “Rutherford cable” has since become the standard conductor configuration for all accelerator magnets (figure 3).
Rapid progress followed, reaching a tipping point in the 1970s with the launch of several accelerator projects based on superconducting magnets and a rapidly growing R&D community worldwide. These included: the Fermilab Energy Doubler; Interaction Region (IR) quadrupoles (used to bring particles into collision for the experiments) for the Intersecting Storage Rings at CERN; and IR quadrupoles for TRISTAN at KEK in Japan and UNK in the former USSR. The UNK magnets were ambitious for their time, with a desired operating field of 5 T, but the project was cancelled in the years following the breakup of the USSR.
Although superconducting magnet technology was one of the initial options for the SPS, it was rapidly discarded in favour of resistive magnets. This was not the case at Fermilab, which at that time was pursuing a project to upgrade its Main Ring beyond 500 GeV. The project was initially presented as an Energy Doubler, but rapidly became known by the very modern name of Energy Saver, and is now known as the Tevatron collider for protons and antiprotons, which shut down in 2011. The Tevatron arc magnets were the result of years of intense and extremely effective R&D, and it was their success that triggered the application of superconductivity for accelerators.
As superconducting technology matured during the 1980s, its applications expanded. The electron–proton collider HERA was getting under way at DESY in Germany, while the cancelled ISABELLE project was reborn as the Relativistic Heavy Ion Collider (RHIC) at BNL. Thanks to intensive development by high-energy physics, Nb-Ti was readily available from industry. This allowed the construction of magnets with fields in the 5 T range, while multi-filamentary conductors made from niobium-titanium-tantalum (Nb-Ti-Ta) and Nb3Sn were being pursued for fields up to 10 T. The first papers on the proposed Superconducting Super Collider (SSC) in the US were published in the mid-1980s, with R&D for the SSC ramping up substantially by the start of the 1990s. Then, in 1991, the first papers on R&D for the LHC were presented. The LHC’s 8 T Nb-Ti dipole magnets operate close to the practical limit of the conductor, and the collider now represents the largest and most sophisticated use of superconducting magnets in an accelerator.
The niobium-tin challenge
With the success of the LHC, the international high-energy physics community has again turned its attention to further exploration of the energy frontier. CERN has launched a Future Circular Collider (FCC) study that envisages a 100 TeV proton–proton collider as the next step for particle physics, which would require a 100 km-circumference ring of superconducting magnets with operating fields of 16 T. This will be an unprecedented challenge for the magnet community, but one that they are eager to take on. Other future machines are based on linear accelerators that do not require magnets to keep the beams on track, but demand advanced superconducting radio-frequency structures to accelerate them over short distances.
Thanks to superconducting accelerator magnets wound with strands and cables made of Cu/Nb-Ti composites, the energy reach of particle colliders has steadily increased. After nearly half a century of dominance by Nb-Ti, however, other superconducting materials are finally making their way into accelerator magnets. Quadrupoles and dipoles using Nb3Sn will be installed as part of the high-luminosity upgrade for the LHC (the HL-LHC) in the next few years, for example, and the high-temperature superconductor Bi2Sr2CaCu2O8 (BSCCO), iron-based superconductors and rare-earth barium copper oxide (REBCO) have recently been added to the list of candidate materials. Proposals for new large circular colliders have boosted interest in high-field dipole magnets but, despite the tantalising potential for achieving dipole fields more than twice that of Nb-Ti, there are many problems that still need to be overcome.
Although Nb3Sn was one of the early candidates for high-field magnets, and has much better performance at high fields than Nb-Ti, its processing requirements, mechanical properties and costs present difficulties when building practical magnets. Nb3Sn comes as a round wire from industry vendors, which is excellent for making multi-wire cables but requires the reaction of a copper, niobium and tin composite at 650 °C to develop the superconducting Nb3Sn cable. Unfortunately, Nb3Sn is a brittle ceramic, unlike Nb-Ti, which requires only modest heat treatment and drawing steps and is mechanically very strong. Years of effort worldwide have overcome these limitations and fields in the range of 16 T have recently been achieved – first in 2004 by a US R&D programme and more recently at CERN – and this is close to the practical limit for this conductor. In addition to the near-term use in the HL-LHC, and despite currently costing 10 times more than Nb-Ti, it is the material of choice for a future high-energy hadron collider, and is also being used in enormous quantities for the toroidal-field magnets and central solenoid of the ITER fusion experiment (see “ITER’s massive magnets enter production”).
High-temperature superconductors represent a further leap in magnet performance, but they also raise major difficulties and could cost a further factor of 10 more than Nb3Sn. For fields above 16 T there are currently only two choices for accelerator magnets: BSCCO and REBCO. Although these materials become superconductors at a higher temperature than niobium-based materials, their maximum current density is achieved at low temperatures (in the vicinity of 4.2 K). BSCCO has the advantage of being obtainable in round wire, which is perfect for making high-current cables but requires a fairly precise heat treatment at close to 900 °C in oxygen at high pressures. This is not a simple engineering task, especially when dealing with large coils. Much progress has been made recently, however, and there is a vibrant programme in industry and academia to tackle these challenges. REBCO has excellent high-field performance and high current density and requires no heat treatment, but it only comes in tape form, presenting difficulties in winding the required coil shapes and producing acceptable field quality. Nevertheless, the performance of this high-temperature superconductor is too tantalising to abandon, and many people are working on it. Even after half a century, high-field accelerator-magnet R&D continues to progress, and is indeed critical for future discoveries in particle physics.
CERN breaks records with high-field magnets for High-Luminosity LHC
To keep the protons on a circular track at the record-breaking luminosities planned for the LHC upgrade (the HL-LHC) and achieve higher collision energies in future circular colliders, particle physicists need to design and demonstrate the most powerful accelerator magnets ever. The development of the niobium-titanium LHC magnets, currently the highest-field dipole magnets used in a particle accelerator, followed a long road that offered valuable lessons. The HL-LHC is about to change this landscape by relying on niobium tin (Nb3Sn) to build new high-field magnets for the interaction regions of the ATLAS and CMS experiments. New quadrupoles (called MQXF) and two-in-one dipoles with fields of 11 T will replace the LHC’s existing 8 T dipoles in these regions. The main challenge that has prevented the use of Nb3Sn in accelerator magnets is its brittleness, which can cause permanent degradation under very low intrinsic strain. The tremendous progress of this technology in the past decade led to the successful tests of a full-length 4.5 m-long coil that reached a record nominal field value of 13.4 T at BNL. Meanwhile at CERN, the winding of 7.15 m-long coils has begun. Several challenges are still to be faced, however, and the next few years will be decisive for declaring production readiness of the MQXF and 11 T magnets. R&D is also ongoing for the development of a Nb3Sn wire with an improved performance that would allow fields beyond 11 T. It is foreseen that a 14–15 T magnet with a real physical aperture will be tested in the US, and this could drive technology for a 16 T magnet for a future circular collider. Based on current experience from the LHC and HL-LHC, we know that the performance requirements for Nb3Sn for a future circular collider demand a large industrial effort to make very large-scale production viable.
• Panagiotis Charitos, CERN.
To identify particles emerging from high-energy interactions between a beam and a fixed target, or between two counter-rotating beams, experimental physicists need to measure the particle tracks with high precision. Since charged particles are deflected in a magnetic field, incorporating a magnet in the detector system serves to determine both the charge and momentum of a particle. Momentum resolution is proportional to the sagitta of the detected track, which is proportional to the magnetic field and the square of the length of the track, so larger magnets and larger fields tend to deliver better performance. While being as large and as strong as possible, however, the magnet should not get in the way of the active detector materials.
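The sagitta scaling described above can be sketched with the standard relation pT [GeV] ≈ 0.3 B R for a singly charged particle; the field and track-length values below are illustrative assumptions, not figures from any particular detector:

```python
def sagitta_m(b_tesla, track_length_m, pt_gev):
    """Sagitta of a singly charged particle's track.

    The bending radius R follows from pT [GeV] ~ 0.3 * B [T] * R [m],
    and for a track chord of length L the sagitta is s = L^2 / (8R),
    i.e. s = 0.3 * B * L^2 / (8 * pT).
    """
    return 0.3 * b_tesla * track_length_m**2 / (8 * pt_gev)

# A 10 GeV track measured over 1 m in a 3.8 T solenoid bows by ~14 mm;
# doubling the track length quadruples the sagitta, doubling the field
# only doubles it - hence the premium on long tracks and big magnets.
s = sagitta_m(3.8, 1.0, 10.0)
```

This is why the text says larger magnets and larger fields tend to deliver better performance: the measured quantity grows linearly with B but quadratically with track length.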
These general constraints in high-energy physics experiments point to a need for more compact superconducting devices. But additional constraints such as cost, complexity and experiment schedules can lead to the choice of a conventional “warm” magnet if sufficient field and volume can be provided for acceptable power consumption. A detector magnet is one of a kind, and a field accuracy of one part in 1000 is usually sufficient. In contrast, accelerator magnets are typically many of a kind, and are required to deliver the highest possible field with an accuracy of one part in 10,000 or better in a long and narrow aperture. This leads to substantially different technological choices.
Following the discovery of superconductivity, people immediately thought of using it to produce magnetic fields. But the pure materials concerned (later to be called type-I superconductors) only worked up to a critical field of about 0.1 T. The discovery in 1961 of more practical (type-II) superconductivity in certain alloys and compounds which, unlike type-I, allow penetration of magnetic flux but exhibit critical fields of 10–20 T, immediately led to renewed interest. Physics laboratories in Europe and the US started R&D programmes to understand how to make superconducting magnets and to explore possible applications.
The first four years were difficult: small magnets were built but it was not possible to get scaled-up versions to operate at currents anywhere close to the level obtained for short samples of the superconducting wire available at the time. A breakthrough was presented at the first Particle Accelerator Conference in 1965, in a seminal paper by Stekly and Zar on cryogenic stability. Cryogenic stability ensures that, if a superconductor becomes normal due to coil motion or a flux jump (when magnetic flux suddenly penetrates a thick type-II material, leading to instability, resistance and increased temperature), it will recover its superconductivity provided enough heat can be conducted away to the coolant for the material to drop back below its critical temperature in the region where superconductivity was lost. Several laboratories immediately started to build large helium-bath-cooled bubble-chamber magnets.
The bubble chamber, invented by Donald Glaser in 1952, consists of a tank of liquid hydrogen surrounded by a pair of Helmholtz coils: particles leave trails of bubbles in the superheated liquid, and the curvature of each track reveals the particle’s momentum. The first large (72 inch) bubble-chamber magnet at the University of California Radiation Laboratory was equipped with a 1.8 T water-cooled copper coil weighing 20 tonnes and dissipating a power of 2.5 MW. Larger magnets were desirable for improved resolution, but were clearly unrealistic with room-temperature copper coils due to the costs involved. This was therefore an obvious application for superconductivity, and the concept of cryogenic stability allowed large magnets to be built using a superconductor that was otherwise inherently unstable.
Recall that this was before seminal work at the Rutherford Appleton Laboratory (RAL) had revealed the need for fine filaments and twisting to ensure stability, and before we knew that practical superconductors had to be made in that way. Indeed, it is striking to observe the audacity of high-energy physicists in the late 1960s and the early 1970s in embarking on the construction of such large and costly devices so rapidly, based on so little experience and knowledge.
Thick filaments of niobium-titanium in a copper matrix were the superconducting material of choice at the time, with coils being cooled in a bath of liquid helium. Achievements included: the 1.8 T magnet at Argonne National Laboratory for its bubble-chamber facility; a 3 T magnet for a facility at Fermilab; and the 3.5 T Big European Bubble Chamber (BEBC) magnet at CERN. The stored energy of the BEBC magnet was almost 800 MJ – a level not exceeded for a large magnet until the Large Helical Device came on stream in Japan (for fusion experiments) in the late 1990s. This use of superconducting magnets for experiments preceded by several years their practical application to accelerators.
Discoveries
Following early experiments at CERN’s Intersecting Storage Rings, which were not well equipped to observe particles having large transverse momentum, the importance of detecting all of the particles produced in beam collisions in colliders was recognised, and a need emerged for magnets covering close to a full 4π solid angle. To improve momentum resolution it was also desirable to extend the measurement of tracks beyond the magnet winding, calling for thin coils. The goal was less than one radiation length in thickness, for which a high-performance superconductor with intrinsic stability was needed. This pointed towards a design based on the type of superconducting wire that had been developed in the accelerator community and had by now become a commodity for making MRI magnets (an industry that now consumes more than 90% of the superconductors produced), with the attendant reduction in cost.
By the early 1980s, therefore, detector-magnet development had shifted to conductors made of what were by then standard superconducting wires – twisted fine filaments in a copper matrix, single or cabled – co-extruded with ultra-pure aluminium for stabilisation and wound into solenoidal coils inside a hard aluminium-alloy mandrel for support. Pure aluminium is an excellent conductor at low temperature, and far more transparent than the copper that had been used previously. Moreover, rather than being bath cooled, these constant-field magnets were indirectly cooled to about 5 K with helium flowing in pipes in good thermal contact with the mandrel. This allowed the 1–2 T detector solenoids to become larger, without power dissipation in the winding and with a low inventory of liquid helium. In this way the coils can be made thin and relatively transparent to certain classes of particles such as muons, so that detectors can be located both inside and outside. Examples of these magnets are those used for the ALEPH and DELPHI experiments at CERN’s Large Electron–Positron (LEP) collider, the D0 experiment at Fermilab and the BELLE experiment at KEK. Other prominent experiments over the years based on superconducting magnets include VENUS at KEK, ZEUS at DESY, and BaBar at SLAC.
While this had become the standard approach to detector-magnet design, the magnets in the ATLAS and CMS experiments at the LHC occupy new territory. ATLAS uses a large toroidal coil structure surrounding a thin 2 T solenoid, and the solenoid for CMS delivers an unprecedented 3.8 T (but is not required to be very thin). While both the CMS and ATLAS solenoids use the now-traditional technology based on niobium-titanium superconductor co-extruded in aluminium, the pure aluminium stabiliser is reinforced to allow the structure to withstand the substantial forces. This is done either by welding aluminium-alloy flanges to the pure aluminium (CMS) or by strengthening the pure aluminium with a precipitate that improves its strength without inordinately increasing its resistivity (ATLAS solenoid).
The next generation of magnets planned for the Compact Linear Collider (CLIC), the International Linear Collider (ILC) and Future Circular Colliders (FCC) will be larger, and may require more technological development to reach the desired magnetic fields. Based on a single detector at the interaction point, a new unified detector model has been developed for CLIC, and the concepts explored for this detector are also of interest for the high-luminosity LHC as well as for a future circular electron–positron collider. Like the LHC with ATLAS and CMS, a future circular collider requires a “general-purpose” detector. Previous studies for a detector for a 100 TeV circular hadron collider were based on a twin solenoid paired with two forward dipoles, but these have now been dropped in favour of a simpler system comprising one main solenoid enclosed by an active shielding coil. This design achieves a similar performance while being much lighter and more compact, reducing the stored energy of the magnet from 65 GJ to 11 GJ. The total diameter of the magnet is around 18 m, and the new design could benefit from the important lessons from the construction and installation of the LHC detectors.
Key to the choice of such magnets, in addition to their cost and complexity, is their ability to allow high-quality muon tracking. This is crucial for studying the properties of the Higgs boson, for example, and any additional new fundamental particles that await discovery. If the lengthy discussions surrounding the design of the ATLAS and CMS magnets many years ago are anything to go by we can look forward to intense and interesting debates about how to push these one-off magnet designs to the next level.
Behind the size, complexity and physics goals of particle accelerators such as the LHC lies a simple physics principle worked out by Maxwell more than 150 years ago: when a charged particle passes through an electric field it experiences an acceleration proportional to its charge multiplied by the electric-field strength, divided by its mass. While the magnets of a circular accelerator keep the beams on track, it is this principle that shunts them to the high energies needed for particle-physics research. The first accelerators relied on electrostatic fields produced between high-voltage anodes and cathodes, but by the mid-1920s it was clear that radio technology was needed to reach the highest possible energies.
To transfer energy to a beam of charged particles, a space must be created where the beam can move along an electric field produced by high-power radio waves; the higher the field, the larger the energy gain per metre (the accelerating gradient). This accelerating space, usually called a radio-frequency (RF) cavity, is a resonant container traversed by the beam, inside which an oscillating electric field is stored so that it points along the desired direction just as each bunch of particles passes through. Whatever the geometry of the cavity, the power dissipated through the Joule effect is proportional to its surface resistance and to the square of the field inside it.
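The dissipation scaling can be sketched with the standard cavity figures of merit, Q0 = G/Rs and shunt impedance Rsh = (R/Q)·Q0; the voltage, geometry factor, R/Q and surface-resistance values below are illustrative assumptions, not measured figures for any real cavity:

```python
def wall_power_w(voltage_v, r_over_q_ohm, geometry_factor_ohm, surf_res_ohm):
    """Power dissipated in the cavity walls for a given accelerating voltage.

    Q0 = G / Rs (quality factor), R_sh = (R/Q) * Q0 (shunt impedance),
    P = V^2 / R_sh - so P is directly proportional to the surface resistance
    and to the square of the field, as stated in the text.
    """
    q0 = geometry_factor_ohm / surf_res_ohm
    return voltage_v**2 / (r_over_q_ohm * q0)

# Illustrative comparison at 5 MV, G = 270 ohm, R/Q = 100 ohm:
p_copper = wall_power_w(5e6, 100, 270, 5e-3)   # Rs ~ milliohms: megawatts
p_niobium = wall_power_w(5e6, 100, 270, 1e-8)  # Rs ~ tens of nanohms: watts
```

The five-orders-of-magnitude drop in surface resistance is what makes superconducting cavities attractive even after paying for the cryogenics.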
For the past 30 years, superconducting radio-frequency (SRF) cavities have been in routine operation in a variety of settings, from pushing frontier accelerators for particle physics to applications in nuclear physics and materials science. They were instrumental in pushing CERN’s LEP collider to new energy regimes and in driving the newly inaugurated European X-ray Free Electron Laser. Advanced SRF “crab cavities” are now under development for the high-luminosity upgrade of the LHC.
From Stanford to LEP
It was unclear at first whether superconductivity had much value for RF technology. When a superconductor is exposed to a time-varying electromagnetic field, the electrons that are not bound into Cooper pairs dissipate energy in the shallow surface layer of the superconductor, where the oscillating electric and magnetic fields that transfer energy to the beam penetrate. But it was soon realised that in the practical frequency range of RF accelerators, from a few hundred MHz to a few GHz, SRF cavities would nonetheless represent a significant breakthrough, thanks to the much higher conversion efficiency from plug power to beam power, even with the cryogenics included. It was simply a question of developing the technology, and this required investment and big projects.
The High-Energy Physics Lab at Stanford University in the US was a pioneer in applying SRF to accelerators, demonstrating the first acceleration of electrons with a lead-plated single-cell resonator in 1965. Also in Europe, in the late 1960s, SRF was considered for the design of proton and ion linacs at KFK in Karlsruhe. To be superior to the competing technology of normal-conducting RF, a moderate field of a few MV/m was necessary. By the early 1970s SRF had been introduced in the design of particle accelerators, but results were still modest and a number of limiting factors needed to be understood.
The first successful test of a complete SRF cavity at high gradient and with beam was performed at Cornell’s CESR facility at the end of 1984, involving a pair of 1.5 GHz, five-cell bulk niobium cavities with a gradient of 4.5 MV/m. This cavity design was then used as the basis for the CEBAF facility at Jefferson Lab. Cornell’s success also triggered activities at CERN, where some visionary people were already looking at SRF as a way to double the energy of the Large Electron–Positron (LEP) collider under construction in what is now the LHC tunnel. LEP’s nominal centre-of-mass energy of 90 GeV was the minimum required to produce the recently discovered Z boson, but almost double this energy was needed to test the Standard Model further: specifically, to produce pairs of W bosons.
The baseline accelerating system of LEP was already a jewel that had demanded all the knowledge and ingenuity available globally at that time. Furthermore, doubling its energy meant that researchers had to battle additional losses due to synchrotron radiation, which scales as the fourth power of the electron and positron energies. The dream was to develop an accelerating system that made use of the very low losses promised by superconductivity to deliver a factor 16 or so more energy per turn to the LEP beam, occupying more or less the same space as the machine’s original copper cavities.
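The fourth-power scaling, and the factor of 16 it implies, follows from the standard formula for the energy an electron loses to synchrotron radiation per turn, U0 = Cγ E⁴/ρ with Cγ ≈ 8.85 × 10⁻⁵ m/GeV³; the bending radius used below (roughly 3.1 km, of the order of LEP's) is an assumption for illustration:

```python
def loss_per_turn_gev(beam_energy_gev, bending_radius_m):
    """Synchrotron-radiation energy loss per turn for an electron or positron:
    U0 = C_gamma * E^4 / rho, with C_gamma ~ 8.85e-5 m/GeV^3."""
    C_GAMMA = 8.85e-5  # m / GeV^3
    return C_GAMMA * beam_energy_gev**4 / bending_radius_m

# Doubling the beam energy multiplies the loss per turn by 2^4 = 16,
# so the RF system must deliver roughly 16 times more energy per turn.
ratio = loss_per_turn_gev(90.0, 3100.0) / loss_per_turn_gev(45.0, 3100.0)
```

At LEP-I energies the loss per turn is of order 100 MeV per particle; near 100 GeV it climbs to a few GeV, which is the gap the superconducting RF system had to close.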
Developing the SRF system for “LEP II” was a great challenge and success for the accelerator community. Owing to the relatively low resonant frequency – 352 MHz – of LEP’s underlying design, the superconducting cavities developed ended up being more than four times bigger than those successfully tested at Cornell. Since 1979, a small group at CERN had been developing the SRF technology, including all the ancillaries needed for a working cavity, but the best niobium superconducting material produced at the time did not perform well enough at such scales, and the first tests cast doubt on whether the LEP-II dream could be realised. In 1989, a pilot project of 20 niobium cavities was started to evaluate the feasibility of LEP-II superconducting cavities. In the meantime, the niobium-copper (Nb/Cu) technology developed at CERN by Cris Benvenuti became mature enough to justify the decision to base the LEP-II programme on Nb/Cu SRF cavities. Their inherent advantages, including the possibility of replacing the Nb coating with a better one in the future, were decisive in making this choice. Soon the CERN technology was transferred to industry and more than 300 2.4 m-long SRF cavities were successfully produced by three companies in France, Germany and Italy. By 2 November 2000, when LEP II was switched off to allow the LHC to be constructed, the collider’s energy had topped 209 GeV. More than 17 million Z bosons were produced, along with a large number of W-boson pairs, allowing extremely precise tests of the Standard Model of particle physics. LEP missed the Higgs discovery, and we now know that a centre-of-mass energy just a few tens of GeV higher would have been sufficient to produce the fundamental particle. But this was never a realistic prospect: the machine was already pushed to its limit and any further energy increase would eventually have produced an irreparable failure.
While CERN and LEP-II were creating a valuable technology for very large SRF cavities, Jefferson Lab in the US was set to build the CEBAF accelerator. This required installing a large and complete infrastructure to develop bulk-niobium SRF technology, going beyond the immediate needs of CEBAF – probably unavoidable for the success of such a challenging, first-of-its-kind project. The decision resulted in the large-scale production of 300 small cavities based on Cornell’s design, but with a marginal contribution from industry, mainly limited to the mechanical fabrication of the cavities with no surface treatment. The experience of CEBAF was nevertheless important for the evolution of SRF technology, and some of the techniques applied became standard. Among them were the development of electron-beam welding parameters for niobium, the use of clean-room assembly and ultrapure-water rinsing, and the optimisation of the treatment of the active internal surface of SRF cavities. CEBAF was also the first SRF accelerator to be cooled by superfluid helium, operating at a temperature of 2 K. The large cryogenic plant designed and built to cool CEBAF has itself been a crucial step in the development of superconducting technology, not just for SRF but also for accelerator magnets such as those used in the LHC.
Two other major high-energy-physics projects conclude this important chapter of SRF development in the mid- to late-1980s: TRISTAN at KEK in Japan and HERA at DESY in Germany. Each produced, through a big national company, state-of-the-art SRF technology based on moderate-size bulk-niobium cavities, typically 500 MHz with four to five cells. The new systems reached an accelerating electric field of about 5 MV/m, while substantially improving the performance of HERA and TRISTAN.
Linear adventure
All of these large accelerators were still to be completed when, in July 1990, a meeting was held at Cornell, organised by Ugo Amaldi and Hasan Padamsee, to discuss the possibility of developing SRF technology for a future TeV-scale linear collider (thereby avoiding the synchrotron-radiation losses suffered by circular colliders). The proposed name of this machine was TESLA and, after three days battling with various figures, we were convinced that such technology was possible. Amaldi returned to CERN to gather support and, one and a half years later, over a dinner in a restaurant in Blankenese in Hamburg hosted by Bjørn Wiik, a dozen or so colleagues, including Maury Tigner, Helen Edwards and Ernst Habel, proposed that DESY should host an international collaboration with the task of developing TESLA.
The great success of TESLA in opening a new era of SRF had a number of concomitant causes, in addition to the great enthusiasm, friendship and ingenuity of those involved. We had the recent experiences from LEP-II and CEBAF, for instance, plus cryogenic experience from DESY and Fermilab. The memorandum of understanding helped to inspire a pure scientific research style, with no secrets among the partner institutes and constructive competition to produce the best technology possible. Once the cavity frequency (1.3 GHz) and the number of cells per cavity (nine) had been agreed, we designed the TESLA Test Facility. This central infrastructure at DESY was to treat the active/internal surface of cavities, control and verify each step of the material and cavity production, and finally test the cavities and ancillaries in all conditions, naked and fully dressed, with and without beam. In contrast to the construction of LEP-II and CEBAF, the fabrication of the cavities themselves was handed over to industry. This turned out to be a crucial decision, leading researchers into collaboration with competing firms and taking advantage of their expertise and ingenuity. The test with beam brought about a prototype of the TESLA linac that, with the addition of some undulators, was renamed FLASH in 2003 – the harbinger of the European XFEL.
In 1996, we had the first eight-cavity cryomodule in operation with beam and a stable production of cavities performing a few times better than envisaged. The challenging objective of TESLA’s mission was now very close in terms of both accelerating gradient and cost. The factor 20 improvement required to compete for the linear-collider prize was almost there. By 2000, a novel chemical process called electropolishing, developed by Kenji Saito of KEK, and a final cavity baking at moderate temperature introduced at CEA-Saclay by Bernard Visentin, took TESLA over the finish line. The success of TESLA technology was not just due to the cavity performance, but also the parallel development of power couplers, frequency tuners and other ancillaries.
The ability to accelerate very-high-power beams of protons to produce a huge flux of neutrons had major implications for neutron spallation sources like SNS in the US and the ESS under construction in Sweden, for nuclear-waste transmutation, and for accelerators for heavy ions. It took some time, but most new accelerators are in some way based on TESLA technology.
Continuing application
In 2004 the International Technology Recommendation Panel gave momentum to the newly named International Linear Collider (ILC). But it was clear that the eventual construction would not begin for at least a decade, in any case after a better definition of the physics case expected from the LHC. In the meantime, the European X-ray Free Electron Laser (XFEL), which began construction in Hamburg in 2009 and was completed this year (CERN Courier July/August 2017 p25), was the best opportunity for the TESLA collaboration to continue with the development of SRF technology. The realisation with industry, on budget and on time, of the advanced-SRF European XFEL has possibly been the most important recent milestone toward new frontiers for high-energy physics. Nevertheless, its 800 nine-cell cavities represent only 5% of the total number required by the ILC.
While SRF has made fantastic progress towards a linear collider and achieved a degree of maturity with the European XFEL, future circular colliders present additional R&D challenges that are both similar and complementary to the quest for very large accelerating gradients. In the case of a future electron–positron collider, for example, the total power to be transferred to the beams is 100 MW, delivered continuously. This challenges present concepts of high-power couplers, requires new ideas to minimise dynamic cryogenic losses, and has triggered R&D on new materials and fabrication techniques.
Concluding this historical summary, SRF has now reached a high level of technological development, handled by advanced industry, similar to that reached for magnets 15 years ago. As in the case of superconducting magnets, physics – and high-energy physics in particular – has been the most significant driving force. As with accelerator magnets such as those in the Tevatron, HERA and the LHC, projects such as LEP-II, CEBAF and TESLA/ILC have played a crucial role in transforming, through technology transfer and industrialisation, an exotic phenomenon into a promising and useful technology. So far, the existing technology is sufficient for today’s applications. But basic research always seeks the next paradigm shift, and R&D taking place in laboratories such as CERN will allow us to go beyond present limitations.
Three state-of-the-art SRF projects for the High Luminosity LHC and beyond
• Exotic cavity geometries and ancillaries to perform specific gymnastics on the beam to significantly improve the collider luminosity. For example, “crab cavities” (pictured right) are under development at CERN for the high-luminosity LHC with the support of a highly expert collaboration. Starting from existing advanced SRF cavities, the group developed two complementary cavity packages that will tilt the two LHC beams just before they collide, maximising their overlap and thus substantially increasing the collision rate. After the collision the beams are returned to their original orbits, and the challenge is to do all of this without perturbing the beam. So far, two (out of a total of 16) superconducting crab cavities have been manufactured at CERN and RF tests at 2 K have been performed in a superfluid helium bath. The first cavity tests earlier this year demonstrated a maximum transverse kick voltage exceeding 5 MV, corresponding to extremely high electric and magnetic fields on the cavity surfaces. By the end of 2017, the two crab cavities will have been inserted into a specially designed cryomodule that will be installed in the Super Proton Synchrotron to undergo validation tests with proton beams.
• Doping the very thin layer on the cavity inner surface that sustains the electromagnetic accelerating field with a small quantity of a gas such as nitrogen, to reduce the power dissipation at cryogenic temperatures. This R&D project, led by Anna Grassellino at Fermilab, is giving very promising results and is being applied experimentally to the LCLS-II X-ray free-electron laser under construction at SLAC. Once the technology is stabilised, the benefit in terms of investment and operating costs should be very important for all large accelerators requiring a continuous beam, such as new circular colliders, continuous-wave free-electron lasers approved or under construction, and accelerator-driven systems for new nuclear-power technology.
• Niobium-tin (Nb3Sn) coating of SRF cavities. This technology has been pursued in a few laboratories for some time, with moderate success. But recent results from Cornell and Fermilab on real single-cell elliptical cavities are close to those obtained with pure niobium, and this could be the starting point for possible application of Nb3Sn coatings in large accelerators. The coating technique, once properly developed, could have significant advantages, mainly because of the higher critical temperature and critical magnetic field of Nb3Sn with respect to those of pure Nb.
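The physics behind both the doping work and the interest in Nb3Sn can be sketched with the standard weak-coupling BCS scaling of the temperature-dependent part of the RF surface resistance. The formula and the material parameters below are textbook assumptions for illustration, not data from the laboratories mentioned above.

```python
import math

# Temperature-dependent ("BCS") part of the RF surface resistance, in
# arbitrary units:
#   R_BCS ~ (f**2 / T) * exp(-Delta / (k_B * T)),  with Delta ~ 1.76 * k_B * Tc.
# Critical temperatures: Nb ~ 9.2 K, Nb3Sn ~ 18 K (assumed textbook values).

def r_bcs_arbitrary_units(f_ghz, t_k, tc_k):
    delta_over_kb = 1.76 * tc_k  # superconducting gap expressed in kelvin
    return (f_ghz ** 2 / t_k) * math.exp(-delta_over_kb / t_k)

nb    = r_bcs_arbitrary_units(1.3, 4.2, 9.2)   # niobium cavity at 4.2 K
nb3sn = r_bcs_arbitrary_units(1.3, 4.2, 18.0)  # Nb3Sn coating, same conditions
print(f"Nb3Sn/Nb surface-resistance ratio at 4.2 K: ~{nb3sn / nb:.1e}")
```

Because the loss falls exponentially with the ratio of gap to temperature, a roughly doubled critical temperature cuts the BCS losses by well over an order of magnitude at the same operating point, which is the main attraction of Nb3Sn coatings.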
Heike Kamerlingh Onnes won his Nobel prize back in 1913 two years after the discovery of superconductivity; Georg Bednorz and Alexander Müller won theirs in 1987, just a year after discovering high-temperature superconductors. Putting these major discoveries into use, however, has been a lengthy affair, and it is only in the past 30 years or so that demand has emerged. Today, superconductors represent an annual market of around $1.5 billion, with a high growth rate, yet a plethora of opportunities remains untapped.
Developing new superconducting materials is essential for a possible successor to the LHC currently being explored by the Future Circular Collider (FCC) study, which is driving a considerable effort to improve the performance and feasibility of large-scale magnet production. Beyond fundamental research, superconducting materials are the natural choice for any application where strong magnetic fields are needed. They are used in applications as diverse as magnetic resonance imaging (MRI), the magnetic separation of minerals in the mining industry and efficient power transmission across long distances (currently being explored by the LIPA project in the US and AmpaCity in Germany).
The promise for future technologies is even greater, and overcoming our limited understanding of the fundamental principles of superconductivity and enabling large-quantity production of high-quality conductors at affordable prices will open new business opportunities. To help bring this future closer, CERN has initiated the European Advanced Superconductivity Innovation and Training project (EASITrain) to prepare the next generation of researchers, develop innovative materials and improve large-scale cryogenics (easitrain.web.cern.ch). From January next year, 15 early-stage researchers will work on the project for three years, with the CERN-coordinated FCC study providing the necessary research infrastructure.
Global network
EASITrain establishes a global network of research institutes and industrial partners, transferring the latest knowledge while also equipping participants with business skills. The network will join forces with other EU projects such as ARIES, EUROTAPES (superconductors), INNWIND (a 10–20 MW wind turbine), EcoSWING (superconducting wind generator), S-PULSE (superconducting electronics) and FuSuMaTech (a working group approved in June devoted to the high-impact potential of R&D for the HL-LHC and FCC), and aims to profit from the well-established Test Infrastructure and Accelerator Research Area Preparatory Phase (TIARA) platform. EASITrain also links with the Marie Curie training networks STREAM and RADSAGA, both hosted by CERN.
Operating within the EU’s H2020 framework, one of EASITrain’s targets is energy sustainability. Performance and efficiency increases in the production and operation of superconductors could lead to 10–20 MW wind turbines, for example, while new efficient cryogenics could reduce the carbon footprint of industries, gas production and transport. EASITrain will also explore the use of novel superconductors, including high-temperature superconductors, in advanced materials for power-grid and medical applications, and bring together technical experts, industrial representatives and specialists in business and marketing to identify new superconductor applications. Following an extensive study, three specific application areas have been identified: uninterruptible power supplies; sorting machines for the fruit industry; and large loudspeaker systems. These will be further explored during a three-day “superconductivity hackathon” satellite event at EUCAS17, organised jointly with CERN’s KT group, IdeaSquare, WU Vienna and the Fraunhofer Institute.
Together with the impact that superconductors have had on fundamental research, these examples show the unexpected transformative potential of these still mysterious materials and emphasise the importance of preparing the next generation for the challenges ahead.
Hackathon application destinations
Uninterruptible power supply (UPS). UPS systems are energy-storage technologies that can take on and deliver power when necessary. Cloud-based applications are leading to soaring data volumes and an increasing need for secure storage, driving growth among large data centres and a shift towards more efficient UPS solutions that are expected to carve a slice of an almost $1 billion and growing market. Current versions are based on batteries with a maximum efficiency of 90%, but superconductor-based flywheel systems would ensure a continuous and longer-lived power supply, minimising data loss and maximising server stability.
Sorting machines for the fruit industry. Tonnes of fruit have to be disposed of worldwide because current technologies based on spectroscopy are not able to determine the maturity level of fruit sufficiently accurately, with techniques also offering limited information about small-sized fruit. Superconductors would enable NMR-based scanning systems that allow producers to accurately and non-destructively determine valuable properties such as ripeness, absence of seeds and, crucially, the maturity of fruit. In 2016, sorting-machine manufacturers made profits of $360 million selling products analysing apples, pears and citrus fruit, and the market has experienced a growth of about 20% per year.
Large loudspeaker systems. The sound quality of powerful loudspeakers, particularly PA systems for music festivals and stadiums, could enter new dimensions by using superconductors. Higher electrical resistance leads to poorer sound quality, since speakers need to modify the strength of a magnetic field rapidly to adapt to different frequency ranges. Superconductivity also allows smaller magnets to be used, making them more compact and transportable. A major concern among European manufacturers has been the search for the next big step in loudspeaker evolution, to defend against competition from Asia, and the size and quality of large speakers is now a major driver of the $500 million industry.
Superconductivity is a mischievous phenomenon. Countless superconducting materials were discovered following Onnes’ 1911 breakthrough, but none with the right engineering properties. Even today, more than a century later, the basic underlying superconducting material from which magnet coils are made is a bespoke product that has to be developed for specific applications. This presents both a challenge and an opportunity for consumers and producers of superconducting materials.
According to trade statistics from 2013, the global market for superconducting products is dominated by the demands of magnetic resonance imaging (MRI) to the tune of approximately €3.5 bn per year, all of which is based on low-temperature superconductors such as niobium-titanium. Large laboratory facilities make up just under €1 bn of global demand, and there is a hint of a demand for high-temperature superconductors at around €0.3 bn.
Understanding the relationship between industry and big science, in particular large particle accelerators, is vital for such projects to succeed. When the first superconducting accelerator – the Tevatron proton–antiproton collider at Fermilab in the US, employing 774 dipole magnets to bend the beams and 216 quadrupoles to focus them – was constructed in the early 1980s, it is said to have consumed somewhere between 80–90% of all the niobium-titanium superconductor ever made. CERN’s Large Hadron Collider (LHC), by far the largest superconducting device ever built, also had a significant impact on industry: its construction in the early 2000s doubled the world output of niobium-titanium for a period of five to six years. The learning curve of high-field superconducting magnet production has been one of the core drivers of progress in high-energy physics (HEP) for the past few decades, and future collider projects are going to test the HEP–industry model to its limits.
The first manufacturers
About a month after the Bell Laboratories paper on high-field superconductivity, published at the end of January 1961 and describing the properties of niobium-tin, it was realised that the experimental conductor – despite being a very small coil consisting of merely a few centimetres of wire – could, with a lot of imagination, be described as an engineering material. The discovery catalysed research into other superconducting metallic alloys and compounds. Just four years later, in 1965, Avco-Everett, in co-operation with 14 other companies, built a 10 foot, 4 T superconducting magnet using a niobium-zirconium conductor embedded in a copper strip.
By the end of 1966, an improved material consisting of niobium-titanium was offered at $9 per foot bare and $13 when insulated. That same year, RCA also announced with great fanfare its entry into commercial high-field superconducting magnet manufacture using the newly developed niobium-tin “Vapodep” ribbon at $4.40 per metre. General Electric was not far behind, offering unvarnished “22CY030” tape at $2.90 per foot in quantities up to 10,000 feet. Kawecki Chemical Company, now Kawecki-Berylco, advertised “superconductive columbium-tin tape in an economical, usable form” in varied widths and minimum unit lengths of 200 m, while in Europe the former French firm CSF marketed the Kawecki product. In the US, Airco claimed that its “Kryoconductor” pioneered the development of multi-strand fine-filament superconductors for use primarily in low- or medium-field superconducting magnets. Intermagnetics General (IGC) and Supercon were the two other companies with resources adequate to fulfil reasonably sized orders, the latter in particular providing 47,800 kg of copper-clad niobium-titanium conductor for the Argonne National Laboratory’s 12 foot-diameter hydrogen bubble chamber. The industrialisation of superconductor production was in full swing.
Niobium-tin in tape form was the first true engineering superconducting material, and was extensively used by the research community to build and experiment with superconducting magnets. With adequate funds, it was even possible to purchase a magnet built to one’s specifications. One interesting application, which did not see the light of day until many years later, was the use of superconducting tape to exclude magnetic fields from those regions in a beamline through which particle beams had to pass undeviated. As a footnote to this exciting period, in 1962 Martin Wood and his wife founded Oxford Instruments, and four years later delivered the first nuclear magnetic resonance spectroscopy system. In November last year, the firm sold its superconducting wire business to Bruker Energy and Supercon Technologies, a subsidiary of Bruker Corporation, for $17.5 m.
Beginning of a new industry
One might trace the beginning of the superconducting-magnet revolution to a five-week-long “summer study” at Brookhaven National Laboratory in 1968. Bringing together the who’s who of the superconductivity world resulted not only in a burst of understanding of the many failures experienced in prior years by magnet builders, but also in a deeper appreciation of the arcana of superconducting materials. Researchers at Rutherford Laboratory in the UK, in a series of seminal papers, explained the underlying properties and proposed a collaboration with the laboratories at Karlsruhe and Saclay to develop superconducting accelerator magnets. The aim of GESSS (Group for European Superconducting Synchrotron Studies) was to make the Super Proton Synchrotron (SPS) at CERN a superconducting machine, and this project was large enough to attract the interest of industry – in particular IMI in England. Although GESSS achieved many advances in filamentary conductors and magnet design, the SPS went ahead as a conventional warm-magnet machine. IMI stopped all wire production, but in the US the number of small wire entrepreneurs grew. Niobium-tin tape products gradually disappeared from the market as this superconductor was deemed unsuitable for magnet applications in general, and for accelerator magnets in particular.
In 1972 the 400 GeV synchrotron at Fermilab, constructed with standard copper-based magnets, became operational, and almost immediately there were plans for an upgrade – this time with superconducting magnets. This project changed the industrial scale, requiring a major effort from manufacturers. To work around the proprietary alloys and processing techniques developed by strand manufacturers, Fermilab settled on an Nb46.5Ti alloy, which was an arithmetic average of existing commercial alloys. This enabled the lab to save around one year in its project schedule.
At the same time, the Stanford Linear Accelerator Center was building a large superconducting solenoid for a meson detector, while CERN was undertaking the Big European Bubble Chamber (BEBC) and the Omega Project. This gave industry a reliable view of the future. Numerous large magnets were planned by the various research arms of governments and diverse industry. For example, under the leadership of the Oak Ridge National Laboratory a consortium of six firms constructed a large-scale model of a tokamak reactor magnet assembly using six differently designed coils, each with different superconducting materials: five with niobium-titanium and one with niobium-tin. At the Lawrence Livermore National Laboratory work was in progress to develop a tokamak-like fusion device whose coils were again made from niobium-titanium conductor. The US Navy had major plans for electric ship drives, while the Department of Defense was funding the exploration of isotope separation by means of cyclotron resonance, which required superconducting solenoids of substantial size.
It appeared that there would be no dearth of succulent orders from the HEP community, with the result that even more companies around the world ventured into the manufacture of superconductors. When the Tevatron was commissioned in 1984, two manufacturers were involved: Intermagnetics General Corporation (IGC) and Magnetic Corporation of America (MCA), in an 80/20 per cent proportion. As is common in particle physics, no sooner had the machine become operational than the need for an upgrade became obvious. However, the planning for such a new, larger and more complex device took considerable time, during which the superconductor manufacturers effectively made no sales and hence no profits. This led to the disappearance of the less well capitalised companies, unless they had other products to market, as did Supercon and Oxford Instruments. The latter expanded into MRI, and its first prototype MRI magnet, built in 1979, became the foundation of a current annual world production totalling around 3500 units. MRI production ramped up as Tevatron demand declined, and the correspondingly large demand for niobium-titanium conductor has remained stable ever since.
The demise of ISABELLE, a 400 GeV proton–proton collider at Brookhaven, in 1983, and then the Superconducting Super Collider a decade later, resulted in a further retrenchment of the superconductor industry, with a number of pioneering establishments either disappearing or being bought out. The industrial involvement in the construction of the superconducting machines HERA at DESY and RHIC at BNL somewhat alleviated the situation. The discovery of high-temperature superconductivity (HTS) in 1986 also helped, although it is not clear that great profits, if any, have been made so far in the HTS arena.
A cloudy crystal ball
The superconducting wire business in the Western world has undergone significant consolidation in recent years. Niobium-titanium wire is now a commodity with a very low profit margin because it has become a standard, off-the-shelf product used primarily for MRI applications. There are now more companies than the market can support for this conductor, but for HEP and other research applications the market is shifting to its higher-performing cousin: niobium-tin.
Following the completion of the LHC in the early 2000s, the US Department of Energy looked toward the next generation of accelerator magnets. LHC technology had pushed the performance of niobium-titanium to its limits, so investment was directed towards niobium-tin. This conductor was also being developed for the fusion community’s ITER project (“ITER’s massive magnets enter production”), but HEP required a higher performance for use in accelerators. Over a period of a few years, the critical-current performance of niobium-tin almost doubled, and the conductor is now a technological basis of the High Luminosity LHC (see “Powering the field forward”). Although this major upgrade is proceeding as planned, as always all eyes are on the next step – perhaps an even larger machine based on even more innovative magnet technology. For example, a 100 TeV proton collider under consideration by the Future Circular Collider study, co-ordinated by CERN, would require global-scale procurement of niobium-tin strands and cable similar in scale to the demands of ITER.
Beyond that, the superconductor industry’s view is into a cloudy crystal ball. The current political and economic environment does not give grounds for hope, at least not in the Western world, that a major superconducting project will be built in the near future. More generally, other than MRI, commercial applications of superconductivity have not caught on, owing to customer perceptions of additional complexity and risk set against marginal increases in performance. There are also the consequences of the cost challenges that ITER has faced, which can attract the undeserved opinion that scientists cannot manage large projects.
One facet of the superconductor industry that seems to be thriving is small-venture establishments, sometimes university departments, which carry out superconductor R&D quasi-independently of major industrial concerns. These establishments maintain themselves under various government-sponsored support, such as the SBIR and STTR programmes in the US, and stepwise and without much fanfare they are responsible for the improvement of current superconductors, be they low- or high-temperature. As long as such arrangements are maintained, healthy progress in the science is assured, and these results feed directly to industry. And as far as HEP is concerned, as long as there are beams to guide, bend and focus, we will continue to need manufacturers to make the wires and fabricate the superconducting magnet coils.
The production of the niobium-titanium conductor for the LHC’s 1800 or so superconducting magnets was of the highest standard, involving hundreds of individual superconducting strands assembled into a cable that had to be shaped to accommodate the geometry of the magnet coil. Three firms manufactured the 1232 main dipole magnets (each 15 m long and weighing 35 tonnes): the French consortium Alstom MSA–Jeumont Industries; Ansaldo Superconduttori in Italy; and Babcock Noell Nuclear in Germany. For the 400 main quadrupoles, full-length prototyping was carried out in the laboratory (CEA–CERN) and the tender assigned to Accel in Germany. Once LHC construction was completed, the superconductor market dropped back to meet the base demands of MRI. There has been a similar experience with the niobium-tin conductor used for the ITER fusion experiment under construction in France: more than six companies worldwide made the strands before the procurement was over, after which demand dropped back to pre-project levels.
Transforming brittle conductors into high-performance coils at CERN
The manufacture of superconductors for HEP applications is in many ways a standard industrial flow process with specialised steps. The superconductor in round rod form is inserted into copper tubes, which have a round inside and a hexagonal outside perimeter (the image inset shows such a “billet” for the former HERA electron–proton collider at DESY). A number of these units are then stacked into a copper can that is vacuum sealed and extruded in a hydraulic press, and this extrusion is processed on a draw bench where it is progressively reduced in diameter.
The greatly reduced product is then drawn through a series of dies until the desired wire diameter is reached, and a number of these wires are formed into cables ready for use. The overall process is highly complex and often involves several countries and dozens of specialised industries before the reel of wire or cable arrives at the magnet factory. Each step must ultimately be accounted for, and any sudden change to a customer’s source of funds can land the manufacturer with unsaleable stock. Superconductors are specified precisely for their intended end use, and only in rare instances is a stocked product suitable for a different application.
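The scale of the progressive reduction can be sketched with a rough calculation. The starting diameter and the per-pass area reduction below are illustrative assumptions, not figures from any specific production line:

```python
import math

# Illustrative sketch: estimate how many die passes are needed to draw
# an extruded rod down to final wire diameter, assuming each pass
# shrinks the cross-sectional area by a fixed fraction.

def draw_passes(d_start_mm: float, d_final_mm: float,
                area_reduction: float = 0.20) -> int:
    """Number of die passes to go from d_start to d_final.

    After n passes the area is A0 * (1 - area_reduction)**n, so we
    solve (d_final/d_start)**2 = (1 - area_reduction)**n for n.
    """
    area_ratio = (d_final_mm / d_start_mm) ** 2   # A_final / A_start
    return math.ceil(math.log(area_ratio) / math.log(1.0 - area_reduction))

# e.g. a 50 mm extrusion drawn down to a 1 mm wire
print(draw_passes(50.0, 1.0))   # → 36
```

Even with a healthy 20% area reduction per pass, dozens of passes are needed, which is why the flow involves so many specialised steps.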
Superconductivity is perhaps the most remarkable manifestation of quantum physics on the macroscopic scale. Discovered in 1911 by Kamerlingh Onnes, it preoccupied the most prominent physicists of the 20th century and remains at the forefront of condensed-matter physics today. The interest is partly driven by potential applications – superconductivity at room temperature would surely revolutionise technology – but to a large extent it reflects an intellectual fascination. Many ideas that emerged from the study of superconductivity, such as the generation of a photon mass in a superconductor, were later extended to other fields of physics, famously serving as paradigms to explain the generation of a Higgs mass of the electroweak W and Z gauge bosons in particle physics.
Put simply, superconductivity is the ability of a system of fermions to carry electric current without dissipation. Normally, fermions such as electrons scatter off any obstacle, including each other. But if they find a way to form bound pairs, these pairs may condense into a macroscopic state with a non-dissipative current. Quantum mechanics is the only way to explain this phenomenon, but it took 46 years after the discovery of superconductivity for Bardeen, Cooper and Schrieffer (BCS) to develop a verifiable theory. The trio, who won the 1972 Nobel Prize in Physics for their efforts, showed that the exchange of phonons leads to an effective attraction between pairs of electrons of opposite momentum if the electron energy is less than the characteristic phonon energy (figure 1). Although electrons still repel each other, the effective Coulomb interaction becomes smaller at such frequencies (in a manner opposite to asymptotic freedom in high-energy physics). If the reduction is strong enough, the phonon-induced electron–electron attraction wins over the Coulomb repulsion and the total interaction becomes attractive. There is no threshold for the magnitude of the attraction because low-energy fermions live at the boundary of the Fermi sea, where an arbitrarily weak attraction is enough to create bound states of fermions below some critical temperature, Tc.
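The absence of a threshold is explicit in the standard weak-coupling BCS estimate of the critical temperature:

```latex
% Weak-coupling BCS result: \omega_D is the characteristic (Debye)
% phonon frequency, N(0) the density of electron states at the Fermi
% level, and V the phonon-induced attraction.
T_c \simeq 1.13\,\frac{\hbar\omega_D}{k_B}\,
      \exp\!\left(-\frac{1}{N(0)V}\right)
```

Because the exponential is non-analytic in V, any attraction, however weak, yields a finite Tc; no perturbative expansion in V could have produced this result.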
The formation of bound states, called Cooper pairs, is one necessary ingredient for superconductivity. The other is for the pairs to condense, or more specifically to acquire a common phase corresponding to a single macroscopic wave function. Within BCS theory, pair formation and locking of the phases of the pairs occur simultaneously at the same Tc, while in more recent strong-coupling theories bound pairs exist above this temperature. The common phase of the pairs can have an arbitrary value, and the fact that the system chooses a particular one below Tc is a manifestation of spontaneous symmetry breaking. The phase coherence throughout the sample is the most important physical aspect of the superconducting state below Tc, as it can give rise to a “supercurrent” that flows without resistance. Superconductivity can also be viewed as an emergent phenomenon.
While BCS theory was a big success, it is a mean-field theory, which neglects fluctuations. To really trust that the electron–phonon mechanism was correct, it was necessary to develop theoretical tools based on Green functions and field-theory methods, and to move beyond weak coupling. The BCS electron–phonon mechanism of superconductivity has since been successfully applied to explain pairing in a large variety of materials (figure 2), from simple mercury and aluminium to the niobium-titanium and niobium-tin alloys used in the magnets for the Large Hadron Collider (LHC), in addition to the recently discovered sulphur hydrides, which become superconductors at a temperature of around 200 K under high pressure. But the discovery of high-temperature superconductors drove condensed-matter theorists to explore new explanations for the superconducting state.
Unconventional superconductors
In the early 1980s, when the record critical temperature for superconductors was of the order 20 K, the dream of a superconductor that works at liquid-nitrogen temperatures (77 K) seemed far off. In 1986, however, Bednorz and Müller made the breakthrough discovery of superconductivity in La1−xBaxCuO4 with Tc of around 40 K. Shortly after, a material with a similar copper-oxide-based structure with Tc of 92 K was discovered. These copper-based superconductors, known as cuprates, have a distinctive structure comprising weakly coupled layers made of copper and oxygen. In all the cuprates, the building blocks for superconductivity are the CuO2 planes, with the other atoms providing a charge reservoir that either supplies additional electrons to the layers or takes electrons out to leave additional hole states (figure 3).
From a theoretical perspective, the high Tc of the cuprates is only one important aspect of their behaviour. More intriguing is what mechanism binds the fermions into pairs. The vast majority of researchers working in this area think that, unlike low-temperature superconductors, phonons are not responsible. The most compelling reason is that the cuprates possess “unconventional” symmetry of the pair wave function. Namely, in all known phonon-mediated superconductors, the pair wave function has s-wave symmetry, or in other words, its angular dependence is isotropic. For the cuprates, it was proven in the early 1990s that the pair wave function changes sign under rotation by 90°, leading to an excitation spectrum that has zeros at particular points on the Fermi surface. Such symmetry is often called “d-wave”. This is the first symmetry beyond s-wave that is allowed by the antisymmetric nature of the electron wave functions when the total spin of the pair is zero. The observation of a d-wave symmetry in the cuprates was extremely surprising because, unlike s-wave pairs, d-wave Cooper pairs can potentially be broken by impurities.
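The sign change under a 90° rotation can be written down explicitly. A standard form of the d-wave gap function on a square lattice (lattice constant a, with kx and ky the momenta in the CuO2 plane) is:

```latex
% d_{x^2-y^2} pair wave function (gap) on a square lattice:
\Delta(\mathbf{k}) = \Delta_0\left[\cos(k_x a) - \cos(k_y a)\right]
```

Swapping kx and ky (a 90° rotation) flips the sign of Δ, and along the diagonals |kx| = |ky| the gap vanishes, producing the zeros in the excitation spectrum at particular points on the Fermi surface mentioned above.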
The cuprates hold the record for the highest Tc for materials with an unconventional pair wave-function symmetry: 133 K in mercury-based HgBa2Ca2Cu3O8 at ambient pressure. They were not, however, the first materials of this kind: a “heavy fermion” superconductor CeCu2Si2 discovered in 1979 by Steglich, and an organic superconductor discovered by Jerome the following year, also had an unconventional pair symmetry. After the discovery of cuprates, a set of unconventional iron-based superconductors was discovered with Tc up to 60 K in bulk systems, followed by the discovery of superconductivity with an even higher Tc in a monolayer of FeSe. But even low-Tc, unconventional materials can be interesting. For example, some experiments suggest that Cooper pairs in Sr2RuO4 have total spin-one and p-wave symmetry, leading to the intriguing possibility that they can support edge modes that are Majorana particles, which have potential applications in quantum computing.
If phonon-mediated electron–electron interactions are ineffective for the pairing in unconventional superconductors, then what binds fermions together? The only other possibility is a nominally repulsive electron–electron interaction, but for this to allow pairing, the electrons must screen their own Coulomb repulsion to make it effectively attractive in at least one pairing channel (e.g. d-wave). Interestingly, quantum mechanics actually allows such schizophrenic behaviour of electrons: a d-wave component of a screened Coulomb interaction becomes attractive in certain cases.
Cuprate conundrums
There are several families of high-temperature cuprate superconductors. Some, like LaSrCuO, YBaCuO and BSCCO, show superconductivity upon hole doping; others, like NdCeCuO, show superconductivity upon electron doping. The phase diagram of a representative cuprate contains regions of superconductivity, regions of magnetic order, and a region (called the pseudogap) where Tc decreases but the system’s behaviour above Tc is qualitatively different from that in an ordinary metal (figure 4). At zero doping, standard solid-state physics says that the system should be a metal, but experiments show that it is an insulator. This is taken as an indication that the effective interaction between electrons is large, and such an interaction-driven insulator is called a Mott insulator. Upon doping, some states become empty and the system eventually recovers metallic behaviour. A Mott insulator at zero doping has another interesting property: spins of localised electrons order antiferromagnetically. Upon doping, the long-range antiferromagnetic order quickly disappears, while short-range magnetic correlations survive.
Since the superconducting region of the phase diagram is sandwiched between the Mott and metallic regimes, there are two ways to think about HTS: either it emerges upon doping of a Mott insulator (starting from zero doping), or it emerges from a metal with enhanced antiferromagnetic correlations (starting from larger dopings). Although it was known even before the discovery of high-temperature superconductors that an antiferromagnetically mediated interaction is attractive in the d-wave channel, it took time to develop the necessary computational approaches; today the computed value of Tc lies in a range consistent with experiments. At smaller dopings, a more reliable approach is to start from the Mott insulator. This approach also gives d-wave superconductivity, with the value of Tc most likely determined by phase fluctuations and decreasing with decreasing doping. Because both approaches give d-wave superconductivity with comparable values of Tc, the majority of researchers believe that the mechanism of superconductivity in the cuprates is understood, at least qualitatively.
A more subtle issue is how to explain the so-called pseudogap phase in hole-doped cuprates (figure 4). Here, the system is neither magnetic nor superconducting, yet it displays properties that clearly distinguish it from a normal, even strongly correlated metal. One natural idea, pioneered by Philip Anderson, is that the pseudogap phase is a precursor to a Mott insulator that contains a soup of local singlet pairs of fermions: superconductivity arises if the phases of all singlet pairs are ordered, whereas antiferromagnetism arises if the system develops a mixture of spin singlets and spin triplets. Several theoretical approaches, most notably dynamical mean-field theory, have been developed to quantitatively describe the precursors to a Mott insulator.
The understanding of the pseudogap as the phase where electron states progressively get localised, leading to a reduction of Tc, is accepted by many in the HTS community. Yet, new experimental results show that the pseudogap phase in hole-doped cuprates may actually be a state with a broken symmetry, or at least becomes unstable to such a state at a lower temperature. Evidence has been reported for the breaking of time-reversal, inversion and lattice rotational symmetry. Improved instrumentation in recent years has also led to the discovery of charge-density-wave and pair-density-wave order in the phase diagram, and perhaps even loop-current order. Many of us believe that the additional orders observed in the pseudogap phase are relevant to the understanding of the full phase diagram, but that they do not change the two key pillars of our understanding: superconductivity is mediated by short-range magnetic excitations, and the reduction of Tc at smaller dopings is due to the existence of a Mott insulator near zero doping.
Participants at a special session of the 1987 March meeting of the American Physical Society in New York devoted to the newly discovered high-temperature superconductors. The hastily organised session, which later became known as the “Woodstock of Physics”, lasted from the early evening to 3.30 a.m. the following morning, with 51 presenters and more than 1800 physicists in attendance. Bednorz and Müller received the Nobel prize in December 1987, one year after their discovery – the fastest award in the history of the Nobel prize.
Why cuprates still matter
The cuprates have motivated incredible advances in instrumentation and experimental techniques, with 1000-fold increases in accuracy in many cases. On the theoretical side, they have also led to the development of new methods to deal with strong interactions – dynamical mean-field theory and various metallic quantum-critical theories are examples. These experimental and theoretical methods have found their way into the study of other materials and are adding new chapters to standard solid-state physics books. Some of them may even one day find their way into other fields, such as strongly interacting quark–gluon matter. We can now theoretically understand a host of the phenomena in high-temperature superconductors, but there are still some important points to clarify, such as the mysterious linear temperature dependence of the resistivity.
The community is coming together to solve these remaining issues. Yet, the cynical view of the cuprate problem is that it lacks an obvious small parameter, and hence a universally accepted theory – the analogue of BCS – will never be developed. While it is true that serendipity will always have its place in science, we believe that the key criterion for “the theory” of the cuprates should not be a perfect quantitative agreement with experiments (even though this is still a desirable objective). Rather, a theory of cuprates should be judged by its ability to explain both superconductivity and a host of concomitant phenomena, such as the pseudogap, and its ability to provide design principles for new superconductors. Indeed, this is precisely the approach that allowed the recent discovery of the highest-Tc superconductor to date: hydrogen sulphide. At present, powerful algorithms and supercomputers allow us to predict quite accurately the properties of materials before they are synthesised. For strongly correlated materials such as the cuprates, these calculations profit from physical insight and vice versa.
From a broader perspective, studies of HTS have led to renewed thinking about perturbative and non-perturbative approaches to physics. Physicists like to understand particles or waves and how they interact with each other, like we do in classical mechanics, and perturbation theory is the tool that takes us there – QED is a great example that works because the fine-structure constant is small. In a single-band solid where interactions are not too strong, it is natural to think of superconductivity as being mediated by, for example, the exchange of antiferromagnetic spin fluctuations. When interactions are so strong that the wave functions become extremely entangled, it still makes sense to look at the internal dynamics of a Cooper pair to check whether one can detect traces of spin, charge or even orbital fluctuations. At the same time, perturbation theory in the usual sense does not work. Instead, we have to rely more heavily on large-scale computer calculations, variational approaches and effective theories. The question of what “binds” fermions into a Cooper pair still makes sense in this new paradigm, but the answer is often more nuanced than in a weak coupling limit.
Many challenges are left in the HTS field, but progress is rapid and there is much more consensus now than there was even a few years ago. Finally, after 30 years, it seems we are closing in on a theoretical understanding of this both useful and fascinating macroscopic quantum state.
A few years ago, triggered by conceptual studies for a post-LHC collider, CERN launched a collaboration to explore the use of high-temperature superconductors (HTS) for accelerator magnets. In 2013 CERN partnered with a European particle-accelerator R&D project called EuCARD-2 to develop an HTS insert for a 20 T magnet. The project came to an end in April this year, with CERN having built an HTS demonstration magnet based on an “aligned-block” concept, for which coil-winding and quench-detection technology had to be developed. Called Feather2, the magnet has a field of 3 T based on low-performance REBCO (rare-earth barium-copper-oxide) tape. The next magnet, based on high-performance REBCO tape, will approach a stand-alone field of 8 T. Then, once it is placed inside the aperture of the 13 T “Fresca2” magnet, the field should go beyond 20 T.
Now the collaborative European spirit of EuCARD-2 lives on in the ARIES project (Accelerator Research and Innovation for European Science and Society), which kicked off at CERN in May. ARIES brings together 41 participants from 18 European countries, including seven industrial partners, to help bring down the cost of the conductor, and is co-funded via a contribution of €10 million from the European Commission.
In addition, CERN is developing HTS-based transfer lines to feed the new superconducting magnets of the High Luminosity LHC based on magnesium diboride (MgB2), which can be operated in helium gas at temperatures of up to around 30 K and must be flexible enough to allow the power converters to be installed hundreds of metres away from the accelerator. The relatively low cost of MgB2 led CERN’s Amalia Ballarino to enter a collaboration with industry, which resulted in a method to produce MgB2 in wire form for the first time. The team has since achieved record currents that reached 20 kA at a temperature above 20 K, thereby proving that MgB2 technology is a viable solution for long-distance power transmission. The new superconducting lines could also find applications in the Future Circular Collider initiative.
This month more than 1000 scientists and engineers are gathering in Geneva to attend the biennial European Conference on Applied Superconductivity (EUCAS 2017). This international event covers all aspects of the field, from electronics and large-scale devices to basic superconducting materials and cables. The organisation has been assigned to CERN, home to the largest superconducting system in operation (the Large Hadron Collider, LHC) and where next-generation superconductors are being developed for the high-luminosity LHC upgrade (HL-LHC) and Future Circular Collider (FCC) projects.
When Heike Kamerlingh Onnes discovered superconductivity in 1911, Ernest Rutherford was just publishing his famous paper unveiling the structure of the atom. But superconductivity and nuclear physics, both with their own harvests of Nobel prizes, were unconnected for many years. Accelerators have brought the fields together, as this issue of CERN Courier demonstrates.
The constant evolution of high-voltage radio-frequency (RF) cavities and powerful magnets to accelerate and guide particles around accelerators drove a transformation of our understanding of fundamental physics. But by the 1970s the limits of RF power and magnetic-field strength had nearly been reached, and gigantism seemed the only option for reaching higher energies. In the meantime, a few practical superconductors had become available: niobium-zirconium alloy, niobium-tin compound (Nb3Sn) and niobium-titanium alloy (Nb-Ti). Reliability in processing and uniformity of production made Nb-Ti the superconductor of choice for all projects.
The first large application of Nb-Ti was for high-energy physics, driving the bubble-chamber solenoids for Argonne National Laboratory in the US (see “Unique magnets”). But it was accelerators, even more than detectors or fusion applications, that drove the development of technical superconductors. Following the birth of the modern Nb-Ti superconductor in 1968, rapid R&D took place for large high-energy physics projects such as the proposed but never born Superconducting SPS at CERN, the ill-fated Isabelle/CBA collider at BNL and the Tevatron at Fermilab (see “Powering the field forwards”). By the end of the 1980s, superconductors had to be produced on industrial scales, as did the niobium RF accelerating cavities (see “Souped up RF”) for LEPII and other projects. MRI, based on 0.5–3 T superconducting magnets, also took off at that time, today dominating the market with around 3000 items built per year.
The LHC is the summit of 30 years of improvement in Nb-Ti-based conductors. Its 8.3 T dipole fields are generated by 10 km-long, 1 mm-diameter wires containing 6000 well-separated Nb-Ti filaments, each 6 μm thick and protected by a thin Nb barrier, all embedded in pure copper and then coated with a film of oxidised tin-silver alloy. The LHC contains 1200 tonnes of this material, made by six companies worldwide, and five years ago it powered the LHC to produce the Higgs boson.
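A back-of-envelope check of the strand geometry quoted above shows how much of the wire is actually superconductor. The calculation uses only the figures in the text (6000 filaments of 6 μm in a 1 mm wire); it is an illustrative sanity check, not a manufacturer's specification:

```python
import math

# What fraction of the LHC wire cross-section is Nb-Ti, given
# n filaments of diameter d_filament inside a wire of diameter d_wire?

def filament_area_fraction(n_filaments: int, d_filament_um: float,
                           d_wire_mm: float) -> float:
    a_filaments = n_filaments * math.pi * (d_filament_um * 1e-6 / 2) ** 2
    a_wire = math.pi * (d_wire_mm * 1e-3 / 2) ** 2
    return a_filaments / a_wire

frac = filament_area_fraction(6000, 6.0, 1.0)
print(f"{frac:.0%}")   # → 22%
```

Roughly a fifth of the cross-section is superconductor; the rest is the copper matrix that stabilises the wire and carries the current during a quench.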
But the story is not finished. The increased collision rate of the HL-LHC requires us to go beyond the 10 T wall and, despite its brittleness, we are now able to exploit the superior intrinsic properties of Nb3Sn to reach 11 T in a dipole and almost 12 T peak field in a quadrupole. Wire developed for the LHC upgrade is also being used for high-resolution NMR spectroscopy and advanced proton therapy, and Nb3Sn is being used in vast quantities for the ITER fusion project (see “ITER’s massive magnets enter production”). Testing the Nb3Sn technology for the HL-LHC is also critical for the next jump in energy: 100 TeV, as envisaged by the CERN-coordinated FCC study. This requires a dipole field of 16 T, pushing Nb3Sn beyond its present limits, but the superconducting industry has taken up the challenge. Training young researchers will further boost this technology – for example, via the CERN-coordinated EASITrain network on advanced superconductivity for PhD students, due to begin in October this year (see “Get on board with EASITrain”).
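The link between dipole field and beam energy follows from the standard beam-rigidity relation (the FCC bending radius below is an approximate figure used for illustration):

```latex
% Momentum of a proton bent on radius \rho by a dipole field B:
p\,[\mathrm{TeV}/c] \;\simeq\; 0.3\, B\,[\mathrm{T}]\;\rho\,[\mathrm{km}]
% LHC: 0.3 \times 8.3\ \mathrm{T} \times 2.8\ \mathrm{km} \approx 7\ \mathrm{TeV} per beam
% FCC: 0.3 \times 16\ \mathrm{T} \times 10.4\ \mathrm{km} \approx 50\ \mathrm{TeV} per beam
```

Doubling the field and roughly quadrupling the bending radius takes the collision energy from 14 TeV to about 100 TeV.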
The virtuous spiral between high-energy physics and superconductivity is never ending (see “Superconductors and particle physics entwined”), with pioneering research also taking place at CERN to test the practicalities of high-temperature superconductors (see “Taming high-temperature superconductivity”) based on yttrium or iron. This may lead us to dream about a 20–25 T dipole magnet – an immense challenge that will not only give us access to unconquered lands of particle physics but expand the use of superconductors in medicine, energy and other areas of our daily lives.
This book aims to provide a self-contained and concise treatment of the main subjects in magnetostatics, which describes the forces and fields resulting from the steady flow of electrical currents.
The first three chapters briefly present the basics, including the theory of magnetic fields from conductors in free space and from magnetic materials, as well as the general solutions to the Laplace equation and boundary value problems. Then the author moves on to discuss transverse fields in two dimensions. In particular, he covers fields produced by line currents, current sheets and current blocks, and the application of complex variable methods. He also treats transverse field magnets where the shape of the field is determined by the shape of the iron surface and the conductors are used to excite the field in the iron.
The following chapters are dedicated to other field configurations, such as axial field arrangements and periodic magnetic arrangements. The properties of permanent magnets and multiple fields produced by assemblies of them are also discussed.
Finally, the author deals with phenomena involving slow variations in current or magnetic flux. Since only a restricted class of magnetostatic problems has analytic solutions, the last chapter provides numerical techniques for calculating magnetic fields, accompanied by many examples taken from accelerator and beam physics.
Aimed at undergraduates in physics and electrical engineering, the book includes not only basic explanations but also many references for further study.
By R Peron, M Colpi, V Gorini and U Moschella (eds)
Springer
This book, a collection of expert contributions, provides an overview of the current knowledge in gravitational physics, including theoretical and experimental aspects.
After a pedagogical introduction to gravitational theories, several chapters explore gravitational phenomena in the realm of so-called weak-field conditions: the Earth (specifically, the laboratory environment) and the solar system.
The second part of the book is devoted to gravity in an astrophysical context, which is an important test-bed for general relativity. A chapter is dedicated to gravitational waves, the recent discovery of which is an impressive experimental result in this field. The importance of studying radio pulsars is also highlighted.
A section on research frontiers in gravitational physics follows. This explores the many open issues, especially related to astrophysical and cosmological problems, and the way that possible solutions impact the quest for a quantum theory of gravity and a unified theory of the forces.
The book’s origins lie in the 2009 edition of a school organised by the Italian Society of Relativity and Gravitation. As such, it is aimed at graduate students, but could also appeal to researchers working in the field.