
Superconductors and particle physics entwined

Superconductivity is a mischievous phenomenon. Countless superconducting materials were discovered following Onnes’ 1911 breakthrough, but none with the right engineering properties. Even today, more than a century later, the basic underlying superconducting material from which magnet coils are made is a bespoke product that has to be developed for specific applications. This presents both a challenge and an opportunity for consumers and producers of superconducting materials.

According to trade statistics from 2013, the global market for superconducting products is dominated by the demands of magnetic resonance imaging (MRI) to the tune of approximately €3.5 bn per year, all of which is based on low-temperature superconductors such as niobium-titanium. Large laboratory facilities make up just under €1 bn of global demand, and there is a hint of a demand for high-temperature superconductors at around €0.3 bn.

Understanding the relationship between industry and big science, in particular large particle accelerators, is vital for such projects to succeed. When the first superconducting accelerator – the Tevatron proton–antiproton collider at Fermilab in the US, employing 774 dipole magnets to bend the beams and 216 quadrupoles to focus them – was constructed in the early 1980s, it is said to have consumed somewhere between 80–90% of all the niobium-titanium superconductor ever made. CERN’s Large Hadron Collider (LHC), by far the largest superconducting device ever built, also had a significant impact on industry: its construction in the early 2000s doubled the world output of niobium-titanium for a period of five to six years. The learning curve of high-field superconducting magnet production has been one of the core drivers of progress in high-energy physics (HEP) for the past few decades, and future collider projects are going to test the HEP–industry model to its limits.

The first manufacturers

About a month after the Bell Laboratories work on high-field superconductivity in niobium-tin was published at the end of January 1961, it was realised that the experimental conductor – despite being a very small coil containing merely a few centimetres of wire – could, with a lot of imagination, be described as an engineering material. The discovery catalysed research into other superconducting metallic alloys and compounds. Just four years later, in 1965, Avco-Everett, in co-operation with 14 other companies, built a 10 foot, 4 T superconducting magnet using a niobium-zirconium conductor embedded in a copper strip.

By the end of 1966, an improved material consisting of niobium-titanium was offered at $9 per foot bare and $13 when insulated. That same year, RCA also announced with great fanfare its entry into commercial high-field superconducting magnet manufacture using the newly developed niobium-tin “Vapodep” ribbon at $4.40 per metre. General Electric was not far behind, offering unvarnished “22CY030” tape at $2.90 per foot in quantities up to 10,000 feet. Kawecki Chemical Company, now Kawecki-Berylco, advertised “superconductive columbium-tin tape in an economical, usable form” in varied widths and minimum unit lengths of 200 m, while in Europe the former French firm CSF marketed the Kawecki product. In the US, Airco claimed that its “Kryoconductor” was pioneering the development of multi-strand, fine-filament superconductors for use primarily in low- or medium-field superconducting magnets. Intermagnetics General (IGC) and Supercon were the two other companies with resources adequate to fulfil reasonably sized orders, the latter in particular providing 47,800 kg of copper-clad niobium-titanium conductor for the Argonne National Laboratory’s 12 foot-diameter hydrogen bubble chamber. The industrialisation of superconductor production was in full swing.

Niobium-tin in tape form was the first true engineering superconducting material, and was extensively used by the research community to build and experiment with superconducting magnets. With adequate funds, it was even possible to purchase a magnet built to one’s specifications. One interesting application, which did not see the light of day until many years later, was the use of superconducting tape to exclude magnetic fields from those regions in a beamline through which particle beams had to pass undeviated. As a footnote to this exciting period, in 1962 Martin Wood and his wife founded Oxford Instruments, and four years later delivered the first nuclear magnetic resonance spectroscopy system. In November last year, the firm sold its superconducting wire business to Bruker Energy and Supercon Technologies, a subsidiary of Bruker Corporation, for $17.5 m.

Beginning of a new industry

One might trace the beginning of the superconducting-magnet revolution to a five-week-long “summer study” at Brookhaven National Laboratory in 1968. Bringing together the who’s who of the superconductivity world resulted not only in a burst of understanding of the many failures experienced by magnet builders in prior years, but also in a deeper appreciation of the arcana of superconducting materials. Researchers at Rutherford Laboratory in the UK explained the underlying properties in a series of seminal papers and proposed a collaboration with the laboratories at Karlsruhe and Saclay to develop superconducting accelerator magnets. GESSS (the Group for European Superconducting Synchrotron Studies) was to make the Super Proton Synchrotron (SPS) at CERN a superconducting machine, and this project was large enough to attract the interest of industry – in particular IMI in England. Although GESSS achieved many advances in filamentary conductors and magnet design, the SPS went ahead as a conventional warm-magnet machine. IMI stopped all wire production, but in the US the number of small wire entrepreneurs grew. Niobium-tin tape products gradually disappeared from the market as the tape form was deemed unsuitable for most magnets, and especially for accelerator magnets.

In 1972 the 400 GeV synchrotron at Fermilab, constructed with standard copper-based magnets, became operational, and almost immediately there were plans for an upgrade – this time with superconducting magnets. This project changed the industrial scale, requiring a major effort from manufacturers. To work around the proprietary alloys and processing techniques developed by strand manufacturers, Fermilab settled on an Nb46.5Ti alloy, which was an arithmetic average of existing commercial alloys. This enabled the lab to save around one year in its project schedule.

At the same time, the Stanford Linear Accelerator Center was building a large superconducting solenoid for a meson detector, while CERN was undertaking the Big European Bubble Chamber (BEBC) and the Omega Project. This gave industry a reliable view of the future. Numerous large magnets were planned by the various research arms of governments and diverse industry. For example, under the leadership of the Oak Ridge National Laboratory a consortium of six firms constructed a large-scale model of a tokamak reactor magnet assembly using six differently designed coils: five wound with niobium-titanium conductor and one with niobium-tin. At the Lawrence Livermore National Laboratory work was in progress to develop a tokamak-like fusion device whose coils were again made from niobium-titanium conductor. The US Navy had major plans for electric ship drives, while the Department of Defense was funding the exploration of isotope separation by means of cyclotron resonance, which required superconducting solenoids of substantial size.

It appeared that there would be no dearth of succulent orders from the HEP community, with the result that even more companies around the world ventured into the manufacture of superconductors. When the Tevatron was commissioned in 1984, two manufacturers were involved: Intermagnetics General Corporation (IGC) and Magnetic Corporation of America (MCA), in an 80/20 per cent proportion. As is common in particle physics, no sooner had the machine become operational than the need for an upgrade became obvious. However, the planning for such a new, larger and more complex device took considerable time, during which the superconductor manufacturers effectively made no sales and hence no profits. This led to the disappearance of less well capitalised companies, unless they had other products to market, as did Supercon and Oxford Instruments. The latter expanded into MRI, and its first prototype MRI magnet built in 1979 became the foundation of a current annual world production that totals around 3500 units. MRI production ramped up as Tevatron demand declined, and the correspondingly large demand for niobium-titanium conductor has remained stable ever since.

The demise of ISABELLE, a 400 GeV proton–proton collider at Brookhaven, in 1983, and then the Superconducting Super Collider a decade later, resulted in a further retrenchment of the superconductor industry, with a number of pioneering establishments either disappearing or being bought out. The industrial involvement in the construction of the superconducting machines HERA at DESY and RHIC at BNL somewhat alleviated the situation. The discovery of high-temperature superconductivity (HTS) in 1986 also helped, although it is not clear that great profits, if any, have been made so far in the HTS arena.

A cloudy crystal ball

The superconducting wire business in the Western world has undergone significant consolidation in recent years. Niobium-titanium wire is now a commodity with a very low profit margin because it has become a standard, off-the-shelf product used primarily for MRI applications. There are now more companies than the market can support for this conductor, but for HEP and other research applications the market is shifting to its higher-performing cousin: niobium-tin.

Following the completion of the LHC in the early 2000s, the US Department of Energy looked toward the next generation of accelerator magnets. LHC technology had pushed the performance of niobium-titanium to its limits, so investment was directed towards niobium-tin. This conductor was also being developed for the ITER fusion project (see “ITER’s massive magnets enter production”), but HEP required higher performance for use in accelerators. Over a period of a few years, the critical-current performance of niobium-tin almost doubled, and the conductor is now the technological basis of the High Luminosity LHC (see “Powering the field forward”). Although this major upgrade is proceeding as planned, as always all eyes are on the next step – perhaps an even larger machine based on even more innovative magnet technology. For example, a 100 TeV proton collider under consideration by the CERN-coordinated Future Circular Collider study would require global-scale procurement of niobium-tin strands and cable, similar in scale to the demands of ITER.

Beyond that, the superconductor industry is looking into a cloudy crystal ball. The current political and economic environment gives little ground for hope, at least in the Western world, that a major superconducting project will be built in the near future. More generally, other than MRI, commercial applications of superconductivity have not caught on, owing to customer perceptions of added complexity and risk set against marginal gains in performance. There are also the consequences of the cost challenges that ITER has faced, which can foster the undeserved opinion that scientists cannot manage large projects.

One facet of the superconductor industry that does seem to be thriving is the small-venture establishments, sometimes university departments, that carry out superconductor R&D quasi-independently of major industrial concerns. These establishments sustain themselves through various forms of government-sponsored support, such as the SBIR and STTR programmes in the US, and step by step, without much fanfare, they are responsible for the improvement of current superconductors, be they low- or high-temperature. As long as such arrangements are maintained, healthy progress in the science is assured, and the results feed directly into industry. And as far as HEP is concerned, as long as there are beams to guide, bend and focus, we will continue to need manufacturers to make the wires and fabricate the superconducting magnet coils.

Snapshot: manufacturing the LHC magnets

The production of the niobium-titanium conductor for the LHC’s 1800 or so superconducting magnets was of the highest standard, involving hundreds of individual superconducting strands assembled into a cable that had to be shaped to accommodate the geometry of the magnet coil. Three firms manufactured the 1232 main dipole magnets (each 15 m long and weighing 35 tonnes): the French consortium Alstom MSA–Jeumont Industries; Ansaldo Superconduttori in Italy; and Babcock Noell Nuclear in Germany. For the 400 main quadrupoles, full-length prototyping was developed in the laboratory (CEA–CERN) and the tender assigned to Accel in Germany. Once LHC construction was completed, the superconductor market dropped back to meet the base demands of MRI. There has been a similar experience with the niobium-tin conductor used for the ITER fusion experiment under construction in France: more than six companies worldwide made the strands before the procurement was over, after which demand dropped back to pre-project levels.

Transforming brittle conductors into high-performance coils at CERN

The manufacture of superconductors for HEP applications is in many ways a standard industrial flow process with specialised steps. The superconductor in round rod form is inserted into copper tubes, which have a round inside and a hexagonal outside perimeter (the image inset shows such a “billet” for the former HERA electron–proton collider at DESY). A number of these units are then stacked into a copper can that is vacuum sealed and extruded in a hydraulic press, and this extrusion is processed on a draw bench where it is progressively reduced in diameter.

The greatly reduced product is then drawn through a series of dies until the desired wire diameter is reached, and a number of these wires are formed into cables ready for use. The overall process is highly complex and often involves several countries and dozens of specialised industries before the reel of wire or cable arrives at the magnet factory. Each step must ultimately be accounted for, and any sudden change to a customer’s source of funds can land the manufacturer with unsaleable stock. Superconductors are specified precisely for their intended end use, and only in rare instances can a stocked product be used for a different application.
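As a rough illustration of the scale of this diameter reduction, the minimal sketch below (in Python, using entirely assumed numbers for the rod size and the per-pass area reduction; real drawing schedules vary by manufacturer and include intermediate heat treatments) estimates how many die passes are needed to go from an extruded rod to final wire size.

import math

def draw_passes(start_diameter_mm, final_diameter_mm, area_reduction_per_pass=0.20):
    """Estimate the number of drawing passes, assuming each die removes a fixed
    fraction of the cross-sectional area (an assumption made purely for illustration)."""
    # Cross-sectional area scales with the square of the diameter.
    total_area_ratio = (start_diameter_mm / final_diameter_mm) ** 2
    # Each pass keeps (1 - r) of the area; find the smallest n with (1 - r)**(-n) >= ratio.
    return math.ceil(math.log(total_area_ratio) / -math.log(1.0 - area_reduction_per_pass))

# Example with assumed values: a 60 mm extruded rod drawn down to a 1 mm wire
# needs of the order of 37 passes at a 20% area reduction per pass.
print(draw_passes(60.0, 1.0))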


Taming high-temperature superconductivity

Superconductivity is perhaps the most remarkable manifestation of quantum physics on the macroscopic scale. Discovered in 1911 by Kamerlingh Onnes, it preoccupied the most prominent physicists of the 20th century and remains at the forefront of condensed-matter physics today. The interest is partly driven by potential applications – superconductivity at room temperature would surely revolutionise technology – but to a large extent it reflects an intellectual fascination. Many ideas that emerged from the study of superconductivity, such as the generation of a photon mass inside a superconductor, were later extended to other fields of physics, famously serving as the paradigm for the generation of the masses of the electroweak W and Z gauge bosons via the Higgs mechanism in particle physics.

Put simply, superconductivity is the ability of a system of fermions to carry electric current without dissipation. Normally, fermions such as electrons scatter off any obstacle, including each other. But if they find a way to form bound pairs, these pairs may condense into a macroscopic state with a non-dissipative current. Quantum mechanics is the only way to explain this phenomenon, but it took 46 years after the discovery of superconductivity for Bardeen, Cooper and Schrieffer (BCS) to develop a verifiable theory. Winning the 1972 Nobel Prize in Physics for their efforts, they figured out that the exchange of phonons leads to an effective attraction between pairs of electrons of opposite momentum if the electron energy is less than the characteristic phonon energy (figure 1). Although electrons still repel each other, the effective Coulomb interaction becomes smaller at such frequencies (in a manner opposite to asymptotic freedom in high-energy physics). If the reduction is strong enough, the phonon-induced electron–electron attraction wins over Coulomb repulsion and the total interaction becomes attractive. There is no threshold for the magnitude of the attraction because low-energy fermions live at the boundary of the Fermi sea, in which case an arbitrarily weak attraction is enough to create bound states of fermions at some critical temperature, Tc.
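To make the absence of a threshold concrete, one can quote the standard weak-coupling BCS estimate for the critical temperature (a textbook result given here purely for illustration; ω_D is the characteristic phonon frequency, N(0) the density of electronic states at the Fermi level and V the strength of the phonon-induced attraction, none of which are defined elsewhere in this article):

k_B T_c \simeq 1.13\,\hbar\omega_D\,\exp\!\left(-\frac{1}{N(0)V}\right)

The exponential is finite for any positive N(0)V, so an arbitrarily weak attraction still yields a non-zero, if exponentially small, Tc.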

The formation of bound states, called Cooper pairs, is one necessary ingredient for superconductivity. The other is for the pairs to condense, or more specifically to acquire a common phase corresponding to a single macroscopic wave function. Within BCS theory, pair formation and locking of the phases of the pairs occur simultaneously at the same Tc, while in more recent strong-coupling theories bound pairs exist above this temperature. The common phase of the pairs can have an arbitrary value, and the fact that the system chooses a particular one below Tc is a manifestation of spontaneous symmetry breaking. The phase coherence throughout the sample is the most important physical aspect of the superconducting state below Tc, as it can give rise to a “supercurrent” that flows without resistance. Superconductivity can also be viewed as an emergent phenomenon.
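How a common phase produces a dissipationless current can be illustrated with the standard Ginzburg–Landau/London expression for the supercurrent density (again a textbook formula rather than one taken from this article, with n_s the density of Cooper pairs, φ their common phase, A the vector potential, and e* = 2e and m* = 2m the charge and mass of a pair):

\mathbf{j}_s = \frac{e^{*} n_s}{m^{*}}\left(\hbar\nabla\varphi - e^{*}\mathbf{A}\right)

A spatially uniform phase carries no current, but a phase gradient maintained coherently across the whole sample drives a current that does not decay: the supercurrent.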


While BCS theory was a big success, it is a mean-field theory, which neglects fluctuations. To really trust that the electron–phonon mechanism was correct, it was necessary to develop theoretical tools based on Green functions and field-theory methods, and to move beyond weak coupling. The BCS electron–phonon mechanism of superconductivity has since been successfully applied to explain pairing in a large variety of materials (figure 2), from simple mercury and aluminium to the niobium-titanium and niobium-tin alloys used in the magnets for the Large Hadron Collider (LHC), in addition to the recently discovered sulphur hydrides, which become superconductors at a temperature of around 200 K under high pressure. But the discovery of high-temperature superconductors drove condensed-matter theorists to explore new explanations for the superconducting state.

Unconventional superconductors

In the early 1980s, when the record critical temperature for superconductors was of the order 20 K, the dream of a superconductor that works at liquid-nitrogen temperatures (77 K) seemed far off. In 1986, however, Bednorz and Müller made the breakthrough discovery of superconductivity in La1−xBaxCuO4 with Tc of around 40 K. Shortly after, a material with a similar copper-oxide-based structure with Tc of 92 K was discovered. These copper-based superconductors, known as cuprates, have a distinctive structure comprising weakly coupled layers made of copper and oxygen. In all the cuprates, the building blocks for superconductivity are the CuO2 planes, with the other atoms providing a charge reservoir that either supplies additional electrons to the layers or takes electrons out to leave additional hole states (figure 3).

From a theoretical perspective, the high Tc of the cuprates is only one important aspect of their behaviour. More intriguing is what mechanism binds the fermions into pairs. The vast majority of researchers working in this area think that, unlike low-temperature superconductors, phonons are not responsible. The most compelling reason is that the cuprates possess “unconventional” symmetry of the pair wave function. Namely, in all known phonon-mediated superconductors, the pair wave function has s-wave symmetry, or in other words, its angular dependence is isotropic. For the cuprates, it was proven in the early 1990s that the pair wave function changes sign under rotation by 90°, leading to an excitation spectrum that has zeros at particular points on the Fermi surface. Such symmetry is often called “d-wave”. This is the first symmetry beyond s-wave that is allowed by the antisymmetric nature of the electron wave functions when the total spin of the pair is zero. The observation of a d-wave symmetry in the cuprates was extremely surprising because, unlike s-wave pairs, d-wave Cooper pairs can potentially be broken by impurities.
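For orientation, the form most often used to model this d-wave gap on the square CuO2 lattice can be quoted (a standard parametrisation rather than a result from this article, with a the in-plane lattice constant and Δ0 the gap maximum):

\Delta(\mathbf{k}) = \frac{\Delta_0}{2}\left[\cos(k_x a) - \cos(k_y a)\right]

A rotation by 90° exchanges k_x and k_y and flips the sign of Δ(k), and the gap vanishes along the diagonals k_x = ±k_y, which are the points on the Fermi surface where the excitation spectrum has its zeros.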

The cuprates hold the record for the highest Tc for materials with an unconventional pair wave-function symmetry: 133 K in mercury-based HgBa2Ca2Cu3O8 at ambient pressure. They were not, however, the first materials of this kind: a “heavy fermion” superconductor CeCu2Si2 discovered in 1979 by Steglich, and an organic superconductor discovered by Jerome the following year, also had an unconventional pair symmetry. After the discovery of cuprates, a set of unconventional iron-based superconductors was discovered with Tc up to 60 K in bulk systems, followed by the discovery of superconductivity with an even higher Tc in a monolayer of FeSe. But even low-Tc, unconventional materials can be interesting. For example, some experiments suggest that Cooper pairs in Sr2RuO4 have total spin-one and p-wave symmetry, leading to the intriguing possibility that they can support edge modes that are Majorana particles, which have potential applications in quantum computing.

If phonon-mediated electron–electron interactions are ineffective for the pairing in unconventional superconductors, then what binds fermions together? The only other possibility is a nominally repulsive electron–electron interaction, but for this to allow pairing, the electrons must screen their own Coulomb repulsion to make it effectively attractive in at least one pairing channel (e.g. d-wave). Interestingly, quantum mechanics actually allows such schizophrenic behaviour of electrons: a d-wave component of a screened Coulomb interaction becomes attractive in certain cases.

Cuprate conundrums

There are several families of high-temperature cuprate superconductors. Some, like LaSrCuO, YBaCuO and BSCCO, show superconductivity upon hole doping; others, like NdCeCuO, show superconductivity upon electron doping. The phase diagram of a representative cuprate contains regions of superconductivity, regions of magnetic order, and a region (called the pseudogap) where Tc decreases but the system’s behaviour above Tc is qualitatively different from that in an ordinary metal (figure 4). At zero doping, standard solid-state physics says that the system should be a metal, but experiments show that it is an insulator. This is taken as an indication that the effective interaction between electrons is large, and such an interaction-driven insulator is called a Mott insulator. Upon doping, some states become empty and the system eventually recovers metallic behaviour. A Mott insulator at zero doping has another interesting property: spins of localised electrons order antiferromagnetically. Upon doping, the long-range antiferromagnetic order quickly disappears, while short-range magnetic correlations survive.

Since the superconducting region of the phase diagram is sandwiched between the Mott and metallic regimes, there are two ways to think about HTS: either it emerges upon doping a Mott insulator (if one departs from zero doping), or it emerges from a metal with enhanced antiferromagnetic correlations (if one departs from larger dopings). Even though it was known before the discovery of high-temperature superconductors that an antiferromagnetically mediated interaction is attractive in the d-wave channel, it took time to develop the various computational approaches, and today the computed value of Tc is in a range consistent with experiments. At smaller dopings, a more reliable approach is to start from a Mott insulator. This approach also gives d-wave superconductivity, with the value of Tc most likely determined by phase fluctuations and decreasing with decreasing doping. Because both approaches give d-wave superconductivity with comparable values of Tc, the majority of researchers believe that the mechanism of superconductivity in the cuprates is understood, at least qualitatively.

A more subtle issue is how to explain the so-called pseudogap phase in hole-doped cuprates (figure 4). Here, the system is neither magnetic nor superconducting, yet it displays properties that clearly distinguish it from a normal, even strongly correlated metal. One natural idea, pioneered by Philip Anderson, is that the pseudogap phase is a precursor to a Mott insulator that contains a soup of local singlet pairs of fermions: superconductivity arises if the phases of all singlet pairs are ordered, whereas antiferromagnetism arises if the system develops a mixture of spin singlets and spin triplets. Several theoretical approaches, most notably dynamical mean-field theory, have been developed to quantitatively describe the precursors to a Mott insulator.

The understanding of the pseudogap as the phase where electron states progressively get localised, leading to a reduction of Tc, is accepted by many in the HTS community. Yet, new experimental results show that the pseudogap phase in hole-doped cuprates may actually be a state with a broken symmetry, or at least becomes unstable to such a state at a lower temperature. Evidence has been reported for the breaking of time-reversal, inversion and lattice rotational symmetry. Improved instrumentation in recent years has also led to the discovery of charge-density-wave and pair-density-wave order in the phase diagram, and perhaps even loop-current order. Many of us believe that the additional orders observed in the pseudogap phase are relevant to the understanding of the full phase diagram, but that these do not change the two key pillars of our understanding: superconductivity is mediated by short-range magnetic excitations, and the reduction of Tc at smaller dopings is due to the existence of a Mott insulator near zero doping.

Woodstock physics

A special session of the 1987 March meeting of the American Physical Society in New York was devoted to the newly discovered high-temperature superconductors. The hastily organised session, which later became known as the “Woodstock of Physics”, lasted from the early evening until 3.30 a.m. the following morning, with 51 presenters and more than 1800 physicists in attendance. Bednorz and Müller received the Nobel prize in December 1987, only one year after their discovery – one of the fastest awards in the prize’s history.

Why cuprates still matter

The cuprates have motivated incredible advances in instrumentation and experimental techniques, with 1000-fold increases in accuracy in many cases. On the theoretical side, they have also led to the development of new methods to deal with strong interactions – dynamical mean-field theory and various metallic quantum-critical theories are examples. These experimental and theoretical methods have found their way into the study of other materials and are adding new chapters to standard solid-state physics books. Some of them may even one day find their way into other fields, such as strongly interacting quark–gluon matter. We can now theoretically understand a host of the phenomena in high-temperature superconductors, but there are still some important points to clarify, such as the mysterious linear temperature dependence of the resistivity.

The community is coming together to solve these remaining issues. Yet, the cynical view of the cuprate problem is that it lacks an obvious small parameter, and hence a universally accepted theory – the analogue of BCS – will never be developed. While it is true that serendipity will always have its place in science, we believe that the key criterion for “the theory” of the cuprates should not be a perfect quantitative agreement with experiments (even though this is still a desirable objective). Rather, a theory of cuprates should be judged by its ability to explain both superconductivity and a host of concomitant phenomena, such as the pseudogap, and its ability to provide design principles for new superconductors. Indeed, this is precisely the approach that allowed the recent discovery of the highest-Tc superconductor to date: hydrogen sulphide. At present, powerful algorithms and supercomputers allow us to predict quite accurately the properties of materials before they are synthesised. For strongly correlated materials such as the cuprates, these calculations profit from physical insight and vice versa.

From a broader perspective, studies of HTS have led to renewed thinking about perturbative and non-perturbative approaches to physics. Physicists like to understand particles or waves and how they interact with each other, like we do in classical mechanics, and perturbation theory is the tool that takes us there – QED is a great example that works because the fine-structure constant is small. In a single-band solid where interactions are not too strong, it is natural to think of superconductivity as being mediated by, for example, the exchange of antiferromagnetic spin fluctuations. When interactions are so strong that the wave functions become extremely entangled, it still makes sense to look at the internal dynamics of a Cooper pair to check whether one can detect traces of spin, charge or even orbital fluctuations. At the same time, perturbation theory in the usual sense does not work. Instead, we have to rely more heavily on large-scale computer calculations, variational approaches and effective theories. The question of what “binds” fermions into a Cooper pair still makes sense in this new paradigm, but the answer is often more nuanced than in a weak coupling limit.

Many challenges are left in the HTS field, but progress is rapid and there is much more consensus now than there was even a few years ago. Finally, after 30 years, it seems we are closing in on a theoretical understanding of this both useful and fascinating macroscopic quantum state.

CERN puts high-temperature superconductors to use

A few years ago, triggered by conceptual studies for a post-LHC collider, CERN launched a collaboration to explore the use of high-temperature superconductors (HTS) for accelerator magnets. In 2013 CERN partnered with a European particle accelerator R&D project called EuCARD-2 to develop a HTS insert for a 20 T magnet. The project came to an end in April this year, with CERN having built an HTS demonstration magnet based on an “aligned-block” concept for which coil-winding and quench-detection technology had to be developed. Called Feather2, the magnet has a field of 3 T based on low-performance REBCO (rare-earth barium-copper-oxide) tape. The next magnet, based on high-performance REBCO tape, will approach a stand-alone field of 8 T. Then, once it is placed inside the aperture of the 13 T “Fresca2” magnet, the field should go beyond 20 T.

Now the collaborative European spirit of EuCARD-2 lives on in the ARIES project (Accelerator Research and Innovation for European Science and Society), which kicked off at CERN in May. ARIES brings together 41 participants from 18 European countries, including seven industrial partners, to help bring down the cost of the conductor, and is co-funded via a contribution of €10 million from the European Commission. 

In addition, CERN is developing HTS-based transfer lines to feed the new superconducting magnets of the High Luminosity LHC based on magnesium diboride (MgB2), which can be operated in helium gas at temperatures of up to around 30 K and must be flexible enough to allow the power converters to be installed hundreds of metres away from the accelerator. The relatively low cost of MgB2 led CERN’s Amalia Ballarino to enter a collaboration with industry, which resulted in a method to produce MgB2 in wire form for the first time. The team has since achieved record currents that reached 20 kA at a temperature above 20 K, thereby proving that MgB2 technology is a viable solution for long-distance power transmission. The new superconducting lines could also find applications in the Future Circular Collider initiative.

• Matthew Chalmers, CERN

Celebrating a super partnership

This month more than 1000 scientists and engineers are gathering in Geneva to attend the biennial European Conference on Applied Superconductivity (EUCAS 2017). This international event covers all aspects of the field, from electronics and large-scale devices to basic superconducting materials and cables. The organisation has been assigned to CERN, home to the largest superconducting system in operation (the Large Hadron Collider, LHC) and where next-generation superconductors are being developed for the high-luminosity LHC upgrade (HL-LHC) and Future Circular Collider (FCC) projects.

When Heike Kamerlingh Onnes discovered superconductivity in 1911, Ernest Rutherford was just publishing his famous paper unveiling the structure of the atom. But superconductivity and nuclear physics, each with its own harvest of Nobel prizes, were unconnected for many years. Accelerators have brought the fields together, as this issue of CERN Courier demonstrates.

The constant evolution of high-voltage radio-frequency (RF) cavities and powerful magnets to accelerate and guide particles around accelerators drove a transformation of our understanding of fundamental physics. But by the 1970s, the limits of RF power and magnetic-field strength had nearly been reached, and gigantism seemed the only option for reaching higher energies. In the meantime, a few practical superconductors had become available: niobium-zirconium alloy, niobium-tin compound (Nb3Sn) and niobium-titanium alloy (Nb-Ti). Reliable processing and uniform production made Nb-Ti the superconductor of choice for all projects.

The first large application of Nb-Ti was for high-energy physics, driving the bubble-chamber solenoids for Argonne National Laboratory in the US (see “Unique magnets”). But it was accelerators, even more than detectors or fusion applications, that drove the development of technical superconductors. Following the birth of the modern Nb-Ti superconductor in 1968, rapid R&D took place for large high-energy physics projects such as the proposed but never born Superconducting SPS at CERN, the ill-fated Isabelle/CBA collider at BNL and the Tevatron at Fermilab (see “Powering the field forwards”). By the end of the 1980s, superconductors had to be produced on industrial scales, as did the niobium RF accelerating cavities (see “Souped up RF”) for LEPII and other projects. MRI, based on 0.5–3 T superconducting magnets, also took off at that time, today dominating the market with around 3000 items built per year.

The LHC is the summit of 30 years of improvement in Nb-Ti-based conductors. Its 8.3 T dipole fields are generated by 10 km-long, 1 mm-diameter wires containing 6000 well-separated Nb-Ti filaments, each 6 μm thick and protected by a thin Nb barrier, all embedded in pure copper and then coated with a film of oxidised tin-silver alloy. The LHC contains 1200 tonnes of this material, made by six companies worldwide, and five years ago it powered the LHC to produce the Higgs boson.

But the story is not finished. The increased collision rate of the HL-LHC requires us to go beyond the 10 T wall and, despite its brittleness, we are now able to exploit the superior intrinsic properties of Nb3Sn to reach 11 T in a dipole and almost 12 T peak field in a quadrupole. Wire developed for the LHC upgrade is also being used for high-resolution NMR spectroscopy and advanced proton therapy, and Nb3Sn is being used in vast quantities for the ITER fusion project (see “ITER’s massive magnets enter production”). Testing the Nb3Sn technology for the HL-LHC is also critical for the next jump in energy: 100 TeV, as envisaged by the CERN-coordinated FCC study. This requires a dipole field of 16 T, pushing Nb3Sn beyond its present limits, but the superconducting industry has taken up the challenge. Training young researchers will further boost this technology – for example, via the CERN-coordinated EASITrain network on advanced superconductivity for PhD students, due to begin in October this year (see “Get on board with EASITrain”).

The virtuous spiral between high-energy physics and superconductivity is never ending (see “Superconductors and particle physics entwined”), with pioneering research also taking place at CERN to test the practicalities of high-temperature superconductors (see “Taming high-temperature superconductivity”) based on yttrium or iron. This may lead us to dream about a 20–25 T dipole magnet – an immense challenge that will not only give us access to unconquered lands of particle physics but expand the use of superconductors in medicine, energy and other areas of our daily lives.

Principles of Magnetostatics

By Richard C Fernow
Cambridge University Press


This book aims to provide a self-contained and concise treatment of the main subjects in magnetostatics, which describes the forces and fields resulting from the steady flow of electrical currents.

The first three chapters briefly present the basics, including the theory of magnetic fields from conductors in free space and from magnetic materials, as well as the general solutions to the Laplace equation and boundary value problems. Then the author moves on to discuss transverse fields in two dimensions. In particular, he covers fields produced by line currents, current sheets and current blocks, and the application of complex variable methods. He also treats transverse field magnets where the shape of the field is determined by the shape of the iron surface and the conductors are used to excite the field in the iron.
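To give a flavour of the complex-variable methods mentioned above (a standard result of two-dimensional magnetostatics quoted here for orientation, not an excerpt from the book), the field of an infinitely long line current I located at z_0 = x_0 + iy_0 can be written compactly as

B_y + iB_x = \frac{\mu_0 I}{2\pi\,(z - z_0)}, \qquad z = x + iy

The fields of current sheets, current blocks and complete coil cross-sections then follow by superposing or integrating this elementary solution, which is what makes the complex formalism so convenient for transverse-field magnets.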

The following chapters are dedicated to other field configurations, such as axial field arrangements and periodic magnetic arrangements. The properties of permanent magnets and multiple fields produced by assemblies of them are also discussed.

Finally, the author deals with phenomena where there are slow variations in current or magnetic flux. Since only a restricted class of magnetostatic problems has analytic solutions, the last chapter presents numerical techniques for calculating magnetic fields, accompanied by many examples taken from accelerator and beam physics.

Aimed at undergraduates in physics and electrical engineering, the book includes not only basic explanations but also many references for further study.

Gravity: Where Do We Stand?

By R Peron, M Colpi, V Gorini and U Moschella (eds)
Springer


This book, a collection of expert contributions, provides an overview of the current knowledge in gravitational physics, including theoretical and experimental aspects.

After a pedagogical introduction to gravitational theories, several chapters explore gravitational phenomena in the realm of so-called weak-field conditions: the Earth (specifically, the laboratory environment) and the solar system.

The second part of the book is devoted to gravity in an astrophysical context, which is an important test-bed for general relativity. A chapter is dedicated to gravitational waves, the recent discovery of which is an impressive experimental result in this field. The importance of studying radio pulsars is also highlighted.

A section on research frontiers in gravitational physics follows. This explores the many open issues, especially related to astrophysical and cosmological problems, and the way that possible solutions impact the quest for a quantum theory of gravity and a unified theory of the forces.

The book’s origins lie in the 2009 edition of a school organised by the Italian Society of Relativity and Gravitation. As such, it is aimed at graduate students, but could also appeal to researchers working in the field.

Physics Matters

By Vasant Natarajan
World Scientific


This book is a collection of essays on various physics topics, which the author aims to present in a manner accessible to non-experts and, specifically, to non-physics science and arts students at the undergraduate level. The author is motivated by the conviction that understanding fundamental concepts from other subjects facilitates out-of-the-box thinking, which can result in original contributions to one’s chosen field.

The selection of topics is very personal: some basic-physics concepts, such as standards for units and oscillation theory, are placed next to discussions about general relativity and the famous twin paradox. The author uses an informal style and has particular interest in dispelling some myths about science.

The final chapters cover topics from his own area of research, atomic and optical physics, focusing on the Nobel prizes awarded in the last two decades to scientists in these fields.

Even though the use of equations is kept to a minimum, some mathematics and physics background is required of the reader.

Raw Data: A Novel on Life in Science

By Pernille Rørth
Springer


Raw Data is a scientific novel that explores the moral dilemmas surrounding the accidental discovery of a case of scientific misconduct within a top US biomedical institute.

The choice of subject is interesting and unusual. Scientific misconduct is not an unprecedented topic for scientific novels, but the focus is usually on spectacular frauds that clearly violate the ethos of the scientific community. This story depicts a more nuanced situation. Readers may even find themselves understanding, if not condoning, the conscious decision of one of the co-protagonists to cheat.

This character chooses to “cut a corner” out of fear of being scooped, to satisfy an unreasonably picky reviewer who had requested an additional control experiment that she deems irrelevant. The stakes for her career are huge because she is competing with other groups on the same research line, and publishing second would cost her a great deal academically. When a co-worker accidentally finds hints of her fabrication and immediately alerts the laboratory’s principal investigator, both find themselves in a bitter no-win situation. “Doing the right thing” has a significant cost, but any other option potentially entails far worse consequences for their careers and their reputations.

Along the way, the author illustrates vividly how people in research think, feel, work and live. Work–life balance in science, especially for young female researchers, is a secondary theme of the book. Overall, the portrait of academia is not a flattering one, but it is definitely faithful. As someone who works in high-energy physics, I learnt about the day-to-day practices of the biomedical sector and how they differ from those in my own field. Although the author focuses on her own corner of the scientific environment, some descriptions of “postdoc life” are quite general.

This relatively short novel is followed by a long Q&A section with the author, a former biomedical researcher who left the field after some considerable career achievements. There she makes her opinions explicit about several of the topics, including the “publish or perish” attitude, work–life balance, scientific integrity, and what she perceives as systemic dangers for the academic research world.

Although the author clearly made an effort to simplify the science to the minimum needed to understand the plot (and as a reader with no understanding of microbiology I found her effort successful), I am not sure that a reader with no previous interest in science would be hooked by the story. The book is well written, but the plot has a slow pace and, while Springer deserves credit for publishing it, the text contains many typographical errors.

Overall, I recommend the book to other scientists, regardless of their specialisation, and to the scientifically educated public who may appreciate this insider view of contemporary research life.

Trick or Truth? The Mysterious Connection Between Physics and Mathematics

By Anthony Aguirre, Brendan Foster, Zeeya Merali (eds)
Springer


One of the most intriguing works in the philosophy of science is Wigner’s 1960 paper titled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”. Indeed the fact that so many natural laws can be formulated in this language, not to mention that some of these represent the most precise knowledge we have about our world, is a stunning mystery.

A related question is whether mathematics, which has largely developed overlapping or in parallel with physics, is constructed by the human mind or “discovered”. This question is worth asking again today, when modern theories of fundamental physics and contemporary mathematics have reached levels of abstraction that are unimaginable from the perspective of just 100 years ago.

This book is a collection of essays discussing the connection between physics and mathematics. They are written by the winners of the 2015 Foundational Questions Institute contest, which invited contributors – from professional researchers to members of the public – to propose an essay on the topic.

Since the question belongs primarily to the philosophy of science rather than to science itself, it is not a surprise that there are conflicting viewpoints that sometimes reach opposite conclusions.

A significant point of view is that the claimed effectiveness of mathematics is actually not that surprising. This is because we process information and generate knowledge about our world in an inadvertently biased way, namely as a result of the evolution of our mind in a specific physical world. For example, concepts of elementary geometry (such as straight lines, parabolas, etc) and the mechanics of classical physics are deeply imprinted in the human brain as evolutionary bias. In a fuzzy, chaotic world, such naive mathematical notions might not have developed, as they wouldn’t represent a good approximation to that world. In fact, in a drastically unstructured world it would have been less likely that life had evolved in the first place, so it may not seem such a surprise that we find ourselves in a world largely governed by relatively simple geometrical structures.

What remains miraculous, on the other hand, is the effectiveness of mathematics in the microscopic realm of quantum mechanics: it is not obvious how the mathematical notions on which it is based could be explained in terms of evolutionary bias. Actually, much of the progress of fundamental physics during the last 100 years or so crucially depended on abandoning the intuition of everyday common sense, in favour of abstract mathematical principles.

Another aspect is selection bias, in that failures of the mathematical description of certain phenomena tend simply to be ignored. A prime example is human consciousness – undoubtedly a real-world phenomenon – for which it is not at all clear whether its structure can ever be mapped to mathematical concepts in a meaningful way. A quite common reductionist point of view typical of particle physicists is that, since the brain is essentially chemistry (thus physics), a mathematical underpinning is automatic. But it may be that the way such complex phenomena emerge completely obfuscates the connection to the underlying, mathematically clean microscopic physics, rendering the latter useless for any practical purpose in this regard.

This raises the issue of the structure of knowledge per se, and some essays in this book argue that it may not necessarily be hierarchical but rather scale invariant with some, or many, distinguished nodes. One may think of these as local attractors to which “arrows of deeper explanation” point. It may be that only locally near such attractors does knowledge appear hierarchical, so that, for example, our mathematical description of fundamental physics is meaningful only near one particular such node. There might be other local attractors that are decoupled from our mathematical modelling, with no obvious chains of explanation linking them.

On a different tack, a radically different and extreme point of view is taken by adherents of Tegmark’s mathematical universe hypothesis, which is directly addressed by several authors. This posits that there is actually no difference between mathematics and the physical world, so the role of mathematics in our physical world appears as a tautology.

Surveying all the thoughts in this collection of essays would be beyond the scope of this review. Suffice it to say that the book should be of great interest to anybody pondering the meaning of physical theories, although it appears more useful for scientists rather than for the general public. It is not an easy read, but the reader is rewarded with a great deal of food for thought.

LHC physics shines in Shanghai

The Large Hadron Collider Physics (LHCP) conference took place at Shanghai Jiao Tong University (SJTU) in China, on 15–20 May. One of the largest annual conferences in particle physics, the timing of LHCP2017 chimed with fresh experimental results from the ALICE, ATLAS, CMS and LHCb experiments based on 13 TeV LHC data recorded during 2015–2016. The conference saw many new results presented and also offered a broad overview of the scientific findings from Run 1, based on lower-energy data.

One of the main themes of the conference was the interplay between different results from various experiments, in particular those at the LHC, and the need to continue to work closely with the theory community. One such example concerns measurements of rare B-meson decays, in particular the decay B0 → K*ℓ+ℓ−, which is sensitive to new physics and could probe the presence of new particles through the study of its helicity structure. The LHCb collaboration has found several discrepancies with Standard Model (SM) expectations, including a more than three standard-deviation discrepancy in the angular distributions of this B0 decay. New results presented by ATLAS and CMS have created further tension in the situation (see diagram), and more data from LHC Run 2 and continued theoretical developments will be critical in understanding these decays.

An exciting result from the ALICE experiment showed a surprising enhancement of strange-baryon production in proton–proton collisions (CERN Courier June 2017 p10). In nucleus–nucleus collisions, this enhancement is interpreted as a signature of the formation of a quark–gluon plasma (QGP) – the extreme state that characterised the early universe before the appearance of hadrons. The first observation of strangeness enhancement in high-multiplicity proton–proton collisions hints that the QGP is also formed in collisions of smaller systems and opens new directions for the study of this primordial state of matter.

From the Higgs sector, CMS reported an observation of Higgs-boson decays to two τ leptons with a significance of 4.9 standard deviations above the SM background expectation. Differential cross-sections for Higgs decays to two Z bosons, which test properties of the Higgs such as its spin and parity and also act as a probe of perturbative QCD, were shown by ATLAS. Throughout the conference, it was clear that precision studies of the Higgs sector are a critical element in elucidating the nature of the Higgs boson itself, as well as understanding electroweak symmetry breaking and searching for physics beyond the SM.

In addition to these highlights, a broad spectrum of results was presented, ranging from precision studies of the SM, such as new theoretical developments in electroweak production, to numerous new searches, such as searches for low-mass dark-sector mediators from the CMS experiment and for supersymmetry in very high-multiplicity jet events from ATLAS. The conclusion from the conference was clear: we have learnt a tremendous amount from the Run 2 LHC data but are left with many open questions. We therefore eagerly await the newest data from the LHC to help further dissect the SM, cast light on the nature of the Higgs boson, or reveal an entirely new particle.

Accelerator experts meet in Copenhagen

The 8th International Particle Accelerator Conference (IPAC) took place in Copenhagen, Denmark, on 14–19 May and was attended by more than 1550 participants from 34 countries. Hosted by the European Spallation Source (ESS) and organised under the auspices of the European Physical Society (EPS) accelerator group and the International Union of Pure and Applied Physics, the event was also supported by the MAX-IV facility and Aarhus University.

Although accelerators were initially developed to understand the infinitesimal constituents of matter, they have evolved into sophisticated instruments for a wide range of fundamental and applied research. Today, particle accelerators serve society in numerous ways, ranging from medicine and energy to the arts and security. Advanced light sources are a case in point, following the steady improvement in their performance in terms of brilliance and temporal characteristics. MAX-IV and the ESS, which lie just across the Oresund bridge in Sweden, are two of the most powerful instruments available to life and material scientists, and are operating and under construction, respectively. Meanwhile, the most brilliant source of ultra-short flashes of X-rays – the European X-ray Free Electron Laser at DESY in Hamburg – has recently achieved first lasing and will soon be open to users (“Europe enters the extreme X-ray era”). Another X-ray free-electron laser, the SwissFEL at PSI, has just produced laser radiation for the first time in the soft X-ray regime and aims to achieve smaller wavelengths by the end of the year. New synchrotron light sources have also come into operation, such as the SOLARIS synchrotron in Poland, and major upgrades to the European Synchrotron Radiation Facility in France based on a new lattice concept are planned.

Particle physics remains one of the main drivers for new accelerator projects and for R&D in IPAC’s many fields. The big brother of all accelerators, CERN’s LHC, performed outstandingly well during 2016, exceeding nominal luminosity by almost 50% thanks to operation with more tightly spaced bunches and to the higher brightness of the beams delivered by the LHC injectors. Mastering the effects of electron clouds and progressively “scrubbing” the surfaces of the LHC beam screens have been key to this performance. Achieving nominal luminosity marks the completion of one of the most ambitious projects in science and bodes well for the High Luminosity LHC upgrade programme now under way. IPAC17 also heard the latest from experiments at CERN’s Antiproton Decelerator facility, including the trapping and subsequent spectroscopic measurement of antihydrogen atoms, and the exciting studies still to be carried out using the new ELENA facility there.
