
NuPECC sets out long-range plan

On 19 June, the Nuclear Physics European Collaboration Committee (NuPECC) released its long-range plan for nuclear research in Europe, following 20 months of work involving extensive discussions with the scientific community. The previous long-range plan was issued in 2010.

Today, nuclear physics is a broad field covering nuclear matter in all its forms and exploring their possible applications. It encompasses questions about the origin and evolution of the universe – such as quark deconfinement after the Big Bang, the physics of neutron stars and nucleosynthesis – and addresses open questions in nuclear structure, among other topics.

A number of research programmes are at the interface between nuclear and particle physics, where CERN plays an important role. This can be seen clearly in the six chapters of the NuPECC report devoted to: hadron physics; properties of strongly interacting matter; nuclear structure and dynamics; nuclear astrophysics; symmetries and fundamental interactions; and applications. For symmetries and fundamental interactions, particular emphasis is given to experiments (such as those with antihydrogen) where nuclei are sensitive to physics beyond the Standard Model. Concerning broader societal benefits, CERN’s MEDICIS facility, based on expertise from ISOLDE, is of particular interest (CERN Courier October 2016 p28).

The Facility for Antiproton and Ion Research (FAIR), a major investment that just entered construction in Germany (CERN Courier July/August 2017 p41), also has high prominence. Three other prominent recommendations have particular relevance to CERN: support for world-leading isotope-separation facilities (ISOLDE at CERN together with SPIRAL2 in France and SPES in Italy); support for existing and emerging facilities (including the new ELENA synchrotron at CERN’s Antiproton Decelerator); and support for the LHC’s heavy-ion programme, in particular ALICE. Emerging facilities – the extreme-light source ELI-NP in Bucharest, and NICA and a superheavy element factory in Dubna – are also highlighted.

Research in nuclear physics involves several facilities of different sizes that produce complementary scientific results, and in Europe they are well co-ordinated. International collaborations beyond Europe, mainly in the US (JLAB in particular) and in Asia (Japan in particular), add much value to this field.

NuPECC’s latest report is expected to help co-ordinate and guide this rich field of physics for the next 6–7 years. Its recommendations were extensively discussed and can be read in full at nupecc.org/pub/lrp2017.pdf.

ATLAS finds evidence for Higgs to bb

Five years ago, the ATLAS and CMS collaborations at the LHC announced the discovery of a new particle with properties consistent with those of a Standard Model Higgs boson. Since then, based on proton–proton collision data collected at energies of 7 and 8 TeV during LHC Run 1 and at 13 TeV during Run 2, many measurements have confirmed this hypothesis. Several decay modes of the Higgs boson have been observed, but the dominant decay into pairs of b quarks, which is expected to contribute at a level of 58%, had up to now escaped detection – largely due to the difficulty in observing this decay mode at a hadron collider.

On 6 July, at the European Physical Society conference in Venice, the ATLAS collaboration announced that they had found evidence for H → bb, representing an immense analysis achievement. By far the largest source of Higgs bosons is their production via gluon fusion, gg → H → bb, but this is overwhelmed by the huge background of bb events, which are produced at a rate 10 million times higher. The associated production of a Higgs boson with a W or Z vector boson (jointly denoted V) offers the most sensitive alternative, despite having a production rate roughly 20 times lower than that of gluon fusion, because the vector bosons are detected via their decay to leptons and therefore allow efficient triggering and background rejection. Nevertheless, the signal remains orders of magnitude smaller than the backgrounds, which arise from the associated production of vector bosons with jets and from top-quark production.

To find evidence for the H → bb decay in the VH production channel, it is necessary to use detailed information on the properties of the decay products. The jets arising from b quarks contain b hadrons, whose long lifetime can be used in sophisticated b-tagging algorithms to discriminate them from jets originating from the fragmentation of gluons or other quark species. These algorithms have benefitted significantly from the new innermost pixel layer installed in ATLAS before Run 2. The kinematic properties of the decay products can also be used to enhance the signal-to-background ratio. The property with the most discriminatory power is the invariant mass of the two-b-jet system, which for the signal accumulates at the mass of the Higgs boson (see figure). To increase the sensitivity of the analysis, this mass is used together with several other kinematic variables as input to a multivariate analysis.
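
For reference, the discriminating variable is simply the usual invariant mass built from the measured four-momenta of the two b-tagged jets (standard kinematics, not a detail specific to the ATLAS analysis):

m_{bb} = \sqrt{(E_1 + E_2)^2 - |\vec{p}_1 + \vec{p}_2|^2}

For genuine H → bb decays this quantity peaks near the Higgs-boson mass of about 125 GeV, smeared by the jet energy resolution, whereas the dominant backgrounds show no comparable peak (apart from the VZ cross-check peak at the Z mass discussed below).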

Based on data collected during the first two years of LHC Run 2 in 2015 and 2016, evidence for the H → bb decay is obtained at the level of 3.5σ, slightly increased to 3.6σ after combination with the Run 1 results (compared to an expected significance of 4σ). The measured signal yield is in agreement with the Standard Model expectation, within an uncertainty of 30%. The associated VZ production, with Z → bb, allows for a powerful cross-check of the analysis, as the final states are very similar except for the location of the two-b-jet mass peak (see figure); VZ production is observed with a significance of 5.8σ in the Run 2 data, in agreement with the Standard Model prediction.

This analysis opens a way to study about 90% of the Higgs boson decays expected in the Standard Model, which is a sharp increase from the approximately 30% observed previously. With much more data expected by the end of Run 2 in 2018, a definitive 5σ observation of the H → bb decay may be in sight, with the increased precision providing new opportunities to challenge the Standard Model.

CMS expands scope of dark-matter search in dijet channel

A report from the CMS experiment

The quest to find dark matter (DM) has inspired new searches at CMS, specifically looking for interactions between DM and quarks mediated by particles of previously unexplored mass and width. If the DM mediator is a leptophobic vector resonance coupling only to quarks with a universal coupling gq, for instance, its decay also produces a dijet resonance (see bottom figure, left) and the value of gq determines the width of the mediator.

CMS has traditionally searched for peaks from narrow resonances on the steeply falling dijet invariant mass spectrum predicted by QCD. This search has been updated with the full 2016 data set and limits set on a DM mediator, constraining gq for resonances with a mass between 0.6 and 3.7 TeV and width less than 10% of the resonance mass. Two additional dijet searches have now been released: a boosted-dijet search sensitive to lower mediator masses, and an angular-distribution search sensitive to larger couplings and widths.

The first search gets round the limitations of the narrow-resonance search, which only applies above a minimum mass that satisfies the dijet trigger requirements, by requiring resonance production in association with a jet (bottom figure, middle). In such events the resonance is highly boosted and, by analysing the jet substructure, the QCD background can be strongly suppressed, making the search sensitive in a lower mass range. The mass spectrum of the single jet was used to search for resonances over a mass range of 50–300 GeV, and the corresponding constraints on gq and the mediator width from boosted dijets explore the lowest mediator masses.

For large couplings and widths, the sensitivity of searches for dijet resonance peaks is strongly reduced. However, a search for a very wide resonance can be performed by studying dijet angular distributions, such as the distribution of the scattering angle between the incoming and outgoing partons. These distributions differ significantly depending on whether a new particle is produced in the s-channel or whether the events come from the QCD dijet background, which is dominated by t-channel production (bottom figure, right). Being sensitive to both large-width resonances and non-resonant signatures, this search also sets lower limits, in the range 6–22 TeV, on the scale of contact interactions that may arise from quark compositeness, and constrains signatures of large extra dimensions and quantum black holes. The same search, when interpreted in the context of a vector mediator coupling to DM, excludes values of gq greater than 0.6, corresponding to widths above 20% of the resonance mass, for mediator masses as high as 5 TeV.
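
A variable commonly used to characterise such angular distributions (quoted here as an illustration of the technique, not as a detail taken from the CMS paper) is

\chi_{\mathrm{dijet}} = e^{|y_1 - y_2|}

where y_1 and y_2 are the rapidities of the two jets. Rutherford-like t-channel QCD scattering gives an approximately flat distribution in \chi_{\mathrm{dijet}}, whereas s-channel resonances and contact interactions pile up at low values, corresponding to central, large-angle scattering.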

Using these three complementary techniques, CMS has now explored a large range in mass, coupling and width, extending the scope of searches for DM mediators. The expected volume of data from the LHC in upcoming years will allow CMS to extend this reach even further, with the study of three-jet topologies allowing the uncovered mass range of 300–600 GeV to be explored.

Lead nuclei under scrutiny at LHCb

In 2016 the LHC collided protons and lead nuclei for the first time at a centre-of-mass energy of 8.16 TeV per nucleon–nucleon pair. In lead–lead collisions, the formation of the quark–gluon plasma (QGP), a deconfined system where quarks and gluons can move freely, is a subject of intense studies at the LHC. By contrast, proton–lead collisions represent the best available environment to quantify nuclear effects that are not related to the QGP.

Our knowledge of the partonic content of nuclei suffers from large uncertainties, particularly at low momentum where large modifications of the partonic flux with respect to the free nucleon are expected. The particular design of the LHCb experiment, with its fully instrumented forward acceptance, offers a unique opportunity to access production processes in which one parton carries a momentum fraction of the incoming nucleon inside the lead nucleus of approximately 10⁻⁵–10⁻⁴ (covering the proton fragmentation region) and 10⁻³–10⁻¹ for the lead fragmentation region.
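
As a rough guide (a simplified 2 → 1 kinematic picture, not a formula from the LHCb analysis), a particle of mass M and transverse momentum p_T produced at rapidity y, counted positive along the proton beam, probes a momentum fraction in the lead nucleus of approximately

x_{\mathrm{Pb}} \approx \frac{\sqrt{M^2 + p_T^2}}{\sqrt{s_{NN}}}\, e^{-y}

so a J/ψ produced in the forward LHCb acceptance at \sqrt{s_{NN}} = 8.16 TeV corresponds to x values of order 10⁻⁵–10⁻⁴.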

The LHCb collaboration recently submitted the first LHC paper based on results obtained with the 2016 proton–lead data sample. This measurement of J/ψ production profits from an integrated luminosity about 20 times larger than that of the proton–lead sample collected by LHCb during the 2013 run. The nuclear modification factor RpPb as a function of transverse momentum is shown in the figure: J/ψ mesons produced at the interaction point (prompt) are found to be suppressed by about a factor of two at low transverse momentum, while RpPb approaches unity at higher transverse momenta. Those arising from the decays of long-lived beauty hadrons (non-prompt) follow a similar pattern. This is the most precise measurement to date of inclusive beauty production in nuclear collisions.
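
For reference, the nuclear modification factor is defined in the usual way as the production cross-section in proton–lead collisions divided by the proton–proton cross-section at the same energy, scaled by the number of nucleons in the lead nucleus:

R_{pPb}(p_T, y) = \frac{1}{A} \, \frac{\mathrm{d}^2\sigma_{pPb}/\mathrm{d}p_T\,\mathrm{d}y}{\mathrm{d}^2\sigma_{pp}/\mathrm{d}p_T\,\mathrm{d}y}, \qquad A = 208

so that R_{pPb} = 1 in the absence of nuclear effects.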

The results can be compared with perturbative QCD calculations based on collinear nuclear parton distribution functions (nPDFs) or with calculations within the colour-glass condensate (CGC) framework, which takes into account gluon saturation. The large uncertainties on the nPDFs compared to the data show the importance of new experimental data to better constrain them, while the CGC-based calculation reproduces the observed dependence accurately.

The large 2016 data set will allow for a precise study of heavy-flavour production with different hadron species, and also of cleaner electromagnetic/electroweak probes. These measurements will test which frameworks adequately describe the modification of the partonic flux in nuclear collisions. They will also probe other mechanisms, such as partonic energy loss due to gluon radiation, which is very relevant for nuclear modifications.

J/ψ mesons reveal stronger nuclear effects in pPb collisions

Quarkonium states, such as the J/ψ meson, are prominent probes of the quark–gluon plasma (QGP) formed in high-energy nucleus–nucleus (AA) collisions. ALICE reported five years ago that bulk J/ψ production is suppressed in AA collisions with respect to proton–proton collisions. However, measurements of J/ψ production in proton–lead collisions, where the formation of the QGP is not expected, are essential to quantify effects that are present in AA collisions but not associated with the QGP. In a recent study, ALICE has shown that the production of J/ψ mesons in proton–lead collisions is strongly correlated with the total number of particles produced in the event (the event multiplicity), and that this correlation varies as a function of rapidity.

In ALICE, the J/ψ measurements are performed at forward (proton-going direction), mid- and backward (lead-going direction) rapidities. An increase of the J/ψ yield relative to the event-averaged value with the relative charged-particle multiplicity is observed in all rapidity domains, with a similar slope at low multiplicities (see figure). At multiplicities a factor of two above the event average, the trend at forward rapidity differs markedly from those at mid- and backward rapidity. In the forward-rapidity window, a saturation of the relative yield sets in at high multiplicities, which is interesting because the forward region, with its low parton fractional momentum, lies in the domain of gluon shadowing/saturation.

Models incorporating nuclear parton distribution functions with significant shadowing have previously been shown to describe J/ψ measurements performed in event classes selected according to the centrality of the collision. The present measurement, exploring significantly more “violent” events (below 1% of the total hadronic interaction cross-section), suggests that effective gluon depletion in the colliding lead nucleus is larger in high-multiplicity events. However, there are additional concepts to describe this regime of QCD, and it remains to be seen whether such models can also describe the saturation of the yields at forward rapidities.

Evidence suggests all stars born in pairs

The reason why some stars are born in pairs while others are born singly has long puzzled astronomers. But a new study suggests that no special conditions are required: all stars start their lives as part of a binary pair. The result has implications not only in the field of star evolution but also for studies of binary neutron-star and binary black-hole formation. It also suggests that our own Sun was born together with a companion that has since disappeared.

Stars are born in dense molecular clouds measuring light-years across, within which denser regions can collapse under their own gravity to form high-density cores opaque to optical radiation, which appear as dark patches. When the densities reach the level where hydrogen fusion begins, the cores can form stars. Although young stars already emit radiation before the onset of the hydrogen-burning phase, it is absorbed in the dense clouds that surround them, making star-forming regions difficult to study. Yet, since clouds that absorb optical and infrared radiation re-emit it at much longer wavelengths, it is possible to probe them using radio telescopes.

Sarah Sadavoy of the Max Planck Institute for Astronomy in Heidelberg and Steven Stahler of the University of California at Berkeley used data from the Very Large Array (VLA) radio telescopes in New Mexico, together with submillimetre-wavelength data from the James Clerk Maxwell Telescope (JCMT) in Hawaii, to study the dense gas clumps and the young stars forming in them in the Perseus molecular cloud – a star-forming region about 600 light-years away. Data from the JCMT show the location of dense cores in the gas, while the VLA provides the location of the young stars within them.

Studying the multiplicity as well as the location of the young stars inside the dense regions, the researchers found a total of 19 binary systems, 45 single-star systems and five systems with a higher multiplicity. Focusing on the binary pairs, they observed that the youngest binaries typically have a large separation of 500 astronomical units (500 times the Sun–Earth distance). Furthermore, the young stars were aligned along the long axis of the elongated cloud. Older binary systems, with ages between 500,000 and one million years, were typically found to be closer together, with their separation axes oriented randomly with respect to the cloud.

After cataloguing all the young stars, the team compared the observed stellar multiplicity and the features seen in the binary pairs to simulations of stars forming either as single stars or as binary systems. The only way the model could reproduce the data was if its starting conditions contained no single stars but only stars that started out as part of wide binaries, implying that all stars are formed as part of a binary system. After formation, the stars either move closer to one another into a tight binary system or drift apart. The latter is likely to be what happened in the case of the Sun, its companion having drifted away long ago.

If indeed all stars are formed in pairs, it would have big implications for models of stellar birth rates in molecular clouds as well as for the formation of binary systems of compact objects. The nearby Perseus molecular cloud could, however, just be a special case, and further studies of other star-forming regions are therefore required to establish whether the same conditions exist elsewhere in the universe.

Powering the field forward

Particle physicists try to understand the environment that existed fractions of a second after the Big Bang by studying the behaviour of particles at high energies. Early studies relied on cosmic rays emanating from extraterrestrial sources, but the invention of the circular accelerator by Ernest Lawrence in 1931 revolutionised the field. Further advances in accelerator technology gave physicists more control over their experiments, in particular thanks to the invention of the synchrotron and the development of storage rings. By capturing particles via a ring of magnets and accelerating them with radio-frequency cavities, these facilities finally reached energies of a few hundred GeV. But storage rings are limited by the maximum magnetic field achievable with resistive magnets, which is around 2 T. To go further into the heart of matter, particle physicists required higher energies and a new technology to get them there.

The maximum field of an electromagnet is roughly determined by the amount of current in a conductor multiplied by the number of turns the conductor makes around its support structure. Over the years, the growing scale of accelerators and the large number of magnets needed to reach the highest energies demanded compact and affordable magnets. Conventional electromagnets, which are usually based on a copper conductor, are limited by two main factors: the amount of power required to operate them due to resistive losses and the size of the conductor. Typical conventional-magnet windings therefore tended to use conductors with a cross-sectional area of the order of a few square centimetres, which is not optimal for generating high magnetic fields.
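
As a rough illustration of the ampere-turns argument (an idealised long solenoid rather than an accelerator dipole), the field inside a solenoid is

B \approx \mu_0 \, n \, I

where n is the number of turns per unit length and I the current. Even 2 T requires of order 10⁶ ampere-turns per metre (for example n = 1000 turns per metre at I = 1600 A), which with resistive copper conductors translates into a dissipated power at the megawatt level for a large magnet.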

Superconductivity, which allows certain materials at low temperatures to carry very high currents without any resistive loss, was just the transformational technology needed. It powered the Tevatron collider at Fermilab in the US to produce the top quark, and CERN’s Large Hadron Collider (LHC) to unearth the Higgs boson. Advanced superconducting magnets are already being developed for future collider projects that will take physicists into a new phase of subatomic exploration beyond the LHC (figure 1).

Maintaining the state

Discovered in 1911, superconductivity didn’t immediately lead to broad applications, particularly not high-field accelerator magnets. As far as accelerators were concerned, the possibility of using superconducting magnets to produce higher fields started to take root in the mid-1960s. The big challenge was to maintain the superconducting state in a bulk object in which tremendous forces are at work: the slightest microscopic movement of the conductor would cause it to transition to the normal state (a “quench”) and result in burn-up, unless the fault was detected quickly and the current turned off.

Early superconductors were mostly formed into high-aspect-ratio tapes measuring a few tenths of a millimetre thick and around 10 mm wide. These are not particularly useful for making magnets because precise geometry and current distribution are necessary to achieve a good field quality. Intense studies led to the development of multi-filamentary niobium-zirconium (NbZr), niobium-titanium (Nb-Ti) and niobium-tin (Nb3Sn) wires, propelling interest in superconducting technology. In 1961, Kunzler and colleagues at Bell Labs produced a 7 T field in a solenoid, a relatively simple coil geometry compared with the dipoles or quadrupoles needed for accelerators. This swiftly led to higher-field solenoids, and a number of efforts to utilise the benefits of superconductivity for magnets began. But it was only in the early 1970s that the first prototypes of superconducting dipoles and quadrupoles demonstrated the potential of superconducting magnet technology for accelerators.

A turning point came during a six-week-long study group at Brookhaven National Laboratory (BNL) in the US in the summer of 1968, during which 200 physicists and engineers from around the world discussed the application of superconductivity to accelerators (figure 2). Considerable focus was directed towards the possibility of using superconducting beam-handling magnets (such as dipoles and quadrupoles for transporting beams from accelerators to experimental areas) for the new 200–400 GeV accelerator being constructed at Fermilab. By that time, several high-field superconducting alloys and compounds had been produced.

Hitting the mainstream

It could be argued that the unofficial kick-off for superconducting magnets in accelerators was a panel discussion at the 1971 Particle Accelerator Conference held in Chicago, although there was a clear geographical divide on key issues. The European contingent was reluctant to delve into higher-risk technology when it was clear that conventional technology could meet their needs, while the Americans argued for the substantial cost savings promised by superconducting machines: they claimed that a 100 GeV superconducting synchrotron could be built in five or six years, while the Europeans estimated a more conservative seven to 10 years.

In the US, work on furthering the development of superconducting magnets for accelerators was concentrated in a few main laboratories: Fermilab, the Lawrence Radiation Laboratory, Brookhaven National Laboratory (BNL) and Argonne National Laboratory. In Europe, a consortium of three laboratories – CEA Saclay in France, Rutherford Appleton Laboratory in the UK and the Nuclear Research Center at Karlsruhe – was formed to enable a future conversion of the recently approved 300 GeV accelerator – which became CERN’s Super Proton Synchrotron (SPS) – to higher energies using superconducting magnets. Of particular historical note, a short paper written at this time referred to a “compacted fully transposed cable” produced at the Rutherford Lab, and the “Rutherford cable” has since become the standard conductor configuration for all accelerator magnets (figure 3).

Rapid progress followed, reaching a tipping point in the 1970s with the launch of several accelerator projects based on superconducting magnets and a rapidly growing R&D community worldwide. These included: the Fermilab Energy Doubler; Interaction Region (IR) quadrupoles (used to bring particles into collision for the experiments) for the Intersecting Storage Rings at CERN; and IR quadrupoles for TRISTAN at KEK in Japan and UNK in the former USSR. The UNK magnets were ambitious for their time, with a desired operating field of 5 T, but the project was cancelled in the years following the breakup of the USSR.

Although superconducting magnet technology was one of the initial options for the SPS, it was rapidly discarded in favour of resistive magnets. This was not the case at Fermilab, which at that time was pursuing a project to upgrade its Main Ring beyond 500 GeV. The project was initially presented as an Energy Doubler, but rapidly became known by the very modern name of Energy Saver, and is now known as the Tevatron collider for protons and antiprotons, which shut down in 2011. The Tevatron arc magnets were the result of years of intense and extremely effective R&D, and it was their success that triggered the application of superconductivity for accelerators.

As superconducting technology matured during the 1980s, its applications expanded. The electron–proton collider HERA was getting under way at DESY in Germany, while ISABELLE was reborn as the Relativistic Heavy Ion Collider (RHIC) at BNL. Thanks to intensive development by high-energy physics, Nb-Ti was readily available from industry. This allowed the construction of magnets with fields in the 5 T range, while multi-filamentary conductors made from niobium-titanium-tantalum (Nb-Ti-Ta) and Nb3Sn were being pursued for fields up to 10 T. The first papers on the proposed Superconducting Super Collider (SSC) in the US were published in the mid-1980s, with R&D for the SSC ramping up substantially by the start of the 1990s. Then, in 1991, the first papers on R&D for the LHC were presented. The LHC’s 8 T Nb-Ti dipole magnets operate close to the practical limit of the conductor, and the collider now represents the largest and most sophisticated use of superconducting magnets in an accelerator.

The niobium-tin challenge

With the success of the LHC, the international high-energy physics community has again turned its attention to further exploration of the energy frontier. CERN has launched a Future Circular Collider (FCC) study that envisages a 100 TeV proton–proton collider as the next step for particle physics, which would require a 100 km-circumference ring of superconducting magnets with operating fields of 16 T. This will be an unprecedented challenge for the magnet community, but one that they are eager to take on. Other future machines are based on linear accelerators that do not require magnets to keep the beams on track, but demand advanced superconducting radio-frequency structures to accelerate them over short distances.

Thanks to superconducting accelerator magnets wound with strands and cables made of Cu/Nb-Ti composites, the energy reach of particle colliders has steadily increased. After nearly half a century of dominance by Nb-Ti, however, other superconducting materials are finally making their way into accelerator magnets. Quadrupoles and dipoles using Nb3Sn will be installed as part of the high-luminosity upgrade for the LHC (the HL-LHC) in the next few years, for example, and the high-temperature superconductor Bi2Sr2CaCu2O8 (BSCCO), iron-based superconductors and rare-earth barium copper oxide (REBCO) have recently been added to the list of candidate materials. Proposals for new large circular colliders have boosted interest in high-field dipole magnets but, despite the tantalising potential for achieving dipole fields more than twice those possible with Nb-Ti, there are many problems that still need to be overcome.

Although Nb3Sn was one of the early candidates for high-field magnets, and has much better performance at high fields than Nb-Ti, its processing requirements, mechanical properties and costs present difficulties when building practical magnets. Nb3Sn comes as a round wire from industry vendors, which is excellent for making multi-wire cables but requires the reaction of a copper, niobium and tin composite at 650 °C to develop the superconducting Nb3Sn cable. Unfortunately, Nb3Sn is a brittle ceramic, unlike Nb-Ti, which requires only modest heat treatment and drawing steps and is mechanically very strong. Years of effort worldwide have overcome these limitations and fields in the range of 16 T have recently been achieved – first in 2004 by a US R&D programme and more recently at CERN – and this is close to the practical limit for this conductor. In addition to the near-term use in the HL-LHC, and despite currently costing 10 times more than Nb-Ti, it is the material of choice for a future high-energy hadron collider, and is also being used in enormous quantities for the toroidal-field magnets and central solenoid of the ITER fusion experiment (see “ITER’s massive magnets enter production”).

High-temperature superconductors represent a further leap in magnet performance, but they also raise major difficulties and could cost an additional factor of 10 more than Nb3Sn. For fields above 16 T there are currently only two choices for accelerator magnets: BSCCO and REBCO. Although these materials become superconductors at a higher temperature than niobium-based materials, their maximum current density is achieved at low temperatures (in the vicinity of 4.2 K). BSCCO has the advantage of being obtainable in round wire, which is perfect for making high-current cables but requires a fairly precise heat treatment at close to 900 °C in oxygen at high pressures. This is not a simple engineering task, especially when dealing with large coils. Much progress has been made recently, however, and there is a vibrant programme in industry and academia to tackle these challenges. REBCO has excellent high-field performance, high current density and requires no heat treatment, but it only comes in tape form, presenting difficulties in winding the required coil shapes and producing acceptable field quality. Nevertheless, the performance of this high-temperature superconductor is too tantalising to abandon it, and many people are working on it. Even after half a century, progress in the development of high-field accelerator magnet R&D continues, and indeed is critical for future discoveries in particle physics.

CERN breaks records with high-field magnets for High-Luminosity LHC

To keep the protons on a circular track at the record-breaking luminosities planned for the LHC upgrade (the HL-LHC) and achieve higher collision energies in future circular colliders, particle physicists need to design and demonstrate the most powerful accelerator magnets ever. The development of the niobium-titanium LHC magnets, currently the highest-field dipole magnets used in a particle accelerator, followed a long road that offered valuable lessons. The HL-LHC is about to change this landscape by relying on niobium-tin (Nb3Sn) to build new high-field magnets for the interaction regions of the ATLAS and CMS experiments. New quadrupoles (called MQXF) and two-in-one dipoles with fields of 11 T will replace the LHC’s existing 8 T dipoles in these regions. The main challenge that has prevented the use of Nb3Sn in accelerator magnets is its brittleness, which can cause permanent degradation under very low intrinsic strain. The tremendous progress of this technology in the past decade led to the successful tests of a full-length 4.5 m-long coil that reached a record nominal field value of 13.4 T at BNL. Meanwhile at CERN, the winding of 7.15 m-long coils has begun. Several challenges are still to be faced, however, and the next few years will be decisive for declaring production readiness of the MQXF and 11 T magnets. R&D is also ongoing for the development of a Nb3Sn wire with an improved performance that would allow fields beyond 11 T. It is foreseen that a 14–15 T magnet with a real physical aperture will be tested in the US, and this could drive technology for a 16 T magnet for a future circular collider. Based on current experience from the LHC and HL-LHC, we know that the performance requirements for Nb3Sn for a future circular collider demand a large industrial effort to make very large-scale production viable.
• Panagiotis Charitos, CERN.

Unique magnets

To identify particles emerging from high-energy interactions between a beam and a fixed target, or between two counter-rotating beams, experimental physicists need to measure the particle tracks with high precision. Since charged particles are deflected in a magnetic field, incorporating a magnet in the detector system serves to determine both the charge and momentum of a particle. The momentum resolution improves in proportion to the sagitta of the detected track, which is itself proportional to the magnetic field and to the square of the track length, so larger magnets and higher fields tend to deliver better performance. While being as large and as strong as possible, however, the magnet should not get in the way of the active detector materials.
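
To make the scaling explicit (a textbook relation rather than a property of any particular detector), a track of transverse momentum p_T measured over a length L in a field B has a sagitta of approximately

s \approx \frac{0.3\, B\, L^2}{8\, p_T}

with B in tesla, L in metres and p_T in GeV/c (giving s in metres). For B = 2 T, L = 1 m and p_T = 10 GeV/c the sagitta is only about 7.5 mm, so for a fixed position accuracy the relative momentum resolution degrades linearly with p_T and improves with B L².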

These general constraints in high-energy physics experiments point to a need for more compact superconducting devices. But additional constraints such as cost, complexity and experiment schedules can lead to the choice of a conventional “warm” magnet if sufficient field and volume can be provided for acceptable power consumption. A detector magnet is one of a kind, and a field accuracy of one part in 1000 is usually sufficient. In contrast, accelerator magnets are typically many of a kind, and are required to deliver the highest possible field with an accuracy of one part in 10,000 or better in a long and narrow aperture. This leads to substantially different technological choices.

Following the discovery of superconductivity, people immediately thought of using it to produce magnetic fields. But the pure materials concerned (later to be called type-I superconductors) only worked up to a critical field of about 0.1 T. The discovery in 1961 of more practical (type-II) superconductivity in certain alloys and compounds which, unlike type-I, allow penetration of magnetic flux but exhibit critical fields of 10–20 T, immediately led to renewed interest. Physics laboratories in Europe and the US started R&D programmes to understand how to make superconducting magnets and to explore possible applications.

The first four years were difficult: small magnets were built but it was not possible to get scaled-up versions to operate at currents anywhere close to the level obtained for short samples of the superconducting wire available at the time. A breakthrough was presented at the first Particle Accelerator Conference in 1965, in a seminal paper by Stekly and Zar on cryogenic stability. Cryogenic stability ensures that, if a superconductor becomes normal due to coil motion or a flux jump (when magnetic flux penetrates a thick type-II material, leading to instability, resistance and increased temperature), it will recover its superconductivity provided enough heat can be conducted away to the coolant for the material to drop back below its critical temperature in the region where superconductivity was lost. Several laboratories immediately started to build large helium-bath-cooled bubble-chamber magnets.

The bubble chamber, invented by Donald Glaser in 1952, consists of a tank of liquid hydrogen surrounded by a pair of Helmholtz coils: particles leave tracks in the superheated liquid and the curvature of each track reveals the particle’s momentum. The first large (72 inch) bubble-chamber magnet at the University of California Radiation Laboratory was equipped with a 1.8 T water-cooled copper coil weighing 20 tonnes and dissipating a power of 2.5 MW. Larger magnets were desirable for improved resolution, but were clearly unrealistic with room-temperature copper coils due to the costs involved. This was therefore an obvious application for superconductivity, and the concept of cryogenic stability allowed large magnets to be built using a superconductor that was otherwise inherently unstable.

Recall that this was before seminal work at the Rutherford Appleton Laboratory (RAL) had revealed the need for fine filaments and twisting to ensure stability, and before we knew that practical superconductors had to be made in that way. Indeed, it is striking to observe the audacity of high-energy physicists in the late 1960s and the early 1970s in embarking on the construction of such large and costly devices so rapidly, based on so little experience and knowledge.

Thick filaments of niobium-titanium in a copper matrix were the superconducting material of choice at the time, with coils being cooled in a bath of liquid helium. Achievements included: the 1.8 T magnet at Argonne National Laboratory for its bubble-chamber facility; a 3 T magnet for a facility at Fermilab; and the 3.5 T Big European Bubble Chamber (BEBC) magnet at CERN. The stored energy of the BEBC magnet was almost 800 MJ – a level not exceeded for a large magnet until the Large Helical Device came on stream in Japan (for fusion experiments) in the late 1990s. This use of superconducting magnets for experiments preceded by several years their practical application to accelerators.

Discoveries

Following early experiments at CERN’s Intersecting Storage Rings, which were not well equipped to observe particles having large transverse momentum, the importance of detecting all of the particles produced in beam collisions in colliders was recognised, and a need emerged for magnets covering close to a full 4π solid angle. To improve momentum resolution it was also desirable to extend the measurement of tracks beyond the magnet winding, calling for thin coils. The goal was less than one radiation length in thickness, for which a high-performance superconductor with intrinsic stability was needed. This pointed towards a design based on the type of superconducting wire that had been developed in the accelerator community and had by now become a commodity for making MRI magnets (an industry that now consumes more than 90% of the superconductors produced), with the attendant reduction in cost.

Therefore, by the early 1980s, the development of detector magnets had shifted to conductors made of by-then-standard superconducting wires consisting of twisted fine filaments in a copper matrix, single or cabled, co-extruded with ultra-pure aluminium to provide stabilisation, and wound into solenoidal coils inside a hard aluminium-alloy mandrel for support. Pure aluminium is an excellent conductor at low temperature, and far more transparent than the copper that had been used previously. Moreover, rather than being bath cooled, these constant-field magnets were indirectly cooled to about 5 K with helium flowing in pipes in good thermal contact with the mandrel. This allowed the 1–2 T detector solenoids to become larger, without power dissipation in the winding and with a low inventory of liquid helium. In this way the coils could be made thin and relatively transparent to certain classes of particles such as muons, so that detectors could be located both inside and outside. Examples of these magnets are those used for the ALEPH and DELPHI experiments at CERN’s Large Electron–Positron (LEP) collider, the D0 experiment at Fermilab and the Belle experiment at KEK. Other prominent experiments over the years based on superconducting magnets include VENUS at KEK, ZEUS at DESY and BaBar at SLAC.

To the Higgs boson and beyond

While this had become the standard approach to detector-magnet design, the magnets in the ATLAS and CMS experiments at the LHC occupy new territory. ATLAS uses a large toroidal coil structure surrounding a thin 2 T solenoid, and the solenoid for CMS delivers an unprecedented 3.8 T (but is not required to be very thin). While both the CMS and ATLAS solenoids use the now traditional technology based on niobium-titanium superconductor co-extruded in aluminium, the pure-aluminium stabiliser is reinforced to allow the structure to withstand the substantial forces. This is done either by welding aluminium-alloy flanges to the pure aluminium (CMS) or by strengthening it with a precipitate that improves its strength without inordinately increasing its resistivity (ATLAS solenoid).

The next generation of magnets planned for the Compact Linear Collider (CLIC), the International Linear Collider (ILC) and Future Circular Colliders (FCC) will be larger, and may require more technological development to reach the desired magnetic fields. Based on a single detector at the interaction point, a new unified detector model has been developed for CLIC, and the concepts explored for this detector are also of interest for the high-luminosity LHC as well as for a future circular electron–positron collider. Like the LHC with ATLAS and CMS, a future circular collider requires a “general-purpose” detector. Previous studies for a detector for a 100 TeV circular hadron collider were based on a twin solenoid paired with two forward dipoles, but these have now been dropped in favour of a simpler system comprising one main solenoid enclosed by an active shielding coil. This design achieves a similar performance while being much lighter and more compact, resulting in a significant scaling down in the stored energy of the magnet from 65 GJ to 11 GJ. The total diameter of the magnet is around 18 m, and the new design could benefit from the important lessons from the construction and installation of the LHC detectors.

Key to the choice of such magnets, in addition to their cost and complexity, is their ability to allow high-quality muon tracking. This is crucial for studying the properties of the Higgs boson, for example, and any additional new fundamental particles that await discovery. If the lengthy discussions surrounding the design of the ATLAS and CMS magnets many years ago are anything to go by, we can look forward to intense and interesting debates about how to push these one-off magnet designs to the next level.

Souped up RF

Behind the size, complexity and physics goals of particle accelerators such as the LHC lies a simple physics principle worked out by Maxwell more than 150 years ago: when a charged particle passes through an electric field it experiences an acceleration proportional to the electric-field strength divided by its mass. While the magnets of a circular accelerator keep the beams on track, it is this principle that shunts them to the high energies needed for particle-physics research. The first accelerators relied on electrostatic fields produced between high-voltage anodes and cathodes, but by the mid-1920s it was clear that radio technology was needed to reach the highest possible energies.

To transfer energy to a beam of charged particles, a space must be created where the beam can move along an electric field produced by high-power radio waves; the higher the field, the larger the energy gain per metre (the accelerating gradient). An accelerating space, usually called a radio-frequency (RF) cavity, is a container crossed by the beam in which an oscillating (rotational) electric field is stored, oriented in the desired direction at the moment the bunch of particles passes through. Whatever geometry the cavity has, the power dissipated by the Joule effect is proportional to its surface resistance and to the square of the field inside it.
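
Written out (a standard cavity relation, not specific to any particular design), the dissipated power is the surface resistance R_s times the square of the surface magnetic field, integrated over the cavity wall:

P_{\mathrm{diss}} = \tfrac{1}{2} R_s \oint |\vec{H}|^2 \, \mathrm{d}A \;\propto\; R_s \, E_{\mathrm{acc}}^2

since the surface fields scale with the accelerating gradient E_acc. Reducing R_s by orders of magnitude, as superconductivity does, is therefore the key to sustaining high gradients at an affordable power consumption.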

For the past 30 years, superconducting radio-frequency (SRF) cavities have been in routine operation in a variety of settings, from pushing frontier accelerators for particle physics to applications in nuclear physics and materials science. They were instrumental in pushing CERN’s LEP collider to new energy regimes and in driving the newly inaugurated European X-ray Free Electron Laser. Advanced SRF “crab cavities” are now under development for the high-luminosity upgrade of the LHC.

From Stanford to LEP

It was unclear at first whether superconductivity had much value for RF technology. When a superconductor is exposed to a time-varying electromagnetic field, the electrons that are not bound in Cooper pairs cause energy dissipation in the shallow surface layer of the superconductor, where the electric and magnetic fields dance together to sustain the rotational electric field that transfers energy to the beam. But it was soon realised that in the practical frequency range of RF accelerators, from a few hundred MHz to a few GHz, the use of SRF cavities would in any case produce a significant breakthrough thanks to the increase in the conversion efficiency from plug power to beam power, cryogenics included. It was simply a question of developing the technology, and this required investment and big projects.

The High-Energy Physics Lab at Stanford University in the US was a pioneer in applying SRF to accelerators, demonstrating the first acceleration of electrons with a lead-plated single-cell resonator in 1965. Also in Europe, in the late 1960s, SRF was considered for the design of proton and ion linacs at KFK in Karlsruhe. To be superior to the competing technology of normal-conducting RF, a moderate field of a few MV/m was necessary. By the early 1970s SRF had been introduced in the design of particle accelerators, but results were still modest and a number of limiting factors needed to be understood.

The first successful test of a complete SRF cavity at high gradient and with beam was performed at Cornell’s CESR facility at the end of 1984, involving a pair of 1.5 GHz, five-cell bulk niobium cavities with a gradient of 4.5 MV/m. This cavity design was then used as the basis for the CEBAF facility at Jefferson Lab. Cornell’s success also triggered activities at CERN, where some visionary people were already looking at SRF as a way to double the energy of the Large Electron–Positron (LEP) collider under construction in what is now the LHC tunnel. LEP’s nominal centre-of-mass energy of 90 GeV was the minimum required to produce the recently discovered Z boson, but almost double this energy was needed to test the Standard Model further: specifically, to produce pairs of W bosons.

The baseline accelerating system of LEP was already a jewel that had demanded all the knowledge and ingenuity available globally at that time. Furthermore, doubling its energy meant that researchers had to battle additional losses due to synchrotron radiation, which scales as the fourth power of the electron and positron energies. The dream was to develop an accelerating system that made use of the very low losses promised by superconductivity to deliver a factor 16 or so more energy per turn to the LEP beam, occupying more or less the same space as the machine’s original copper cavities.
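
The scaling behind this battle is simple (a standard storage-ring relation, with LEP numbers quoted only as rough orders of magnitude): the energy lost to synchrotron radiation per turn by an electron of energy E on a bending radius ρ is

U_0 \approx C_\gamma \, \frac{E^4}{\rho}, \qquad C_\gamma \approx 8.85 \times 10^{-5}~\mathrm{m\,GeV^{-3}}

so doubling the beam energy multiplies the loss per turn by 2⁴ = 16. For LEP’s bending radius of roughly 3.1 km, this meant going from losses of order 0.1 GeV per turn at the Z pole to around 3 GeV per turn near 100 GeV per beam.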

Developing the SRF system for “LEP II” was a great challenge and success for the accelerator community. Owing to the relatively low resonant frequency – 352 MHz – of LEP’s underlying design, the superconducting cavities developed ended up being more than four times bigger than the ones successfully tested at Cornell. Since 1979, a small group at CERN had been developing the SRF technology, including all the cavity’s ancillaries necessary for its eventual working, but the best niobium superconducting material produced at that time was not sufficiently performant at such scales, and the first tests cast doubt as to whether the LEP-II dream could be realised. In 1989, a pilot project of 20 niobium cavities started to evaluate the feasibility of LEP-II SC cavities. In the meantime, the niobium-copper (Nb/Cu) technology developed at CERN by Cris Benvenuti became mature enough to justify the decision to base the LEP-II programme on Nb/Cu SRF cavities. Their inherent advantages, including the possibility to replace the Nb coating by a better one in the future, were decisive in making this choice. Soon the CERN technology was transferred to industry and more than 300 2.4 m-long SRF cavities were successfully produced by three companies in France, Germany and Italy. By 2 November 2000, when LEP II was switched off to allow the LHC to be constructed, the collider’s energy had topped 209 GeV. More than 17 million Z bosons were produced, and a large number of pairs of W bosons, allowing extremely precise tests of the Standard Model of particle physics. LEP missed the Higgs discovery, and we now know that a centre-of-mass energy just a few tens of GeV higher would have been sufficient to produce the fundamental particle. But this was never a realistic prospect: the machine was already pushed to its limit and any further energy increase would have eventually produced an irreparable failure.

While CERN and LEP-II were creating a valuable technology for very large SRF cavities, Jefferson Lab in the US was set to build the CEBAF accelerator. This required installing a large and complete infrastructure to develop the SRF technology based on bulk niobium, going beyond the needs of CEBAF – possibly unavoidable for the success of such a challenging project that was the first of its kind. The decision resulted in the large-scale production of 300 small cavities based on Cornell’s design, but with a marginal contribution from industry mainly limited to the mechanical fabrication of the cavities with no surface treatment. The experience of CEBAF was nevertheless important for the evolution of SRF technology and some of the techniques applied became standard. Among them was the development of electron-beam welding parameters for niobium, the use of clean-room assembly and ultrapure-water rinsing, and some optimisation of the surface treatment of the active internal surface of the SRF cavities. CEBAF was also the first SRF accelerator to be cooled by superfluid helium, operating at a temperature of 2 K. The large cryogenic plant designed and built to cool CEBAF has itself been a crucial step in the development of superconducting technology, not just SRF but also for accelerator magnets such as those used in the LHC.

Concluding this important chapter in SRF development in the mid- to late-1980s are two other major high-energy-physics projects: TRISTAN at KEK in Japan and HERA at DESY in Germany. Each produced, through a big national company, state-of-the-art SRF technology involving moderate-frequency cavities, typically 500 MHz, of four to five cells, in bulk niobium. The new technologies reached an accelerating electric field of about 5 MV/m, while substantially improving the performance of HERA and TRISTAN.

Linear adventure

All of these large accelerators were still to be completed when, in July 1990, a meeting was held at Cornell, organised by Ugo Amaldi and Hasan Padamsee, to discuss the possibility of developing SRF technology for a future TeV-scale linear collider (thereby avoiding the synchrotron-radiation losses suffered by circular colliders). The proposed name of this object was TESLA and, after three days battling with various figures, we were convinced that such technology was possible. Amaldi returned to CERN to gather support and, one and a half years later, over a dinner in a restaurant in Blankenese in Hamburg hosted by Bjørn Wiik, a dozen or so colleagues, including Maury Tigner, Helen Edwards and Ernst Habel, proposed that DESY should host an international collaboration with the task of developing TESLA.

The great success of TESLA in opening a new era of SRF had a number of concomitant causes, in addition to the great enthusiasm, friendship and ingenuity of those involved. We had the recent experiences from LEP-II and CEBAF, for instance, plus cryogenic experience from DESY and Fermilab. The memorandum of understanding helped to inspire a pure scientific research style, with no secrets among the partner institutes and constructive competition to produce the best technology possible. Once the cavity frequency (1.3 GHz) and the number of cells per cavity (nine) had been agreed, we designed the TESLA Test Facility. This central infrastructure at DESY was to treat the active/internal surface of cavities, control and verify each step of the material and cavity production, and finally test the cavities and ancillaries in all conditions, naked and fully dressed, with and without beam. In contrast to the construction of LEP-II and CEBAF, the fabrication of the cavities themselves was handed over to industry. This turned out to be a crucial decision, leading researchers into collaboration with competing firms and taking advantage of their expertise and ingenuity. The test with beam brought about a prototype of the TESLA linac that, with the addition of some undulators, was renamed FLASH in 2003 – the harbinger of the European XFEL.

In 1996, we had the first eight-cavity cryomodule in operation with beam and a stable production of cavities performing a few times better than envisaged. The challenging objective of TESLA’s mission was now very close in terms of both accelerating gradient and cost. The factor 20 improvement required to compete for the linear-collider prize was almost there. By 2000, a novel chemical process called electropolishing, developed by Kenji Saito of KEK, and a final cavity baking at moderate temperature introduced at CEA-Saclay by Bernard Visentin, took TESLA over the finish line. The success of TESLA technology was not just due to the cavity performance, but also the parallel development of power couplers, frequency tuners and other ancillaries.

The ability to accelerate very-high-power beams of protons to produce a huge flux of neutrons had major implications for neutron spallation sources like SNS in the US and the ESS under construction in Sweden, for nuclear-waste transmutation, and for accelerators for heavy ions. It took some time, but most new accelerators are in some way based on TESLA technology.

Continuing application

In 2004 the International Technology Recommendation Panel gave momentum to the newly named International Linear Collider (ILC). But it was clear that the eventual construction would not begin for at least a decade, in any case after a better definition of the physics case expected from the LHC. In the meantime, the European X-ray Free Electron Laser (XFEL), which began construction in Hamburg in 2009 and was completed this year (CERN Courier July/August 2017 p25), was the best opportunity for the TESLA collaboration to continue with the development of SRF technology. The realisation with industry, on budget and on time, of the advanced-SRF European XFEL has possibly been the most important recent milestone toward new frontiers for high-energy physics. Nevertheless, its 800 nine-cell cavities represent only 5% of the total number required by the ILC.

While SRF has made fantastic progress towards a linear collider and achieved a degree of maturity with the European XFEL, future circular colliders present additional R&D challenges that are similar and complementary to the quest for very large accelerating gradients. The total power to be transferred to the beams in the case of a future electron–positron collider, for example, is 100 MW delivered continuously. This challenges present concepts for high-power couplers, requires new ideas to minimise dynamic cryogenic losses, and has triggered R&D on new materials and fabrication techniques.

Concluding this historical summary, SRF has now reached a high level of technological development, handled by advanced industry, similar to that reached for magnets 15 years ago. As in the case of superconducting magnets, physics – and high-energy physics in particular – has been the most significant driving force. As with accelerator magnets such as those in the Tevatron, HERA and the LHC, projects such as LEP-II, CEBAF and TESLA/ILC have played a crucial role to transform, through technology transfer and industrialisation, an exotic phenomenon into a promising and useful technology. So far, the existing technology is sufficient for today’s applications. But basic research always seeks the next paradigm shift, and R&D taking place in laboratories such as CERN will allow us to go beyond present limitations.

Three state-of-the-art SRF projects for the High-Luminosity LHC and beyond

• Exotic cavity geometries and ancillaries that perform specific gymnastics on the beam to significantly improve the collider luminosity. For example, “crab cavities” (pictured right) are under development at CERN for the high-luminosity LHC with the support of a highly expert collaboration. Starting from existing advanced SRF cavities, the group developed two complementary cavity packages that will tilt the two LHC beams just before they collide, to maximise their overlap and thereby substantially increase the collision rate. After the collision the beams are returned to their original orbits, and the challenge is to do all of this without perturbing the beam. So far, two (out of a total of 16) superconducting crab cavities have been manufactured at CERN and RF tests at 2 K have been performed in a superfluid helium bath. The first cavity tests earlier this year demonstrated a maximum transverse kick voltage exceeding 5 MV, corresponding to extremely high electric and magnetic fields on the cavity surfaces. By the end of 2017, the two crab cavities will have been inserted into a specially designed cryomodule that will be installed in the Super Proton Synchrotron to undergo validation tests with proton beams.

• Doping the very thin layer on the cavity inner surface that sustains the electromagnetic accelerating field, using a small quantity of a gas such as nitrogen, to reduce the power dissipated at cryogenic temperatures (see the quality-factor relation after this list). This R&D project, led by Anna Grassellino at Fermilab, is giving very promising results and is being applied to the LCLS-II X-ray free-electron laser under construction at SLAC. Once the technology is established, the savings in investment and operating costs should be significant for all large accelerators requiring a continuous beam, such as new circular colliders, continuous-wave XFELs approved or under construction, and accelerator-driven systems for new nuclear-power technology.

• Niobium–tin (Nb3Sn) coating of SRF cavities. This technology has been pursued in a few laboratories for some time, with moderate success, but recent results from Cornell and Fermilab on real single-cell elliptical cavities come close to those obtained with pure niobium, and this could be the starting point for applying Nb3Sn coatings in large accelerators. Once properly developed, the coating technique could bring significant advantages, mainly because the critical temperature and critical magnetic field of Nb3Sn are higher than those of pure niobium.
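To see why tilting the bunches pays off (a textbook estimate, not a figure from the HL-LHC design), the geometric luminosity reduction caused by a crossing angle $\theta_c$ for Gaussian bunches of length $\sigma_z$ and transverse size $\sigma_x$ is

\[ \frac{L}{L_{0}} \simeq \frac{1}{\sqrt{1+\varphi^{2}}}, \qquad \varphi = \frac{\theta_{c}\,\sigma_{z}}{2\,\sigma_{x}}, \]

where $\varphi$ is the Piwinski angle. A crab cavity rotates each bunch so that the collision is effectively head-on, recovering most of this loss; the strength of the rotation is set by the transverse kick voltage, the figure quoted above for the prototype tests.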
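The benefit of nitrogen doping can likewise be read from standard SRF relations, given here only as an illustration. The unloaded quality factor is $Q_{0} = G/R_{s}$, with $G$ a geometry factor of a few hundred ohms and $R_{s}$ the surface resistance, and the power dissipated in the cavity walls at a given accelerating voltage $V_{\mathrm{acc}}$ is

\[ P_{\mathrm{diss}} = \frac{V_{\mathrm{acc}}^{2}}{(R/Q)\,Q_{0}}, \]

so any increase in $Q_{0}$ reduces the cryogenic load in direct proportion – a saving that is then multiplied many times over by the low thermodynamic efficiency of refrigeration at 2 K.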

Get on board with EASITrain

Heike Kamerlingh Onnes won his Nobel prize back in 1913, two years after the discovery of superconductivity; Georg Bednorz and Alexander Müller won theirs in 1987, just a year after discovering high-temperature superconductors. Putting these major discoveries to use, however, has been a lengthy affair, and it is only in the past 30 years or so that demand has emerged. Today, superconductors represent an annual market of around $1.5 billion, with a high growth rate, yet a plethora of opportunities remains untapped.

Developing new superconducting materials is essential for a possible successor to the LHC currently being explored by the Future Circular Collider (FCC) study, which is driving a considerable effort to improve the performance and feasibility of large-scale magnet production. Beyond fundamental research, superconducting materials are the natural choice for any application where strong magnetic fields are needed. They are used in applications as diverse as magnetic resonance imaging (MRI), the magnetic separation of minerals in the mining industry and efficient power transmission across long distances (currently being explored by the LIPA project in the US and AmpaCity in Germany).

The promise for future technologies is even greater: deepening our limited understanding of the fundamental principles of superconductivity and enabling large-scale production of high-quality conductors at affordable prices would open up new business opportunities. To help bring this future closer, CERN has initiated the European Advanced Superconductivity Innovation and Training project (EASITrain) to prepare the next generation of researchers, develop innovative materials and improve large-scale cryogenics (easitrain.web.cern.ch). From January next year, 15 early-stage researchers will work on the project for three years, with the CERN-coordinated FCC study providing the necessary research infrastructure.

Global network

EASITrain establishes a global network of research institutes and industrial partners, transferring the latest knowledge while also equipping participants with business skills. The network will join forces with other EU projects such as ARIES, EUROTAPES (superconductors), INNWIND (a 10–20 MW wind turbine), EcoSWING (superconducting wind generator), S-PULSE (superconducting electronics) and FuSuMaTech (a working group approved in June devoted to the high-impact potential of R&D for the HL-LHC and FCC), and aims to profit from the well-established Test Infrastructure and Accelerator Research Area Preparatory Phase (TIARA) platform. EASITrain also links with the Marie Curie training networks STREAM and RADSAGA, both hosted by CERN.

EASITrain operates within the EU’s H2020 framework, and one of its targets is energy sustainability. Improvements in the performance and efficiency of superconductor production and operation could lead to 10–20 MW wind turbines, for example, while new, more efficient cryogenics could reduce the carbon footprint of industry, gas production and transport. EASITrain will also explore the use of novel superconductors, including high-temperature superconductors, in advanced materials for power-grid and medical applications, and will bring together technical experts, industrial representatives and specialists in business and marketing to identify new superconductor applications. Following an extensive study, three specific application areas have been identified: uninterruptible power supplies; sorting machines for the fruit industry; and large loudspeaker systems. These will be explored further during a three-day “superconductivity hackathon” satellite event at EUCAS17, organised jointly with CERN’s KT group, IdeaSquare, WU Vienna and the Fraunhofer Institute.

Together with the impact that superconductors have had on fundamental research, these examples show the unexpected transformative potential of these still mysterious materials and emphasise the importance of preparing the next generation for the challenges ahead.

Hackathon application destinations

Uninterruptible power supply (UPS). UPS systems are energy-storage technologies that can absorb and deliver power when necessary. Cloud-based applications are leading to soaring data volumes and an increasing need for secure storage, driving growth among large data centres and a shift towards more efficient UPS solutions, which are expected to carve out a slice of a growing market worth almost $1 billion. Current systems are based on batteries with a maximum efficiency of around 90%, whereas superconducting flywheel systems could provide a continuous and longer-lived power supply, minimising data loss and maximising server stability.
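The physics behind the flywheel option is simple (an illustrative relation rather than a product specification): the kinetic energy stored in a rotor of moment of inertia $I$ spinning at angular speed $\omega$ is

\[ E = \tfrac{1}{2}\, I\, \omega^{2}, \]

so superconducting magnetic bearings, which let the rotor spin faster with negligible friction and standby losses, translate directly into more stored energy and a longer ride-through time.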

Sorting machines for the fruit industry. Tonnes of fruit are disposed of worldwide because current spectroscopy-based technologies cannot determine the maturity of fruit accurately enough, and they offer only limited information about small fruit. Superconductors would enable NMR-based scanning systems that allow producers to determine valuable properties – such as ripeness, the absence of seeds and, crucially, maturity – accurately and non-destructively. In 2016, sorting-machine manufacturers made profits of $360 million selling products for analysing apples, pears and citrus fruit, and the market has been growing at about 20% per year.

Large loudspeaker systems. The sound quality of powerful loudspeakers, particularly PA systems for music festivals and stadiums, could reach a new level with superconductors. Higher electrical resistance leads to poorer sound quality, since speakers need to modify the strength of a magnetic field rapidly to cover different frequency ranges. Superconductivity would also allow smaller magnets to be used, making large speakers more compact and transportable. A major concern among European manufacturers has been finding the next big step in loudspeaker evolution to defend against competition from Asia, and the size and quality of large speakers is now a major driver of this $500 million industry.
