
Particle physics INSPIREs information retrieval


Particle physicists thrive on information. They first create information by performing experiments or elaborating theoretical conjectures. Then they convey it to their peers by writing papers that are disseminated in a preprint form long before publication. Keeping track of this information has long been the task of libraries at the larger laboratories, such as at CERN, DESY, Fermilab and SLAC, as well as being the focus of indispensable services including arXiv and those of the Particle Data Group.

It is common knowledge that the web was born at CERN, and every particle physicist knows about SPIRES, the place where they can find papers, citations and information about colleagues. However, not everyone knows that the first US web server and the first database on the web came about at SLAC with just one aim: to bring scientific information to the fingertips of particle physicists through the SPIRES platform. SPIRES was hailed as the first “killer” application of the then nascent web.

No matter how venerable, the information tools currently serving particle physicists no longer live up to expectations, and information-management tools used elsewhere in the world have been catching up with those of the high-energy physics community. The soon-to-be-released INSPIRE service will bring state-of-the-art information retrieval to the fingertips of researchers in high-energy physics once more, not only enabling more efficient searching but also paving the way for modern technologies and techniques to augment the tried-and-tested tools of the trade.

Meeting demand

The INSPIRE project involves information specialists from CERN, DESY, Fermilab and SLAC working in close collaboration with arXiv, the Particle Data Group and publishers within the field of particle physics. “We separate the work such that we don’t duplicate things. Having one common corpus that everyone is working on allows us to improve remarkably the quality of the end product,” explains Tim Smith, head of the User and Document Services Group in the IT Department at CERN, which is providing the Invenio technology that lies at the core of INSPIRE.

In 2007, many providers of information in the field came together for a summit at SLAC to see how physics-information resources could be enhanced. The INSPIRE project emerged from that meeting and the vision behind it was built from a survey launched by the four labs to evaluate the real needs of the community (Gentil-Beccot et al. 2008). A large number of physicists replied enthusiastically, many filling the free-text boxes with reams of detail. The bulk of the respondents noted that the SPIRES and arXiv services were together the dominant resources in the field. However, they pointed out that SPIRES in particular was “too slow” or “too arcane” to meet their current needs.

INSPIRE responds to this directive from the community by combining the most successful aspects of SPIRES (a joint project of DESY, Fermilab and SLAC) with the modern technology of Invenio (the CERN open-source digital-library software). “SPIRES’ underlying software was overdue for replacement, and adopting Invenio has given INSPIRE the opportunity to reproduce SPIRES’ functionality using current technology,” says Travis Brooks, manager of the SPIRES databases at SLAC. The name of the service, with the “IN” from Invenio augmenting SPIRES’ familiar name, underscores this beneficial partnership. “It reflects the fact that this is an evolution from SPIRES because the SPIRES service is very much appreciated by a large community of physicists. It is a sort of brand in the field,” says Jens Vigen, head of the Scientific Information Group at CERN.

However, INSPIRE takes its own inspiration from more than just SPIRES and Invenio. In searching for a paper, INSPIRE will not only fully understand the search syntax of SPIRES, but will also support free-text searches like those in Google. “From the replies we received to the survey, we could observe that young people prefer to just throw a text string in a field and push the search button, as happens in Google,” notes Brooks.

This service will facilitate the work of the large community of particle physicists. “Even more exciting is that after releasing the initial INSPIRE service, we will be releasing many new features built on top of the modern platform,” says Zaven Akopov of the DESY library. INSPIRE will enable authors and readers to help catalogue and sort material so that everyone will find the most relevant material quickly and easily. INSPIRE will also be able to store files associated with documents, including the full text of older or “orphaned” preprints. Stephen Parke, senior scientist at the Fermilab Theory Department, looks forward to these enhancements: “INSPIRE will be a fabulous service to the high-energy-physics community. Not only will you be able to do faster, more flexible searching but there is a real need to archive all conference slides and the full text of PhD theses; INSPIRE is just what the community needs at this time.”


Pilot users see INSPIRE already rising to meet these expectations, as remarked on by Tony Thomas, director of the Australian Research Council Special Research Centre for the Structure of Matter: “I tried the alpha version of INSPIRE and was amazed by how rapidly it responded to even quite long and complex requests.”

The Invenio software that underlies INSPIRE is a collaborative tool developed at CERN for managing large digital libraries. It is already inspiring many other institutes around the world. In particular, the Astrophysics Data System (ADS) – the digital library run by the Harvard-Smithsonian Center for Astrophysics for NASA – recently chose Invenio as the new technology to manage its collection. “We can imagine all sorts of possible synergies here,” Brooks anticipates. “ADS is a resource very much like SPIRES, but focusing on the astronomy/astrophysics and increasingly astroparticle community, and since our two fields have begun to do a lot of interdisciplinary work the tighter collaboration between these resources will benefit both user communities.”

Invenio is also being used by many other institutes around the world and many more are considering it. “In the true spirit of CERN, Invenio is an open-source product and thus it is made available under the GNU General Public Licence,” explains Smith. “At CERN, Invenio currently manages about a million records. There aren’t that many products that can actually handle so many records,” he adds.

Invenio has at the same time broadened its scope to include all sorts of digital records, including photos, videos and recordings of presentations. It makes use of a versatile interface that makes it possible, for example, to have the site available in 20 languages. Invenio’s expandability is being exploited to the full for the INSPIRE project where a rich set of back-office tools are being developed for cataloguers. “These tools will greatly ease the manual tasks, thereby allowing us to get papers faster and more accurately into INSPIRE,” explains Heath O’Connell from the Fermilab library. “This will increase the search accuracy for users. Furthermore, with the advanced Web 2.0 features of INSPIRE, users will have a simpler, more powerful way to submit additions, corrections and updates, which will be processed almost in real time”.

Researchers in high-energy physics were once the beneficiaries of world-leading information management. Now INSPIRE, anchored by the Invenio software, aims once again to give the community a world-class solution to its information needs. The future is rich with possibilities, from interactive PDF documents to exciting new opportunities for mining this wealth of bibliographic data, enabling sophisticated analyses of citations and other information. The conclusion is easy: if you are a physicist, just let yourself be INSPIREd!

• The INSPIRE service is available at http://inspirebeta.net/.

CAST’s first decade of solar-axion research

In 1983, when I was thinking about how axions may be produced and detected by their conversion to photons in a magnetic field, it struck me suddenly that there is no need to produce axions because the Sun does that for us. The solar axion flux is much larger than any that we could produce on Earth, and it is here free of charge. Our job is simply to detect these solar axions.

– Pierre Sikivie of the University of Florida.


Axions are one of the favoured candidates for the mysterious dark matter created in the early universe. A variety of observatories located on Earth and in outer space form a quasi-network that can target specific places in the search for these particles, such as the galactic centre, the inner Earth and the Sun’s hot core. The CERN Axion Solar Telescope (CAST) points at the Sun – its aim being the direct detection of axions or other exotic particles with similar properties.

While relic axions from the early universe should propagate with a velocity of about one thousandth of the speed of light, solar axions – with a broad spectrum at around 4–5 keV in kinetic energy – are relativistic. The open window for the axion rest mass is currently in the micro-electron-volt to electron-volt range. The several orders of magnitude difference in kinetic energy associated with the two origins make for different experimental search techniques: microwave cavities for relic axions versus X-ray detectors for solar axions. However, both techniques use a magnetic field as the catalyst that allows axions to become photons.
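That gap of several orders of magnitude can be illustrated with a back-of-the-envelope sketch (an illustration, not part of the original article; the 10⁻³ c velocity is the figure quoted above):

```python
def relic_kinetic_energy_eV(m_a_eV, beta=1e-3):
    """Non-relativistic kinetic energy E = m * beta^2 / 2 (natural units,
    c = 1) for a relic axion moving at ~1/1000 the speed of light."""
    return 0.5 * m_a_eV * beta**2

# Even a 1 eV relic axion carries only about 5e-7 eV of kinetic energy,
# compared with the ~4-5 keV of a relativistic solar axion: roughly ten
# orders of magnitude apart, hence the very different detector technologies.
print(relic_kinetic_energy_eV(1.0))
```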

Accelerator laboratories, with their powerful magnets, are natural locations for axion helioscopes – the instruments used to search for axions from the Sun. The first experiment to look at the Sun, which incorporated a 2.2-m iron-core magnet, was set up by a Rochester-Brookhaven-Fermilab (RBF) collaboration in the early 1990s. It was followed by the Sumico experiment, based on a 2.3-m long superconducting magnet, at the University of Tokyo, which is still in operation. The CAST helioscope at CERN uses a decommissioned LHC-dipole test magnet, with a field of 9 T and two tubes – originally designed to house the beam pipes – that are 9.2 m long and have an aperture of 43 mm. The dipole is one of four original prototypes and was rescued at the last minute, just before it was about to be scrapped along with the others. A comparison of CAST’s performance with that of its two predecessors at Brookhaven and Tokyo shows that the LHC magnet was a good choice.

The possibility that a bending magnet could be used to make visible the “dark” Sun was – and still is – inspiring and motivating. To transform the multi-tonne superconducting, superfluid-helium-cooled magnet from a static LHC prototype dipole into a helioscope that can track the Sun with millimetre precision involved delicate engineering work and cryo-expertise. Thankfully, Louis Walckiers in the Accelerator Technology Division supported the idea, even though we had both just failed to prove with the same magnet that the biomechanics of cell-structure formation becomes confused in a 9 T environment.

Recycling space technology


Position-sensitive X-ray detectors of the MicroMegas type, invented by Georges Charpak and Ioannis Giomataris at CERN, now cover three of the ends of the tubes through the magnet, making CAST the only axion helioscope to have implemented such technology. For the fourth exit, together with Dieter Hoffmann and Joachim Jacoby of TU Darmstadt we were able to recover an excellent X-ray imaging telescope from the German space programme, which was delivered by Heinrich Bräuninger from the Max Planck Institute for Extraterrestrial Physics in Garching. With state-of-the-art X-ray optics and low-noise X-ray pixel detectors at the focal plane, this not only improves the signal-to-noise ratio substantially but also allows for the unambiguous identification of the axion signal. Its CCD imaging camera simultaneously measures the expected solar-axion signal spot and the surrounding background. This is an important feature that makes CAST unique as an axion helioscope. With most of the components located, CAST received formal approval at CERN in April 2000.

In the same way that much of the CAST equipment was recycled from particle physics so, too, was its working principle: the Primakoff effect, known since 1951, which regards the production of neutral pions by the interaction of high-energy photons with the high electric field of the nucleus as the reverse of the decay into two photons. The expectation is that the quasi-stable axion should “decay” in the presence of a magnetic field into a photon emitted exactly along the axion’s trajectory. In principle this allows for a perfect axion telescope thanks to the spatial resolution of the X-ray telescope.

The Primakoff effect deserves to be a textbook example of macroscopic quantum-mechanical coherence, which, in astrophysical magnetic fields, can extend over kiloparsecs – although only for very small axion rest masses. For CAST, coherence holds over the whole length of the magnet, around 9 m, provided that the particle rest mass is below 0.02 eV/c2 when the two pipes are vacuum-pumped. To extend the sensitivity to higher masses, a certain amount of helium can be added as a refractive gas to the 1.8 K magnet pipes; each pressure setting restores coherence over the full 9 m, but only for a narrow range of solar-axion rest masses, reaching up to around 1 eV/c2. With this adaptation, suggested in 1988 by two collaboration members, Karl van Bibber and Georg Raffelt, and implemented during 2005 and 2006, CAST has become a scanning experiment. The rest-mass range for solar axions that will be scanned by the end of 2010 fits between the cosmologically derived upper limit of about 1 eV/c2, from the Wilkinson Microwave Anisotropy Probe (WMAP) data, and the lower limit of around 1 μeV/c2, which arises because axions with lower rest mass would be produced earlier in the early universe, with a total mass exceeding that of the critical density (“overclosure”).
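The coherence argument can be made concrete with a small numerical sketch (an illustration under stated assumptions, not CAST's analysis code). In vacuum the axion-photon momentum mismatch is q = m_a²/2E, and conversion stays coherent over a length L as long as qL < π. Taking the 9.2 m magnet length and a typical solar-axion energy of ~4.2 keV reproduces the ~0.02 eV/c² vacuum limit quoted above:

```python
import math

M_TO_INV_EV = 5.068e6  # one metre expressed in natural units (eV^-1)

def momentum_transfer_eV(m_a_eV, E_eV):
    """Axion-photon momentum mismatch in vacuum: q = m_a^2 / (2E)."""
    return m_a_eV**2 / (2.0 * E_eV)

def is_coherent(m_a_eV, E_eV=4200.0, L_m=9.2):
    """Axion-photon conversion stays coherent over the magnet while q*L < pi."""
    q = momentum_transfer_eV(m_a_eV, E_eV)
    return q * L_m * M_TO_INV_EV < math.pi

print(is_coherent(0.02))  # True: just below the quoted vacuum limit
print(is_coherent(0.03))  # False: coherence already lost over 9.2 m
```

Adding helium raises the photon's effective mass, cancelling part of the mismatch q at one particular axion rest mass per pressure setting, which is why the gas-filled phase scans the mass range step by step.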


The precise pressure settings for the helium gas and controlled changes in the very cold magnet pipes are highly demanding and are not without risk. CAST has benefited greatly from CERN’s world-class cryogenic expertise in this respect, with its reliable user-friendly gas system designed by Tapio Niinikoski and his PhD student Nuno Elias. At present an extensive thermodynamic simulation is being performed with the aim of reconstructing the changing conditions of the helium gas as the magnet tracks the Sun. For example, to achieve the homogeneity in gas density necessary to keep coherence, the temperature variations along the 9-m long pipes should be in the milli-kelvin range; this is made possible by the surrounding bath of superfluid liquid helium at about 1.8 K.

CAST is also a “special” experiment when compared with others because its highly sensitive magnet and low-background detectors must operate while in motion, even though the speed of about 2 m an hour is almost imperceptible. In addition, CAST’s equipment must withstand quenches of the superconducting magnet. After each quench the gas control system must cope with extreme conditions within seconds. However, during 15,000 hours of operation with the magnet on, and more than 2000 hours of solar tracking, CAST has survived potentially catastrophic events because its safety features have – thanks to the careful work of CERN’s Martyn Davenport – never failed simultaneously.

Scientific return


While CAST has failed so far to find direct evidence for solar axions, it has been able to provide new robust limits on the interaction of solar axions with a magnetic field, i.e. the sea of virtual photons (figure 1). Its experimentally derived limit dominates the relevant phase space and competes with the best astrophysically derived lower value for the coupling constant, g. CAST is now moving into a theoretically motivated region, having almost fulfilled the original expectations set a decade ago with all of the input uncertainties at that time.

Moving beyond the initial proposal, CAST has in parallel explored – for the first time for a solar axion search – the region of high-energy solar axions, following the proposal of collaboration member Juan Collar. It has also made the first measurements below 1 keV, covering so far the range of around 1–3 eV. Moving to energies above this is possible; however, it will require larger energy steps and some new state-of-the-art detector technology to explore this interesting energy region, which covers most of the Sun’s puzzling X-ray activity.

Without detecting any solar-axion signature so far, the question arises: what is the scientific return from CAST? Certainly, the first benefit is educational, with students completing some 10 PhD theses and an equal number of diploma theses. There have also been several CAST summer students at CERN. On the research side, CAST has helped to revive axion activities around the world, fitting between pure axion searches in the laboratory and a variety of astrophysical/cosmological observatories that usually did not have axions in their original list of objectives. The state-of-the-art detectors in these observatories cover photon energies from micro-electron-volts upwards. With CAST, the implementation of X-ray optics in axion helioscopy has become widely accepted as a necessary ingredient for future scaled-up versions.

While CAST’s results have become a reference in the relevant field, they have also been used by other teams to search, for example, for “paraphotons” – sterile massive photons from the “hidden sector”. Furthermore, two members of the CAST collaboration, Milica Krĉmar and Biljana Lakić, have used the experiment’s results to explore theories of large extra dimensions, which predict “massive” axions of the Kaluza-Klein type. Interestingly, such massive exotica could be gravitationally trapped in the Sun and could build a bright halo, as a result of their spontaneous decay, as we have suggested with Luigi Di Lella of CERN.

The axion signal that the CAST collaboration aims to observe while tracking the Sun consists of excess X-rays emerging from the magnet tubes. Interestingly, there is abundant solar X-ray emission of otherwise unknown origin, which is further enhanced just above the magnetized photosphere. For more than 70 years, known physics has failed to explain this intriguing behaviour, which could, however, arise from the conversion or decay of axions or other similar exotica near the Sun’s restless surface. The outermost solar layers, i.e. the photosphere, might act occasionally as scaled-up and highly effective catalysts of axions or similar particles, emitting large numbers of X-rays (like a fine-tuned CAST might do one day). Then, extending Sikivie’s original idea, the otherwise mysterious solar surface makes these axions visible as X-rays. New X-ray observatories in space are already providing more and more exciting evidence that something new and interesting is going on in the Sun’s outer layers. The complete axion scheme may make the Sun even more special than it already is.

Such a solar scenario might eventually point to a “superCAST”, which in 5 to 10 years may well make the present CAST look like an old-fashioned miniature device – provided that Sikivie’s pioneering idea behind CAST is not replaced by a novel conceptual design. For example, together with Andrzej Siemko of CERN, we have proposed using a quadrupole magnet as a potentially better axion catalyst than the dipole magnets used at present in almost all axion experiments. This idea, which was also discussed theoretically by Eduardo Guendelman in 2008, is motivated observationally, because otherwise puzzling solar X-ray activity correlates not only with magnetic fields but even more with places of varying field vector.

Alvaro De Rújula commented in 1998 that “axion searches are mandatory, fun, creative – and proceeding”. His words are just as true today, as the CAST project continues into its second decade.

• I am very grateful to all members of the CAST collaboration, to CERN for its hospitality and support, including the librarians, and to my colleagues at the University of Patras for their real help.

This article is dedicated to the memory of the following members of the CAST collaboration who have sadly passed away since the project’s inception: Engin Abat, Engin Arik, Fatma Senel Boydag, Ozgen Berkol Dogan, Angel Morales and Julio Morales.

CERN prepares for long 7 TeV run


The Chamonix workshop, held on 25–29 January, once again proved its worth as a place where all of the stakeholders in the LHC can come together, take difficult decisions and reach a consensus on important issues. This time the most important decision taken was to run the LHC for 18 to 24 months at a collision energy of 7 TeV (3.5 TeV per beam) before a long shutdown, which will allow time for all of the work necessary for the machine to reach the design collision energy of 14 TeV. As beam returns in the LHC this February it marks the start of the longest phase of accelerator operation in CERN’s history, running into summer or autumn 2011.

What is the reasoning behind this decision? First, the LHC is a cryogenic facility, so each run is accompanied by lengthy cool-down and warm-up phases. Second, there is still essential work to be done to prepare the LHC for running at energies significantly higher than the collision energy of 7 TeV chosen for the first physics run. These facts led to a simple choice: run for a few months now and programme successive short shutdowns to step up in energy; or run for a long time now and schedule a single long shutdown before allowing a total energy of 14 TeV (7 TeV per beam). A long run gives the machine teams time to prepare carefully for the work that will be needed before running at 14 TeV. For the experiments, 18 to 24 months will bring enough data across all of the potential discovery areas.

Before the 2009 running period began, all of the necessary preparations to run the LHC at the collision energy of 1.18 TeV per beam had been carried out. The goal of the technical stop, scheduled to end in mid-February, was to prepare the machine for running at 3.5 TeV per beam, which requires a current of 6 kA in the LHC magnets.

The main work during the stop was on the new quench-protection system (nQPS), which is designed to improve the electrical reliability of the connection between the instrumentation feedthrough systems on the magnets and the nQPS equipment. There are around 500 of these connectors for each of the eight sectors in the LHC. An intensive effort ensured that this work was undertaken and completed in the first three weeks of January, so that the hardware-commissioning teams could proceed with testing the magnets up to 6 kA.

Several other teams took advantage of the stop to carry out other technical verifications and efficiency tests, for example, on some vacuum pumping units, the kicker system, the oxygen-deficiency hazard detectors, and on some ventilation components. At the same time as this work on the LHC, repairs took place on the water-cooling system of the CMS experiment.

All work, both in the LHC and in the CMS experiment, was scheduled to be completed by mid-February. The machine operations team will then begin to re-commission the LHC at 450 GeV per beam, building on the experience gained after the restart last year and completing investigations of machine parameters at this energy (CERN Courier January/February 2010 p24). The team will then prepare for the first ramps to 3.5 TeV per beam. Collisions at 3.5 TeV will follow, but only after the operators have established the appropriate running conditions.

Workshop pushes proton-driven plasma wakefield acceleration


PPA09, a workshop held at CERN on proton-driven plasma wakefield acceleration, has launched discussions about a first demonstration experiment using a proton beam. Steve Myers, CERN’s director for Accelerators and Technology, opened the event and described its underlying motivation. Reaching higher-energy collisions for future particle-physics experiments beyond the LHC requires a novel accelerator technology, and “shooting a high-energy proton beam into a plasma” could be a promising first step. The workshop, which brought together participants from Germany, Russia, Switzerland, the UK and the US, was supported by the EuCARD AccNet accelerator-science network.

Plasmas, which are gases of free ions and electrons, can support large electric fields – a property that can be exploited to accelerate particles to relativistic energies over much shorter distances than is possible with current technologies. Past research has focused on creating large-amplitude plasma waves by injecting a short, intense laser pulse or an electron bunch into the plasma. Indeed, accelerating gradients up to 100 GV/m have been established over a centimetre with laser excitation and up to 50 GV/m over a metre with a short electron bunch as driver.
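To an ideal first approximation, the energy gained is simply the accelerating gradient times the distance over which it is sustained, which puts the quoted figures in perspective (a trivial sketch using the numbers above, ignoring real-world losses and dephasing):

```python
def energy_gain_GeV(gradient_GV_per_m, length_m):
    """Ideal energy gain for a particle riding a uniform accelerating
    gradient: gain = gradient x length."""
    return gradient_GV_per_m * length_m

# Laser driver: 100 GV/m sustained over 1 cm gives about 1 GeV.
print(energy_gain_GeV(100.0, 0.01))
# Electron-bunch driver: 50 GV/m over a metre gives about 50 GeV.
print(energy_gain_GeV(50.0, 1.0))
```

For comparison, conventional RF cavities top out at a few tens of MV/m, which is why plasma schemes promise such dramatic reductions in accelerator length.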

A recent proposal is to excite the plasma through a more energetic proton bunch. The maximum energy gain of electrons accelerated in a single plasma wake is limited to roughly twice the energy of the particles in the driving bunch. Given that protons can be accelerated to tera-electron-volt energies in conventional accelerators, it should be possible to accelerate electron bunches in the wake of a proton driving-bunch to energies up to the tera-electron-volt regime in one pass through the plasma.
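The "roughly twice the driver energy" bound (the single-stage transformer-ratio limit) can be written down directly; the factor of two is the rule of thumb given in the text, not an exact result:

```python
def max_witness_energy_TeV(driver_TeV, transformer_ratio=2.0):
    """Single-stage ceiling on the energy of the accelerated (witness)
    bunch: roughly transformer_ratio times the driver-particle energy."""
    return transformer_ratio * driver_TeV

# A 1 TeV proton driver could in principle take electrons into the
# TeV regime in a single plasma stage; an electron or laser driver of
# tens of GeV cannot, which is the motivation for using protons.
print(max_witness_energy_TeV(1.0))
```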

The plasma wake produced by a 1 TeV proton bunch has already been investigated in computer simulations (Caldwell et al. 2009). The simulated electric fields are a factor of 100 higher than those considered for the International Linear Collider, and could lead to the acceleration of a bunch of electrons to several hundred giga-electron-volts within a few hundred metres (starting with a 1 TeV short proton bunch as driver).

So far there have been no beam tests with proton-driven plasmas. The primary goal of the PPA09 workshop was, therefore, to start the discussion on a pioneering experiment – using a proton beam from CERN’s Proton Synchrotron or Super Proton Synchrotron to demonstrate the generation of strong wakefields by a proton bunch. The preparation of a letter of intent for such experimentation at CERN was discussed in the workshop. One of the questions left open is the method for generating the required long, dense plasma. The workshop identified two options, which are now being pursued in parallel.

The workshop concluded that a first round of beam measurements, possibly in 2012, would search for modulations of a long proton bunch (rms bunch length around 15 cm). This effect is predicted by particle-in-cell simulations and its observation would provide an excellent benchmarking test. The goals for subsequent rounds of experimentation would include generating stronger electric fields in the plasmas by first longitudinally compressing or otherwise pre-modulating the proton bunch, and eventually, in 2014 or later, demonstrating the acceleration of an electron bunch in the wake of the proton bunch.

Chan Joshi, from the University of California, Los Angeles, and one of the prominent researchers participating in PPA09, defined the medium-term goal of a CERN proton-driven plasma-wakefield experiment as the demonstration of 1 GeV proton-driven acceleration in less than 5 m; its ultimate goal would be to accomplish 100 GeV acceleration over a distance of 100 m.

• More details of the workshop and all presentations can be found at http://indico.cern.ch/conferenceDisplay.py?confId=74552 and http://accnet.lal.in2p3.fr.

Profiting from the long view


We can measure and analyse accumulated superconducting RF (SRF) operating experience in broad, high-level terms using the “cryomodule century”, or CC. Ten cryomodules operating for a decade, or 50 of them operating for two years, yield 1 CC. In the past, Tristan at KEK and HERA at DESY each accumulated more than 1 CC, and LEP-II accumulated nearly 4 CC. KEK-B, Cornell, and the Tesla Test Facility/FLASH have each accumulated a large fraction of 1 CC. In addition, well over half of the world’s SRF operating experience has taken place at two US Department of Energy nuclear-physics facilities: ATLAS at Argonne National Laboratory and the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab.
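The CC bookkeeping defined above is simple enough to sketch (illustrative only):

```python
def cryomodule_centuries(n_modules, years):
    """Operating experience in 'cryomodule centuries':
    1 CC = 100 cryomodule-years of operation."""
    return n_modules * years / 100.0

# The two defining examples from the text, both equal to one CC:
print(cryomodule_centuries(10, 10))  # ten cryomodules for a decade
print(cryomodule_centuries(50, 2))   # fifty cryomodules for two years
```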

Although a mere 25 years old – including more than 15 years of running SRF in CEBAF – Jefferson Lab has, in this sense, accumulated many centuries’ worth of operating experience, about 6 CC. This experience has made it possible for CEBAF to operate at energies exceeding 6 GeV, 50% above its design energy. These energies resulted from a refurbishment programme that sought to improve the 10 lowest-performing cryomodules of the 42¼ installed at CEBAF. Refurbishment involved fixing problems in the SRF accelerating cavities inside the cryomodules, applying the latest advances in SRF science and technology. Each of the 10 refurbished cryomodules has performed at least 50% better than the best of the original complement – at two and a half times the original specification.

In Hamburg, the European XFEL project will in its first decade yield more than 10 CC, roughly comparable to today’s combined world total. The main linacs of the International Linear Collider (ILC), however, will require cryomodules for some 16,000 SRF cavities. ILC’s first decade will yield some 186 CC – more than an order of magnitude greater than the world’s present total or XFEL’s projected total. What challenges will confront those who seek to operate the ILC and other future machines over long periods?

At Jefferson Lab, ILC’s order-of-magnitude scale-up calls to mind the SRF pioneering of CEBAF, itself an order-of-magnitude scale-up from seminal SRF R&D that was conducted mainly at Cornell, KEK, DESY, and earlier at Stanford University. In the effort to head off or pre-compensate for operational difficulties, CEBAF’s scale-up challenges included higher-order modes, overall reliability in a many-cryomodule system and the fact that the beams to be accelerated had distinct properties not previously engaged. Yet, even though these and countless other pre-operational questions were attacked, actual practice, year in and year out, has turned up much that was simply unforeseen, and was probably unforeseeable. As a result, in CEBAF’s decade and a half of operating, about 1.5 refurbishments have been necessary per CC. Extrapolated, that would imply about 30 per year for the ILC.
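The "about 30 per year" extrapolation follows directly from the numbers above: CEBAF's observed ~1.5 refurbishments per CC applied to the ILC's projected ~186 CC per decade. A sketch of the arithmetic (the refurbishment rate is the historical CEBAF figure, not a prediction):

```python
def refurbs_per_year(cc_per_decade, refurbs_per_cc=1.5):
    """Extrapolate an observed refurbishment rate (per cryomodule
    century) to a machine accumulating cc_per_decade CC in ten years."""
    return cc_per_decade / 10.0 * refurbs_per_cc

# ILC: ~186 CC per decade -> 18.6 CC/year x 1.5 refurbs/CC ~ 28/year,
# i.e. roughly 30 refurbishments per year as stated in the text.
print(refurbs_per_year(186))
```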

Of course, extrapolations about the ILC and other future SRF machines are inevitably subject to errors. For one thing, experience to date involves operating gradients significantly lower than those planned for the ILC (and for the XFEL, as well). And at CEBAF and other operating SRF machines, most of the post-construction problems have already been corrected. For example, in SRF cavity processing, future accelerator builders won’t have to re-learn the value of high-pressure rinsing, which removes the performance limitation of field emission – and which is helping the ILC high-gradient R&D programme to achieve significantly higher accelerating gradients than past machines have reached.

But both the XFEL and the ILC will push the (current) state of the art just as CEBAF pushed the (then) state of the art. So it is certain that the problems these future SRF machines encounter will be new and different. Nevertheless, past experience is all that we have, and we should try to learn from it. Despite the uncertainties, strategies for spares will need to be developed. To maintain the operating gradient, failure rates will need to be estimated. CEBAF had one cryomodule failure per CC, but the failures appeared only after the first 7 years, or the first 3 CC. Those failures exposed existing flaws, but new problems will surely emerge. CEBAF has also had gradient degradation of 1% per year from new field-emission sites caused by particulates inside the vacuum system. In sum, from our experience, any SRF machine needs to plan for refurbishments at a rate of 1–2 per CC.

In current SRF accelerators, cryomodules are independent, standalone entities that can (with some difficulty) be pulled out for refurbishment. In future SRF accelerators, the need to minimize static heat losses pushes the design toward more integrated accelerator systems, even at the cost of making replacement harder. Yet if extrapolation from current operating experience is valid, it will be important to have the ability to refurbish, which means that it will be necessary to avoid having cryomodules that are difficult to extract. It’s the continuation of a longstanding design conflict: tight integration of systems improves performance, but makes repair harder.

SRF operating experience now has considerable standing – many cryomodule-centuries of it, in fact. This experience base constitutes an imperfect yet vital tool. And for all of us, there’s profit in looking back in order to see forward.

The LHC is back: four remarkable weeks

Telltale dots

The moment that particle physicists – and many others – around the world had been waiting for finally arrived on 20 November 2009. Bunches of protons circulated once again round CERN’s Large Hadron Collider (LHC), a little more than a year after a damaging incident brought commissioning to a standstill in September 2008. As the operators put the machine through its initial paces, the collider passed a number of milestones – from the first collisions in the LHC detectors at 450 GeV per beam to collisions with “squeezed” multibunch beams at the world-record energy of 1.18 TeV. In addition, the collaborations collected sufficient data to calibrate their detectors and assess how well they perform before the real attack on high-energy physics begins later this year.

"Mountain range" plot

“It has been remarkable,” Steve Myers, CERN’s director for accelerators and technology, commented in a presentation to CERN Council and staff on 18 December. “Things have moved so quickly that it has been hard to keep up with the progress.” It was also the tip of an iceberg – a pinnacle of highly visible success built on a year of unstinting effort on repairs and consolidation work, painstaking hardware commissioning and the final preparation for operation with beam.

The restart finally got underway with the injection of both beams into the LHC on Friday 20 November and their careful threading round the machine, step by step, as on the famous start-up day in September 2008. There was jubilation in the CERN Control Centre as Beam 1 made its first clockwise circuits of the machine at 8.40 p.m. A little over an hour later, it had made several hundred circuits, captured by the RF. It was then the turn of Beam 2, which completed the first anticlockwise circuit at 11.40 p.m. and had also been captured successfully by the RF at a little after midnight.

Screens in the CCC

During the following hours the four experiments were treated to special “splash” events, in which a single beam strikes a collimator nearby. These events produce an avalanche of particles that leave a host of tracks and allow the collaborations to check the relative timing of the detectors, for example.

LHCb reports finding clear collisions

The first day already demonstrated that vital elements of beam instrumentation, such as the beam-position monitors and beam-loss monitors, were working well. Over the following weekend, the operators continued commissioning, in particular on Beam 1, including fine-tuning of the RF. This work already led to a good beam lifetime of around 10 hours, as measured from the decay of the beam current. Other key studies included measurements and refinements of the betatron tune (the frequency of transverse oscillations about the nominal orbit) and chromaticity (variations in the tune as a function of the momentum deviation). The tune of the machine immediately showed itself to be remarkably good, a testament to the many years of effort involved in the design and construction of the thousands of magnets that guide the beams round the 27 km ring.
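Tune measurements of this kind are typically made from turn-by-turn beam-position-monitor (BPM) data: the transverse position oscillates at the tune frequency, so a Fourier transform of the readings peaks at the fractional tune. A minimal sketch with simulated data (all numbers are illustrative, not actual LHC values):

```python
import numpy as np

def fractional_tune(positions):
    """Estimate the fractional betatron tune from turn-by-turn positions."""
    x = positions - np.mean(positions)        # remove the closed-orbit offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0)    # units: oscillations per turn
    return freqs[np.argmax(spectrum[1:]) + 1] # skip the DC bin

# Simulated BPM data: 1024 turns with a fractional tune of 0.31
turns = np.arange(1024)
x = 0.5 * np.cos(2 * np.pi * 0.31 * turns + 0.2)
print(round(fractional_tune(x), 2))  # 0.31
```

Measuring chromaticity then amounts to repeating this tune measurement at several momentum offsets and fitting the slope.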

CMS shows a di-photon plot

Monday 23 November saw the LHC reach a brand-new milestone when the two beams circulated simultaneously for the first time at 1.25 p.m. – just in time for an announcement at a press conference about the restart that was held at CERN at 2.00 p.m. The operators then adjusted machine parameters to provide the experiments with the first, real beam–beam collisions, each in turn.

ATLAS was first, with a collision event recorded at around 2.22 p.m. Four hours later it was the turn of ALICE, which immediately saw the trigger rate rise from about 0.001 to 0.1 Hz. Over the next 40 minutes the experiment recorded nearly 300 events. LHCb followed at about 5.45 p.m. This experiment found it less easy to confirm collisions because only the larger and more distant parts of the detector were switched on, but nevertheless the events collected showed indications of good-looking vertices. Soon after 7.00 p.m. the operators tried again for collisions in ATLAS and CMS, this time at a slightly higher intensity and with improved beam steering. CMS bagged its first collision at 7.40 p.m.

Both beams reach 1.18 TeV.

These first collisions were all obtained with a low-intensity “probe” beam, so called because it allows the operators to probe the limits of safe operation of the LHC with a single bunch per beam of only about 3 × 10⁹ protons. Over the following days, probe beams were used in continued commissioning to ensure that higher intensities could be safely handled and stable conditions could be guaranteed for the experiments over sustained periods. Higher intensities would be needed for the experiments to acquire a meaningful amount of data but nevertheless the first period of collisions provided plenty to report on in presentations to a packed main auditorium at CERN on 26 November, just six days after the restart. There were measurements of timings, tracking, calorimetry, missing energy and plenty more from all four of the big LHC experiments, as well as reconstructions, including π⁰ peaks from LHCb and CMS.

ALICE submits paper

During the first three days the LHC operated as a storage ring and as a collider, but at a beam energy of only 450 GeV – the injection energy from the SPS. An important next step was to begin tests to ramp the current and hence the field in the dipole magnets in synchrony with increasing beam energy (supplied by the RF). On 24 November, Beam 1 underwent the first ramp, reaching 560 GeV before it died away after encountering resonances in the betatron oscillations. Nevertheless, the LHC had worked as an accelerator for the first time.

Stable beams at 450 GeV

Further commissioning ensued, including energy matching between the SPS and the LHC on 27 November. Two days later, the operators were ready to try the first ramp to a world-record energy and at 9.48 p.m. on 29 November they accelerated Beam 1 from 450 GeV to 1.04 TeV. This exceeded the previous world-record beam energy of 0.98 TeV, which had been held by Fermilab’s Tevatron collider since 2001. Within three hours, the LHC had broken its own record, as both beams were successfully accelerated to 1.18 TeV at 0.44 a.m. on 30 November. This was the maximum energy for this first LHC run, corresponding to 2 kA in the dipole magnets – the limit to which the safety systems had been tested before the restart.
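The link between the 2 kA dipole current and the 1.18 TeV limit can be checked with a linear scaling, since the bending field – and hence the beam energy the ring can hold – grows roughly linearly with the magnet current. The nominal values below (about 11.85 kA for 7 TeV) are assumed for illustration, not figures from the article:

```python
NOMINAL_CURRENT_KA = 11.85   # assumed nominal LHC dipole current
NOMINAL_ENERGY_TEV = 7.0     # nominal beam energy

def beam_energy_tev(current_ka):
    """Beam energy implied by a given dipole current (linear B-I model)."""
    return NOMINAL_ENERGY_TEV * current_ka / NOMINAL_CURRENT_KA

print(round(beam_energy_tev(2.0), 2))  # ≈ 1.18 TeV, the record energy
```

In reality the field-current relation deviates from linear at the extremes (persistent currents at low field, saturation at high field), but the linear model reproduces the quoted numbers well.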

An event recorded by LHCb

Later that same day tests began to study any effects that the solenoid magnets in the experiments might have on the beam orbit, which would need compensatory adjustments. ALICE was the first to ramp the solenoid field, followed by ATLAS and finally the biggest of the three, the “S” in CMS with its full field of 3.8 T. The effects were all small; indeed, changes in the orbit arising from earth tides at the time of the ramp in CMS proved to have a bigger effect than the field of the giant solenoid.

Dumping 16 bunches

December began with a “first” of a different kind, when the ALICE collaboration, having analysed the 284 events recorded on 23 November, submitted the first paper based on collision data at the LHC for publication in the European Physical Journal C. The collaboration analysed the events to measure the pseudorapidity density of charged primary particles in the central region. The results are consistent with previous measurements made at the same centre-of-mass energy a quarter of a century ago, when CERN’s SPS ran as a pulsed proton–antiproton collider. The paper was accepted for publication two days later.
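Pseudorapidity, the variable in which the ALICE measurement is binned, depends only on a particle’s polar angle θ relative to the beam axis: η = −ln tan(θ/2). A short illustration:

```python
import math

def pseudorapidity(theta):
    """Pseudorapidity of a particle at polar angle theta (radians)."""
    return -math.log(math.tan(theta / 2.0))

# Perpendicular to the beam axis: eta = 0 (the "central region")
print(round(pseudorapidity(math.pi / 2), 6))        # 0.0
# 10 degrees from the beam axis: well into the forward region
print(round(pseudorapidity(math.radians(10)), 2))   # ≈ 2.44
```

The “central region” quoted for the ALICE measurement refers to small |η|, i.e. particles emitted at large angles to the beams.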

CMS sees a candidate di-muon.

From 1 to 6 December the operations team continued with beam commissioning at 450 GeV, in particular with aperture scans to determine the operational space for beam manoeuvres and collimator scans to indicate the best settings for these devices, which are used to “clean” the beam by removing particles forming a halo around the main core. These studies are important for setting the parameters for the safe running of the machine – safe in the sense that the halo particles do not go off course into the LHC magnets or sensitive parts of the experiments.

Silicon detector planes

Other studies concern aborting a run safely and depositing the beams in the beam dump near Point 6 on the ring. During normal running, if the beam becomes unstable the beam-loss monitors should sense this and trigger a set of fast pulsed magnets to eject the beams along a tunnel to the beam stop. To avoid dumping all of the energy in a single spot on the dump face – which at full intensity would be around 360 MJ per beam – magnets along the tunnel spread out the beam so that it “paints” a circle when it arrives at the stop.
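The roughly 360 MJ figure follows from multiplying the number of protons in a beam by the energy carried by each. A quick check, assuming the nominal LHC parameters of 2808 bunches of about 1.15 × 10¹¹ protons each at 7 TeV (only the bunch count appears in this article):

```python
EV_TO_JOULE = 1.602e-19  # energy of one electron-volt in joules

def stored_energy_mj(protons_per_bunch, n_bunches, beam_energy_tev):
    """Total kinetic energy stored in one beam, in megajoules."""
    energy_per_proton_j = beam_energy_tev * 1e12 * EV_TO_JOULE
    return protons_per_bunch * n_bunches * energy_per_proton_j / 1e6

print(round(stored_energy_mj(1.15e11, 2808, 7.0)))  # ≈ 362 MJ
```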

LHCf

Preliminary investigations of this kind are all undertaken at low intensities with the probe beam. On 5 December the operators took a first small but significant step to higher intensity when they injected two bunches per beam into the LHC. Beam with four bunches each followed in the early hours of 6 December and, at 6.46 a.m., the operators declared the first period of “stable beams” at 450 GeV, with some 10¹⁰ protons per beam. This meant that the collaborations could switch on all parts of their detectors, including the most sensitive, collecting data at a rate of about 0.5 Hz. Ultimately the LHC will run with 2808 bunches per beam.

Steps in intensity

While the operators continued to take steps to increase the intensity – both through more bunches and with more protons per bunch injected from the SPS – stable running at 1.18 TeV also remained an important goal. A test ramp with two bunches per beam on 8 December gave ATLAS the chance to record a first collision at a total energy of 2.36 TeV, although at the time the experiment was in “safe” mode and many parts were turned off.

Periods of steady beams

The continued careful studies with higher intensities led to a first period of stable beams at 450 GeV with higher bunch intensities on 11 December, this time with four bunches per beam and 2 × 10¹⁰ protons per bunch. This increased the event rates in the experiments to about 10 Hz, some 100 times higher than in the first tests on 23 November. Ultimately, at 9.00 p.m. on 14 December, the LHC began to run with stable beams with 16 bunches, providing some 1.85 × 10¹¹ protons per beam – and trigger rates of around 50 Hz.

The LHC is back

The four big experiments were eventually able to observe significant numbers of collisions with all of the subdetectors operational at a beam energy of 450 GeV under stable conditions, accumulating a grand total of 1.6 million events. LHCf, the small experiment that sits in the forward direction close to the ATLAS detector, amassed enough events for the collaboration to begin the first physics. This experiment, which is to study the production of showers of particles similar to those created in cosmic-ray showers, collected some 6000 showers at 900 GeV in the centre of mass.

The first collision

In addition, progress with ramping on 14 December allowed the experiments to record collisions at a total energy of 2.36 TeV for the first time during a 90-minute period of stable beams, with two bunches per beam. Altogether, the four big experiments recorded some 125,000 events in this new energy region.

With the LHC run scheduled to end on the evening of 16 December for a shutdown for further consolidation work in preparation for running at higher energies, the last two days saw the machine revert to the operators for further commissioning studies. First there were tests on 15 December in which one of the TOTEM experiment’s delicate Roman pots was moved closer towards the beam to record the first track in the “edgeless” silicon detectors (CERN Courier September 2009 p19).

Finally, in the early hours of 16 December the beam experts were able to test the “squeeze” at the interaction regions. A squeeze involves reducing the beam size at the collision points by reducing (“squeezing”) the betatron function, β, which describes the amplitude of the betatron oscillations. With four bunches per beam, the machine ramped once again to 1.18 TeV, and a squeeze to 7 m was successfully applied at interaction region 5, where the CMS experiment is located.
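The effect of a squeeze on beam size can be sketched from the standard relation σ = √(εβ): the rms transverse beam size shrinks as the square root of the betatron function. The emittance and unsqueezed β below are assumed, illustrative values only; just the 7 m squeeze comes from the text:

```python
import math

def beam_size_um(geometric_emittance_m, beta_m):
    """Rms transverse beam size, sigma = sqrt(emittance * beta), in micrometres."""
    return math.sqrt(geometric_emittance_m * beta_m) * 1e6

emittance = 5e-9        # assumed geometric emittance, metres
beta_unsqueezed = 11.0  # assumed beta at the interaction point before the squeeze
beta_squeezed = 7.0     # the squeeze reported in the article

print(round(beam_size_um(emittance, beta_unsqueezed)))  # ≈ 235 um
print(round(beam_size_um(emittance, beta_squeezed)))    # ≈ 187 um
```

Smaller beams at the collision point mean a higher collision rate for the same beam current, which is why the squeeze matters for the experiments.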

After further beam studies, at 6.00 p.m. the operators prepared to dump the beam for the last time in 2009, just as planned. This ended the first, highly successful full commissioning run for the LHC, which is being followed by a technical stop until February. While the LHC remains on stand-by, work continues to implement protection systems to allow high-energy running at up to 3.5 TeV per beam, as well as to make other modifications and repairs in the machine and the experiments. The first four weeks of running had brought plenty of success, auguring well for the future. After some time for celebrations over the festive season, it would be time to prepare for the next step in this great adventure.

NICA targets the mixed phase in hadron matter


Dynamic scientific projects with daring research programmes involving high technology can often trigger breakthroughs in innovation and industrial development. A team at the Joint Institute for Nuclear Research (JINR) at Dubna has conceived of one such project: the Nuclotron-based Ion Collider fAcility (NICA), a superconducting accelerator complex for colliding beams of heavy ions in the energy range of 4–11 GeV per nucleon in the centre of mass. It is this kind of project that is vital if Russia is to become a leader in innovation development.

The aim of NICA is to study an intricate and mysterious phenomenon: the mixed phase of quark–gluon matter. Conceived by the research group led by Alexei Sissakian, head of the NICA project, the facility is based on the Nuclotron, the superconducting ion synchrotron that already operates at JINR’s Veksler and Baldin Laboratory of High-Energy Physics. This latest project builds on the scientific schools and traditions of the scientists who founded this international centre for research in nuclear physics on Russian territory. The result is a collaboration between physicists at Dubna and other Russian scientific centres: the Institute for Nuclear Research of the Russian Academy of Sciences (RAS); the State Scientific Centre; the Institute for High-Energy Physics (IHEP) in Protvino; the Budker Institute of Nuclear Physics (BINP); the Scientific Research Institute for Nuclear Physics of Moscow State University; and the Institute for Theoretical and Experimental Physics in Moscow.

New lease of life

The NICA project has been under development since 2006, in close co-operation with leading institutions of the RAS, the Rosatom State Atomic Energy Corporation, the Federal Agency for Science and Innovation, the Federal Agency for Education, Moscow State University and the Russian Scientific Centre “Kurchatov Institute”. It will culminate in a unique accelerator complex – a cascade of four accelerators that includes the existing Nuclotron – which should be completed by 2015. Constructed at JINR with much effort and hardship in the period of change in Russia during the 1990s, the Nuclotron has been useful for world science but owing to insufficient financing, this superconducting accelerator has not achieved the planned beam parameters. The capacity of the vacuum and cryogenic equipment that was affordable a decade ago did not allow further energy increases. Today, however, the NICA project is breathing new life into the Nuclotron and has opened up new prospects for high-energy physics.

Studying the properties of nuclear matter is the fundamental task for modern high-energy physicists, with experimental research conducted at an extremely small scale – around a millionth of a nanometre. Achieving this task not only opens new horizons in perceptions of the world and enables researchers to decipher the evolution of the universe, it also lays the foundation for the development of new techniques on the super-small scale.


According to modern ideas, quark–gluon matter has a mixed phase – like boiling water that exists simultaneously with vapour. The mixed phase of hadronic matter should include free quarks and gluons simultaneously with protons and neutrons, inside which quarks are already confined – or “glued” – by gluons. In the phase diagram of temperature and baryon density, the border between the hadronic state and quark–gluon plasma is not a thin line but a domain whose size and shape are still difficult to determine. It is here, in what we call “the Dubna meadow”, that the mixed phase of hadron matter should exist.

NICA begins with the heavy-ion source, KRION, which propels nuclei into the linear accelerator that will be constructed by specialists from IHEP in Protvino. The beam then enters the booster-synchrotron, where particles are accelerated to the required energy. Thirty-four bunches, each consisting of 10,000 million nuclei, are transported into the Nuclotron. Once aligned by the superconducting magnets to form a thin thread approximately 30 cm long, they are split into two colliding beams of 17 bunches, each with its own ring in the 251-m circumference ion collider.

These two collider rings intersect at two points equipped with detectors. At one collision point, the MultiPurpose Detector (MPD) will detect the existence of the mixed phase and a number of other features in this energy range, such as chiral-symmetry restoration, critical phenomena and the modification of hadron properties in the hot, dense quark–hadron medium. The MPD is designed to spot particles that shoot out from the collision point in every direction. Developing a device with sufficiently high sensitivity will require largely new technological approaches. Another detector is planned for the spin programme – the Spin Physics Detector (SPD) – which will be located at the second collision point. Particle polarization is another mystery of the universe, which Dubna’s theoreticians hope to unravel through experiments for NICA that have been designed together with specialists from BINP in Novosibirsk, who are pioneers in colliding-beam-accelerator technology.

The upgrade of the Nuclotron in Dubna is fully underway. The vacuum in the ring has been improved and the cryogenic complex – the heart of the superconducting accelerator – has been completely upgraded, as well as the power system. Modern diagnostic equipment is currently being installed and a new ion source is under development. The technical project for the NICA accelerator complex and the project concept are being developed at the same time.

Several groups of high-quality specialists from different JINR laboratories work in the NICA/MPD centre, where they are implementing the project for the new accelerator complex and experimental facilities. These include theoreticians, computer programmers, accelerator technologists, co-ordinators and experimentalists. Alexander Sorin, co-supervisor of the NICA project, is the centre’s general leader. Igor Meshkov heads the activities on the development of the accelerator complex and his former student, Grigory Trubnikov, now deputy-chief engineer of JINR, is leading the Nuclotron upgrade. Vladimir Kekelidze, the director of the Laboratory of High-Energy Physics, heads the team designing the MPD.

The construction of any modern experimental facility is impossible without detailed technical planning, so JINR has sought to involve the best-qualified engineers and designers in the process. Nikolai Topilin has returned to Dubna from CERN – where he was responsible for the development of the front-end calorimetry for the ATLAS experiment at the LHC – as chief designer of the NICA complex. It is a good sign for Dubna that engineering designers are returning, having left for the West when Russian science was in decline. Their high-level abilities have always been – and still are – in demand in western countries, so the fact that physicists and engineers are returning to Dubna shows that JINR has chosen the right way forwards.

The development of an accelerator is always linked with the course of events elsewhere, so the physics programme for such a facility and the concept of its construction elements are dynamically interrelated from the outset of building this large-scale machine. The NICA project’s White Book, published in spring 2009, contains the physics basis of the experimental programme at the accelerator complex. It is constantly being replenished with new pages and is open to everyone wanting to contribute to the project.

Because Dubna is an integral part of the worldwide scientific community, the research and quality of the facilities must be of the highest level for it to attract partners. On 9–12 September 2009, the Laboratory of Theoretical Physics held the fourth round-table discussion on the programme, “Physics at the NICA Collider”, with 82 experts in heavy-ion physics from leading nuclear centres in 16 countries (including six JINR member states and four JINR associate members) invited to take part. Representatives from experimental collaborations of leading large facilities for similar research – JINR’s friends and scientific rivals – also showed interest in the programme for NICA, including RHIC at Brookhaven in the US, the Super Proton Synchrotron at CERN and the future Facility for Antiproton and Ion Research (FAIR) at GSI in Germany. The delegation from Germany was the largest, with nine experts, including Boris Sharkov, director-designate of FAIR, and Peter Senger, leader of the Compressed Baryonic Matter collaboration at FAIR.

The specifications of the NICA collider formed the main topic of discussion. Experts analysed the main aspects: nuclear-matter research in experiments with relativistic heavy-ion collisions; new states of nuclear matter at high baryonic densities; local P- and CP-violation in hot nuclear matter (the chiral magnetic effect); electromagnetic interactions and restoration of the chiral symmetry; mechanisms of multiparticle production; correlation femtoscopy and fluctuations; and polarization effects and spin physics at the NICA accelerator. Participants also discussed details of the strategy to develop the MPD and the SPD, based on the physics programme. Representatives from institutes in Russia and elsewhere took an active part in developing the programme. Russian scientists working abroad, including those originally from Dubna, proved to be eager supporters of the NICA project – for example, Brookhaven was represented by the leader of the Nuclear Theory Group, Dmitri Kharzeev.

In summary, the expediency and feasibility of the NICA project have received considerable positive evaluation at the level of international scientific expertise. “We strongly support the implementation of the NICA collider project and we are sure that if the project is completed in time it will make an outstanding contribution to our knowledge about the properties of the superdense matter…The unique opportunity to put the NICA project into action in Dubna must not be missed,” reads the joint memorandum on the results of the round-table discussions.

The coming year will see further contributions from Dubna to the heavy-ion scene. A new scientific journal, Heavy Ion, will accompany the research in heavy-ion physics at JINR, with the first issue scheduled for this year. On 23–29 August Dubna will take the baton from Brookhaven when it hosts an important international conference on heavy-ion collisions at high energies, the “6th International Workshop on Critical Point and Onset of Deconfinement”.

BELLA will boost plasma accelerator research


Lawrence Berkeley National Laboratory is set to explore further the high-gradient acceleration of electron beams using ultra-short pulse lasers with the construction of a new facility – BELLA, the Berkeley Lab Laser Accelerator. The primary goal is to provide researchers within the laboratory’s Laser Optical Systems Integrated Studies (LOASIS) programme with a petawatt-class, ultra-short pulse laser system for experiments aimed at demonstrating a 10 GeV electron beam from a metre-long plasma channel.

A laser plasma accelerator (LPA) of this kind relies on creating an electron-density wave in an ionized medium (i.e. plasma) by displacing the electrons away from the ions with an intense laser pulse. The charge separation results in a strong electric field (up to 10¹⁰ V/m) that co-propagates with the laser pulse (like a wake behind a boat) and is capable of accelerating electrons to very high energies in a short distance. Electrons pulled out of the background plasma into the wake can then “surf” on it to reach high energies. Typical electric fields generated in an LPA can be more than 1000 times larger than in conventional RF accelerators, enabling the acceleration of electrons to giga-electron-volt energies in distances of centimetres instead of tens of metres.
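The gain in compactness can be made concrete by comparing the lengths needed to reach a given energy at each gradient. A minimal sketch, where the 10 GV/m laser-plasma figure comes from the text and the 30 MV/m conventional-RF gradient is an assumed typical value:

```python
def length_for_energy_m(target_energy_gev, gradient_gv_per_m):
    """Accelerator length needed to reach a target energy at a constant gradient."""
    return target_energy_gev / gradient_gv_per_m

# Laser plasma accelerator at ~10 GV/m (10^10 V/m, from the text):
print(length_for_energy_m(10.0, 10.0))   # 1.0 m for 10 GeV
# Conventional RF linac at an assumed ~30 MV/m:
print(length_for_energy_m(10.0, 0.030))  # ~333 m for the same energy
```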

BELLA will build on previous results from the LOASIS programme, which is led by Wim Leemans, one of six recipients of the US Department of Energy’s Ernest Orlando Lawrence Award for 2009. In 2004, researchers with LOASIS showed that high-quality electron beams with an energy spread of a few per cent could be produced at energies of 100 MeV from a structure only 2 mm long. Two years later, the team demonstrated that beams of 1 GeV can be produced from a 3 cm-long plasma structure. One of the key elements of these experiments was the guiding of the laser beams in plasma channels over distances that are long compared with their natural diffraction distance, much as an optical fibre guides a low-power beam.

The aim with the BELLA facility is to scale up these experiments to produce electron beams with energies exceeding 10 GeV in a metre-scale plasma channel. Such devices could form the building blocks of a future-generation linear collider for particle physics, provided that technology is developed to cascade many of these modules and to produce high-quality electron beams with high efficiency. While it could take decades to match the output of the highest-energy RF-based machines, BELLA represents an essential step in investigating how more powerful accelerators of the future might become not only more compact but also much less expensive. Such systems also hold the promise of making possible a table-top accelerator operating in the range of tens of giga-electron-volts, which would be small and cheap enough for universities and hospitals.

The development of a compact linear accelerator with the BELLA project will also have several short-term applications. Among the unique features of LPA-produced electron beams are their duration of a few femtoseconds and their intrinsic synchronization to a conventional laser. A high-quality 10 GeV electron beam could be used to build a soft X-ray free-electron laser, which would be a valuable tool for biologists, chemists, materials scientists and biomedical researchers, allowing them to observe and time-resolve ultrashort (femtosecond) phenomena. A multigiga-electron-volt electron beam could also be used to produce highly collimated, mega-electron-volt photons that could penetrate cargo in a nondestructive way and be highly useful for remote detection of nuclear material. Such high-energy photon beams can be produced by scattering an intense (low-energy photon) laser pulse off the high-energy electron beam.

BELLA will be housed in an existing building at Berkeley. The space will be reconfigured and upgraded to include a clean room, new laser laboratory space and additional shielding. The project is funded largely by the American Recovery and Reinvestment Act (commonly known as economic stimulus funding), which is providing $20 million towards BELLA’s construction. The facility will be completed in about three and a half years.

UK’s ALICE facility collides beams to make X-rays


Physicists working on an R&D prototype for the next generation of accelerator-based light sources – Accelerators and Lasers in Combined Experiments (ALICE) at the Daresbury Laboratory in the UK – are celebrating after successfully colliding electrons and a powerful laser beam to produce short-pulsed X-rays. This is the first time this has been done in the UK and the first time that the concept of using an accelerator and laser source together has been demonstrated on ALICE.

The Compton Back Scattering project saw a team of scientists from the Cockcroft Institute, the University of Manchester, the Max Born Institute and the Science and Technology Facilities Council (STFC) accelerate bunches of electrons and then collide them head-on with a high-energy, short-pulse multi-terawatt laser photon beam. The technique converts the optical laser light to X-rays, as the electrons transfer energy to the photons.

ALICE is the first accelerator in Europe to operate using energy recovery, where the energy used to create its high-energy beam is captured and reused after each circuit of the accelerator for further acceleration of fresh particles. The recent success comes just one year after the facility first achieved energy recovery.

US niobium-tin superconducting magnet reaches 200 T/m

A focusing magnet based on niobium-tin superconductor, built by members of the US LHC Accelerator Research Program (LARP), has reached the design gradient of 200 T/m. The US group is working on strategies to upgrade the inner triplet quadrupole magnets that perform the final focusing of the particle beams close to the interaction points.

In an upgraded, higher-luminosity LHC the inner triplets will be subjected to still more radiation and heat than the current magnets are designed to withstand. One of the goals of LARP is to develop upgraded magnets using niobium tin (Nb3Sn), which is superconducting at a higher temperature than the niobium titanium (NbTi) currently used. Nb3Sn therefore has a greater tolerance for heat and can remain superconducting at a magnetic field more than twice as strong. However, it is brittle and sensitive to pressure, and to become a superconductor when cold it must first be reacted at temperatures of 650–700 °C.

The LARP effort initially centred on a series of short quadrupole models at Fermilab and Berkeley and, in parallel, a 4-m long magnet based on racetrack coils, built at Brookhaven and Berkeley. The next step involved the combined resources of all three laboratories on the fabrication of a long, large-aperture quadrupole magnet. In 2005 the US Department of Energy (DOE), CERN and LARP set a goal of reaching, before the end of 2009, a gradient of 200 T/m in a 4-m long superconducting quadrupole magnet with a 90 mm bore for housing the beam pipe.
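The 200 T/m goal can be translated into a field: in a quadrupole the field magnitude grows linearly with distance from the axis, |B| = g·r, so at the edge of the 90 mm bore (r = 45 mm) the coils see a field of several tesla. A quick check:

```python
def field_at_radius_t(gradient_t_per_m, radius_m):
    """Field magnitude of a quadrupole of gradient g at radius r: |B| = g * r."""
    return gradient_t_per_m * radius_m

# 200 T/m gradient, 90 mm bore -> 45 mm radius at the bore edge
print(field_at_radius_t(200.0, 0.045))  # ≈ 9 T
```

A peak field of this order is beyond what NbTi can sustain comfortably, which is part of the motivation for moving to Nb3Sn.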

This goal was met on 4 December 2009 by LARP’s first “long quadrupole shell” model magnet. The magnet’s superconducting coils performed well, as did its mechanical structure, based on a thick aluminium cylinder (shell) that supports the superconducting coils against the large forces generated by high magnetic fields and electrical currents. The magnet’s ability to withstand quenches – sudden transitions to normal conductivity with resulting heating – was also excellent.

• LARP is a collaboration of Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory and the SLAC National Accelerator Laboratory, founded by the DOE in 2003 to address the challenge of planned upgrades to the LHC’s luminosity.
