A stunning image of the nearby Andromeda galaxy (M31) captured by the Subaru Telescope’s Hyper Suprime-Cam (HSC) has demonstrated that the instrument can fulfil its goal of using the ground-based telescope to carry out a large-scale survey of the universe. The combination of a large mirror, a wide field of view and sharp imaging represents a major step into a new era of observational astronomy and will contribute to answering questions about the nature of dark energy and dark matter. The image marks a successful stage in the HSC’s commissioning process, which involves checking all of its capabilities before it is ready for open use.
The Subaru Telescope, which saw first light in 1999, is an 8.2-m optical-infrared telescope at the summit of Mauna Kea, Hawaii, operated by the National Astronomical Observatory of Japan (NAOJ). The HSC – which was installed on the telescope in August last year – substantially increases the field of view beyond that available with the present instrument, the Subaru Prime Focus Camera (Suprime-Cam). The 3-tonne, 3-m-high HSC, mounted at the prime focus, contains 116 innovative, highly sensitive CCDs. Its 1.5°-diameter field of view is seven times that of Suprime-Cam and, combined with the 8.2-m primary mirror, enables the high-resolution images that will underpin what will be the largest-ever galaxy survey.
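As a rough cross-check of the quoted factor of seven, the two fields of view can be compared directly; the figure for Suprime-Cam (about 34 × 27 arcmin) is not quoted above and is assumed here purely for illustration.

```python
import math

# HSC field of view: a circle 1.5 degrees in diameter
hsc_area = math.pi * (1.5 / 2) ** 2        # ~1.77 square degrees

# Assumed Suprime-Cam field of view: ~34 x 27 arcmin (not quoted in the article)
suprime_area = (34 / 60) * (27 / 60)       # ~0.26 square degrees

print(f"Area ratio: {hsc_area / suprime_area:.1f}")   # ~7, as quoted
```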
First conceived in 2002, the HSC Project was established in 2008. The major research partners are NAOJ, the Kavli Institute for the Physics and Mathematics of the Universe, the School of Science at the University of Tokyo, KEK, the Academia Sinica Institute of Astronomy and Astrophysics and Princeton University, with industrial collaborators Hamamatsu Photonics KK, Canon Inc. and Mitsubishi Electric Corporation.
The LHC’s Long Shutdown 1 (LS1) is an opportunity that the ATLAS collaboration could not miss to improve the performance of its huge and complex detector. Planning began almost three years ago to be ready for the break and to produce a precise schedule for the multitude of activities that are needed at Point 1 – where ATLAS is located on the LHC. Now, a year after the famous announcement of the discovery of a “Higgs-like boson” on 4 July 2012 and only six months after the start of the shutdown, more than 800 different tasks have already been accomplished in more than 250 work packages. But what is ATLAS doing, and why this hectic schedule? The list of activities is long, so only a few examples are highlighted here.
The inner detector
One of the biggest interventions concerns the insertion of a fourth and innermost layer of the pixel detector – the IBL. The ATLAS pixel detector is the largest pixel-based system at the LHC. With about 80 million pixels, until now it has covered radii from 12 cm down to 5 cm from the interaction point. At its conception, the collaboration already envisaged that it could be upgraded after a few years of operation. An additional layer at a radius of about 3 cm would allow for performance consolidation, in view of the effects of radiation damage on the original innermost layer at 5 cm (the b-layer). The decision to turn this idea into reality was taken in 2008, with the aim of installation around 2016. However, fast progress in preparing the detector, together with the move of the long shutdown to the end of 2012, boosted the idea and the installation goal was brought forward by two years.
To make life more challenging, the collaboration decided to build the IBL using not only well established planar sensor technology but also novel 3D sensors. The resulting highly innovative detector is a tiny cylinder that is about 3 cm in radius and about 70 cm long but it will provide the ATLAS experiment with another 12 million detection channels. Despite its small dimensions, the entire assembly – including the necessary services – will need an installation tool that is nearly 10 m long. This has led to the so-called “big opening” of the ATLAS detector and the need to lift one of the small muon wheels to the surface.
The “big opening” of ATLAS is a special configuration where at one end of the detector one of the big muon wheels is moved as far as possible towards the wall of the cavern, the 400-tonne endcap toroid is moved laterally towards the surrounding path structure, the small muon wheel is moved as far as the already opened big wheel and then the endcap calorimeter is moved out by about 3 m. But that is not the end of the story. To make more space, the small muon wheel must be lifted to the surface to allow the endcap calorimeter to be moved further out against the big wheels.
This opening up – already foreseen for the installation of the IBL – became more worthwhile when the collaboration decided to use LS1 to repair the pixel detector. During the past three years of operation, the number of pixel modules that are no longer operational has risen continuously from the original 10–15 up to 88, at a worryingly increasing rate. Back in 2010, the first concerns triggered a closer look at the module failures and it became clear that in most cases the modules themselves were in a good state but that something in the services had failed. This first glance was later backed by substantial statistics, with 88 modules having failed by mid-2012.
In 2011, the ATLAS pixel community decided to prepare new services for the detector – code-named nSQP for “new service quarter panels”. In January 2013, the collaboration decided to deploy the nSQP not only to fix the failures of the pixel modules and to enhance the future read-out capabilities for two of the three layers but also to ease the task of inserting the IBL into the pixel detector. This decision implied having to extract the pixel detector and take it to the clean-room building on the surface at Point 1 to execute the necessary work. The “big opening” therefore became mandatory.
The extraction of the pixel detector was an extremely delicate operation but it was performed perfectly and a week in advance of the schedule. Work on both the original pixels and the IBL is now in full swing and preparations are under way to insert the enriched four-layer pixel detector back into ATLAS. The pixel detector will then contain 92 million channels – some 90% of the total number of channels in ATLAS.
But that is not the end of the story for the ATLAS inner detector. Gas leaks appeared last year during operation of the transition radiation tracker (TRT) detector. Profiting from the opening of the inner detector plates to access the pixel detector, a dedicated intervention was performed to cure as many leaks as possible using techniques that are usually deployed in surgery.
Further improvements
Another important improvement for the silicon detectors concerns the cooling. The evaporative cooling system based on a complex compressor plant has been satisfactory, even if it has given rise to a variety of problems and interventions. The system allowed operating temperatures to be set to –20 °C, with the possibility of going down to –30 °C, although the lower value has not been used so far because radiation damage to the detector is still in its infancy. However, the compressor plant needed continual attention and maintenance. The decision was therefore taken to build a second plant based on the thermosyphon concept, where the required pressure is obtained without a compressor, using instead the gravity advantage offered by the 90-m-deep ATLAS cavern. The new plant has been built and is now being commissioned, while the original plant has been refurbished and will serve as a redundant (back-up) system. In addition, the IBL cooling is based on CO2 technology and a new redundant plant is being built to be ready for IBL operations.
Both the semiconductor tracker and the pixel detector are also being consolidated. Improvements are being made to the back-end read-out electronics to cope with the higher luminosities that will go beyond twice the LHC design luminosity.
Lifting the small muon wheel to the surface – an operation that had never been done before – was a success. The operation was not without difficulties because of the limited space for manoeuvring the 140-tonne object to avoid collisions with other detectors, crates and the walls of the cavern and access shaft. Nevertheless, it was executed perfectly thanks to highly efficient preparation – including several dry runs on the surface – and the skill of the crane drivers and ATLAS engineers. Not to miss the opportunity, the few problematic cathode-strip chambers on the small wheel that was lifted to the surface will be repaired. A specialized tool is being designed and fabricated to perform this operation in the small space available between the lifting frame and the detector.
Many other tasks are foreseen for the muon spectrometer. The installation of a final layer of chambers – the endcap extensions – which was staged in 2003 for financial reasons has already been completed. These chambers were installed on one side of the detector during previous mid-winter shutdowns. The installation on the other side has now been completed during the first three months of LS1. In parallel, a big campaign to check for and repair leaks has started on the monitored drift tubes and resistive-plate chambers, with good results so far. As soon as access allows, a few problematic thin-gap chambers on the big wheels will be exchanged. Construction of some 30 new chambers has been under way for a few months and their installation will take place during the coming winter.
At the same time, the ATLAS collaboration is improving the calorimeters. New low-voltage power supplies are being installed for both the liquid-argon and tile calorimeters to give a better performance at higher luminosities and to correct issues that have been encountered during the past three years. In addition, a broad campaign of consolidation of the read-out electronics for the tile calorimeter is ongoing, because many years have passed since it was constructed. Designing, prototyping, constructing and testing new devices like these has kept the ATLAS calorimeter community busy during the past four years. The results that have been achieved are impressive and life for the calorimeter teams during operation will become much better with these new devices.
Improvements are also under way for the ATLAS forward detectors. The LUCID luminosity monitor is being rebuilt in a simplified way to make it more robust for operations at higher luminosity. All of the four Roman-pot stations for the absolute luminosity monitor, ALFA, located at 240 m from the centre of ATLAS in the LHC tunnel, will soon be in laboratories on the surface. There they will undergo modifications to implement wake-field suppression measures to counter the beam-induced heating suffered during operation in 2012. There are other plans for the beam-conditions monitor, the diamond-beam monitor and the zero-degree calorimeters. The activities are non-stop everywhere.
The infrastructure
All of the above might seem an enormous programme but it does not touch on the majority of the effort. The consolidation work spans everything from the improvements to the evaporative cooling plants already mentioned to all aspects of the electrical infrastructure and more. Here are a few examples from a long list.
Installation of a new uninterruptible power supply is ongoing at Point 1, together with replacement of the existing one. This is to avoid power glitches, which have affected the operation of the ATLAS detector on some occasions. Indeed, the whole electrical installation is being refreshed.
The cryogenic infrastructure is being consolidated and improved to allow completely separate operation of the ATLAS solenoid and toroid magnets. Redundancy is implemented everywhere in the magnet systems to limit downtime. Such downtime has, so far, been small enough to be unnoticeable in ATLAS data-taking but it could create problems in future.
All of the beam pipes will be replaced with new ones. In the inner detector, a new beryllium pipe with a smaller diameter, to allow space for the IBL, has been constructed and already installed in the IBL support structure. All of the other stainless-steel pipes will be replaced with aluminium ones to reduce the background level everywhere in ATLAS and minimize the adverse effects of activation.
A back-up for the ATLAS cooling towers is being created via a connection to existing cooling towers for the Super Proton Synchrotron. This will allow ATLAS to operate at reduced power, even during maintenance of the main cooling towers. The cooling infrastructure for the counting rooms is also undergoing complete improvement with redundancy measures inserted everywhere. All of these tasks are the result of a robust collaboration between ATLAS and all CERN departments.
LS1 is not, then, a period of rest for the ATLAS collaboration. Many resources are being deployed to consolidate and improve all possible aspects of the detector, with the aim of minimizing downtime and its impact on data-taking efficiency. Additional detectors are being installed to improve ATLAS’s capabilities. Only a few of these have been mentioned here. Others include, for example, even more muon chambers, which are being installed to fill any possible instrumental cracks in the detector.
All of this effort requires the co-ordination and careful planning of a complicated gymnastics of heavy elements in the cavern. ATLAS will be a better detector at the restart of LHC operations, ready to work at higher energies and luminosities for the long period until LS2 – and then the gymnastics will begin again.
Research in high-energy physics at particle accelerators requires highly complex detectors to observe the particles and study their behaviour. In the EU-supported project on Advanced European Infrastructure for Detectors at Accelerators (AIDA), more than 80 institutes from 23 European countries have joined forces to boost detector development for future particle accelerators in line with the European Strategy for Particle Physics. These include the planned upgrade of the LHC, as well as new linear colliders and facilities for neutrino and flavour physics. To fulfil its aims, AIDA is divided into three main activities: networking, joint research and transnational access, all of which are progressing well two years after the project’s launch.
Networking
AIDA’s networking activities fall into three work packages (WPs): the development of common software tools (WP2); microelectronics and detector/electronics integration (WP3); and relations with industry (WP4).
Building on and extending existing software and tools, the WP2 network is creating a generic geometry toolkit for particle physics together with tools for detector-independent reconstruction and alignment. The design of the toolkit is shaped by the experience gained with detector-description systems implemented for the LHC experiments – in particular LHCb – as well as by lessons learnt from various implementations of geometry-description tools that have been developed for the linear-collider community. In this context, the Software Development for Experiments and LHCb Computing groups at CERN have been working together to develop a new generation of software for geometry modellers. These are used to describe the geometry and material composition of the detectors and as the basis for tracking particles through the various detector layers.
This work uses the geometrical models in Geant4 and ROOT to describe the experimental set-ups in simulation or reconstruction programs and involves the implementation of geometrical solid primitives as building blocks for the description of complex detector arrangements. These include a large collection of 3D primitives, ranging from simple shapes such as boxes, tubes or cones to more complex ones, as well as their Boolean combinations. Some 70–80% of the effort spent on code maintenance in the geometry modeller is devoted to improving the implementation of these primitives. To reduce the effort required for support and maintenance and to converge on a unique solution based on high-quality code, the AIDA initiative has started a project to create a “unified-solids library” of geometrical primitives.
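As a rough illustration of the idea – solid primitives defined once and reused through Boolean composition – the toy sketch below models shapes as point-containment predicates. The class names are hypothetical and are not the Geant4, ROOT or unified-solids interfaces.

```python
import math
from dataclasses import dataclass

@dataclass
class Box:
    dx: float   # half-lengths along x, y, z
    dy: float
    dz: float
    def inside(self, x, y, z):
        return abs(x) <= self.dx and abs(y) <= self.dy and abs(z) <= self.dz

@dataclass
class Tube:
    rmin: float  # inner radius
    rmax: float  # outer radius
    dz: float    # half-length along z
    def inside(self, x, y, z):
        r = math.hypot(x, y)
        return self.rmin <= r <= self.rmax and abs(z) <= self.dz

@dataclass
class Subtraction:   # Boolean combination of two solids
    a: object
    b: object
    def inside(self, x, y, z):
        return self.a.inside(x, y, z) and not self.b.inside(x, y, z)

# A shielding block with a cylindrical beam hole bored through it
shield = Subtraction(Box(10, 10, 50), Tube(0.0, 2.0, 60))
print(shield.inside(5, 5, 0), shield.inside(0, 0, 0))   # True False
```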
Enabling the community to access the most advanced semiconductor technologies – from nanoscale CMOS to innovative interconnection processes – is an important aim for AIDA. One new technique is 3D integration, which has been developed by the microelectronic industry to overcome limitations of high-frequency microprocessors and high-capacity memories. It involves fabricating devices based on two or more active layers that are bonded together, with vertical interconnections ensuring the communication between them and the external world. The WP3 networking activity is studying 3D integration to design novel tracking and vertexing detectors based on high-granularity pixel sensors.
Interesting results have already emerged from studies with the FE-Ix series of CMOS chips that the ATLAS collaboration has developed for the read-out of high-resistivity pixel sensors – 3D processing is currently in progress on FE-I4 chips. Now, some groups are evaluating the possibility of developing new electronic read-out chips in advanced CMOS technologies, such as 65 nm, and of using these chips in a 3D process with high-density interconnections at the pixel level. Once the feasibility of such a device is demonstrated, physicists should be able to design a pixel detector with highly aggressive and intelligent architectures for sensing, analogue and digital processing, storage and data transmission (figure 1).
The development of detectors using breakthrough technologies calls for the involvement of hi-tech industry. The WP4 networking activity aims to increase industrial involvement in key detector developments in AIDA and to provide follow-up long after completion of the project. To this end, it has developed the concept of workshops tailored to maximize the attendees’ benefits while also strengthening relations with European industry, including small and medium-sized enterprises (SMEs). The approach is to organize “matching events” that address technologies of high relevance for detector systems and gather key experts from industry and academia with a view to establishing high-quality partnerships. WP4 is also developing a tool called “collaboration spotting”, which aims to monitor through publications and patents the industrial and academic organizations that are active in the technologies under focus at a workshop and to identify the key players. The tool was used successfully to invite European companies – including SMEs – to attend the workshop on advanced interconnections for chip packaging in future detectors that took place in April at the Laboratori Nazionali di Frascati of INFN.
Test beams and telescopes
The development, design and construction of detectors for particle-physics experiments are closely linked with the availability of test beams where prototypes can be validated under realistic conditions or production modules can undergo calibration. Through its transnational access and joint research activities, AIDA is not only supporting test-beam facilities and corresponding infrastructures at CERN, DESY and Frascati but is also extending them with new infrastructures. Various sub-tasks cover the detector activities for the LHC and linear collider, as well as a neutrino activity, where a new low-energy beam is being designed at CERN, together with prototype detectors.
One of the highlights of WP8 is the excellent progress made towards two new major irradiation facilities at CERN. These are essential for the selection and qualification of materials, components and full detectors operating in the harsh radiation environments of future experiments. AIDA has strongly supported the initiatives to construct GIF++ – a powerful γ irradiation facility combined with a test beam in the North Area – and EAIRRAD, which will be a powerful proton and mixed-field irradiation facility in the East Area. AIDA is contributing to both projects with common user-infrastructure as well as design and construction support. The aim is to start commissioning and operation of both facilities following the LS1 shutdown of CERN’s accelerator complex.
The current shutdown of the test beams at CERN during LS1 has resulted in a huge increase in demand for test beams at the DESY laboratory. The DESY II synchrotron is used mainly as a pre-accelerator for the X-ray source PETRA III but it also delivers electron or positron beams produced at a fixed carbon-fibre target to as many as three test-beam areas. Its ease of use makes the DESY test beam an excellent facility for prototype testing, because this typically requires frequent access to the beam area. In 2013 alone, 45 groups comprising about 200 users from more than 30 countries have already accessed the DESY test beams. Many of them received travel support from the AIDA Transnational Access Funds and so far AIDA funding has enabled a total of 130 people to participate in test-beam campaigns. The many groups using the beams include those from the ALICE, ATLAS, Belle II, CALICE, CLIC, CMS, Compass, LHCb, LCTPC and Mu3e collaborations.
About half of the groups using the test beam at DESY have taken advantage of a beam telescope to provide precise measurements of particle tracks. The EUDET project – AIDA’s predecessor in the previous EU framework programme (FP6) – provided the first beam telescope to serve a large user community, which was aimed at detector R&D for an international linear collider. For more than five years, this telescope, which was based on Minimum Ionizing Monolithic Active pixel Sensors (MIMOSA), served a large number of groups. Several copies were made – a good indication of success – and AIDA is now providing continued support for the community that uses these telescopes. It is also extending its support to the TimePix telescope developed by institutes involved in the LHCb experiment.
The core of AIDA’s involvement lies in the upgrade and extension of the telescope. For many users who work on LHC applications, a precise reference position is not enough. They also need to know the exact time of arrival of the particle, but it is difficult to find a single system that can provide both position and time at the required precision. Devices with a fast response tend to be less precise in the spatial domain or put too much material in the path of the particle. So AIDA combines two technologies: the thin MIMOSA sensors provide the position with their excellent spatial resolution, while the ATLAS FEI4 detectors provide time information with the desired LHC time structure.
The first beam test in 2012 with a combined MIMOSA-FEI4 telescope was an important breakthrough. Figure 2 shows the components involved in the set-up in the DESY beam. Charged particles from the accelerator – electrons in this case – first traverse three read-out planes of the MIMOSA telescope, followed by the device under test, then the second triplet of MIMOSA planes and then the ATLAS-FEI4 arm. The DEPFET pixel-detector international collaboration was the first group to use the telescope, so bringing together within a metre pixel detectors from three major R&D collaborations.
While combining the precise time information from the ATLAS-FEI4 detector with the excellent spatial resolution of MIMOSA provides the best of both worlds, there is an additional advantage: the FEI4 chip has a self-triggering capability because it can issue a trigger signal based on the response of the pixels. Overlaying the response of the FEI4 pixel matrix with a programmable mask and feeding the resulting signal into the trigger logic allows triggering on a small area and is more flexible than a traditional trigger based on scintillators. To change the trigger definition, all that is needed is to upload a new mask to the device. This turns out to be a useful feature if the prototypes under test cover a small area.
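The principle is easy to sketch: the hit map is overlaid with a programmable Boolean mask and the trigger fires only if a hit falls inside the enabled window. The sketch below is illustrative only – the matrix dimensions match the FE-I4 pixel matrix, but the logic is a simplification, not the actual firmware.

```python
import numpy as np

N_ROWS, N_COLS = 336, 80      # FE-I4 pixel-matrix dimensions

# Programmable mask: enable only a small window over the prototype under test
mask = np.zeros((N_ROWS, N_COLS), dtype=bool)
mask[150:170, 30:40] = True

def trigger(hit_map: np.ndarray) -> bool:
    """Issue a trigger if any hit lies inside the enabled region."""
    return bool(np.any(hit_map & mask))

hits = np.zeros((N_ROWS, N_COLS), dtype=bool)
hits[160, 35] = True
print(trigger(hits))          # True

# Changing the trigger region only requires uploading a new mask
mask[:] = False
mask[0:20, 0:10] = True
print(trigger(hits))          # False
```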
Calorimeter development in AIDA WP9 is mainly motivated by experiments at possible future electron–positron colliders, as defined in the International Linear Collider and Compact Linear Collider studies. These will demand extremely high-performance calorimetry, which is best achieved using a finely segmented system that reconstructs events using the so-called particle-flow approach to allow the precise reconstruction of jet energies. The technique works best with an optimal combination of tracking and calorimeter information and has already been applied successfully in the CMS experiment. Reconstructing each particle individually requires fine cell granularity in 3D and has spurred the development of novel detection technologies, such as silicon photo-multipliers (SiPMs) mounted on small scintillator tiles or strips, gaseous detectors (micro mesh or resistive plate chambers) with 2D read-out segmentation and large-area arrays of silicon pads.
After tests of sensors developed by the CALICE collaboration in a tungsten stack at CERN (figure 3) – in particular to verify the neutron and timing response at high energy – the focus is now on the realization of fully technological prototypes. These include the power-pulsed embedded data-acquisition chips required for the particle-flow-optimized detectors at a future linear collider and they address all of the practical challenges of highly granular devices – compactness, integration, cooling and in situ calibration. Six layers (256 channels each) of a fine-granularity (5 × 5 mm²) silicon-tungsten electromagnetic calorimeter are being tested in electron beams at DESY this July (figure 4). At the same time, the commissioning of full-featured scintillator hadron-calorimeter units (140 channels each) is progressing at a steady pace. A precision tungsten structure and read-out chips are also being prepared for the forward calorimeters to test the radiation-hard sensors produced by the FCAL R&D collaboration.
The philosophy behind AIDA is to bring together institutes to solve common problems so that once the problem is solved, the solution can be made available to the entire community. Two years on from the project’s start – and halfway through its four-year lifetime – the highlights described here, from software toolkits to a beam-telescope infrastructure to academia-industry matching, illustrate well the progress that is being made. Ensuring the user support of all equipment in the long term will be the main task in a new proposal to be submitted next year to the EC’s Horizon 2020 programme. New innovative activities to be included will be discussed during the autumn within the community at large.
The combination of high intensity and high energy that characterizes the nominal beam in the LHC leads to a stored energy of 362 MJ in each ring. This is more than two orders of magnitude larger than in any previous accelerator – a large step that is highlighted in the comparisons shown in figure 1. An uncontrolled beam loss at the LHC could cause major damage to accelerator equipment. Indeed, recent simulations that couple energy-deposition and hydrodynamic simulation codes show that the nominal LHC beam can drill a hole through the full length of a copper block that is 20 m long.
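The 362 MJ figure follows directly from the nominal beam parameters – 2808 bunches of 1.15 × 10¹¹ protons at 7 TeV, design values assumed here rather than quoted in the text:

```python
e_charge = 1.602e-19            # joules per electron-volt
n_bunches = 2808                # nominal number of bunches per beam
protons_per_bunch = 1.15e11     # nominal bunch population
proton_energy_eV = 7e12         # 7 TeV per proton

stored_energy = n_bunches * protons_per_bunch * proton_energy_eV * e_charge
print(f"Stored energy per beam: {stored_energy / 1e6:.0f} MJ")   # ~362 MJ
```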
Safe operation of the LHC relies on a complex system of equipment protection – the machine protection system (MPS). Early detection of failures within the equipment and active monitoring of the beam parameters with fast and reliable beam instrumentation is required throughout the entire cycle, from injection to collisions. Once a failure is detected the information is transmitted to the beam-interlock system that triggers the LHC beam-dumping system. It is essential that the beams are always properly extracted from the accelerator via a 700-m-long transfer line into large graphite dump blocks, because these are the only elements of the LHC that can withstand the impact of the full beam. Figure 2 shows the simulated impact of a 7 TeV beam on the dump block.
There are several general requirements for the MPS. Its top priority is to protect the accelerator equipment from beam damage, while its second priority is to prevent the superconducting magnets from quenching. At the same time, it should also protect the beam – that is, the protection systems should dump the beam only when necessary, so that the LHC’s availability is not compromised. Last, the MPS must provide evidence whenever the beam is aborted. When there are failures, the so-called post-mortem system provides complete and coherent diagnostic data. These are needed to reconstruct the sequence of events accurately, to understand the root cause of the failure and to assess whether the protection systems functioned correctly.
Protection of the LHC relies on a variety of systems with strong interdependency – these include the collimators and beam-loss monitors (BLMs) and the beam controls, as well as the beam injection, extraction and dumping systems. The strategy for machine protection, which involves all of these, rests on several basic principles:
• Definition of the machine aperture by the collimator jaws, with BLMs close to the collimators and the superconducting magnets. In general, particles lost from the beam will hit collimators first and not delicate equipment such as superconducting magnets or the LHC experiments.
• Early detection of failures within the equipment that controls the beams, to generate a beam-dump request before the beam is affected.
• Active monitoring with fast and reliable beam instrumentation, to detect abnormal beam conditions and rapidly generate a beam-dump request. This can happen within as little as half a turn of the beam round the machine (40 μs; see the calculation sketched after this list).
• Reliable transmission of a beam-dump request to the beam-dumping system by a distributed interlock system. Fail-safe logic is used for all interlocks: an active signal is required for operation, and the absence of the signal is treated as a beam-dump request or injection inhibit.
• Reliable operation of the beam-dumping system on receipt of a dump request or internal-fault detection, to extract the beams safely onto the external dump blocks.
• Passive protection by beam absorbers and collimators for specific failure cases.
• Redundancy in the protection system so that failures can be detected by more than one system. Particularly high standards for safety and reliability are applied in the design of the core protection systems.
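The 40 μs quoted for the active-monitoring point above is essentially half of the LHC revolution period; a quick check, assuming the 26.66-km circumference and ultra-relativistic protons:

```python
c = 2.998e8                     # m/s
circumference = 26_659          # m (LHC ring)

turn = circumference / c
print(f"One turn:  {turn * 1e6:.1f} microseconds")       # ~88.9
print(f"Half turn: {turn / 2 * 1e6:.1f} microseconds")   # ~44, the scale quoted above
print(f"Turns per second: {1 / turn:.0f}")               # ~11 245
```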
Many types of failure are possible with a system as large and complex as the LHC. From the point of view of machine protection, the timescale is one of the most important characteristics of a failure because it determines how the MPS responds.
The fastest and most dangerous failures occur on the timescale of a single turn or less. These events may occur, for example, because of failures during beam injection or beam extraction. The probability for such failures is minimized by designing the systems for high reliability and by interlocking the kicker magnets as soon as they are not needed. However, despite all of these design precautions, failures such as incorrect firing of the kicker magnets at injection or extraction cannot be excluded. In these cases, active protection based on the detection of a fault and an appropriate reaction is not possible because the failure occurs on a timescale that is smaller than the minimum time that it would take to detect it and dump the beam. Protection from these specific failures therefore relies on passive protection with beam absorbers and collimators that must be correctly positioned close to the beam to capture the particles that are deflected accidentally.
Since the injection process is one of the most delicate procedures, a great deal of care has been taken to ensure that only a beam with low intensity – which is highly unlikely to damage equipment – can be injected into an LHC ring where no beam is already circulating. High-intensity beam can be injected only into a ring where a minimum amount of beam is present. This is a guarantee that conditions are acceptable for injection.
The majority of equipment failures, however, lead to beam “instabilities” – i.e. fast movements of the orbit or growth in beam size – that must be detected on a timescale of 1 ms or more. Protection against such events relies on fast monitoring of the beam’s position and of beam loss. The LHC is equipped with around 4000 BLMs distributed along its circumference to protect all elements against excessive beam loss. Equipment monitoring – e.g. quench detectors and monitors for failures of magnet powering – provides redundancy for the most critical failure scenarios.
Last, on the longest timescale there will be unavoidable beam losses around the LHC machine during all of the phases of normal operation. Most of these losses will be captured in the collimation sections, where the beam losses and heat load at collimators are monitored. If the losses or the heat load become unacceptably high, the beam is dumped.
Figure 3 shows the evolution of the peak energy stored in each LHC beam between 2010 and 2012. The 2010 run was the main commissioning and learning year for the LHC and the associated MPSs. Experience had to be gained with all of the MPS sub-systems and thresholds for failure detection – e.g. beam-loss thresholds – had to be adjusted based on operational experience. In the summer of 2010, the LHC was operated at a stored energy of around 1–2 MJ – similar to the level of CERN’s Super Proton Synchrotron and Fermilab’s Tevatron – to gain experience with beams that could already create significant damage. A core team of MPS experts monitored the subsequent intensity ramps closely, with bunch spacings of 150 ns, 75 ns and 50 ns. Checklists were completed for each intensity level to document the subsystem status and to record observations. Approval to proceed to the next intensity stage was given only when all of the issues had been resolved. As experience was gained, the increments in intensity became larger and faster to execute. By mid-2012, a maximum stored energy of 140 MJ had been reached at 4 TeV per beam.
One worry with so many superconducting magnets in the LHC concerned quenches induced by uncontrolled beam losses. However, the rate was difficult to estimate before the machine began operation because it depended on a number of factors, including the performance of the large and complex collimation system. Fortunately, not a single magnet quench was observed during normal operation with circulating beams of 3.5 TeV and 4 TeV. This is a result of the excellent performance of the MPS, the collimation system and the outstanding stability and reproducibility of the machine.
Nevertheless, there were other – unexpected – effects. In the summer of 2010, during the intensity ramp-up to stored energies of 1 MJ, fast beam-loss events with timescales of 1 ms or less were observed for the first time in the LHC’s arcs. It rapidly became evident that dust particles were interacting with the beam; the events were nicknamed unidentified falling objects (UFOs). The rate of these UFOs increased steadily with beam intensity. Each year, the beams were dumped about 20 times when the losses induced by the interaction of the beams with the dust particles exceeded the loss thresholds. For the LHC injection kickers – where a significant number of UFOs were observed – the dust particles could be clearly identified on the surface of the ceramic vacuum chamber. Kickers with better surface cleanliness will replace the existing kickers during the present long shutdown. Nevertheless, UFOs remain a potential threat to the operational efficiency of the LHC at 7 TeV per beam.
The LHC’s MPS performed remarkably well from 2010 to 2013, thanks to the thoroughness and commitment of the operation crews and the MPS experts. Around 1500 beam dumps were executed correctly above the injection energy. All of the beam-dump events were meticulously analysed and validated by the operation crews and experts. This information has been stored in a knowledge database to assess possible long-term improvements of the machine protection and equipment systems. As experience grew, an increasing number of failures were captured before their effects on the particle beams became visible – i.e. before the beam position changed or beam losses were observed.
During the whole period, no evidence of a major loophole or uncovered risk in the protection architecture was identified, although sometimes unexpected failure modes were identified and mitigated. However, approximately 14% of the 1500 beam dumps were initiated by the failure of an element of the MPS – a “false” dump. So, despite the high dependability of the MPS during these first operational years, it will be essential to remain vigilant in the future as more emphasis is placed on increasing the LHC’s availability for physics.
Ideally, a storage ring like the LHC would never lose particles: the beam lifetime would be infinite. However, a number of processes will always lead to losses from the beam. The manipulations needed to prepare the beams for collision – such as injection, the energy ramp and “squeeze” – all entail unavoidable beam losses, as do the all-important collisions for physics. These losses generally become greater as the beam current and the luminosity are increased. In addition, the LHC’s superconducting environment demands an efficient beam-loss cleaning to avoid quenches from uncontrolled losses – the nominal stored beam energy of 362 MJ is more than a billion times larger than the typical quench limits.
The tight control of beam losses is the main purpose of the collimation system. Movable collimators define aperture restrictions for the circulating beam and are intended to intercept particles on large-amplitude trajectories that would otherwise be lost in the magnets. The collimators therefore represent the LHC’s defence against unavoidable beam losses. Their primary role is to clean away the beam halo while maintaining losses at sensitive locations below safe limits. The current system is designed to ensure that no more than a few 0.01% of the energy lost from the beam is deposited in the cold magnets. As the closest elements to the circulating beams, the collimators provide passive machine protection against irregular fast losses and failures. They also control the distribution of losses around the ring by ensuring that the largest activation occurs at optimized locations. Collimators are also used to minimize background in the experiments.
The LHC collimation system provides multi-stage cleaning where primary, secondary and tertiary collimators and absorbers are used to reduce the population of halo particles to tolerable levels (figure 1). Robust carbon-based and non-robust but high-absorption metallic materials are used for different purposes. Collimators are installed around the LHC in seven out of the eight insertion regions (between the arcs), at optimal longitudinal positions and for various transverse rotation angles. The collimator jaws are set at different distances from the circulating beams, respecting the optimum setting hierarchy required to ensure that the system provides the required cleaning and protection functionalities.
The detailed system design was the outcome of a multi-parameter optimization that took into account nuclear-physics processes in the jaws, robustness against the worst anticipated beam accidents, collimation-cleaning efficiency, radiation impact and machine impedance. The result is the largest and most advanced cleaning system ever built for a particle accelerator. It consists of 84 two-sided movable collimators of various designs and materials. Including the injection-protection collimators, there are a total of 396 degrees of freedom, because each collimator jaw has two stepping motors. By contrast, the collimation system of the Tevatron at Fermilab had fewer than 30 degrees of freedom for collimator positions.
The design was optimized using state-of-the-art numerical-simulation programs. These were based on a detailed model of all of the magnetic elements for particle tracking and of the vacuum-pipe apertures, with a longitudinal resolution of 0.1 m along the 27-km-long rings. They also involved routines for proton-halo generation and transport, as well as aperture checks and proton–matter interactions. These simulations require high statistics to achieve accurate estimates of collimation cleaning. A typical simulation run involves tracking some 20–60 million primary halo protons for 200 LHC turns – equivalent to monitoring a single proton travelling a distance of 0.03 light-years. Several runs are needed to study the system in different conditions. Additional complex energy-deposition and thermo-mechanical finite-element computations are then used to establish heat loads in magnets, radiation doses and collimator structural behaviour for various loss scenarios. Such a highly demanding simulation process was possible only as a result of the computing power developed over recent years.
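The light-year comparison can be reproduced from the numbers given in the text (with the 26.66-km circumference assumed):

```python
circumference_m = 26_659
n_turns = 200
n_protons = 60e6                # upper end of the 20-60 million range

total_path_m = circumference_m * n_turns * n_protons
light_year_m = 9.46e15
print(f"{total_path_m / light_year_m:.3f} light-years")   # ~0.034, as quoted
```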
The backbone of the collimation system is located at two warm insertion regions (IRs): the momentum cleaning at IR3 and betatron cleaning at IR7, which comprise 9 and 19 movable collimators per beam, respectively. Robust primary and secondary collimators made of a carbon-fibre composite define the momentum and betatron cuts for the beam halo. In 2012, in IR7 they were at ±4.3–6.3σ (with σ being the nominal standard deviation of the beam profile in the transverse plane) from the circulating 140 MJ beams, which passed through collimator apertures as small as 2.1 mm at a rate of around 11,000 times per second.
Additional tungsten absorbers protect the superconducting magnets downstream of the warm insertions. While these are more efficient in catching hadronic and electromagnetic showers, they are also more fragile against beam losses, so they are retracted further from the beam orbit. Further local protection is provided for the experiments in IR1, IR2, IR5 and IR8: tungsten collimators shield the inner triplet magnets that otherwise would be exposed to beam losses because they are the magnets with the tightest aperture restrictions in the LHC in collision conditions. Injection and dump protection elements are installed in IR2, IR8 and IR6. The collimation system must provide continuous cleaning and protection during all stages of beam operation: injection, ramp, squeeze and physics.
An LHC collimator consists of two jaws that define a slit for the beam, effectively constraining the beam halo from both sides (figure 2). These jaws are enclosed in a vacuum tank that can be rotated in the transverse plane to intercept the halo, whether it is horizontal, vertical or skew. Precise sensors monitor the jaw positions and collimator gaps. Temperature sensors are also mounted on the jaws. All of these critical parameters are connected to the beam-interlock system and trigger a beam dump if potentially dangerous conditions are detected.
At the LHC’s top energy, a beam size of less than 200 μm requires that the collimators act as high-precision devices. The correct system functionality relies on establishing the collimator hierarchy with position accuracies to within a fraction of the beam size. Collimation movements around the ring must also be synchronized to within better than 20 ms to achieve good relative positioning of devices during transient phases of the operational cycle. A unique feature of the control system is that the stepping motors can be driven according to arbitrary functions of time, synchronously with other accelerator systems such as power converters and radio-frequency cavities during ramp and squeeze.
These requirements place unprecedented constraints on the mechanical design, which is optimized to ensure good flatness along the 1-m-long jaw, even under extreme conditions. Extensive measurements were performed during prototyping and production, both for quality assurance and to obtain all of the required position calibrations. The collimator design has the critical feature that it is possible to measure a gap outside the beam vacuum that is directly related to the collimation gap seen by the beam. Some non-conformities in jaw flatness could not be avoided and were addressed by installing the affected jaws at locations of larger β functions (therefore larger beam size), in a way that is not critical for the overall performance.
Set-up and performance
The first step in collimation set-up is to adjust the collimators to the stored beam position. There are unavoidable uncertainties in the beam orbit and collimator alignment in the tunnel, so a beam-based alignment procedure has been established to set the jaws precisely around the beam orbit. The primary collimators are used to create reference cuts in phase space. Then all other jaws are moved symmetrically round the beam until they touch the reference beam halo. The results of this halo-based set-up provide information on the beam positions and sizes at each collimator. The theoretical target settings for the various collimators are determined from simulations to protect the available machine aperture. The beam-based alignment results are then used to generate appropriate setting functions for the collimator positions throughout the operational cycle. For each LHC fill, the system requires some 450 setting functions versus time, 1200 discrete set points and about 10,000 critical threshold settings versus time. Another 600 functions are used as redundant gap thresholds for different beam energies and optics configurations.
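In essence, the beam-based alignment is a simple closed-loop search: each jaw is stepped towards the beam until a downstream beam-loss monitor registers a spike, and the two touch positions give the local beam centre and halo width. The sketch below is a highly simplified illustration with hypothetical callbacks (`move_jaw`, `read_blm`) and assumed step and threshold values – it is not the operational LHC alignment software.

```python
def align_collimator(move_jaw, read_blm, start_gap_mm=10.0,
                     step_mm=0.01, loss_threshold=1e-3):
    """Close each jaw towards the beam until a loss spike shows it has
    touched the reference halo cut made by the primary collimators."""
    touch = {}
    for jaw, pos, direction in (("positive", +start_gap_mm, -1),
                                ("negative", -start_gap_mm, +1)):
        while read_blm() < loss_threshold:   # no loss spike yet: keep closing
            pos += direction * step_mm
            move_jaw(jaw, pos)
        touch[jaw] = pos                     # jaw has touched the halo
        # (in practice the jaw is retracted slightly and the loss signal is
        # allowed to decay before the opposite jaw is aligned)
    centre = 0.5 * (touch["positive"] + touch["negative"])
    halo_half_width = 0.5 * (touch["positive"] - touch["negative"])
    return centre, halo_half_width
```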
This complex system worked well during the first LHC operation, with a minimum number of false errors and failures, showing that the choice of hardware and controls is fully appropriate for the challenging accelerator environment at the LHC. Collimator alignment and the handling of complex settings have always been major concerns for the operation of the large and distributed LHC collimation system. The experience accumulated in the first run indicates that these critical aspects have been addressed successfully.
The result of the cleaning mechanism of the LHC collimation process is always visible in the control room. Unavoidable beam losses occur continuously at the primary collimators and can be observed online by the operations team as the largest loss spikes on the fixed display showing the beam losses around the ring. The local leakage to cold magnets is in most cases below 10⁻⁵ of the peak losses, with a few isolated loss locations around IR7 where the leakage reaches levels up to a few 10⁻⁴ (figure 3). So far, this excellent performance has ensured quench-free operation, even in cases of extreme beam losses from circulating beams. Moreover, this was achieved throughout the year with only one collimator alignment in IR3 and IR7, thanks to the remarkable stability of the machine and of the collimator settings.
However, collimators in the interaction regions required regular setting up for each new machine configuration that was requested for the experiments. Eighteen of these collimators are being upgraded in the current long shutdown to reduce the time spent on alignment: the new tertiary collimator design has integrated beam-position monitors to enable a fast alignment without dedicated beam-based alignment fills. This upgrade will also eventually contribute to improving the peak luminosity performance by reducing further the colliding beam sizes, thanks to better control of the beam orbit next to the inner triplet.
The LHC collimation system performance is validated after set-up with provoked beam losses, which are artificially induced by deliberately driving transverse beam instabilities. Beam-loss monitors then record data at 3600 locations around the ring. As these losses occur under controlled conditions they can be compared in detail with simulations. As predicted, performance is limited by a few isolated loss locations, namely the IR7 dispersion-suppressor magnets, which catch particles that have lost energy in single diffractive scattering at the primary collimator. This limitation of the system will be addressed in future upgrades, in particular in the High Luminosity LHC era.
The first three-year operational run has shown that the LHC’s precise and complex collimation system works at the expected high performance, reaching unprecedented levels of cleaning efficiency. The system has shown excellent stability: the machine was regularly operated with stored beam energies of more than 140 MJ, with no loss-induced quenches of superconducting magnets. This excellent performance was among the major contributors to the rapid commissioning of high-intensity beams at the LHC as well as to the squeezing of 4 TeV beams to 60 cm at collision points – a crucial aspect of the successful operation in 2012 that led to the discovery of a Higgs boson.
• The success of the collimation system during the first years of LHC operation was the result of the efforts of the many motivated people involved in this project from different CERN departments and from external collaborators. All of these people, and Ralph Assmann, who led the project until 2012, are gratefully acknowledged.
The total electromagnetic energy stored in the LHC superconducting magnets is about 10,000 MJ, which is more than an order of magnitude greater than in the nominal stored beams. Any uncontrolled release of this energy presents a danger to the machine. One way in which this can occur is through a magnet quench, so the LHC employs a sophisticated system to detect quenches and protect against their harmful effects.
The magnets of the LHC are superconducting if the temperature, the applied magnetic induction and the current density are below a critical set of interdependent values – the critical surface (figure 1). A quench occurs if the limits of the critical surface are exceeded locally and the affected section of magnet coil changes from a superconducting to a normal conducting state. The resulting drastic increase in electrical resistivity causes Joule heating, further increasing the temperature and spreading the normal conducting zone through the magnet.
An uncontrolled quench poses a number of threats to a superconducting magnet and its surroundings. High temperatures can destroy the insulation material or even result in a meltdown of superconducting cable: the energy stored in one dipole magnet can melt up to 14 kg of cable. The excessive voltages can cause electric discharges that could further destroy the magnet. In addition, high Lorentz forces and temperature gradients can cause large variations in stress and irreversible degradation of the superconducting material, resulting in a permanent reduction of its current-carrying capability.
The LHC main superconducting dipole magnets achieve magnetic fields of more than 8 T. There are 1232 main bending dipole magnets, each 15 m long, that produce the required curvature for proton beams with energies up to 7 TeV. Both the main dipole and the quadrupole magnets in each of the eight sectors of the LHC are powered in series. Each main dipole circuit includes 154 magnets, while the quadrupole circuits consist of 47 or 51 magnets, depending on the sector. All superconducting components, including bus-bars and current leads as well as the magnet coils, are vulnerable to quenching under adverse conditions.
The LHC employs sophisticated magnet protection, the so-called quench-protection system (QPS), both to safeguard the magnetic circuits and to maximize beam availability. The effectiveness of the magnet-protection system is dependent on the timely detection of a quench, followed by a beam dump and rapid disconnection of the power converter and current extraction from the affected magnetic circuit. The current decay rate is determined by the inductance, L, and resistance, R, of the resulting isolated circuit, with a discharge time constant of τ = L/R. For the purposes of magnet protection, reducing the current discharge time can be viewed as equivalent to the extraction and dissipation of stored magnetic energy. This is achieved by increasing the resistance of both the magnet and its associated circuit.
Additional resistance in the magnet is created by using quench heaters to heat up large fractions of the coil and spread the quench over the entire magnet. This dissipates the stored magnetic energy over a larger volume and results in lower hot-spot temperatures. The resistance in the circuit is increased by switching-in a dump resistor, which extracts energy from the circuit (figure 2). As soon as one magnet quenches, the dump resistor is used to extract the current from the chain. The size of the resistor is chosen such that the current does not decrease so quickly as to induce large eddy-current losses, which would cause further magnets in the chain to quench.
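To get a feel for the scale of τ = L/R, take representative values (assumed here, not quoted in the text) of about 0.1 H per dipole, 154 dipoles in series and an extraction resistance of order 0.15 Ω:

```python
L_per_dipole = 0.1        # henry, approximate
n_dipoles = 154           # dipoles per main circuit
R_extraction = 0.15       # ohm, approximate extraction resistance

tau = (L_per_dipole * n_dipoles) / R_extraction
print(f"tau = L/R ~ {tau:.0f} s")   # of order 100 s, so the chain takes a few
                                    # hundred seconds to discharge fully
```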
Detection and mitigation
A quench in the LHC is detected by monitoring the resistive voltage across the magnet, which rises as the quench appears and propagates. However, the total measured voltage also includes an inductive component, which is driven by the magnet current ramping up or down. Reliably extracting the resistive-voltage signal from the total voltage measurement is done using detection systems with inductive-voltage compensation. In the case of fast-ramping corrector magnets with large inductive voltages, it is more difficult to detect a resistive voltage because of the low signal-to-noise ratio; higher threshold voltages have to be used and a quench is therefore detected later. Following the detection and validation of a quench, the beam is aborted and the power converter is switched off. The time between the start of a quench and quench validation (i.e. the activation of the beam and powering interlocks) must be independent of the selected method of protection.
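Conceptually, the compensation subtracts the inductive term L·dI/dt from the measured voltage and validates a quench only if the residual stays above threshold for a minimum time. The sketch below is purely illustrative – the inductance, threshold and validation window are assumed values, and the real QPS uses dedicated detection hardware rather than code like this.

```python
L_MAGNET = 0.1        # henry, approximate inductance of one main dipole
U_THRESHOLD = 0.1     # volt, assumed resistive-voltage threshold
T_VALIDATION = 0.01   # second, assumed validation window

def resistive_voltage(u_measured, di_dt):
    """Remove the inductive component L*dI/dt from the measured voltage."""
    return u_measured - L_MAGNET * di_dt

def quench_validated(samples, dt):
    """samples: sequence of (u_measured, di_dt) pairs taken every dt seconds.
    The quench is validated only if the compensated voltage stays above
    threshold for the full validation window."""
    time_over = 0.0
    for u, di_dt in samples:
        if abs(resistive_voltage(u, di_dt)) > U_THRESHOLD:
            time_over += dt
            if time_over >= T_VALIDATION:
                return True   # abort beam, fire heaters, extract energy
        else:
            time_over = 0.0
    return False
```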
Creating a parallel path to the magnet via a diode allows the circuit current to by-pass the quenching magnet (figure 2). As soon as the increasing voltage over the quenched coil reaches the threshold voltage of the diode, the current starts to transfer into the diode. The magnet is by-passed by its diode and discharges independently. The diode must withstand the radiation environment, carry the current of the magnet chain for a sufficient time and provide a sufficiently high turn-on voltage to hold off conduction during the current ramp. The LHC’s main magnets use cold diodes, mounted within the cryostat. These have a significantly larger threshold voltage than diodes that operate at room temperature – but the threshold can be reached sooner if quench heaters are fired.
The sequence of events following quench detection and validation can be summarized as follows:
1. The beam is dumped and the power converter turned off.
2. The quench heaters are triggered and the dump resistor is switched in.
3. The current transfers into the dump resistor and starts to decrease.
4. Once the quench heaters take effect, the voltage over the quenched magnet rises and switches on the cold diode.
5. The magnet starts to be by-passed in the chain and discharges over its internal resistance.
6. The cold diode heats up and the forward voltage decreases.
7. The current decrease induces eddy-current losses in the magnet windings, yielding enhanced quench propagation.
8. The current of the quenched magnet transfers fully into the cold diode.
9. The magnet chain is completely switched off a few hundred seconds after the quench detection.
The QPS must perform with high reliability and high LHC beam availability. Satisfying these contradictory requirements requires careful design to optimize the sensitivity of the system. While failure to detect and control a quench can clearly have a significant impact on the integrity of the accelerator, QPS settings that are too tight may increase the number of false triggers significantly. As well as causing additional downtime of the machine, false triggers – which can result from electromagnetic perturbations, such as network glitches and thunderstorms – can contribute to the deterioration of the magnets and quench heaters by subjecting them to unnecessary spurious quenches and fast de-excitation.
One of the important challenges for the QPS is coping with the conditions experienced during a fast power abort (FPA) following quench validation. Switching off the power converter and activating the energy extraction to the dump resistors causes electromagnetic transients and high voltages. The sensitivity of the QPS to spurious triggers from electromagnetic transients caused a number of multiple-magnet quench events in 2010 (figure 3). Following simulation studies of transient behaviour, a series of modifications were implemented to reduce the transient signals from a FPA. A delay was introduced between switching off the power converter and switching in the dump resistors, with “snubber” capacitors installed in parallel to the switches to reduce electrical arcing and related transient voltage waves in the circuit (these are not shown in figure 2). These improvements resulted in a radically reduced number of spurious quenches in 2011 – only one such quench was recorded, in a single magnet, and this was probably due to an energetic neutron, a so-called “single-event upset” (SEU). The reduction in falsely triggered quenches between 2010 and 2011 was the most significant improvement in the QPS performance and impacted directly on the decision to increase the beam energy to 4 TeV in 2012.
To date, there have been no beam-induced quenches with circulating beams above injection current. This operational experience shows that the beam-loss monitor thresholds are low enough to trigger a beam dump before beam losses cause a quench. However, the QPS did have to act on several occasions when real quenches occurred in the bus-bars and current leads, demonstrating genuine protection in operation. The robustness of the system was evident on 18 August 2011, when the LHC experienced a total loss of power at a critical moment for the magnet circuits. At the time, the machine was ramping up and close to maximum magnet current with high beam intensity: no magnet tripped and no quenches occurred.
A persistent issue for the vast and complex electronics systems used in the QPS is exposure to radiation. In 2012, some of the radiation-to-electronics problems were partly mitigated by the development of electronics more tolerant to radiation. The number of trips per inverse femtobarn owing to SEUs was reduced by about 60% from 2011 to 2012 thanks to additional shielding and firmware upgrades. The downtime from trips is also being addressed by automating the power cycling to reset electronics after an SEU. While most of the radiation-induced faults are transparent to LHC operation, the number of beam dumps caused by false triggers remains an issue. Future LHC operation will require improvements in radiation-tolerant electronics, coupled with a programme of replacement where necessary.
Future operation
During the LHC run in 2010 and 2011 with a beam energy of 3.5 TeV, the normal operational parameters of the dipole magnets were well below the critical surface required for superconductivity. The main dipoles operated at about 6 kA and 4.2 T, while the critical current at this field is about 35 kA, resulting in a safe temperature margin of 4.9 K. However, this margin will shrink to 1.4 K for future LHC operation at 7 TeV per beam. The QPS must therefore be prepared for operation with tighter margins. Moreover, at higher beam energy quench events will be considerably larger, involving up to 10 times more magnetic energy. This will result in longer recovery times for the cryogenic system. There is also a higher likelihood of beam-induced quench events, as well as of quenches induced by conditions such as faster ramp rates and FPAs.
The successful implementation of magnet protection depends on a high-performance control and data acquisition system, automated software analysis tools and highly trained personnel for technical interventions. These have all contributed to the very good performance during 2010–2013. The operational experience gained during this first long run will allow the QPS to meet the challenges of the next run.
The LHC is one of the coldest places on Earth, with superconducting magnets – the key defining feature – that operate at 1.9 K. While there might be colder places in other laboratories, none compares to the LHC’s scale and complexity. The cryogenic system that provides the cooling for the superconducting magnets, with their total cold mass of 36,000 tonnes, is the largest and most advanced of its kind. It has been running continuously at some level since January 2007, providing stalwart service and achieving an availability equivalent to more than 99% per cryogenic plant.
The task of keeping the 27-km-long collider at 1.9 K is performed by helium that is cooled to its superfluid state in a huge refrigeration system. While the niobium-titanium alloy in the magnet coils would be superconducting if normal liquid helium were used as the coolant, the performance of the magnets is greatly enhanced by lowering their operating temperature and by taking advantage of the unique properties of superfluid helium. At atmospheric pressure, helium gas liquefies at around 4.2 K but on further cooling it undergoes a second phase change at about 2.17 K and becomes a superfluid. Among many remarkable properties, superfluid helium has a high thermal conductivity, which makes it the coolant of choice for the refrigeration and stabilization of large superconducting systems.
The LHC consists of eight 3.3-km-long sectors, with sites at the ends of each sector providing access shafts to services on the surface. Five of these sites are used to locate the eight separate cryogenic plants, each dedicated to serving one sector (figure 1). An individual cryoplant consists of a pair of refrigeration units: one, the 4.5 K refrigerator, provides a cooling capacity equivalent to 18 kW at 4.5 K, while the other, the 1.8 K refrigeration unit, provides a further cooling capacity of 2.4 kW at 1.8 K. Each of the eight cryoplants must therefore distribute and recover kilowatts of refrigeration across a distance of 3.3 km, with a temperature change of less than 0.1 K.
Four of the 4.5 K refrigerators were recovered from the second phase of the Large Electron–Positron collider (LEP2), where they were used to cool its superconducting radiofrequency cavities. These “recycled” units have been upgraded to operate on the LHC sectors that have a lower demand for refrigeration. The four high-load sectors are instead cooled by new 4.5 K refrigerators. The refrigeration capacity needed to cool the 4500 tonnes of material in each sector of the LHC is enormous and can be produced only by using liquid nitrogen. Consequently, each 4.5 K refrigerator is equipped with a 600-kW liquid-nitrogen pre-cooler. This is used to cool a flow of helium down to 80 K while the corresponding sector is cooled before being filled with helium – a procedure that takes just under a month. Using only helium in the tunnel considerably reduces the risk of oxygen deficiency in the case of an accidental release.
The 4.5 K refrigeration system works by first compressing the helium gas and then allowing it to expand. During expansion it cools by losing energy through mechanical turbo-expanders that run at up to 140,000 rpm on helium-gas bearings. Each of the refrigerators consists of a helium-compressor station equipped with systems to remove oil and water, as well as a vacuum-insulated cold box (60 tonnes) where the helium is cooled, purified and liquefied. The compressor station supplies compressed helium gas at 20 bar and room temperature. The cold box houses the heat exchangers and turbo-expanders that provide the cooling capacities necessary to liquefy the helium at 4.5 K. The liquid helium then passes to the 1.8 K refrigeration unit, where the cold-compressor train decreases its saturation pressure and consequently its saturation temperature down to 1.8 K. Each cryoplant is equipped with a fully automatic process-control system that manages about 1000 inlets and outlets per plant. The system takes a total electrical input power of 32 MW and reaches an equivalent cooling capacity of 144 kW at 4.5 K – enough to provide almost 40,000 litres of liquid helium per hour.
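A quick back-of-the-envelope check of these figures against the thermodynamic limit (assuming an ambient temperature of about 300 K, which is not stated in the text) suggests the plants run at roughly 30% of the ideal Carnot efficiency – a typical figure for large helium refrigerators.

```python
# Back-of-the-envelope check of the refrigeration efficiency quoted in the text:
# 32 MW of electrical input for an equivalent cooling capacity of 144 kW at 4.5 K.
# Comparing with the ideal (Carnot) coefficient of performance gives the fraction of
# Carnot efficiency achieved. The ambient temperature is an assumption.
T_WARM = 300.0      # K, assumed ambient temperature
T_COLD = 4.5        # K
P_IN   = 32e6       # W, electrical input quoted in the text
Q_COLD = 144e3      # W, equivalent cooling capacity at 4.5 K quoted in the text

cop_actual = Q_COLD / P_IN                    # W of cooling per W of electrical input
cop_carnot = T_COLD / (T_WARM - T_COLD)       # the thermodynamic limit
print(f"actual COP  = {cop_actual:.4f}")
print(f"Carnot COP  = {cop_carnot:.4f}")
print(f"fraction of Carnot ~ {cop_actual / cop_carnot:.0%}")   # roughly 30%
```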
In the LHC tunnel, a cryogenic distribution line runs alongside the machine. It consists of eight continuous cryostats, each about 3.2 km long and housing four (or five) headers to supply and recover helium, with temperatures ranging from 4 K to 75 K. In total, 310 service modules of 44 different types feed the machine. These contain sub-cooling heat exchangers, all of the cryogenic control valves for the local cooling loops and 1–2 cold pressure-relief valves that protect the magnet cold masses, as well as monitoring and control instrumentation. Overall, the LHC cryogenic system contains about 60,000 inlets and outlets, which are managed by 120 industrial-process logic controllers that implement more than 4000 PID control loops.
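For readers unfamiliar with the term, a PID loop of the kind counted here can be sketched generically as follows. This is a textbook illustration with made-up gains and setpoint (here imagined as a valve command regulating a liquid-helium level), not CERN’s actual process-control code.

```python
# Generic proportional-integral-derivative (PID) controller, of the kind implied by
# the "more than 4000 PID control loops" figure. Gains, setpoint and the use case
# (a valve command driven by a measured liquid-helium level) are purely illustrative.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement: float, dt: float) -> float:
        """Return the new actuator command from the latest measurement."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=2.0, ki=0.1, kd=0.0, setpoint=60.0)   # target level: 60%
valve_command = controller.update(measurement=55.0, dt=1.0)
print(f"valve command: {valve_command:.1f}")
```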
Operational aspects
The structure of the group involved with the operation of the LHC’s cryogenics has evolved naturally since the installation phase, thereby maintaining experience and expertise. Each cryogenically independent sector of the LHC and its pair of refrigerators is managed by its own dedicated team for process control and operational procedures. In addition, there are three support teams for mechanics, electricity-instrumentation controls and metrology instrumentation. A further team handles scheduling, maintenance and logistics, including cryogen distribution. Continuous monitoring and technical support is provided by personnel who are on shift “24/7” in the CERN Control Centre and on standby duties. This constant supervision is necessary because any loss of availability for the cryogenic system impacts directly on the availability of the accelerator. Furthermore, the response to cryogenic failures must be rapid to mitigate the consequences of a loss of cooling.
In developing a strategy for operating the LHC it was necessary to define the overall availability criteria. Rather than using every temperature sensor or liquid-helium level as a separate interlock to the magnet powering and therefore the beam permit, it made more sense to organize the information according to the modularity of the magnet-powering system. As a result, each magnet-powering subsector is attributed a pair of cryogenic signals: “cryo-maintain” (CM) and “cryo-start” (CS). The CM signal corresponds to any condition that requires a slow discharge of the magnets concerned, while the CS signal has more stringent conditions to enable powering to take place with sufficient margins for a smooth transition to the CM threshold. A global CM signal is defined as the combination of all of the required conditions for the eight sectors. This determines the overall availability of the LHC cryogenics.
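Schematically, this logic can be thought of as follows. The data structure and function names are purely illustrative and not the actual interlock implementation: each magnet-powering subsector reports its CM and CS flags, and the global CM is simply the conjunction of all required conditions.

```python
# Illustrative sketch of how the cryogenic signals described in the text could be
# combined: each magnet-powering subsector reports a "cryo-maintain" (CM) and a
# stricter "cryo-start" (CS) flag; the global CM is the AND of all required conditions.
# This is an assumption-laden illustration, not the real interlock code.
from dataclasses import dataclass

@dataclass
class SubsectorStatus:
    cryo_maintain: bool   # conditions good enough to keep the magnets powered
    cryo_start: bool      # stricter conditions required to (re)start powering

def global_cryo_maintain(subsectors: list[SubsectorStatus]) -> bool:
    """Global CM: every required subsector condition must hold simultaneously."""
    return all(s.cryo_maintain for s in subsectors)

def powering_allowed(subsector: SubsectorStatus) -> bool:
    """Powering may start only when the stricter CS conditions are met."""
    return subsector.cryo_start

statuses = [SubsectorStatus(cryo_maintain=True, cryo_start=True) for _ in range(8)]
statuses[3].cryo_start = False           # one sector has margin to maintain but not to start
print(global_cryo_maintain(statuses))    # True: powering can continue
print(powering_allowed(statuses[3]))     # False: a restart would need more margin
```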
During the first LHC beams in 2009, the system immediately delivered an availability of 90%, despite there being no means of dealing quickly with identified faults. These were corrected whenever possible during the routine technical stops of the accelerator and the end-of-year stops. The main issues resolved during this phase were the elimination of two air leaks in sub-atmospheric circuits, the consolidation of all 1200 cooling valves for the current leads, and the consolidation of the 1200 electronic cards for temperature sensors, which were particularly affected by energetic neutron impacts, so-called single-event upsets (SEUs).
Since operation for physics began in November 2009, availability has remained above 90% over operational periods of more than 260 days per year. A substantial improvement occurred in 2012–2013 because of progress in the operation of the cryogenic system. The operation team undertook appropriate training that included the evaluation and optimization of operational settings. There were major improvements in handling utilities-induced failures. In particular, in the case of electrical-network glitches, fine-tuning the tolerance thresholds for the helium compressors and cooling-water stations accounted for half of the gain. A reduction in the time taken to recover nominal cryogenic conditions after failures also improved availability. The progress made during the past three years led to a reduction in the number of short stops (those of less than eight hours) from 140 to 81 per year. By 2012, the efforts of the operation and support teams had resulted in a global availability of 94.8%, corresponding to an equivalent availability of more than 99.3% for each of the eight cryogenically independent sectors.
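As a quick consistency check of these two figures, treating the eight sectors as independent, a per-sector availability of 99.3% compounds to a global value close to the quoted 94.8%.

```python
# Consistency check of the availability figures quoted in the text, assuming the
# eight cryogenically independent sectors fail independently: the global availability
# is then the product of the per-sector values.
per_sector = 0.993
global_availability = per_sector ** 8
print(f"{global_availability:.1%}")   # ~94.5%, consistent with the quoted 94.8%
```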
In addition, the requirement to undertake an energy-saving programme contributed significantly to the improved availability and efficiency of the cryogenic system – and resulted in a direct saving of SwFr3 million a year. Efforts to improve efficiency have also focused on the consumption of helium. The overall LHC inventory comes to 136 tonnes of helium, with an additional 15 tonnes held as strategic storage to cope with urgent situations during operation. For 2010 and 2011, the overall losses remained high because of increased losses from the newly commissioned storage tanks during the first end-of-year technical stop. However, the operational losses were substantially reduced in 2011. Then, in 2012, the combination of a massive campaign to localize all detectable leaks – combined with the reduced operational losses – led to a dramatic improvement in the overall figure, nearly halving the losses.
Towards the next run
Thanks to the early consolidation work already performed while ramping up the LHC luminosity, no significant changes are being implemented to the cryogenic system during the first long shutdown (LS1) of the LHC. However, because the system has been operating continuously since 2007, a full preventive-maintenance programme is being carried out. A major overhaul of helium compressors and motors is being undertaken at the manufacturers’ premises. The acquisition of important spares for critical rotating machinery has already been completed. Specific electronic units will be upgraded or relocated to cope with future radiation levels. In addition, identified leaks in the system must be repaired. The consolidation of the magnet interconnections – including the interface with the current leads – together with the relocation of electronics to limit SEUs, will require a complete re-commissioning effort before cool-down for the next run.
The scheduled consolidation work – together with lessons learnt from the operational experience so far – will be key factors for the cryogenic system to maintain its high level of performance under future conditions at the LHC. The successful systematic approach to operations will continue when the LHC restarts at close to nominal beam energy and intensity. With greater heat loads corresponding to increased beam parameters and magnet currents, expectations are high that the cryogenic system will meet the challenge.
Since the first 3.5 TeV collisions in March 2010, the LHC has had three years of improving integrated luminosity. By the time that the first proton physics run ended in December 2012, the total integrated proton–proton luminosity delivered to each of the two general-purpose experiments – ATLAS and CMS – had reached nearly 30 fb⁻¹ and enabled the discovery of a Higgs boson. ALICE, LHCb and TOTEM had also operated successfully and the LHC team was able to fulfil other objectives, including productive lead–lead and proton–lead runs.
Establishing good luminosity depends on several factors but the goal is to have the largest number of particles potentially colliding in the smallest possible area at a given interaction point (IP). Following injection of the two beams into the LHC, there are three main steps to collisions. First, the beam energy is ramped to the required level. Then comes the squeeze. This second step involves decreasing the beam size at the IP using quadrupole magnets on both sides of a given experiment. In the LHC, the squeeze process is usually parameterized by β* (the beam size at the IP is proportional to the square root of β*). The third step is to remove the separation bumps that are formed by local corrector magnets. These bumps keep the beams separated at the IPs during the ramp and squeeze.
High luminosity translates into having many high-intensity particle bunches, an optimally focused beam size at the interaction point and a small emittance (a measure of the spread of the beam in transverse phase space). The three-year run saw relatively distinct phases in the increase of proton–proton luminosity, starting with basic commissioning then moving on through exploration of the limits to full physics production running in 2012.
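These factors are tied together by the standard expression for the luminosity of two round, head-on colliding beams, L = f_rev·n_b·N²/(4πσ*²) with σ* = √(ε_n β*/γ). Evaluated with the commonly quoted LHC design parameters – and neglecting the crossing-angle (geometric) reduction factor, which is why the result slightly overshoots – this reproduces the design luminosity of about 10³⁴ cm⁻² s⁻¹.

```python
# Sketch of the basic luminosity relation behind the factors discussed in the text,
# evaluated with the commonly quoted LHC design parameters. The crossing-angle
# (geometric) reduction factor is neglected, so the result overshoots the design
# value of 1e34 cm^-2 s^-1 by roughly 20%.
import math

F_REV    = 11245.0        # Hz, LHC revolution frequency
N_B      = 2808           # nominal number of bunches
N_P      = 1.15e11        # nominal protons per bunch
EPS_N    = 3.75e-6        # m rad, nominal normalized emittance
BETASTAR = 0.55           # m, design beta* in ATLAS/CMS
GAMMA    = 7000.0 / 0.938 # relativistic gamma for 7 TeV protons

sigma = math.sqrt(EPS_N * BETASTAR / GAMMA)                  # rms beam size at the IP (m)
lumi  = F_REV * N_B * N_P**2 / (4 * math.pi * sigma**2)      # round beams, head-on (m^-2 s^-1)
print(f"sigma* ~ {sigma*1e6:.1f} um, L ~ {lumi*1e-4:.2e} cm^-2 s^-1")
```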
The first year in 2010 was devoted to commissioning and establishing confidence in operational procedures and the machine protection system, laying the foundations for what was to follow. Commissioning of the ramp to 3.5 TeV went smoothly and the first (unsqueezed) collisions were established on 30 March. Squeeze commissioning then successfully reduced β* to 2 m in all four IPs.
With June came the decision to go for bunches of nominal intensity, i.e. around 10¹¹ protons per bunch (see table below). This involved an extended commissioning period and subsequent operation with beams of up to 50 or so widely separated bunches. The next step was to increase the number of bunches further. This required the move to bunch trains with 150 ns between bunches and the introduction of well defined beam-crossing angles in the interaction regions to avoid parasitic collisions. There was also a judicious back-off in the squeeze to a β* of 3.5 m. These changes necessitated setting up the tertiary collimators again and recommissioning the process of injection, ramp and squeeze – but provided a good opportunity to bed in the operational sequence.
A phased increase in total intensity followed, with operational and machine protection validation performed before each step up in the number of bunches. Each increase was followed by a few days of running to check system performance. The proton run for the year finished with beams of 368 bunches of around 1.2 × 10¹¹ protons per bunch and a peak luminosity of 2.1 × 10³² cm⁻² s⁻¹. The total integrated luminosity for both ATLAS and CMS in 2010 was around 0.04 fb⁻¹.
The beam energy remained at 3.5 TeV in 2011 and the year saw exploitation combined with exploration of the LHC’s performance limits. The campaign to increase the number of bunches in the machine continued with tests with a 50 ns bunch spacing. An encouraging performance led to the decision to run with 50 ns. A staged ramp-up in the number of bunches ensued, reaching 1380 – the maximum possible with a bunch spacing of 50 ns – by the end of June. The LHC’s performance was increased further by reducing the emittances of the beams that were delivered by the injectors and by gently increasing the bunch intensity. The result was a peak luminosity of 2.4 × 10³³ cm⁻² s⁻¹ and some healthy delivery rates that topped 90 pb⁻¹ in 24 hours.
The next step up in peak luminosity in 2011 followed a reduction in β* in ATLAS and CMS from 1.5 m to 1 m. Smaller beam size at an IP implies bigger beam sizes in the neighbouring inner triplet magnets. However, careful measurements had revealed a better-than-expected aperture in the interaction regions, opening the way for this further reduction in β*. The lower β* and increases in bunch intensity eventually produced a peak luminosity of 3.7 × 10³³ cm⁻² s⁻¹, beyond expectations at the start of the year. ATLAS and CMS had each received around 5.6 fb⁻¹ by the end of proton–proton running for 2011.
An increase in beam energy to 4 TeV marked the start of operations in 2012 and the decision was made to stay at a 50 ns bunch spacing with around 1380 bunches. The aperture in the interaction regions, together with the use of tight collimator settings, allowed a more aggressive squeeze to β* of 0.6 m. The tighter collimator settings shadow the inner triplet magnets more effectively and allow the measured aperture to be exploited fully. The price to pay was increased sensitivity to orbit movements – particularly in the squeeze – together with increased impedance, which as expected had a clear effect on beam stability.
Peak luminosity soon came close to its maximum for the year, although there were determined and long-running attempts to improve performance further. These were successful to a certain extent and revealed some interesting issues at high bunch and total beam intensity. Although never debilitating, instabilities were a recurring problem and there were phases when they cut into operational efficiency. Integrated luminosity rates, however, were generally healthy at around 1 fb⁻¹ per week. This allowed a total of about 23 fb⁻¹ to be delivered to both ATLAS and CMS during a long operational year, with the proton–proton run extended until December.
Apart from the delivery of high instantaneous and integrated proton–proton luminosity to ATLAS and CMS, the LHC team also satisfied other physics requirements. These included lead–lead runs in 2010 and 2011, which delivered 9.7 and 166 μb⁻¹, respectively, at an energy of 3.5Z TeV (where Z is the atomic number of lead). Here the clients were ALICE, ATLAS and CMS. A process of luminosity levelling at around 4 × 10³² cm⁻² s⁻¹ via transverse separation with a tilted crossing angle enabled LHCb to collect 1.2 fb⁻¹ and 2.2 fb⁻¹ of proton–proton data in 2011 and 2012, respectively. ALICE enjoyed some sustained proton–proton running in 2012 at around 5 × 10³⁰ cm⁻² s⁻¹, with collisions between enhanced satellite bunches and the main bunches. There was also a successful β* = 1 km run for TOTEM and the ATLAS forward detectors. This allowed the first LHC measurement in the Coulomb-nuclear interference region. Last, the three-year operational period culminated in a successful proton–lead run at the start of 2013, with ALICE, ATLAS, CMS and LHCb all taking data.
One of the main features of operation in 2011 and 2012 was the high bunch intensity and lower-than-nominal emittances offered by the excellent performance of the injector chain of Booster, Proton Synchrotron and Super Proton Synchrotron. The bunch intensity had been up to 150% of nominal with 50 ns bunch spacing, while the normalized emittance going into collisions had been around 2.5 mm mrad, i.e. 67% of nominal. Happily, the LHC proved to be capable of absorbing these brighter beams, notably in terms of beam–beam effects. The cost to the experiments was high pile-up, an issue that was handled successfully.
The table shows the values of the main luminosity-related parameters at peak performance of the LHC from 2010 to 2012, alongside the design values. It shows that, even though the beam size is naturally larger at lower energy, the LHC has achieved 77% of design luminosity at four-sevenths of the design energy, with a β* of 0.6 m (compared with the design value of 0.55 m) and with half of the nominal number of bunches.
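A rough scaling check, using only the numbers quoted in this article and ignoring the crossing-angle (geometric) reduction factor, lands somewhat above the quoted 77%; the geometric factor accounts for most of the remaining difference.

```python
# Rough scaling check of the "77% of design luminosity" statement using only numbers
# quoted in the article: half the nominal number of bunches, bunch intensity up to
# 150% of nominal, normalized emittance ~67% of nominal, 4/7 of the design energy and
# beta* = 0.6 m instead of 0.55 m. L scales as n_b * N^2 * gamma / (eps_n * beta*).
# The crossing-angle reduction factor is ignored, which is the main reason this simple
# estimate comes out above 0.77.
ratio = (1380 / 2808) * 1.5**2 * (4 / 7) * (1 / 0.67) * (0.55 / 0.6)
print(f"L / L_design ~ {ratio:.2f}")   # ~0.86 before geometric reduction factors
```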
Operational efficiency has been good, with the record integrated luminosity per week reaching 1.3 fb⁻¹. This is the result of outstanding system performance combined with fundamental characteristics of the LHC. The machine has a healthy single-beam lifetime before collisions of more than 300 hours and on the whole enjoys good vacuum conditions in both warm and cold regions. With a peak luminosity of around 7 × 10³³ cm⁻² s⁻¹ at the start of a fill, the luminosity lifetime is initially in the range of 6–8 hours, increasing as the fill develops. There is minimal drift in beam overlap during physics data-taking and the beams are generally stable.
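To see how these numbers translate into integrated luminosity, a simple sketch assumes an exponential decay from the quoted start-of-fill value with a single effective lifetime and an illustrative 10-hour fill; a handful of such fills per week is broadly consistent with the quoted delivery rate of around 1 fb⁻¹ per week. The fill length and the single effective lifetime are assumptions made for illustration.

```python
# Sketch of how the quoted peak luminosity and luminosity lifetime translate into
# integrated luminosity per fill, assuming a simple exponential decay. The effective
# lifetime and the fill length are illustrative assumptions.
import math

L0     = 7e33           # cm^-2 s^-1, start-of-fill luminosity quoted in the text
TAU    = 8 * 3600.0     # s, assumed effective luminosity lifetime
T_FILL = 10 * 3600.0    # s, assumed length of a physics fill

integrated = L0 * TAU * (1 - math.exp(-T_FILL / TAU))    # cm^-2 delivered over the fill
print(f"~{integrated / 1e36:.0f} pb^-1 per fill")         # several fills a week ~ 1 fb^-1
```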
At the same time, a profound understanding of the beam physics and a good level of operational control have been established. The magnetic aspects of the machine are well understood thanks to modelling with FiDel (the Field Description for the LHC). A long and thorough magnet-measurement and analysis campaign meant that the deployed settings produced a machine with a linear optics that is close to the nominal model. Measurement and correction of the optics has aligned machine and model to an unprecedented level.
A robust operational cycle is now well established, with the steps of pre-cycle, injection, 450 GeV machine, ramp, squeeze and collide mostly sequencer-driven. A strict pre-cycling regime means that the magnetic machine is remarkably reproducible. Importantly, the resulting orbit stability – or the ability to correct back consistently to a reference – means that the collimator set-up remains good for a year’s run.
Considering the size, complexity and operating principles of the LHC, its availability has generally been good. The 257-day run in 2012 included around 200 days dedicated to proton–proton physics, with 36.5% of the time being spent in stable beams. This is encouraging for a machine that is only three years into its operational lifetime. Of note is the high availability of the critical LHC cryogenics system. In addition, many other systems also have crucial roles in ensuring that the LHC can run safely and efficiently.
In general the LHC beam-dump system (LBDS) worked impeccably, causing no major operational problems or long downtime. Beam-based set-up and checks are performed at the start of the operational year. The downstream protection devices form part of the collimator hierarchy and their proper positioning is verified periodically. The collimation system maintained a high proton-cleaning efficiency and semi-automatic tools have improved collimator set-up times during alignment.
The overall protection of the machine is ensured by rigorous follow-up, qualification and monitoring. The beam drives a subtle interplay of the LBDS, the collimation system and protection devices, which rely on a well defined aperture, orbit and optics for guaranteed safe operation. The beam dump, injection and collimation teams pursued well organized programmes of set-up and validation tests, permitting routine collimation of 140 MJ beams without a single quench of superconducting magnets from stored beams.
The beam instrumentation performed very well overall. By facilitating a deep understanding of the machine, it paved the way for the impressive improvement in performance during the three-year run. The power converters performed superbly, with good tracking between reference and measured currents and between the converters around the ring. There was good performance from the key RF systems. Software and controls benefited from a coherent approach, early deployment and tests on the injectors and transfer lines.
There have inevitably been issues arising during the exploitation of the LHC. Initially, single-event upsets caused by beam-induced radiation to electronics in the tunnel were a serious cause of inefficiency. This problem had been foreseen and a sustained programme of mitigation measures, which included relocation of equipment, additional shielding and further equipment upgrades, resulted in a reduction of premature beam dumps from 12 per fb⁻¹ to 3 per fb⁻¹ in 2012. By contrast, an unforeseen problem concerned unidentified falling objects (UFOs) – dust particles falling into the beam causing fast, localized beam-loss events. These have now been studied and simulated but might still cause difficulties after the move to higher energy and a bunch spacing of 25 ns following the current long shutdown.
Beam-induced heating has been an issue. Essentially, all cases turned out to be localized and connected with nonconformities, either in design or installation. Design problems have affected the injection-protection devices and the mirror assemblies of the synchrotron-radiation telescopes, while installation problems have occurred in a small number of vacuum assemblies.
Beam instabilities dogged operations during 2012. The problems came with the push in bunch intensity, with the peak intensity going into stable beams reaching around 1.7 × 10¹¹ protons per bunch, i.e. the ultimate bunch intensity. Other contributory factors included increased impedance from the tight collimator settings, smaller-than-nominal emittance and operation with low chromaticity during the first half of the run.
A final beam issue concerns the electron cloud. Here, electrons emitted from the vacuum chamber are accelerated by the electromagnetic fields of the circulating bunches. On impacting the vacuum chamber, they cause the emission of one or more further electrons, creating a potential avalanche effect. The effect is strongly bunch-spacing dependent: although it has not been a serious issue with the 50 ns beam, there are potential problems with the 25 ns spacing.
In summary, the LHC is performing well and a huge amount of experience and understanding has been gained during the past three years. There is good system performance, excellent tools and reasonable availability following targeted consolidation. Good luminosity performance has been achieved by harnessing the beam quality from injectors and fully exploiting the options in the LHC. This overall performance is the result of a remarkable amount of effort from all of the teams involved.
This article is based on “The first years of LHC operation for luminosity production”, which was presented at IPAC13.
The US LHC Accelerator Program (LARP) has successfully tested a powerful superconducting quadrupole magnet that will play a key role in developing a new beam-focusing system for the LHC. This advanced system – with other major upgrades to be implemented over the next decade – will allow the LHC to deliver a luminosity up to 10 times higher than in the original design.
Dubbed HQ02a, the latest in LARP’s series of high-field quadrupole magnets is wound with cables of the brittle but high-performance niobium-tin superconductor (Nb₃Sn). Like all LHC magnets, HQ02a is designed to operate in superfluid helium at temperatures that are close to absolute zero. However, it has a larger beam aperture than the current focusing magnets – 120 mm in diameter compared to 70 mm – and the magnetic field in the superconducting coils reaches 12 T – 50% higher than the current 8 T. The corresponding field gradient – the rate of increase of field strength over the aperture – is 170 T/m. In a recent test at Fermilab, HQ02a achieved all of its challenging objectives.
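As a simple consistency check of the quoted numbers, a gradient of 170 T/m over a 60-mm aperture radius corresponds to a field of about 10 T at the edge of the aperture, with the peak field in the coils themselves (quoted as 12 T) somewhat higher.

```python
# Consistency check of the quoted quadrupole parameters: the field at the edge of the
# aperture is the gradient times the aperture radius; the peak field in the coils
# (quoted as 12 T) sits somewhat above this value.
gradient = 170.0              # T/m, quoted field gradient
aperture_radius = 0.120 / 2   # m, half of the quoted 120 mm aperture diameter
print(f"field at aperture edge ~ {gradient * aperture_radius:.1f} T")   # ~10.2 T
```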
One of LARP’s primary goals is to support CERN’s plan to replace the quadrupole magnets in the interaction regions in about 10 years from now as part of the High Luminosity LHC project. Not only must the magnets produce a stronger field, they will also require a larger temperature margin and have to cope with the intense radiation, which comes hand in hand with the planned increase in the rate of energetic collisions. These requirements go beyond the capabilities of the niobium-titanium currently used in the LHC and in all previous superconducting magnets for particle accelerators.
Modern niobium-tin can operate at a higher magnetic field and with a wider temperature margin than niobium-titanium. However, it is brittle and sensitive to strain – critical factors where intense electrical currents and strong magnetic fields create enormous forces as the magnets are energized. Large forces can damage the fragile conductor or cause sudden displacements of the superconducting coils, releasing energy as heat and possibly resulting in a loss of the superconducting state – that is, a quench.
To address these challenges, LARP has adopted a mechanical support structure that is based on a thick aluminium shell, pre-tensioned at room temperature using water-pressurized bladders and interference keys. This design concept – developed at Berkeley under the US Department of Energy’s General Accelerator Development programme – was compared with the traditional collar-based clamping system used in Fermilab’s Tevatron and all of the subsequent high-energy accelerators, and was scaled up to 4 m in length in the LARP long “racetrack” and long quadrupole magnets. The HQ models further refined this mechanical design approach, in particular by incorporating full coil alignment.
The success of these tests not only establishes high-performance niobium-tin as a powerful superconductor for use in accelerator magnets, it also marks a shift from R&D to the development of the LARP magnets that will be installed for the LHC luminosity upgrade.
• LARP is a collaboration involving Berkeley, Brookhaven, Fermilab and SLAC, working in close partnership with CERN. It is now led by Giorgio Apollinari.
A five-volume report containing the blueprint for the International Linear Collider (ILC) was published on 12 June. The authors of the Technical Design Report handed it over to the International Committee for Future Accelerators in three consecutive ceremonies in Tokyo, CERN and Fermilab, representing Asia, Europe and the Americas. Its publication marks the completion of many years of globally co-ordinated R&D and completes the mandate of the Global Design Effort for the ILC.
The ILC – a 31-km electron–positron collider with a total collision energy of 500 GeV – was designed to complement and advance LHC physics. The report contains all of the elements needed to propose the collider to collaborating governments, including the latest, most technologically advanced design and implementation plan optimized for performance, cost and risk.
Some 16,000 superconducting cavities will be needed to drive the particle beams. At the height of operation, bunches of 2 × 10¹⁰ electrons and positrons will collide roughly 7000 times a second. The report also includes details of two state-of-the-art detectors to record the collisions and an extensive outline of the geological and civil-engineering studies conducted for siting the ILC.
The design effort continues in the Linear Collider Collaboration. This combines the two most mature future particle-physics projects at the energy frontier – the ILC and the Compact Linear Collider (CLIC) – in an organizational partnership to co-ordinate and advance global development work for a linear collider. Some 2000 scientists worldwide – particle physicists, accelerator physicists and engineers – are involved in the ILC or in CLIC and often in both projects.