ATLAS undergoes some delicate gymnastics

The ATLAS detector

The LHC’s Long Shutdown 1 (LS1) is an opportunity that the ATLAS collaboration could not miss to improve the performance of its huge and complex detector. Planning began almost three years ago to be ready for the break and to produce a precise schedule for the multitude of activities that are needed at Point 1 – where ATLAS is located on the LHC. Now, a year after the famous announcement of the discovery of a “Higgs-like boson” on 4 July 2012 and only six months after the start of the shutdown, more than 800 different tasks have already been accomplished in more than 250 work packages. But what is ATLAS doing, and why the hectic schedule? The list of activities is long, so only a few examples will be highlighted here.

The inner detector

One of the biggest interventions concerns the insertion of a fourth and innermost layer of the pixel detector – the insertable b-layer (IBL). The ATLAS pixel detector is the largest pixel-based system at the LHC. With about 80 million pixels, until now it has covered radii from 12 cm down to 5 cm from the interaction point. At its conception, the collaboration already anticipated that it could be upgraded after a few years of operation. An additional layer at a radius of about 3 cm would allow for performance consolidation, in view of the effects of radiation damage to the original innermost layer at 5 cm (the b-layer). The decision to turn this idea into reality was taken in 2008, with the aim of installation around 2016. However, fast progress in preparing the detector, together with the move of the long shutdown to the end of 2012, boosted the idea and the installation goal was brought forward by two years.

Making space

To make life more challenging, the collaboration decided to build the IBL using not only well-established planar sensor technology but also novel 3D sensors. The resulting highly innovative detector is a tiny cylinder, about 3 cm in radius and about 70 cm long, but it will provide the ATLAS experiment with another 12 million detection channels. Despite its small dimensions, the entire assembly – including the necessary services – will need an installation tool that is nearly 10 m long. This has led to the so-called “big opening” of the ATLAS detector and the need to lift one of the small muon wheels to the surface.

The “big opening” of ATLAS is a special configuration where at one end of the detector one of the big muon wheels is moved as far as possible towards the wall of the cavern, the 400-tonne endcap toroid is moved laterally towards the surrounding path structure, the small muon wheel is moved as far as the already opened big wheel and then the endcap calorimeter is moved out by about 3 m. But that is not the end of the story. To make more space, the small muon wheel must be lifted to the surface to allow the endcap calorimeter to be moved further out against the big wheels.

This opening up – already foreseen for the installation of the IBL – became even more worthwhile when the collaboration decided to use LS1 to repair the pixel detector. During the past three years of operation, the number of pixel modules that have stopped working has risen continuously from the original 10–15 up to 88, at a worryingly increasing rate. Back in 2010, the first concerns triggered a closer look at the module failures and it was clear that in most cases the modules themselves were in a good state but that something in the services had failed. This first look was then backed up by substantial statistics, as the number of failed modules reached 88 by mid-2012.

In 2011, the ATLAS pixel community decided to prepare new services for the detector – code-named nSQP for “new service quarter panels”. In January 2013, the collaboration decided to deploy the nSQP not only to fix the failures of the pixel modules and to enhance the future read-out capabilities for two of the three layers but also to ease the task of inserting the IBL into the pixel detector. This decision implied having to extract the pixel detector and take it to the clean-room building on the surface at Point 1 to execute the necessary work. The “big opening” therefore became mandatory.

"Big opening" of ATLAS

The extraction of the pixel detector was an extremely delicate operation but it was performed perfectly and a week ahead of schedule. Work on both the original pixels and the IBL is now in full swing and preparations are under way to insert the enriched four-layer pixel detector back into ATLAS. The pixel detector will then contain 92 million channels – some 90% of the total number of channels in ATLAS.

But that is not the end of the story for the ATLAS inner detector. Gas leaks appeared last year during operation of the transition radiation tracker (TRT). Profiting from the opening of the inner-detector plates to access the pixel detector, a dedicated intervention was performed to cure as many leaks as possible, using techniques that are usually deployed in surgery.

Further improvements

Another important improvement for the silicon detectors concerns the cooling. The evaporative cooling system, based on a complex compressor plant, has performed satisfactorily, even if it has caused a variety of problems and required numerous interventions. The system allowed operating temperatures to be set to –20 °C with the possibility of going down to –30 °C, although the lower value has not been needed so far because radiation damage to the detector is still in its infancy. However, the compressor plant needed continual attention and maintenance. The decision was therefore taken to build a second plant based on the thermosyphon concept, where the required pressure is obtained without a compressor, using instead the gravity advantage offered by the 90-m-deep ATLAS cavern. The new plant has been built and is now being commissioned, while the original plant has been refurbished and will serve as a redundant (back-up) system. In addition, the IBL cooling is based on CO₂ technology and a new redundant plant is being built to be ready for IBL operations.
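
As a rough illustration of the thermosyphon principle, the hydrostatic pressure available from a column of coolant spanning the depth of the cavern can be estimated in one line. The sketch below assumes an approximate density for liquid C3F8; it is only an order-of-magnitude illustration, not the actual plant design.

```python
# Back-of-the-envelope estimate of the hydrostatic head available to a
# thermosyphon exploiting the ~90 m depth of the ATLAS cavern.
# The coolant density is an assumed, approximate value for liquid C3F8;
# it is not taken from the real plant parameters.
rho = 1350.0   # kg/m^3, assumed density of liquid C3F8 (approximate)
g = 9.81       # m/s^2, gravitational acceleration
h = 90.0       # m, approximate depth of the ATLAS cavern

delta_p = rho * g * h                                  # pressure in Pa
print(f"Hydrostatic head: {delta_p / 1e5:.0f} bar")    # roughly 12 bar
```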

Both the semiconductor tracker and the pixel detector are also being consolidated. Improvements are being made to the back-end read-out electronics to cope with future luminosities of more than twice the LHC design value.

Extracting the pixel detector

Lifting the small muon wheel to the surface – an operation that had never been done before – was a success. The operation was not without difficulties because of the limited space for manoeuvring the 140-tonne object while avoiding collisions with other detectors, crates and the walls of the cavern and access shaft. Nevertheless, it was executed perfectly thanks to highly efficient preparation – including several dry runs on the surface – and the skill of the crane drivers and ATLAS engineers. Not to miss the opportunity, the few problematic cathode-strip chambers on the small wheel that was lifted to the surface will be repaired. A specialized tool is being designed and fabricated to perform this operation in the small space that is available between the lifting frame and the detector.

Many other tasks are foreseen for the muon spectrometer. The installation of a final layer of chambers – the endcap extensions – which was staged in 2003 for financial reasons has already been completed. These chambers were installed on one side of the detector during previous mid-winter shutdowns. The installation on the other side has now been completed during the first three months of LS1. In parallel, a big campaign to check for and repair leaks has started on the monitored drift tubes and resistive-plate chambers, with good results so far. As soon as access allows, a few problematic thin-gap chambers on the big wheels will be exchanged. Construction of some 30 new chambers has been under way for a few months and their installation will take place during the coming winter.

At the same time, the ATLAS collaboration is improving the calorimeters. New low-voltage power supplies are being installed for both the liquid-argon and tile calorimeters to give a better performance at higher luminosities and to correct issues that have been encountered during the past three years. In addition, a broad campaign of consolidation of the read-out electronics for the tile calorimeter is ongoing because it is many years since it was constructed. Designing, prototyping, constructing and testing new devices like these has kept the ATLAS calorimeter community busy during the past four years. The results that have been achieved are impressive and life for the calorimeter teams during operation will become much better with these new devices.

Improvements are also under way for the ATLAS forward detectors. The LUCID luminosity monitor is being rebuilt in a simplified way to make it more robust for operations at higher luminosity. All four Roman-pot stations for the absolute luminosity monitor, ALFA, located 240 m from the centre of ATLAS in the LHC tunnel, will soon be in laboratories on the surface. There they will undergo modifications to implement wake-field suppression measures to counter the beam-induced heating suffered during operations in 2012. There are other plans for the beam-conditions monitor, the diamond-beam monitor and the zero-degree calorimeters. The activities are non-stop everywhere.

The infrastructure

All of the above might seem to be an enormous programme but it does not touch on the majority of the effort. The consolidation work ranges from the improvements to the evaporative cooling plants that have already been mentioned to all aspects of the electrical infrastructure and more. Here are a few examples from a long list.

The detector

Installation of a new uninterruptible power supply is ongoing at Point 1, together with replacement of the existing one. This is to avoid power glitches, which have affected the operation of the ATLAS detector on some occasions. Indeed, the whole electrical installation is being refreshed.

The cryogenic infrastructure is being consolidated and improved to allow completely separate operation of the ATLAS solenoid and toroid magnets. Redundancy is implemented everywhere in the magnet systems to limit downtime. Such downtime has, so far, been small enough to be unnoticeable in ATLAS data-taking but it could create problems in future.

All of the beam pipes will be replaced with new ones. In the inner detector, a new beryllium pipe with a smaller diameter, to allow space for the IBL, has already been constructed and installed in the IBL support structure. All of the other stainless-steel pipes will be replaced with aluminium ones to reduce the background level everywhere in ATLAS and minimize the adverse effects of activation.

A back-up for the ATLAS cooling towers is being created via a connection to existing cooling towers for the Super Proton Synchrotron. This will allow ATLAS to operate at reduced power, even during maintenance of the main cooling towers. The cooling infrastructure for the counting rooms is also undergoing complete improvement with redundancy measures inserted everywhere. All of these tasks are the result of a robust collaboration between ATLAS and all CERN departments.

LS1 is not, then, a period of rest for the ATLAS collaboration. Many resources are being deployed to consolidate and improve all possible aspects of the detector, with the aim of minimizing downtime and its impact on data-taking efficiency. Additional detectors are being installed to improve ATLAS’s capabilities. Only a few of these have been mentioned here. Others include, for example, even more muon chambers, which are being installed to fill any possible instrumental cracks in the detector.

All of this effort requires the co-ordination and careful planning of a complicated gymnastics of heavy elements in the cavern. ATLAS will be a better detector at the restart of LHC operations, ready to work at higher energies and luminosities for the long period until LS2 – and then the gymnastics will begin again.

Quarks, gluons and sea in Marseilles

Participants

The regular “DIS” workshops on Deep-Inelastic Scattering and related subjects usually bring together a unique mix of international communities and cover a spectrum of topics ranging across proton structure, strong interactions and physics at the energy frontier. DIS2013 – the 21st workshop – which took place in the Palais des Congrès in Marseilles earlier this year was no exception. Appropriately, this large scientific event formed part of a rich cultural programme in the city that was associated with its status as European Capital of Culture Marseilles-Provence 2013.

A significant part of the programme was devoted to recent and exciting experimental results, which together with theoretical advances and the outlook for the future created a vibrant scientific atmosphere. The workshop began with a full day of plenary reports on hot topics, followed by two and a half days of parallel sessions that were organized around seven themes: structure functions and parton densities; small-x, diffraction and vector mesons; electroweak physics and beyond the Standard Model; QCD and hadronic final states; heavy flavours; spin physics; future experiments.

Higgs and more

The meeting provided the opportunity to discuss in depth the various connections between strong interactions, proton structure and recent experimental results at the LHC. In particular, the discovery of a Higgs boson and the subsequent studies of its properties attracted a great deal of interest, including from the perspective of the connections with proton structure. A tremendous effort was made in the past year to provide an improved theory, study the constraints from the wealth of new experimental data and adopt a more robust methodology in analyses that determine the proton’s parton distribution functions (PDFs). The PDFs are an essential ingredient for most LHC analyses, from characterization of the Higgs boson to self-consistency tests of the Standard Model. The first “safari” into the new territory at the LHC and the impressive final results with the full data set from Fermilab’s Tevatron have revealed no new phenomena so far. However, it might well be that the search for new physics – which will be re-launched at higher energies during the next LHC run – will be affected by the precision with which the structure of the proton is known.

The most recent experimental results from the continuing analysis of data from HERA – the electron–proton collider that ran at DESY during 1992–2007 – were presented. In particular, both the H1 and ZEUS collaborations have now published measurements at high photon virtualities (Q²). While the refined data from HERA form its immensely valuable legacy, the transfer of the baton to the LHC has already begun. A large number of recent results – in particular from the LHC – provide further constraints on the PDFs, such as in the case of final states with weak bosons or top quarks, which are already in the regime of precision measurements with about 1% accuracy. Stimulated by an active Standard Model community that has many groups working on the determination of PDFs (such as ABM, MSTW and CTEQ) and by the release of common analysis tools such as HERAFitter, the new measurements from the LHC are rapidly interpreted in terms of valuable PDF constraints, as figure 1 shows. More exclusive final states have the potential to complement inclusive measurements: for instance, measurements of W production in association with a charm quark could shed new light on the strangeness content of the proton. A huge step in the precision of PDF determination – which might be essential to study new physics – complemented by a standalone programme at the energy frontier would be possible at the proposed Large Hadron Electron Collider (LHeC), which could provide a new opportunity to study Higgs-boson couplings.

Gluon density in the proton

The understanding of proton structure would not be complete without understanding its spin. Polarized experiments – including fixed-target DIS experiments at Jefferson Lab and CERN, as well as the polarized proton–proton programme at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) – continue to provide new data and to open new fields. The goal is to understand the parton contributions to the proton’s spin, long considered a “puzzle” because of the unexpected way that it is shared between the quark spins – which account for only about a quarter – the gluon spins and orbital angular momentum. Recent, more precise measurements of W-boson production in polarized proton–proton collisions at RHIC have the potential to constrain further the valence-quark contributions, while semi-inclusive DIS at fixed-target experiments (for instance, using final states with charm mesons) continues to reduce the uncertainty on the gluon contribution. The goal of the spin community – manifest in the project for a polarized Electron-Ion Collider (EIC) – is to produce a 3D picture of the proton with high precision using a large number of observables across an extended phase space.

Impressive precision

The current scientific landscape includes many experiments that are based on hadronic interactions, with the LHC taking these studies to the highest energies. These are reaching impressive, increasing precision across a large phase space, not only in final states with jets but also in more exclusive configurations including photons, weak bosons or tagged heavy flavours. The measurements performed in diffraction – by now a classical laboratory for QCD tests – are also available from the LHC in inclusive and semi-inclusive final states and reinforce the global understanding of the strong interactions. An interesting case concerns double-parton interactions, where the final state originates from not one but two parton–parton collisions – a contribution that in some cases can pollute final-state configurations (including boson or Higgs production). Although the measurements are not yet precise enough to identify kinematical dependencies or parton–parton correlations, they are beginning to unveil this contribution, which may prove in future to be related to profound aspects of the proton structure, such as the generalized parton distributions and the proton spin.

A global picture and complete understanding of the strong force can only emerge by using all of the available configurations and energies. In particular, the measurements of the hadronic final states performed in electron–proton collisions at HERA and the refined measurements at the Tevatron provide an essential testing ground for the increasingly precise calculations. Figure 2 illustrates this statement, presenting measurements of the strong coupling from collider experiments – including the most recent measurements from the LHC.

The high-energy heavy-ion collisions at both RHIC and the LHC have been a constant source of new results and paradigms during the past few years and this proved equally true for the DIS2013 conference. Probes such as mesons or jets “disappear” when high densities of the collision fireball are reached. The set of such probes has been consolidated at the LHC, where the experimental capabilities and large phase space allow further measurements involving strangeness, charm or inclusive particle production. In addition, the recently achieved proton–lead collisions provide new testing grounds for the collective behaviour of the quarks and gluons at high densities.

Measurements of the strong coupling constant

A total of 300 talks were given covering the seven themes of the workshop, distributed across two and a half days of parallel sessions, a few of which combined different themes. As tradition requires at DIS workshops, the presentations were followed by intense debates on classic and new issues; a satellite workshop on the HERAFitter project was also held. On the last day, the working-group convenors summarized the highlights of the rich scientific programme of the parallel sessions.

The conference ended with a session on future experiments, in which the two proposed new colliders, the EIC and the LHeC, were discussed together with upgrades of the LHC experiments and other interesting projects related to new capabilities for QCD studies (AFTER, CHIC, COMPASS, NA62, nuSTORM, etc.). Rolf Heuer, CERN’s director-general, presented the recently updated European Strategy for Particle Physics. The programme at the energy frontier with the LHC will be followed for at least 20 years and studies for further projects are ongoing. The session closed with an inspiring outlook talk by Chris Quigg of Fermilab, with hints of a possible QCD-like walk on the new-physics frontier. In the evening, Heuer gave a talk for the general public to an audience of more than 200 people on recent discoveries at the LHC.

In addition to the workshop sessions, participants enjoyed a dinner in the Pharo castle – with a splendid view of the old and new harbours of Marseilles – where they found out why the French national anthem is called La Marseillaise. There was also half a day of free time for most of the participants – except maybe for convenors who had to prepare their summary reports – with two excursions organized at Cassis and in the historic centre of Marseilles.

In summary, the DIS2013 workshop once again allowed an insightful journey around the fundamental links between QCD, proton structure and physics at the energy frontier – an interface that will continue to grow and create new research ideas and projects in the near future. The next – 22nd – DIS workshop will be held in Warsaw in April 2014.

Conference time in Stockholm

Stockholm

When the Swedish warship Vasa capsized in Stockholm harbour on her maiden voyage in 1628, many hearts must have also sunk metaphorically, as they did at CERN in September 2008 when the LHC’s start-up came to an abrupt end. Now, the raised and preserved Vasa is the pride of Stockholm and the LHC – following a successful restart in 2009 – is leading research in particle physics at the high-energy frontier. This year the two icons crossed paths when the International Europhysics Conference on High-Energy Physics, EPS-HEP 2013, took place in Stockholm on 18–24 July, hosted by the KTH (Royal Institute of Technology) and Stockholm University. Latest results from the LHC experiments featured in many of the parallel, plenary and poster sessions – and the 750 or so participants had the opportunity to see the Vasa for themselves at the conference dinner. There was, of course, much more and this report can only touch on some of the highlights.

Coming a year after the first announcement of the discovery of a “Higgs-like” boson on 4 July 2012, the conference was the perfect occasion for a birthday celebration for the new particle. Not only has its identity been more firmly established in the intervening time – it almost certainly is a Higgs boson – but many of its attributes have been measured by the ATLAS and CMS experiments at the LHC, as well as by the CDF and DØ collaborations using data collected at Fermilab’s Tevatron. At 125.5 GeV/c², its mass is known to within 0.5% precision – better than for any quark – and several tests by ATLAS and CMS show that its spin-parity, Jᴾ, is compatible with the 0⁺ expected for a Standard Model Higgs boson. These results exclude other models to greater than 95% confidence level (CL), while a new result from DØ rejects a graviton-like 2⁺ at >99.2% CL.

The new boson’s couplings provide a crucial test of whether it is the particle responsible for electroweak-symmetry breaking in the Standard Model. A useful parameterization for this test is the ratio of the observed signal strength to the Standard Model prediction, μ = (σ × BR)/(σ × BR)_SM, where σ is the cross-section and BR the branching fraction. The results for the five major decay channels measured so far (γγ, WW*, ZZ*, bb̄ and ττ) are consistent with the expectations for a Standard Model Higgs boson, i.e. μ = 1, to 15% accuracy. Although it is too light to decay to the heaviest quark – top, t – and its antiquark, the new boson can in principle be produced together with a tt̄ pair, so yielding a sixth coupling. While this is a challenging channel, new results from CMS and ATLAS are starting to approach the level of sensitivity for the Standard Model Higgs boson, which bodes well for its future use.

The mass of the top quark is in fact so large – 173 GeV/c² – that it decays before it can form hadrons, making it possible to study the “bare” quark. At the conference, the CMS collaboration announced the first observation, at 6.0σ, of the associated production of a top quark and a W boson, in line with the Standard Model’s prediction. Both ATLAS and CMS had previously found evidence for this process but not to this significance. The DØ collaboration presented its latest results on the lepton-based forward–backward asymmetry in tt̄ production, which had previously indicated some deviation from theory. The new measurement, based on the full data set of 9.7 fb⁻¹ of proton–antiproton collisions at the Tevatron, gives an asymmetry of (4.7 ± 2.3 (stat.) +1.1/−1.4 (syst.))%, which is consistent with predictions from the Standard Model at next-to-leading order.

Venue for the conference dinner

The study of B hadrons, which contain the next heaviest quark, b, is one of the aspects of flavour physics that could yield hints of new physics. One of the highlights of the conference was the announcement of the observation of the rare decay mode B0s → μμ by both the LHCb and CMS collaborations, at 4.0 and 4.3σ, respectively. While there had been hopes that this decay channel might open a window on new physics, the long-awaited results align with the predictions of the Standard Model. The BaBar and Belle collaborations also reported on their precise measurements of the decay B → D(*)τντ at SLAC and KEK, respectively, which together disagree with the Standard Model at the 4.3σ level. The results rule out one model that adds a second Higgs doublet to the Standard Model (2HDM type II) but are consistent with a different variant, 2HDM type III – a reminder that the highest energies are not the only place where new physics could emerge.

Precision, precision

Precise measurements require precise predictions for comparison and here theoretical physics has seen a revolution in calculating next-to-leading order (NLO) effects, involving a single loop in the related Feynman diagrams. Rapid progress during the past few years has meant that the experimentalists’ wish-list for QCD calculations at NLO relevant to the LHC is now fulfilled, including such high-multiplicity final states as W + 4 jets and even W + 5 jets. Techniques for calculating loops automatically should in future provide a “do-it-yourself” approach for experimentalists. The new frontier for the theorists, meanwhile, is at next-to-NLO (NNLO), where some measurements – such as pp → tt̄ – are already at an accuracy of a few per cent and some processes – such as pp → γγ – could have large corrections, up to 40–50%. So a new wish-list is forming, which will keep theorists busy while the automatic code takes over at NLO.

With a measurement of the mass for the Higgs boson, small corrections to the theoretical predictions for many measurable quantities – such as the ratio between the masses of the W and the top quark – can now be calculated more precisely. The goal is to see if the Standard Model gives a consistent and coherent picture when everything is put together. The GFitter collaboration of theorists and experimentalists presented its latest global Standard Model fit to electroweak measurements, which includes the legacy both from the experiments at CERN’s Large Electron–Positron Collider and from the SLAC Large Detector, together with the most recent theoretical calculations. The results for 21 parameters show little tension between experiment and the Standard Model, with no discrepancy exceeding 2.5σ, the largest being in the forward–backward asymmetry for bottom quarks.

There is more to research at the LHC than the deep and persistent probing of the Standard Model. The ALICE, LHCb, CMS and ATLAS collaborations presented new results from high-energy lead–lead and proton–lead collisions at the LHC. The most intriguing results come from the analysis of proton–lead collisions and reveal features that previously were seen only in lead–lead collisions, where the hot, dense matter that was created appears to behave like a perfect liquid. The new results could indicate that similar effects occur in proton–lead collisions, even though far fewer protons and neutrons are involved. Other results from ALICE included the observation of higher yields of J/ψ particles in heavy-ion collisions at the LHC than at Brookhaven’s Relativistic Heavy-Ion Collider, even though the densities are much higher at the LHC. The measurements in proton–lead collisions should cast light on this finding by allowing cold-nuclear-matter (initial-state) effects to be disentangled from those of the hot, dense medium.

Supersymmetry and dark matter

The energy frontier of the LHC has long promised the prospect of physics beyond the Standard Model, in particular through evidence for a new symmetry – supersymmetry. The ATLAS and CMS collaborations presented their extensive searches for supersymmetric particles in which they have explored a vast range of masses and other parameters but found nothing. However, assumptions involved in the work so far mean that there are regions of parameter space that remain unexplored. So while supersymmetry may be “under siege”, its survival is certainly still possible. At the same time, creative searches for evidence of extra dimensions and many kinds of “exotics” – such as excited quarks and leptons – have likewise produced no signs of anything new.

Aula Magna lecture theatre

However, evidence that there must almost certainly be some kind of new particle comes from the existence of dark, non-hadronic matter in the universe. Latest results from the Planck mission show that this should make up some 26.8% of the universe – about 4% more than previously thought. This drives the search for the weakly interacting massive particles (WIMPs) that could constitute dark matter, which is becoming a worldwide effort. Indeed, although the Higgs boson may have been top of the bill for hadron-collider physics, more generally the number of papers with dark matter in the title is growing faster than those on the Higgs boson.

While experiments at the LHC look for the production of new kinds of particles with the correct properties to make dark matter, “direct” searches seek evidence of interactions of dark-matter particles in the local galaxy as they pass through highly sensitive detectors on Earth. Such experiments are showing an impressive evolution with time, increasing in sensitivity by about a factor of 10 every two years and now reaching cross-sections down to 10⁻⁸ pb. Among the many results presented, an analysis of 140.2 kg-days of data in the silicon detectors of the CDMS II experiment revealed three WIMP-candidate events with an expected background of 0.7. A likelihood analysis gives a 0.19% probability for the known-background-only hypothesis.
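
For orientation only, the sketch below shows the simpler counting-only estimate of how often a 0.7-event background would fluctuate up to three or more events, assuming plain Poisson statistics; the 0.19% figure quoted above comes from the collaboration’s full likelihood analysis, which also exploits the measured properties of the individual events and is therefore more constraining.

```python
# Counting-only cross-check: probability of observing 3 or more events
# when 0.7 background events are expected, assuming Poisson statistics.
# This is NOT the CDMS II likelihood analysis quoted in the text, which
# also uses the measured properties of the candidate events.
from math import exp, factorial

b = 0.7        # expected background events
n_obs = 3      # observed candidate events

p_below = sum(exp(-b) * b**k / factorial(k) for k in range(n_obs))
print(f"P(N >= {n_obs} | b = {b}) = {1.0 - p_below:.1%}")   # a few per cent
```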

“Indirect” searches, by contrast, involve in particular the search for signals from dark-matter annihilation in the cosmos. In 2012, an analysis of publicly available data from 43 months of the Fermi Large Area Telescope (LAT) indicated a puzzling signal at 130 GeV, with the interesting possibility that these γ rays could originate from the annihilation of dark-matter particles. A new analysis by the Fermi LAT team of four years’ worth of data gives preliminary indications of an effect with a local significance of 3.35σ but a global significance of less than 2σ. The HESS II experiment is currently accumulating data and might soon be able to cross-check these results.

With their small but nonzero mass and consequent oscillations from one flavour to another, neutrinos are the one type of known particle that provides a view outside the Standard Model. At the conference, the T2K collaboration announced the first definitive observation, at 7.5σ, of the transition νμ → νe in the high-energy νμ beam that travels 295 km from the Japan Proton Accelerator Research Complex (J-PARC) to the Super-Kamiokande detector. Meanwhile, the Double Chooz experiment, which studies νe produced in a nuclear reactor, has refined its measurement of θ₁₃, one of the parameters characterizing neutrino oscillations, by using two independent methods that allow much better control of the backgrounds. The GERDA collaboration uses yet another means to investigate whether neutrinos are their own antiparticles, by searching for the neutrinoless double-beta decay of the isotope ⁷⁶Ge in a detector at the INFN Gran Sasso National Laboratory. The experiment has completed its first phase and finds no sign of this process, now providing the world’s best lower limit on the half-life, at 2.1 × 10²⁵ years.

On the other side of the world, deep in the ice beneath the South Pole, the IceCube collaboration has recently observed oscillations of neutrinos produced in the atmosphere. More exciting, arguably, is the detection of 28 extremely energetic neutrinos – including two with energies above 1 PeV – but the evidence is not yet sufficient to claim observation of neutrinos of extraterrestrial origin.

Towards the future

In addition to the sessions on the latest results, others looked to the continuing health of the field with presentations of studies on novel ideas for future particle accelerators and detection techniques. These topics also featured in the special session for the European Committee for Future Accelerators, which looked at future developments in the context of the update of the European Strategy for Particle Physics. A range of experiments at particle accelerators currently takes place on two frontiers – high energy and high intensity. Progress in probing physics that lies at the limit of these experiments will come both from upgrades of existing machines and at future facilities. These will rely on new ideas being investigated in current accelerator R&D and will also require novel particle detectors that can exploit the higher energies and intensities.

Paris Sphicas and Peter Higgs

For example, two proposals for new neutrino facilities would allow deeper studies of neutrinos – including the possibility of CP violation, which could cast light on the dominance of matter over antimatter in the universe. The Long-Baseline Neutrino Experiment (LBNE) would create a beam of high-energy νμ at Fermilab and detect the appearance of νe with a massive detector that is located 1300 km away at the Sanford Underground Research Facility. A test set-up, LBNE10, has received funding approval. A complementary approach providing low-energy neutrinos is proposed for the European Spallation Source, which is currently under construction in Lund. This will be a powerful source of neutrons that could also be used to generate the world’s most intense neutrino beam.

The LHC was first discussed in the 1980s, more than 25 years before the machine produced its first collisions. Looking to the long-term future, other accelerators are now on the drawing board. One possible option is the International Linear Collider, currently being evaluated for construction in Japan. Another option is to create a large circular electron–positron collider, 80–100 km in circumference, to produce Higgs bosons for precision studies.

The main physics highlights of the conference were reflected in the 2013 EPS-HEP prizes, awarded in the traditional manner at the start of the plenary sessions. The EPS-HEP prize honoured both ATLAS and CMS – for the discovery of the new boson – and three of their pioneering leaders (Michel Della Negra, Peter Jenni and Tejinder Virdee). François Englert and Peter Higgs were there to present this major prize and took part later in a press conference together with the prize winners. Following the ceremony, Higgs gave a talk, “Ancestry of a New Boson”, in which he recounted what led to his papers of 1964 and also cast light on why his name became attached to the now-famous particle. Other prizes acknowledged the measurement of the all-flavour neutrino flux from the Sun, as well as the observation of the rare decay B0s → μμ, work in 4D field theories and outstanding contributions to outreach. In a later session, a prize sponsored by Elsevier was awarded for the best four posters out of the 130 that were presented by young researchers in the dedicated poster sessions.

To close the conference, Nobel Laureate Gerard ‘t Hooft presented his outlook for the field. This followed the conference summary by Sergio Bertolucci, CERN’s director for research and computing, in which he also thanked the organizers for the “beautiful venue, the fantastic weather and the perfect organization” and acknowledged the excellent presentations from the younger members of the community. The baton now passes to the organizing committees of the next EPS-HEP conference, which will take place in Vienna on 22–29 July 2015.

• This article has touched on only some of the physics highlights of the conference. For all of the talks, see http://eps-hep2013.eu/.

Neutral currents: A perfect experimental discovery

In a seminar at CERN on 19 July 1973, Paul Musset of the Gargamelle collaboration presented the first direct evidence of weak neutral currents. They had discovered events in which a neutrino scattered from a hadron (proton or neutron) without turning into a muon (figure 1) – the signature of a hadronic weak neutral current. In addition they had one leptonic event characterized by a single electron track (figure 2). A month later, Gerald Myatt presented the results on a global stage at the 6th International Symposium on Electron and Photon Interactions at High Energies in Bonn. By then, two papers detailing the discovery were in the offices of Physics Letters and were published together on 3 September. A few days later, the results were presented in Aix-en-Provence at the International Europhysics Conference on High-Energy Particle Physics, where they were aired as part of a large programme of events for the public.

Earlier in May, Luciano Maiani was with Nicola Cabibbo at their university in Rome when Ettore Fiorini visited, bringing news of what the Gargamelle collaboration had found in photographs of neutrino interactions in the huge heavy-liquid bubble chamber at CERN. The main problem facing the collaboration was to be certain that the events were from neutrinos and not from neutrons that were liberated in interactions in the material surrounding the bubble chamber (CERN Courier September 2009 p25). Fiorini described how the researchers had overcome this in their analysis, one of the important factors being the size of Gargamelle. “He explained that the secret was the volume,” Maiani recalls, “which could kill the neutron background.” Maiani at least was convinced that the collaboration had observed neutral currents. It was a turning point along the road to today’s Standard Model of particles and their interactions, he says.

Weak neutral currents, which involve no exchange of electric charge between the particles concerned, are the manifestation of the exchange of the neutral vector boson, Z, which mediates the weak interaction together with the charged bosons, W±. The discovery of these neutral currents in 1973 was crucial experimental support for the unification of electromagnetic and weak interactions in electroweak theory. This theory – for which Sheldon Glashow, Abdus Salam and Steven Weinberg received the Nobel Prize in Physics in 1979 – became one of the pillars of the Standard Model.

For Maiani, the theoretical steps began in 1963 with his colleague Cabibbo’s work that restored universality in weak interactions. The problem that Cabibbo had resolved concerned an observed difference in the strength of weak decays of strange particles compared with muons and neutrons. His solution, formulated before the proposal of quarks, was based on a weak current parameterized by a single angle – later known as the Cabibbo angle.

During the next 10 years, not only did the concept of quarks as fundamental particles emerge but other elements of today’s Standard Model developed, too. In 1970, for example, Maiani, Glashow and Iliopoulos put forward a model that involved a fourth quark, charm, to deal correctly with divergences in weak-interaction theory. Their idea was based on a simple analogy between the weak hadronic and leptonic currents. As their paper stated, the model featured “a remarkable symmetry between leptons and quarks” – and this brought the neutral currents of the electroweak unification of Weinberg and Salam into play in the quark sector. One important implication was a large suppression of strangeness-changing neutral currents through what became known as the GIM mechanism.

Maiani now says that at this point no one was talking in terms of a standard theory, even though many of the elements were there – charm, intermediate vector bosons and the Brout-Englert-Higgs mechanism for electroweak-symmetry breaking. However, perceptions began to change around 1972 with the work of Gerardus ‘t Hooft and Martinus Veltman, who showed that electroweak theory could be self-consistent through renormalization (CERN Courier December 2009 p30). After this leap forward in theory, the observations in Gargamelle provided a similar breakthrough on the experimental front. “At the start of the decade, people did not generally believe in a standard theory even though theory had done everything. The neutral-current signals changed that,” Maiani recalls. “From then on, particle physics had to test the standard theory.”

A cascade of discoveries followed the observation of neutral currents, with the J/ψ in 1974 and open charm in 1976 through to the W and Z vector bosons in 1983. The discovery of a Higgs boson in 2012 at CERN finally closed the cycle, providing observation of the key player in the Brout-Englert-Higgs mechanism, which gives mass to the W and Z bosons of the weak interactions, while leaving the photon of electromagnetism massless. In a happy symmetry, the latest results on this first fundamental scalar boson were recent highlights at the 2013 Lepton–Photon Conference and at the International Europhysics Conference on High-Energy Particle Physics, EPS-HEP 2013 – the direct descendants of the meetings in Bonn and Aix-en-Provence where the discovery of neutral currents was first aired 40 years ago.

“We are now stressing the discovery of a Higgs boson,” says Maiani, “but in 1973 the mystery was: will it all work?” Looking back to the first observation of neutral currents, he describes it as “a perfect experimental discovery”. “It was a beautiful experimental development, totally independent of theory,” he continues. “It arrived at exactly the right moment, when people realized that there was a respectable theory.” That summer 40 years ago also saw the emergence of quantum chromodynamics (CERN Courier January/February 2013 p24), which set up the theory for strong interactions. The Standard Model had arrived.•

For more detailed accounts by key members of the Gargamelle collaboration, see the articles by Don Perkins (in the commemorative issue for Willibald Jentschke 2003 p15) and Dieter Haidt (CERN Courier October 2004 p21).

Physicists meet the public at Aix

During the week of the Aix Conference more attention than usual was given to the need for communication with non-physicists. A plenary session was held on “Popularizing High Energy Physics” and on several evenings “La Physique dans la Rue” events were organized in the town centre.

One evening saw a more classical presentation of information with talks by Louis Leprince-Ringuet (on the beauties of pure research), Bernard Gregory (on the role of fundamental science and its pioneering role in international collaboration) and Valentine Telegdi (on the intricate subject of neutral currents). More than 600 people heard these talks, no doubt attracted particularly by the well known television personality of Leprince-Ringuet.

CERN Courier October 1973 pp297–298 (extract).

AIDA boosts detector development

Conceptual structure of a pixel detector

Research in high-energy physics at particle accelerators requires highly complex detectors to observe the particles and study their behaviour. In the EU-supported project on Advanced European Infrastructure for Detectors at Accelerators (AIDA), more than 80 institutes from 23 European countries have joined forces to boost detector development for future particle accelerators in line with the European Strategy for Particle Physics. These include the planned upgrade of the LHC, as well as new linear colliders and facilities for neutrino and flavour physics. To fulfil its aims, AIDA is divided into three main activities: networking, joint research and transnational access, all of which are progressing well two years after the project’s launch.

Networking

AIDA’s networking activities fall into three work packages (WPs): the development of common software tools (WP2); microelectronics and detector/electronics integration (WP3); and relations with industry (WP4).

Building on and extending existing software and tools, the WP2 network is creating a generic geometry toolkit for particle physics together with tools for detector-independent reconstruction and alignment. The design of the toolkit is shaped by the experience gained with detector-description systems implemented for the LHC experiments – in particular LHCb – as well as by lessons learnt from various implementations of geometry-description tools that have been developed for the linear-collider community. In this context, the Software Development for Experiments and LHCb Computing groups at CERN have been working together to develop a new generation of software for geometry modellers. These are used to describe the geometry and material composition of the detectors and as the basis for tracking particles through the various detector layers.

This work uses the geometrical models in Geant4 and ROOT to describe the experimental set-ups in simulation or reconstruction programmes and involves the implementation of geometrical solid primitives as building blocks for the description of complex detector arrangements. These include a large collection of 3D primitives, ranging from simple shapes such as boxes, tubes or cones to more complex ones, as well as their Boolean combinations. Some 70–80% of the effort spent on code maintenance in the geometry modeller is devoted to improving the implementation of these primitives. To reduce the effort required for support and maintenance and to converge on a unique solution based on high-quality code, the AIDA initiative has started a project to create a “unified-solids library” of the geometrical primitives.
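
The idea of building complex shapes from a small set of primitives and their Boolean combinations can be sketched in a few lines. The toy example below represents solids as point-membership tests; it is a conceptual illustration only, unrelated to the actual Geant4, ROOT or unified-solids implementations.

```python
# Toy constructive solid geometry (CSG): primitives are point-membership
# tests and complex shapes are built from Boolean combinations of them.
# Purely illustrative; not the AIDA unified-solids library.
from typing import Callable

Solid = Callable[[float, float, float], bool]   # True if (x, y, z) is inside

def box(dx: float, dy: float, dz: float) -> Solid:
    """Axis-aligned box with half-lengths dx, dy, dz, centred at the origin."""
    return lambda x, y, z: abs(x) <= dx and abs(y) <= dy and abs(z) <= dz

def tube(rmin: float, rmax: float, dz: float) -> Solid:
    """Cylindrical shell along z with inner/outer radii and half-length dz."""
    return lambda x, y, z: rmin**2 <= x*x + y*y <= rmax**2 and abs(z) <= dz

def subtraction(a: Solid, b: Solid) -> Solid:
    """Boolean subtraction: points inside a but not inside b."""
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A simple support plate with a circular cut-out for a beam pipe.
plate_with_hole = subtraction(box(10.0, 10.0, 1.0), tube(0.0, 2.0, 1.0))
print(plate_with_hole(5.0, 5.0, 0.0))   # True: in the plate, outside the hole
print(plate_with_hole(0.0, 0.0, 0.0))   # False: inside the cut-out
```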

Enabling the community to access the most advanced semiconductor technologies – from nanoscale CMOS to innovative interconnection processes – is an important aim for AIDA. One new technique is 3D integration, which has been developed by the microelectronic industry to overcome limitations of high-frequency microprocessors and high-capacity memories. It involves fabricating devices based on two or more active layers that are bonded together, with vertical interconnections ensuring the communication between them and the external world. The WP3 networking activity is studying 3D integration to design novel tracking and vertexing detectors based on high-granularity pixel sensors.

Interesting results have already emerged from studies with the FE-Ix series of CMOS chips that the ATLAS collaboration has developed for the read-out of high-resistivity pixel sensors – 3D processing is currently in progress on FE-I4 chips. Now, some groups are evaluating the possibility of developing new electronic read-out chips in advanced CMOS technologies, such as 65 nm and of using these chips in a 3D process with high-density interconnections at the pixel level. Once the feasibility of such a device is demonstrated, physicists should be able to design a pixel detector with highly aggressive and intelligent architectures for sensing, analogue and digital processing, storage and data transmission (figure 1).

The development of detectors using breakthrough technologies calls for the involvement of hi-tech industry. The WP4 networking activity aims to increase industrial involvement in key detector developments in AIDA and to provide follow-up long after completion of the project. To this end, it has developed the concept of workshops tailored to maximize the attendees’ benefits while also strengthening relations with European industry, including small and medium-sized enterprises (SMEs). The approach is to organize “matching events” that address technologies of high relevance for detector systems and gather key experts from industry and academia with a view to establishing high-quality partnerships. WP4 is also developing a tool called “collaboration spotting”, which aims to monitor, through publications and patents, the industrial and academic organizations that are active in the technologies under focus at a workshop and to identify the key players. The tool was used successfully to invite European companies – including SMEs – to attend the workshop on advanced interconnections for chip packaging in future detectors that took place in April at the Laboratori Nazionali di Frascati of INFN.

Test beams and telescopes

The development, design and construction of detectors for particle-physics experiments are closely linked with the availability of test beams where prototypes can be validated under realistic conditions or production modules can undergo calibration. Through its transnational access and joint research activities, AIDA is not only supporting test-beam facilities and corresponding infrastructures at CERN, DESY and Frascati but is also extending them with new infrastructures. Various sub-tasks cover the detector activities for the LHC and linear collider, as well as a neutrino activity, where a new low-energy beam is being designed at CERN, together with prototype detectors.

One of the highlights of WP8 is the excellent progress made towards two new major irradiation facilities at CERN. These are essential for the selection and qualification of materials, components and full detectors operating in the harsh radiation environments of future experiments. AIDA has strongly supported the initiatives to construct GIF++ – a powerful γ irradiation facility combined with a test beam in the North Area – and EAIRRAD, which will be a powerful proton and mixed-field irradiation facility in the East Area. AIDA is contributing to both projects with common user-infrastructure as well as design and construction support. The aim is to start commissioning and operation of both facilities following the LS1 shutdown of CERN’s accelerator complex.

The current shutdown of the test beams at CERN during LS1 has resulted in a huge increase in demand for test beams at the DESY laboratory. The DESY II synchrotron is used mainly as a pre-accelerator for the X-ray source PETRA III but it also delivers electron or positron beams produced at a fixed carbon-fibre target to as many as three test-beam areas. Its ease of use makes the DESY test beam an excellent facility for prototype testing because this typically requires frequent access to the beam area. In 2013 alone, 45 groups from more than 30 countries with about 200 users have already accessed the DESY test beams. Many of them received travel support from the AIDA Transnational Access Funds and so far AIDA funding has enabled a total of 130 people to participate in test-beam campaigns. The many groups using the beams include those from the ALICE, ATLAS, Belle II, CALICE, CLIC, CMS, Compass, LHCb, LCTPC and Mu3e collaborations.

Combined beam-telescope

About half of the groups using the test beam at DESY have taken advantage of a beam telescope to provide precise measurements of particle tracks. The EUDET project – AIDA’s predecessor in the previous EU framework programme (FP6), aimed at detector R&D for an international linear collider – provided the first beam telescope to serve a large user community. For more than five years, this telescope, which was based on Minimum Ionizing Monolithic Active pixel Sensors (MIMOSA), served a large number of groups. Several copies were made – a good indication of success – and AIDA is now providing continued support for the community that uses these telescopes. It is also extending its support to the TimePix telescope developed by institutes involved in the LHCb experiment.

The core of AIDA’s involvement lies in the upgrade and extension of the telescope. For many users who work on LHC applications, a precise reference position is not enough. They also need to know the exact time of arrival of the particle, but it is difficult to find a single system that can provide both position and time at the required precision. Devices with a fast response tend to be less precise in the spatial domain or put too much material in the path of the particle. So AIDA combines two technologies: the thin MIMOSA sensors, with their excellent spatial resolution, provide the position, while the ATLAS FEI4 detectors provide time information with the desired LHC time structure.

The first beam test in 2012 with a combined MIMOSA-FEI4 telescope was an important breakthrough. Figure 2 shows the components involved in the set-up in the DESY beam. Charged particles from the accelerator – electrons in this case – first traverse three read-out planes of the MIMOSA telescope, followed by the device under test, then the second triplet of MIMOSA planes and then the ATLAS-FEI4 arm. The DEPFET pixel-detector international collaboration was the first group to use the telescope, so bringing together within a metre pixel detectors from three major R&D collaborations.

While combining the precise time information from the ATLAS-FEI4 detector with the excellent spatial resolution of MIMOSA provides the best of both worlds, there is an additional advantage: the FEI4 chip has a self-triggering capability because it can issue a trigger signal based on the response of the pixels. Overlaying the response of the FEI4 pixel matrix with a programmable mask and feeding the resulting signal into the trigger logic allows triggering on a small area and is more flexible than a traditional trigger based on scintillators. To change the trigger definition, all that is needed is to upload a new mask to the device. This turns out to be a useful feature if the prototypes under test cover a small area.
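
The region-of-interest trigger can be pictured as a simple overlay of the pixel hit map with a programmable mask, as in the sketch below. The array dimensions correspond to the FE-I4 pixel matrix, but the code is purely illustrative and does not represent the actual chip firmware or telescope read-out software.

```python
# Conceptual sketch of a mask-based self-trigger: a trigger is issued if
# any pixel inside a programmable region of interest fires. Illustrative
# only; not the actual FE-I4 firmware or DAQ code.
import numpy as np

N_COL, N_ROW = 80, 336          # FE-I4 pixel matrix (columns x rows)

def roi_mask(col_lo, col_hi, row_lo, row_hi):
    """Enable triggering only on a rectangular region of interest."""
    mask = np.zeros((N_COL, N_ROW), dtype=bool)
    mask[col_lo:col_hi, row_lo:row_hi] = True
    return mask

def self_trigger(hit_map, mask):
    """Return True if any hit pixel lies inside the enabled region."""
    return bool(np.any(hit_map & mask))

# Trigger only on a 10 x 10 pixel area matching a small prototype;
# changing the trigger definition just means uploading a different mask.
mask = roi_mask(30, 40, 100, 110)
hits = np.zeros((N_COL, N_ROW), dtype=bool)
hits[35, 105] = True             # a particle hit inside the region
print(self_trigger(hits, mask))  # True
```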

CALICE tungsten calorimeter

Calorimeter development in AIDA WP9 is mainly motivated by experiments at possible future electron–positron colliders, as defined in the International Linear Collider and Compact Linear Collider studies. These will demand extremely high-performance calorimetry, which is best achieved using a finely segmented system that reconstructs events using the so-called particle-flow approach to allow the precise reconstruction of jet energies. The technique works best with an optimal combination of tracking and calorimeter information and has already been applied successfully in the CMS experiment. Reconstructing each particle individually requires fine cell granularity in 3D and has spurred the development of novel detection technologies, such as silicon photo-multipliers (SiPMs) mounted on small scintillator tiles or strips, gaseous detectors (micro mesh or resistive plate chambers) with 2D read-out segmentation and large-area arrays of silicon pads.

After tests of sensors developed by the CALICE collaboration in a tungsten stack at CERN (figure 3) – in particular to verify the neutron and timing response at high energy – the focus is now on the realization of fully technological prototypes. These include the power-pulsed embedded data-acquisition chips required for the particle-flow-optimized detectors at a future linear collider and they address all of the practical challenges of highly granular devices – compactness, integration, cooling and in situ calibration. Six layers (256 channels each) of a fine-granularity (5 × 5 mm²) silicon–tungsten electromagnetic calorimeter are being tested in electron beams at DESY this July (figure 4). At the same time, the commissioning of full-featured scintillator hadron-calorimeter units (140 channels each) is progressing at a steady pace. A precision tungsten structure and read-out chips are also being prepared for the forward calorimeters to test the radiation-hard sensors produced by the FCAL R&D collaboration.

Five scintillator HCAL units

The philosophy behind AIDA is to bring together institutes to solve common problems so that once the problem is solved, the solution can be made available to the entire community. Two years on from the project’s start – and halfway through its four-year lifetime – the highlights described here, from software toolkits to a beam-telescope infrastructure to academia-industry matching, illustrate well the progress that is being made. Ensuring the user support of all equipment in the long term will be the main task in a new proposal to be submitted next year to the EC’s Horizon 2020 programme. New innovative activities to be included will be discussed during the autumn within the community at large.

Machine protection: the key to safe operation

The combination of high intensity and high energy that characterizes the nominal beam in the LHC leads to a stored energy of 362 MJ in each ring. This is more than two orders of magnitude larger than in any previous accelerator – a large step that is highlighted in the comparisons shown in figure 1. An uncontrolled beam loss at the LHC could cause major damage to accelerator equipment. Indeed, recent simulations that couple energy-deposition and hydrodynamic codes show that the nominal LHC beam can drill a hole through the full length of a 20-m-long copper block.
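
That 362 MJ figure follows directly from the nominal beam parameters, namely 2808 bunches of 1.15 × 10¹¹ protons per beam at 7 TeV; the short back-of-the-envelope calculation below reproduces it.

```python
# Back-of-the-envelope check of the nominal LHC stored beam energy.
e_charge = 1.602e-19          # joules per electronvolt
n_bunches = 2808              # nominal number of bunches per beam
protons_per_bunch = 1.15e11   # nominal bunch population
beam_energy_eV = 7e12         # 7 TeV per proton

stored_energy_J = n_bunches * protons_per_bunch * beam_energy_eV * e_charge
print(f"{stored_energy_J / 1e6:.0f} MJ")  # ~362 MJ per beam
```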

Safe operation of the LHC relies on a complex system of equipment protection – the machine protection system (MPS). Early detection of failures within the equipment and active monitoring of the beam parameters with fast and reliable beam instrumentation is required throughout the entire cycle, from injection to collisions. Once a failure is detected the information is transmitted to the beam-interlock system that triggers the LHC beam-dumping system. It is essential that the beams are always properly extracted from the accelerator via a 700-m-long transfer line into large graphite dump blocks, because these are the only elements of the LHC that can withstand the impact of the full beam. Figure 2 shows the simulated impact of a 7 TeV beam on the dump block.

A variety of systems

There are several general requirements for the MPS. Its top priority is to protect the accelerator equipment from beam damage, while its second priority is to prevent the superconducting magnets from quenching. At the same time, it should also protect the beam – that is, the protection systems should dump the beam only when necessary so that the LHC’s availability is not compromised. Lastly, the MPS must provide diagnostic evidence from beam aborts. When there are failures, the so-called post-mortem system provides complete and coherent diagnostics data. These are needed to reconstruct the sequence of events accurately, to understand the root cause of the failure and to assess whether the protection systems functioned correctly.

Protection of the LHC relies on a variety of systems with strong interdependency – these include the collimators and beam-loss monitors (BLMs) and the beam controls, as well as the beam injection, extraction and dumping systems. The strategy for machine protection, which involves all of these, rests on several basic principles:

• Definition of the machine aperture by the collimator jaws, with BLMs close to the collimators and the superconducting magnets. In general, particles lost from the beam will hit collimators first and not delicate equipment such as superconducting magnets or the LHC experiments.

• Early detection of failures within the equipment that controls the beams, to generate a beam-dump request before the beam is affected.

• Active monitoring with fast and reliable beam instrumentation, to detect abnormal beam conditions and rapidly generate a beam-dump request. This can happen within as little as half a turn of the beam round the machine (40 μs); a short worked estimate of this timescale follows the list.

• Reliable transmission of a beam-dump request to the beam-dumping system by a distributed interlock system. Fail-safe logic is used for all interlocks: an active signal is required for operation, so the absence of the signal is treated as a beam-dump request or an injection inhibit.

• Reliable operation of the beam-dumping system on receipt of a dump request or internal-fault detection, to extract the beams safely onto the external dump blocks.

• Passive protection by beam absorbers and collimators for specific failure cases.

• Redundancy in the protection system so that failures can be detected by more than one system. Particularly high standards for safety and reliability are applied in the design of the core protection systems.
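
The half-turn reaction time quoted in the list comes straight from the ring circumference. A minimal estimate, taking the protons as travelling essentially at the speed of light around the 26.7 km ring:

```python
# Rough estimate of the LHC revolution period and the fastest possible
# beam-dump reaction quoted above (about half a turn).
c = 2.998e8            # speed of light, m/s
circumference = 26659  # LHC circumference, m

turn_time = circumference / c                   # ~89 microseconds per revolution
print(f"one turn : {turn_time * 1e6:.0f} us")   # ~89 us
print(f"half turn: {turn_time * 1e6 / 2:.0f} us")  # ~44 us, of the order of the 40 us quoted
```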

Many types of failure are possible with a system as large and complex as the LHC. From the point of view of machine protection, the timescale is one of the most important characteristics of a failure because it determines how the MPS responds.

The fastest and most dangerous failures occur on the timescale of a single turn or less. These events may occur, for example, because of failures during beam injection or beam extraction. The probability of such failures is minimized by designing the systems for high reliability and by interlocking the kicker magnets as soon as they are not needed. However, despite all of these design precautions, failures such as incorrect firing of the kicker magnets at injection or extraction cannot be excluded. In these cases, active protection based on the detection of a fault and an appropriate reaction is not possible, because the failure occurs on a timescale shorter than the minimum time needed to detect it and dump the beam. Protection from these specific failures therefore relies on passive protection with beam absorbers and collimators that must be correctly positioned close to the beam to capture the particles that are deflected accidentally.

Since the injection process is one of the most delicate procedures, a great deal of care has been taken to ensure that only a beam with low intensity – which is highly unlikely to damage equipment – can be injected into an LHC ring where no beam is already circulating. High-intensity beam can be injected only into a ring where a minimum amount of beam is present. This is a guarantee that conditions are acceptable for injection.


The majority of equipment failures, however, lead to beam “instabilities” – i.e. fast movements of the orbit or growth in beam size – that must be detected on a timescale of 1 ms or more. Protection against such events relies on fast monitoring of the beam’s position and of beam loss. The LHC is equipped with around 4000 BLMs distributed along its circumference to protect all elements against excessive beam loss. Equipment monitoring – e.g. quench detectors and monitors for failures of magnet powering – provides redundancy for the most critical failure scenarios.

Last, on the longest timescale there will be unavoidable beam losses around the LHC machine during all of the phases of normal operation. Most of these losses will be captured in the collimation sections, where the beam losses and heat load at collimators are monitored. If the losses or the heat load become unacceptably high, the beam is dumped.

Operational experience

Figure 3 shows the evolution of the peak energy stored in each LHC beam between 2010 and 2012. The 2010 run was the main commissioning and learning year for the LHC and the associated MPSs. Experience had to be gained with all of the MPS sub-systems and thresholds for failure detection – e.g. beam-loss thresholds – had to be adjusted based on operational experience. In the summer of 2010, the LHC was operated at a stored energy of around 1–2 MJ – similar to the level of CERN’s Super Proton Synchrotron and Fermilab’s Tevatron – to gain experience with beams that could already create significant damage. A core team of MPS experts monitored the subsequent intensity ramps closely, with bunch spacings of 150 ns, 75 ns and 50 ns. Checklists were completed for each intensity level to document the subsystem status and to record observations. Approval to proceed to the next intensity stage was given only when all of the issues had been resolved. As experience was gained, the increments in intensity became larger and faster to execute. By mid-2012, a maximum stored energy of 140 MJ had been reached at 4 TeV per beam.

One worry with so many superconducting magnets in the LHC concerned quenches induced by uncontrolled beam losses. However, the rate was difficult to estimate before the machine began operation because it depended on a number of factors, including the performance of the large and complex collimation system. Fortunately, not a single magnet quench was observed during normal operation with circulating beams of 3.5 TeV and 4 TeV. This is a result of the excellent performance of the MPS, the collimation system and the outstanding stability and reproducibility of the machine.

Nevertheless, there were other – unexpected – effects. In the summer of 2010, during the intensity ramp-up to stored energies of 1 MJ, fast beam-loss events with timescales of 1 ms or less were observed for the first time in the LHC’s arcs. It rapidly became evident that dust particles interacting with the beam were responsible, and the events were nicknamed unidentified falling objects (UFOs). The rate of these UFOs increased steadily with beam intensity. Each year, the beams were dumped about 20 times when the losses induced by the interaction of the beams with the dust particles exceeded the loss thresholds. For the LHC injection kickers – where a significant number of UFOs were observed – the dust particles could clearly be identified on the surface of the ceramic vacuum chamber. Kickers with better surface cleanliness will replace the existing kickers during the present long shutdown. Nevertheless, UFOs remain a potential threat to the operational efficiency of the LHC at 7 TeV per beam.


The LHC’s MPS performed remarkably well from 2010 to 2013, thanks to the thoroughness and commitment of the operation crews and the MPS experts. Around 1500 beam dumps were executed correctly above the injection energy. All of the beam-dump events were meticulously analysed and validated by the operation crews and experts. This information has been stored in a knowledge database to assess possible long-term improvements of the machine protection and equipment systems. As experience grew, an increasing number of failures were captured before their effects on the particle beams became visible – i.e. before the beam position changed or beam losses were observed.

During the whole period, no evidence of a major loophole or uncovered risk in the protection architecture was identified, although sometimes unexpected failure modes were identified and mitigated. However, approximately 14% of the 1500 beam dumps were initiated by the failure of an element of the MPS – a “false” dump. So, despite the high dependability of the MPS during these first operational years, it will be essential to remain vigilant in the future as more emphasis is placed on increasing the LHC’s availability for physics.

Steve Myers and the LHC: an unexpected journey


The origins of the LHC trace from the early 1980s, in the days when construction of the tunnel for the Large Electron–Positron (LEP) collider was just getting under way. In 1983, Steve Myers was given an unexpected opportunity to travel to the US and participate in discussions on future proton colliders. He recalls: “None of the more senior accelerator physicists was available, so I got the job.” This journey, it turned out, was to be the start of his long relationship with the LHC.

Myers appreciated the significance for CERN of the discussions in the US: “We knew this was going to be the future competition and I wanted to understand it extremely well.” So he readied himself thoroughly by studying everything on the subject that he could. “With the catalyst that I had to prepare myself for the meeting, I looked at all aspects of it,” he adds. After returning to CERN, he thought about the concept of a proton collider in the LEP tunnel and wrote up his calculations, together with Wolfgang Schnell. “Wolfgang and I had many discussions and then we had a very good paper,” he says.

The paper (LEP Note 440) provided estimates for the design of a proton collider in the LEP tunnel and was the first document to bring all of the ideas together. It raised many of the points that were subsequently part of the LHC design: 8 TeV beam energy, beam–beam limitation (arguing the case for a twin-ring accelerator), twin-bore magnets and the need for magnet development, problems with pile-up (multiple collisions per bunch-crossing) and impedance limitations.


After Myers’ initial investigations, the time was ripe to develop active interest in a future hadron collider at CERN. A dedicated study group was established in late 1983 and the significant Lausanne workshop took place the following year, bringing experimental physicists together with accelerator experts to discuss the feasibility of the potential LHC. Then began the detailed preparation of the project design.

In the meantime in the US, the Superconducting Super Collider (SSC) project had been approved. Myers was on the accelerator physics subcommittee for both of the major US Department of Energy reviews of the SSC, in 1986 and 1990. He recalls that the committee recommended a number of essential improvements to the proposed design specification, which ultimately resulted in spiralling costs, contributing to the eventual cancellation of the project. “The project parameters got changed, the budget went up and they got scrapped in the end.”

The LHC design, being constrained by the size of the LEP tunnel, could not compete with the SSC in terms of energy. Strategically, however, the LHC proposal compensated for the energy difference between the machines by claiming a factor-10 higher luminosity – an argument that was pushed hard by Carlo Rubbia. “We went for 10³⁴ and nobody thought we could do it, including ourselves! But we had to say it, otherwise we weren’t competitive,” Myers says, looking back. It now gives Myers enormous satisfaction to see that the LHC performance in the first run achieved a peak stable luminosity of 7.73 × 10³³ cm⁻² s⁻¹, while running at low energy. He adds confidently: “We will do 10³⁴ and much more.”

The decision to use a twin-ring construction for the LHC was of central importance because separate rings allow the number of bunches in the beam to be increased dramatically. To date, the LHC has been running with 1380 bunches and is designed to use twice that number. For comparison, Myers adds: “The best we ever did with LEP was 16 bunches. The ratio of the number of bunches is effectively the ratio of the luminosities.”

Design details

LEP Note

At CERN, it was difficult to make significant progress with the LHC design while manpower and resources were focused on running LEP. Things took off after the closure of LEP in 2000, when there was a major redeployment of staff onto the LHC project and detailed operational design of the machine got under way. The LHC team, led by Lyn Evans, had three departments headed by Philippe Lebrun (magnets, cryogenics and vacuum), Paulo Ciriani (infrastructure and technical services) and Myers (accelerator physics, beam diagnostics, controls, injection, extraction and beam dump, machine protection, radio frequency and power supplies).

Myers makes a typical understatement when asked about the challenges of managing a project of this size: “You do your planning on a regular basis.” This attitude provides the flexibility to exploit delays in the project in a positive way. “Every cloud has a silver lining,” he comments, illustrating his point with the stark image of thousands of magnets sitting in car parks around CERN. A delay that was caused by bad welds in the cryogenic system gave the magnet evaluation group the benefit of extra time to analyse individual magnet characteristics in detail. The magnets were then situated around the ring so that any higher-order field component in one is compensated by its neighbour, therefore minimizing nonlinear dynamic effects. Myers believes that is one of the reasons the machine has been so forgiving with the beam optics: “You spend millions getting the higher-order fields down, so you don’t have nonlinear motion and what was done by the magnet sorting gained us a significant factor on top of that.”

When asked about the key moments in his journey with the LHC, he is clear: “The big highlight for us is when the beam goes all of the way round both rings. Then you know you’re in business; you know you can do things.” To that end, he paid close attention to the potential showstoppers: “The polarities of thousands of magnets and power supplies had to be checked and we had to make sure there were no obstacles in the path of the beam.” During the phase of systematically evaluating the polarities, it turned out that only about half were right first time. There were systematic problems to correct and even differing wiring conventions to address. In addition, a design fault in more than 3000 plug-in modules meant that they did not expand correctly when the LHC was warmed up. This was a potential source of beam-path obstacles and was methodically fixed. These stories illustrate the high level of attention to detail that was necessary for the successful switch-on of the LHC on 10 September 2008.

The low point of Myers’ experience was, of course, the LHC accident on 19 September 2008, which occurred only a matter of hours after he was nominated director of accelerators and technology. The incident triggered a shutdown of more than a year for repairs and an exhaustive analysis of what had gone wrong. During this time, an unprecedented amount of effort was invested in improvements to quality assurance and machine protection. One of the most important consequences was the development of the state-of-the-art magnet protection system, which is more technically advanced than was possible at the time of the LHC design. The outcome is a machine that is extremely robust and whose behaviour is understood by the operations team.

Steve Myers

In November 2009 the LHC was ready for testing once again. The first task was to ramp up the beam energy from the 0.45 TeV per beam injection energy delivered by the Super Proton Synchrotron. The process is complicated in the early stages by the behaviour of the superconducting magnets but the operations team succeeded in achieving 1.18 TeV per beam and established the LHC as the highest-energy collider ever built. By the end of March 2010, the first collisions at 7 TeV were made and from that point on the aim was to increase the collision rate by introducing more bunches with more protons per bunch and by squeezing the beam tighter at the interaction points. Every stage of this process was meticulously planned and carefully introduced, only going ahead when the machine protection team were completely satisfied.

In November 2009, when the LHC was ready to start up, both the machine and its experiments were thoroughly prepared for the physics programme ahead. The result was a spectacular level of productivity, leading to the series of announcements that culminated in the discovery of a Higgs boson. By the end of 2011 the LHC had surpassed its design luminosity for running with 3.5 TeV beams and the ATLAS and CMS experiments had seen the first hints of a new particle. The excitement was mounting and so was the pressure to generate as much data as possible. At the start of 2012, given that no magnet quenches had occurred while running with 3.5 TeV beams, it was considered safe to increase the beam energy to 4 TeV. With a collision rate of 20 MHz and levels of pile-up reaching 45, the experiments were successfully handling an almost overwhelming amount of data. Myers finds this an amazing achievement: as he says, when the LHC was first proposed “nobody thought we could handle the pile-up”. He views the subsequent discovery announcement at CERN on 4 July 2012 as one of the most exciting moments of his career and, indeed, in the history of particle physics.

Reflecting on his journey with the LHC, Myers is keen to emphasize the importance of the people involved in its development, as well as the historical context in which it happened. In his early days at CERN in the 1970s, he was working with the Intersecting Storage Rings (ISR), which he calls “one of the best machines of its time”. As a result, “I knew protons extremely well,” he says. The experience he gained in those years has, in turn, contributed to his work on the LHC.

In the following years of building and operating LEP – as the world’s largest accelerator – many young engineers developed their expertise, just as Myers had on the ISR. “I think that’s why it worked so well,” he says, “because these guys came in as young graduates, not knowing anything about accelerators and we trained them all and they became the real experts, in the same way as I did on the ISR.” He sums up the value of this continuum of young people coming into CERN and becoming the next generation of experts: “That for me is what CERN is all about.”

The collimation system: defence against beam loss

Multi-stage cleaning

Ideally, a storage ring like the LHC would never lose particles: the beam lifetime would be infinite. However, a number of processes will always lead to losses from the beam. The manipulations needed to prepare the beams for collision – such as injection, the energy ramp and the “squeeze” – all entail unavoidable beam losses, as do the all-important collisions for physics. These losses generally become greater as the beam current and the luminosity are increased. In addition, the LHC’s superconducting environment demands efficient beam-loss cleaning to avoid quenches from uncontrolled losses – the nominal stored beam energy of 362 MJ is more than a billion times larger than the typical quench limits.

The tight control of beam losses is the main purpose of the collimation system. Movable collimators define aperture restrictions for the circulating beam and intercept particles on large-amplitude trajectories that would otherwise be lost in the magnets. The collimators therefore represent the LHC’s defence against unavoidable beam losses. Their primary role is to clean away the beam halo while maintaining losses at sensitive locations below safe limits: the system is designed to ensure that the fraction of the energy lost from the beam that is deposited in the cold magnets stays below a few 0.01%. As the closest elements to the circulating beams, the collimators provide passive machine protection against irregular fast losses and failures. They also control the distribution of losses around the ring by ensuring that the largest activation occurs at optimized locations. Collimators are also used to minimize background in the experiments.

The LHC collimation system provides multi-stage cleaning where primary, secondary and tertiary collimators and absorbers are used to reduce the population of halo particles to tolerable levels (figure 1). Robust carbon-based and non-robust but high-absorption metallic materials are used for different purposes. Collimators are installed around the LHC in seven out of the eight insertion regions (between the arcs), at optimal longitudinal positions and for various transverse rotation angles. The collimator jaws are set at different distances from the circulating beams, respecting the optimum setting hierarchy required to ensure that the system provides the required cleaning and protection functionalities.


The detailed system design was the outcome of a multi-parameter optimization that took into account nuclear-physics processes in the jaws, robustness against the worst anticipated beam accidents, collimation-cleaning efficiency, radiation impact and machine impedance. The result is the largest and most advanced cleaning system ever built for a particle accelerator. It consists of 84 two-sided movable collimators of various designs and materials. Including the injection-protection collimators, there are a total of 396 degrees of freedom, because each collimator jaw is driven by two stepping motors. By contrast, the collimation system of the Tevatron at Fermilab had fewer than 30 degrees of freedom for collimator positions.
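
The degrees-of-freedom count can be reconstructed from the numbers above. The short bookkeeping sketch below assumes four motor axes per collimator (two jaws, two motors each); on that assumption the 84 ring collimators account for 336 axes and the remaining 60 correspond to the injection-protection devices.

```python
# Bookkeeping for the collimator degrees of freedom quoted above.
motors_per_jaw = 2
jaws_per_collimator = 2
axes_per_collimator = motors_per_jaw * jaws_per_collimator  # 4

ring_collimators = 84
total_axes = 396  # quoted total, including injection-protection collimators

ring_axes = ring_collimators * axes_per_collimator   # 336
injection_axes = total_axes - ring_axes              # 60
print(ring_axes, injection_axes, injection_axes // axes_per_collimator)
# 336 axes from the ring collimators, 60 more from ~15 injection-protection devices
```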

The design was optimized using state-of-the-art numerical-simulation programs. These were based on a detailed model of all of the magnetic elements for particle tracking and the vacuum-pipe apertures, with a longitudinal resolution of 0.1 m along the 27-km-long rings. They also involved routines for proton-halo generation and transport, as well as aperture checks and proton–matter interactions. These simulations require high statistics to achieve accurate estimates of collimation cleaning. A typical simulation run involves tracking some 20–60 million primary halo protons for 200 LHC turns – equivalent to monitoring a single proton travelling a distance of 0.03 light-years. Several runs are needed to study the system in different conditions. Additional complex energy-deposition and thermo-mechanical finite-element computations are then used to establish heat loads in magnets, radiation doses and collimator structural behaviour for various loss scenarios. Such a highly demanding simulation process was possible only as a result of the computing power developed over recent years.
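
The light-year comparison checks out with a line of arithmetic, taking the upper figure of 60 million protons and the 26.7 km circumference:

```python
# Total path length tracked in a typical collimation simulation run.
protons = 60e6          # upper end of the 20-60 million quoted above
turns = 200             # LHC turns tracked per proton
circumference_km = 26.7

light_year_km = 9.46e12
path_km = protons * turns * circumference_km
print(f"{path_km / light_year_km:.2f} light-years")  # ~0.03
```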

The backbone of the collimation system is located at two warm insertion regions (IRs): the momentum cleaning at IR3 and betatron cleaning at IR7, which comprise 9 and 19 movable collimators per beam, respectively. Robust primary and secondary collimators made of a carbon-fibre composite define the momentum and betatron cuts for the beam halo. In 2012, in IR7 they were at ±4.3–6.3σ (with σ being the nominal standard deviation of the beam profile in the transverse plane) from the circulating 140 MJ beams, which passed through collimator apertures as small as 2.1 mm at a rate of around 11,000 times per second.

Additional tungsten absorbers protect the superconducting magnets downstream of the warm insertions. While these are more efficient in catching hadronic and electromagnetic showers, they are also more fragile against beam losses, so they are retracted further from the beam orbit. Further local protection is provided for the experiments in IR1, IR2, IR5 and IR8: tungsten collimators shield the inner triplet magnets that otherwise would be exposed to beam losses because they are the magnets with the tightest aperture restrictions in the LHC in collision conditions. Injection and dump protection elements are installed in IR2, IR8 and IR6. The collimation system must provide continuous cleaning and protection during all stages of beam operation: injection, ramp, squeeze and physics.

An LHC collimator

An LHC collimator consists of two jaws that define a slit for the beam, effectively constraining the beam halo from both sides (figure 2). These jaws are enclosed in a vacuum tank that can be rotated in the transverse plane to intercept the halo, whether it is horizontal, vertical or skew. Precise sensors monitor the jaw positions and collimator gaps. Temperature sensors are also mounted on the jaws. All of these critical parameters are connected to the beam-interlock system and trigger a beam dump if potentially dangerous conditions are detected.

At the LHC’s top energy, a beam size of less than 200 μm requires that the collimators act as high-precision devices. The correct system functionality relies on establishing the collimator hierarchy with position accuracies to within a fraction of the beam size. Collimation movements around the ring must also be synchronized to within better than 20 ms to achieve good relative positioning of devices during transient phases of the operational cycle. A unique feature of the control system is that the stepping motors can be driven according to arbitrary functions of time, synchronously with other accelerator systems such as power converters and radio-frequency cavities during ramp and squeeze.

These requirements place unprecedented constraints on the mechanical design, which is optimized to ensure good flatness along the 1-m-long jaw, even under extreme conditions. Extensive measurements were performed during prototyping and production, both for quality assurance and to obtain all of the required position calibrations. The collimator design has the critical feature that it is possible to measure a gap outside the beam vacuum that is directly related to the collimation gap seen by the beam. Some non-conformities in jaw flatness could not be avoided and were addressed by installing the affected jaws at locations of larger β functions (therefore larger beam size), in a way that is not critical for the overall performance.

Set-up and performance

The first step in collimation set-up is to adjust the collimators to the stored beam position. There are unavoidable uncertainties in the beam orbit and collimator alignment in the tunnel, so a beam-based alignment procedure has been established to set the jaws precisely around the beam orbit. The primary collimators are used to create reference cuts in phase space. Then all other jaws are moved symmetrically round the beam until they touch the reference beam halo. The results of this halo-based set-up provide information on the beam positions and sizes at each collimator. The theoretical target settings for the various collimators are determined from simulations to protect the available machine aperture. The beam-based alignment results are then used to generate appropriate setting functions for the collimator positions throughout the operational cycle. For each LHC fill, the system requires some 450 setting functions versus time, 1200 discrete set points and about 10,000 critical threshold settings versus time. Another 600 functions are used as redundant gap thresholds for different beam energies and optics configurations.
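
The halo-touching procedure can be summarized algorithmically. The sketch below is a simplified illustration, not the operational software: each jaw is stepped towards the beam until the downstream beam-loss monitor records a spike, and the two touch positions give the local beam centre and an effective half-gap. The interfaces move_jaw and read_blm are hypothetical placeholders for the real motor-control and BLM read-out.

```python
def align_collimator(move_jaw, read_blm, step=0.01, loss_threshold=1e-3):
    """Toy beam-based alignment of one collimator.

    move_jaw(side, step) and read_blm() stand in for the real motor-control
    and beam-loss-monitor interfaces (hypothetical signatures). Each jaw is
    stepped towards the beam until the local BLM records a loss spike, i.e.
    the jaw grazes the reference halo; it is then retracted by one step.
    """
    touch = {}
    for side in ("left", "right"):
        position = 0.0
        while read_blm() < loss_threshold:      # no halo touched yet
            position = move_jaw(side, step)     # advance this jaw by one step
        touch[side] = position                  # jaw position at the halo edge
        move_jaw(side, -step)                   # retract to stop the losses
    centre = 0.5 * (touch["left"] + touch["right"])
    half_gap = 0.5 * abs(touch["left"] - touch["right"])
    return centre, half_gap
```

In reality the sequencing, sign conventions and loss validation are considerably more involved, but the principle of inferring the beam position and size from the two touch points is the same.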

This complex system worked well during the first LHC operation, with a minimal number of false errors and failures, showing that the choice of hardware and controls is fully appropriate for the challenging accelerator environment at the LHC. Collimator alignment and the handling of complex settings have always been major concerns for the operation of the large and distributed LHC collimation system. The experience accumulated in the first run indicates that these critical aspects have been addressed successfully.

Beam losses

The effect of the cleaning provided by the LHC collimation system is always visible in the control room. Unavoidable beam losses occur continuously at the primary collimators and can be observed online by the operations team as the largest loss spikes on the fixed display showing the beam losses around the ring. The local leakage to cold magnets is in most cases below 10⁻⁵ of the peak losses, with a few isolated loss locations around IR7 where the leakage reaches levels up to a few 10⁻⁴ (figure 3). So far, this excellent performance has ensured quench-free operation, even in cases of extreme beam losses from circulating beams. Moreover, this was achieved throughout the year with only one collimator alignment in IR3 and IR7, thanks to the remarkable stability of the machine and of the collimator settings.

However, collimators in the interaction regions required regular setting up for each new machine configuration that was requested for the experiments. Eighteen of these collimators are being upgraded in the current long shutdown to reduce the time spent on alignment: the new tertiary collimator design has integrated beam-position monitors to enable a fast alignment without dedicated beam-based alignment fills. This upgrade will also eventually contribute to improving the peak luminosity performance by reducing further the colliding beam sizes, thanks to better control of the beam orbit next to the inner triplet.

The LHC collimation-system performance is validated after set-up with provoked beam losses, induced by deliberately driving transverse beam instabilities. Beam-loss monitors then record data at 3600 locations around the ring. As these losses occur under controlled conditions, they can be compared in detail with simulations. As predicted, performance is limited by a few isolated loss locations, namely the IR7 dispersion-suppressor magnets, which catch particles that have lost energy through single diffractive scattering at the primary collimators. This limitation of the system will be addressed in future upgrades, in particular for the High Luminosity LHC era.

The first three-year operational run has shown that the LHC’s precise and complex collimation system works at the expected high performance, reaching unprecedented levels of cleaning efficiency. The system has shown excellent stability: the machine was regularly operated with stored beam energies of more than 140 MJ, with no loss-induced quenches of superconducting magnets. This excellent performance was among the major contributors to the rapid commissioning of high-intensity beams at the LHC as well as to the squeezing of 4 TeV beams to 60 cm at collision points – a crucial aspect of the successful operation in 2012 that led to the discovery of a Higgs boson.

• The success of the collimation system during the first years of LHC operation was the result of the efforts of the many motivated people involved in this project from different CERN departments and from external collaborators. All of these people, and Ralph Assmann who led the project until 2012, are gratefully acknowledged.

Safeguarding the superconducting magnets

The total electromagnetic energy stored in the LHC superconducting magnets is about 10,000 MJ, which is more than an order of magnitude greater than in the nominal stored beams. Any uncontrolled release of this energy presents a danger to the machine. One way in which this can occur is through a magnet quench, so the LHC employs a sophisticated system to detect quenches and protect against their harmful effects.

The magnets of the LHC are superconducting if the temperature, the applied magnetic induction and the current density are below a critical set of interdependent values – the critical surface (figure 1). A quench occurs if the limits of the critical surface are exceeded locally and the affected section of magnet coil changes from a superconducting to a normal conducting state. The resulting drastic increase in electrical resistivity causes Joule heating, further increasing the temperature and spreading the normal conducting zone through the magnet.

An uncontrolled quench poses a number of threats to a superconducting magnet and its surroundings. High temperatures can destroy the insulation material or even result in a meltdown of superconducting cable: the energy stored in one dipole magnet can melt up to 14 kg of cable. Excessive voltages can cause electrical discharges that could damage the magnet further. In addition, high Lorentz forces and temperature gradients can cause large variations in stress and irreversible degradation of the superconducting material, resulting in a permanent reduction of its current-carrying capability.

The LHC main superconducting dipole magnets achieve magnetic fields of more than 8 T. There are 1232 main bending dipole magnets, each 15 m long, that produce the required curvature for proton beams with energies up to 7 TeV. Both the main dipole and the quadrupole magnets in each of the eight sectors of the LHC are powered in series. Each main dipole circuit includes 154 magnets, while the quadrupole circuits consist of 47 or 51 magnets, depending on the sector. All superconducting components, including bus-bars and current leads as well as the magnet coils, are vulnerable to quenching under adverse conditions.

The LHC employs sophisticated magnet protection, the so-called quench-protection system (QPS), both to safeguard the magnetic circuits and to maximize beam availability. The effectiveness of the magnet-protection system is dependent on the timely detection of a quench, followed by a beam dump and rapid disconnection of the power converter and current extraction from the affected magnetic circuit. The current decay rate is determined by the inductance, L, and resistance, R, of the resulting isolated circuit, with a discharge time constant of τ = L/R. For the purposes of magnet protection, reducing the current discharge time can be viewed as equivalent to the extraction and dissipation of stored magnetic energy. This is achieved by increasing the resistance of both the magnet and its associated circuit.
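
For orientation, the time constant τ = L/R can be put into numbers for a main dipole circuit. The values below are round figures assumed for this sketch (roughly 0.1 H per dipole and an effective extraction resistance of order 0.1 Ω), not official machine parameters; they nevertheless reproduce the minutes-long current decay described later in this article.

```python
# Illustrative discharge-time estimate for a main dipole circuit, tau = L / R.
# The inductance and dump-resistance values are round numbers assumed for
# this sketch, not official machine parameters.
magnets_in_chain = 154
inductance_per_dipole = 0.1     # henry (approximate)
dump_resistance = 0.15          # ohm (assumed effective extraction resistance)

L = magnets_in_chain * inductance_per_dipole   # ~15 H for the whole circuit
tau = L / dump_resistance
print(f"tau ~ {tau:.0f} s")  # of the order of 100 s, i.e. a minutes-long current decay
```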

Additional resistance in the magnet is created by using quench heaters to heat up large fractions of the coil and spread the quench over the entire magnet. This dissipates the stored magnetic energy over a larger volume and results in lower hot-spot temperatures. The resistance in the circuit is increased by switching-in a dump resistor, which extracts energy from the circuit (figure 2). As soon as one magnet quenches, the dump resistor is used to extract the current from the chain. The size of the resistor is chosen such that the current does not decrease so quickly as to induce large eddy-current losses, which would cause further magnets in the chain to quench.

Detection and mitigation

A quench in the LHC is detected by monitoring the resistive voltage across the magnet, which rises as the quench appears and propagates. However, the total measured voltage also includes the inductive-voltage component, which is driven by the magnet current ramping up or down. Reliably extracting the resistive-voltage signal from the total voltage measurement is done using detection systems with inductive-voltage compensation. In the case of fast-ramping corrector magnets with large inductive voltages, it is more difficult to detect a resistive voltage because of the low signal-to-noise ratio; higher threshold voltages have to be used and a quench is therefore detected later. Following the detection and validation of a quench, the beam is aborted and the power converter is switched off. The time between the start of a quench and quench validation (i.e. the activation of the beam and powering interlocks) must be independent of the selected method of protection.
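
Schematically, the compensation amounts to subtracting the expected inductive term L·dI/dt from the measured coil voltage and flagging a quench only when the residual, resistive part stays above a threshold for a validation time. A minimal sketch follows; the threshold and validation-time values are invented for illustration.

```python
def quench_detected(u_measured, current, dt, inductance,
                    threshold_V=0.1, validation_time=0.01):
    """Toy inductive-voltage compensation over sampled data.

    u_measured and current are equal-length lists sampled every dt seconds;
    threshold_V and validation_time are illustrative values only.
    """
    above_for = 0.0
    for i in range(1, len(u_measured)):
        di_dt = (current[i] - current[i - 1]) / dt
        u_resistive = u_measured[i] - inductance * di_dt  # remove the L*dI/dt term
        if u_resistive > threshold_V:
            above_for += dt
            if above_for >= validation_time:   # must persist to be validated
                return True
        else:
            above_for = 0.0
    return False
```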

Creating a parallel path to the magnet via a diode allows the circuit current to by-pass the quenching magnet (figure 2). As soon as the increasing voltage over the quenched coil reaches the threshold voltage of the diode, the current starts to transfer into the diode. The magnet is by-passed by its diode and discharges independently. The diode must withstand the radiation environment, carry the current of the magnet chain for a sufficient time and have a turn-on voltage high enough not to conduct during the ramp-up of the current. The LHC’s main magnets use cold diodes, which are mounted within the cryostat. These have a significantly larger threshold voltage than diodes that operate at room temperature – but the threshold can be reached sooner if quench heaters are fired.

The sequence of events following quench detection and validation can be summarized as follows:

• 1. The beam is dumped and the power converter turned off.

• 2. The quench-heaters are triggered and the dump-resistor is switched-in.

• 3. The current transfers into the dump resistor and starts to decrease.

• 4. Once the quench heaters take effect, the voltage over the quenched magnet rises and switches on the cold diode.

• 5. The magnet is now by-passed in the chain and discharges over its internal resistance.

• 6. The cold diode heats up and the forward voltage decreases.

• 7. The decreasing current induces eddy-current losses in the magnet windings, which enhance the quench propagation.

• 8. The current of the quenched magnet transfers fully into the cold diode.

• 9. The magnet chain is completely switched off a few hundred seconds after the quench detection.

QPS in practice

The QPS must combine high reliability with high LHC beam availability. Satisfying these conflicting requirements requires careful design to optimize the sensitivity of the system. While failure to detect and control a quench can clearly have a significant impact on the integrity of the accelerator, QPS settings that are too tight may increase the number of false triggers significantly. As well as causing additional downtime of the machine, false triggers – which can result from electromagnetic perturbations, such as network glitches and thunderstorms – can contribute to the deterioration of the magnets and quench heaters by subjecting them to unnecessary spurious quenches and fast de-excitation.

One of the important challenges for the QPS is coping with the conditions experienced during a fast power abort (FPA) following quench validation. Switching off the power converter and activating the energy extraction to the dump resistors causes electromagnetic transients and high voltages. The sensitivity of the QPS to spurious triggers from electromagnetic transients caused a number of multiple-magnet quench events in 2010 (figure 3). Following simulation studies of transient behaviour, a series of modifications were implemented to reduce the transient signals from a FPA. A delay was introduced between switching off the power converter and switching-in the dump resistors, with “snubber” capacitors installed in parallel to the switches to reduce electrical arcing and related transient voltage waves in the circuit (these are not shown in figure 2). These improvements resulted in a radically reduced number of spurious quenches in 2011 – only one such quench was recorded, in a single magnet, and this was probably due to an energetic neutron, a so-called “single-event upset” (SEU). The reduction in falsely triggered quenches between 2010 and 2011 was the most significant improvement in the QPS performance and impacted directly on the decision to increase the beam energy to 4 TeV in 2012.


To date, there have been no beam-induced quenches with circulating beams above injection current. This operational experience shows that the beam-loss monitor thresholds are low enough to cause a beam dump before beam losses cause a quench. However, the QPS had to act on several occasions in the event of real quenches in the bus-bars and current leads, demonstrating real protection in operation. The robustness of the system was evident on 18 August 2011 when the LHC experienced a total loss of power at a critical moment for the magnet circuits. At the time, the machine was ramping up and close to maximum magnet current with high beam intensity: no magnet tripped and no quenches occurred.

A persistent issue for the vast and complex electronics systems used in the QPS is exposure to radiation. In 2012, some of the radiation-to-electronics problems were partly mitigated by the development of electronics that are more tolerant of radiation. The number of trips per inverse femtobarn owing to SEUs was reduced by about 60% from 2011 to 2012, thanks to additional shielding and firmware upgrades. The downtime from trips is also being addressed by automating the power cycling to reset electronics after an SEU. While most of the radiation-induced faults are transparent to LHC operation, the number of beam dumps caused by false triggers remains an issue. Future LHC operation will require improvements in radiation-tolerant electronics, coupled with a programme of replacement where necessary.

Future operation

During the LHC run in 2010 and 2011 with a beam energy of 3.5 TeV, the normal operational parameters of the dipole magnets were well below the critical surface required for superconductivity. The main dipoles operated at about 6 kA and 4.2 T, while the critical current at this field is about 35 kA, resulting in a safe temperature margin of 4.9 K. However, this margin will shrink to 1.4 K for future LHC operation at 7 TeV per beam. The QPS must therefore be prepared for operation with tighter margins. Moreover, at higher beam energy quench events will be considerably larger, involving up to 10 times more magnetic energy. This will result in longer recovery times for the cryogenic system. There is also a higher likelihood of beam-induced quench events and of quenches induced by conditions such as faster ramp rates and FPAs.

The successful implementation of magnet protection depends on a high-performance control and data acquisition system, automated software analysis tools and highly trained personnel for technical interventions. These have all contributed to the very good performance during 2010–2013. The operational experience gained during this first long run will allow the QPS to meet the challenges of the next run.

The challenge of keeping cool

Distribution of the cryoplants

The LHC is one of the coldest places on Earth, with superconducting magnets – the key defining feature – that operate at 1.9 K. While there might be colder places in other laboratories, none compares to the LHC’s scale and complexity. The cryogenic system that provides the cooling for the superconducting magnets, with their total cold mass of 36,000 tonnes, is the largest and most advanced of its kind. It has been running continuously at some level since January 2007, providing stalwart service and achieving an availability equivalent to more than 99% per cryogenic plant.

The task of keeping the 27-km-long collider at 1.9 K is performed by helium that is cooled to its superfluid state in a huge refrigeration system. While the niobium-titanium alloy in the magnet coils would be superconducting if normal liquid helium were used as the coolant, the performance of the magnets is greatly enhanced by lowering their operating temperature and by taking advantage of the unique properties of superfluid helium. At atmospheric pressure, helium gas liquefies at around 4.2 K but on further cooling it undergoes a second phase change at about 2.17 K and becomes a superfluid. Among many remarkable properties, superfluid helium has a high thermal conductivity, which makes it the coolant of choice for the refrigeration and stabilization of large superconducting systems.

The LHC consists of eight 3.3-km-long sectors, with access shafts to surface services at the ends of each sector. Five of these sites are used to locate the eight separate cryogenic plants, each dedicated to serving one sector (figure 1). An individual cryoplant consists of a pair of refrigeration units: one, the 4.5 K refrigerator, provides a cooling capacity equivalent to 18 kW at 4.5 K; the other, the 1.8 K refrigeration unit, provides a further cooling capacity of 2.4 kW at 1.8 K. Each of the eight cryoplants must therefore distribute and recover kilowatts of refrigeration across a distance of 3.3 km, with a temperature change of less than 0.1 K.


Four of the 4.5 K refrigerators were recovered from the second phase of the Large Electron–Positron collider (LEP2), where they were used to cool its superconducting radiofrequency cavities. These “recycled” units have been upgraded to operate on the LHC sectors that have a lower demand for refrigeration. The four high-load sectors are instead cooled by new 4.5 K refrigerators. The refrigeration capacity needed to cool the 4500 tonnes of material in each sector of the LHC is enormous and can be produced only by using liquid nitrogen. Consequently, each 4.5 K refrigerator is equipped with a 600-kW liquid-nitrogen pre-cooler. This is used to cool a flow of helium down to 80 K while the corresponding sector is cooled before being filled with helium – a procedure that takes just under a month. Using only helium in the tunnel considerably reduces the risk of oxygen deficiency in the case of an accidental release.

The 4.5 K refrigeration system works by first compressing the helium gas and then allowing it to expand. During expansion it cools by losing energy through mechanical turbo-expanders that run at up to 140,000 rpm on helium-gas bearings. Each of the refrigerators consists of a helium-compressor station equipped with systems to remove oil and water, as well as a vacuum-insulated cold box (60 tonnes) where the helium is cooled, purified and liquefied. The compressor station supplies compressed helium gas at 20 bar and room temperature. The cold box houses the heat exchangers and turbo-expanders that provide the cooling capacities necessary to liquefy the helium at 4.5 K. The liquid helium then passes to the 1.8 K refrigeration unit, where the cold-compressor train decreases its saturation pressure and consequently its saturation temperature down to 1.8 K. Each cryoplant is equipped with a fully automatic process-control system that manages about 1000 inlets and outlets per plant. The system takes a total electrical input power of 32 MW and reaches an equivalent cooling capacity of 144 kW at 4.5 K – enough to provide almost 40,000 litres of liquid helium per hour.
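
These figures give a feel for the thermodynamic cost of refrigeration at 4.5 K. A simple Carnot comparison, an idealization that assumes heat rejection to a 300 K environment, suggests the plants operate at roughly a third of the theoretical limit, which is the kind of figure expected for large helium refrigerators.

```python
# Carnot-limit comparison for the 4.5 K refrigeration quoted above.
T_cold, T_warm = 4.5, 300.0          # kelvin (room-temperature environment assumed)
cooling_capacity = 144e3             # W at 4.5 K (8 plants x 18 kW)
electrical_input = 32e6              # W total, as quoted

carnot_work = cooling_capacity * (T_warm - T_cold) / T_cold   # ideal input power
print(f"ideal input: {carnot_work / 1e6:.1f} MW")             # ~9.5 MW
print(f"efficiency : {carnot_work / electrical_input:.0%} of Carnot")  # ~30%
```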

The compressor unit

In the LHC tunnel, a cryogenic distribution line runs alongside the machine. It consists of eight continuous cryostats, each about 3.2 km long and housing four (or five) headers to supply and recover helium, with temperatures ranging from 4 K to 75 K. A total of 310 service modules of 44 different types feed the machine. These contain sub-cooling heat exchangers, all of the cryogenic control valves for the local cooling loops and 1–2 cold pressure-relief valves that protect the magnet cold masses, as well as monitoring and control instrumentation. Overall, the LHC cryogenic system contains about 60,000 inlets and outlets, which are managed by 120 industrial-process logic controllers that implement more than 4000 PID control loops.
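
Each of those PID loops does conceptually the same job: compare a measured quantity (a temperature, pressure or liquid-helium level) with its set point and adjust an actuator such as a valve. The sketch below is a generic textbook PID controller in Python, with invented gains and set point; the real loops run on the industrial controllers mentioned above.

```python
class PID:
    """Generic PID controller of the kind used in cryogenic control loops
    (illustrative gains and set point; not the actual LHC parameters)."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.0, setpoint=1.9):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # In practice the output would be clipped to the physical valve range.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a fictitious magnet-bath temperature towards 1.9 K.
controller = PID(setpoint=1.9)
correction = controller.update(measurement=1.92, dt=1.0)  # drives the valve setting
```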

Operational aspects

The structure of the group involved with the operation of the LHC’s cryogenics has evolved naturally since the installation phase, so maintaining experience and expertise. Each cryogenically independent sector of the LHC and its pair of refrigerators is managed by its own dedicated team for process control and operational procedures. In addition, there are three support teams for mechanics, electricity-instrumentation controls and metrology instrumentation. A further team handles scheduling, maintenance and logistics, including cryogen distribution. Continuous monitoring and technical support is provided by personnel who are on shift “24/7” in the CERN Control Centre and on standby duties. This constant supervision is necessary because any loss of availability for the cryogenic system impacts directly on the availability of the accelerator. Furthermore, the response to cryogenic failures must be rapid to mitigate the consequences of loss of cooling.

In developing a strategy for operating the LHC it was necessary to define the overall availability criteria. Rather than using every temperature sensor or liquid-helium level as a separate interlock to the magnet powering and therefore the beam permit, it made more sense to organize the information according to the modularity of the magnet-powering system. As a result, each magnet-powering subsector is attributed a pair of cryogenic signals: “cryo-maintain” (CM) and “cryo-start” (CS). The CM signal corresponds to any condition that requires a slow discharge of the magnets concerned, while the CS signal has more stringent conditions to enable powering to take place with sufficient margins for a smooth transition to the CM threshold. A global CM signal is defined as the combination of all of the required conditions for the eight sectors. This determines the overall availability of the LHC cryogenics.
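
In logical terms, the global condition is simply the conjunction of the per-subsector conditions. The toy sketch below illustrates this; the subsector names and their states are invented for the example.

```python
def cryo_maintain(subsector_conditions):
    """Global 'cryo-maintain': every magnet-powering subsector must satisfy
    its own CM condition, otherwise magnet powering cannot be sustained."""
    return all(subsector_conditions.values())

# Toy example with invented subsector states: one subsector out of range
# brings the global CM signal down and a slow discharge is required.
conditions = {"S12": True, "S23": True, "S34": False, "S45": True,
              "S56": True, "S67": True, "S78": True, "S81": True}
print(cryo_maintain(conditions))  # False -> powering must be stopped
```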

During the first LHC beams in 2009, the system immediately delivered an availability of 90%, despite there being no means of dealing quickly with identified faults. These were corrected whenever possible during the routine technical stops of the accelerator and the end-of-year stops. The main issues resolved during this phase were the elimination of two air leaks in sub-atmospheric circuits, the consolidation of all 1200 cooling valves for the current leads, and the consolidation of the 1200 electronic cards for temperature sensors that were particularly affected by energetic-neutron impacts – so-called single-event upsets (SEUs).

Availability of the cryogenic system 2010–2012.

Since early operation for physics began in November 2009, the availability has been above 90% for more than 260 days per year. A substantial improvement occurred in 2012–2013 because of progress in the operation of the cryogenic system. The operation team undertook appropriate training that included the evaluation and optimization of operation settings. There were major improvements in handling utilities-induced failures. In particular, in the case of electrical-network glitches, fine-tuning the tolerance thresholds for the helium compressors and cooling-water stations represented half of the gain. A reduction in the time taken to recover nominal cryogenic conditions after failures also led to improved availability. The progress made during the past three years led to a reduction in the number of short stops, i.e. less than eight hours, from 140 to 81 per year. By 2012, the efforts of the operation and support teams had resulted in a global availability of 94.8%, corresponding to an equivalent availability of more than 99.3% for each of the eight cryogenically independent sectors.
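
The two availability figures quoted above are mutually consistent: treating the eight cryogenically independent sectors as failing independently, a per-sector availability of 99.3% compounds to roughly the quoted global value.

```python
# Consistency check between the per-sector and global availability figures.
per_sector = 0.993
sectors = 8
global_availability = per_sector ** sectors   # assumes independent downtimes
print(f"{global_availability:.1%}")           # ~94.5%, close to the quoted 94.8%
```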

In addition, the requirement to undertake an energy-saving programme contributed significantly to the improved availability and efficiency of the cryogenic system – and resulted in a direct saving of SwFr3 million a year. Efforts to improve efficiency have also focused on the consumption of helium. The overall LHC inventory comes to 136 tonnes of helium, with an additional 15 tonnes held as strategic storage to cope with urgent situations during operation. For 2010 and 2011, the overall losses remained high because of increased losses from the newly commissioned storage tanks during the first end-of-year technical stop. However, the operational losses were substantially reduced in 2011. Then, in 2012, the combination of a massive campaign to localize all detectable leaks – combined with the reduced operational losses – led to a dramatic improvement in the overall figure, nearly halving the losses.

Towards the next run

Thanks to the early consolidation work already performed while ramping up the LHC luminosity, no significant changes are being implemented to the cryogenic system during the first long shutdown (LS1) of the LHC. However, because it has been operating continuously since 2007, a full preventive-maintenance plan is taking place. A major overhaul of helium compressors and motors is being undertaken at the manufacturers’ premises. The acquisition of important spares for critical rotating machinery is already completed. Specific electronic units will be upgraded or relocated to cope with future radiation levels. In addition, identified leaks in the system must be repaired. The consolidation of the magnet interconnections – including the interface with the current leads – together with relocation of electronics to limit SEUs, will require a complete re-commissioning effort before cool-down for the next run.

The scheduled consolidation work – together with lessons learnt from the operational experience so far – will be key factors for the cryogenic system to maintain its high level of performance under future conditions at the LHC. The successful systematic approach to operations will continue when the LHC restarts at close to nominal beam energy and intensity. With greater heat loads corresponding to increased beam parameters and magnet currents, expectations are high that the cryogenic system will meet the challenge.
