

Participants and spectators at the heavy-ion fireball


Heavy-ion collisions are used at CERN and other laboratories to re-create conditions of high temperature and high energy density, similar to those that must have characterized the first instants of the universe, after the Big Bang. Yet heavy-ion collisions are not all equal. Because heavy ions are extended objects, the system created in a central head-on collision is different from that created in a peripheral collision, where the nuclei just graze each other. Measuring just how central such collisions are at the LHC is an important part of the studies by the ALICE experiment, which specializes in heavy-ion physics. The centrality determination provides a tool to compare ALICE measurements with those of other experiments and with theoretical calculations.

Centrality is a key parameter in the study of the properties of QCD matter at extreme temperature and energy density because it is related directly to the initial overlap region of the colliding nuclei. Geometrically, it is defined by the impact parameter, b – the distance between the centres of the two colliding nuclei in a plane transverse to the collision axis (figure 1). Centrality is thus related to the fraction of the geometrical cross-section that overlaps, which is proportional to πb²/π(2RA)², where RA is the nuclear radius. It is customary in heavy-ion physics to characterize the centrality of a collision in terms of the number of participants (Npart), i.e. the number of nucleons that undergo at least one collision, or in terms of the number of binary collisions among nucleons from the two nuclei (Ncoll). The nucleons that do not participate in any collision – the spectators – essentially keep travelling undeflected, close to the beam direction.
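
Written out explicitly (a purely geometric estimate restating the proportionality above, not the experimental definition used later in the article), the centrality percentile c of a collision at impact parameter b is

```latex
c(b) \;\approx\; \frac{\pi b^{2}}{\pi (2R_{A})^{2}} \;=\; \left(\frac{b}{2R_{A}}\right)^{2},
\qquad 0 \le b \lesssim 2R_{A},
```

so that c is close to 0 for head-on collisions and approaches 1 when the nuclei barely graze each other (R_A is the nuclear radius, as above).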


However, neither the impact parameter nor the number of participants, spectators or nucleon–nucleon collisions is directly measurable. This means that experimental observables are needed that can be related to these geometrical quantities. One such observable is the multiplicity of the particles produced in the collision in a given rapidity range around mid-rapidity; this multiplicity increases monotonically as the impact parameter decreases, i.e. for more central collisions. A second useful observable is the energy carried by the spectators close to the beam direction and deposited – in the case of the ALICE experiment – in the Zero Degree Calorimeter (ZDC); this decreases for more central collisions, as shown in the upper part of figure 2.

Experimentally, centrality is expressed as a percentage of the total nuclear interaction cross-section, e.g. the 10% most central events are the 10% that have the highest particle multiplicity. But how much of the total nuclear cross-section is measured in ALICE? Are the events detected only hadronic processes or do they include something else?


ALICE collected data during the LHC’s periods of lead–lead running in 2010 and 2011 using interaction triggers that have an efficiency large enough to explore the entire sample of hadronic collisions. However, because of the strong electromagnetic fields generated as the relativistic heavy ions graze each other, the event sample is contaminated by background from electromagnetic processes, such as pair-production and photonuclear interactions. These processes, which are characterized by low-multiplicity events with soft (low-momentum) particles close to mid-rapidity, produce events that are similar to peripheral hadronic collisions and must be rejected to isolate hadronic interactions. Part of the contamination is rejected by requiring that both nuclei break up in the collision, producing a coincident signal on both sides of the ZDC. The remaining contamination is estimated using events generated by a Monte Carlo simulator of electromagnetic processes (e.g. STARLIGHT). This shows that for about 90% of the hadronic cross-section, the purity of the event sample and the efficiency of the event selection are 100%. Nevertheless, the most peripheral events, 10% of the total, remain contaminated by electromagnetic processes and trigger inefficiency – and must be used with special care in the physics analyses.

The centrality of each event in the sample of hadronic interactions can be classified using the measured particle multiplicity and the spectator energy deposited in the ZDC. Various detectors in ALICE measure quantities that are proportional to the particle multiplicity, with different detectors covering different regions in pseudo-rapidity (η). Several of these, e.g. the time-projection chamber (covering |η| < 0.8), the silicon pixel detector (|η| < 1.4), the forward multiplicity detector (1.7 < η < 5.0 and –3.4 < η < –1.7) and the V0 scintillators (2.8 < η < 5.1 and –3.7 < η < –1.7), are used to study how the centrality resolution depends on the acceptance and other possible detector effects (saturation, energy cut-off, etc.). The percentiles of the hadronic cross-section are determined for any value of measured particle multiplicity (or something proportional to it, e.g. the V0 amplitude) by integrating the measured distribution, which can be divided into classes by defining sharp cuts that correspond to well defined percentile intervals of the cross-section, as indicated in the lower part of figure 2 for the V0 detectors.
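
As an illustration of this percentile construction, the sketch below (hypothetical function and array names, not ALICE analysis code) turns a per-event multiplicity proxy into the amplitude thresholds that delimit chosen centrality classes:

```python
import numpy as np

def centrality_cuts(amplitudes, class_edges):
    """Amplitude thresholds for centrality-class edges given in percent.

    amplitudes  : per-event multiplicity proxy (e.g. a V0-like amplitude)
    class_edges : edges in percent of the hadronic cross-section,
                  e.g. [0, 10, 20, 50, 100]; 0% = most central.
    """
    # The c% most central events are the c% with the highest amplitude,
    # so the threshold for c% is the (100 - c)-th percentile of the amplitude.
    return {c: np.percentile(amplitudes, 100.0 - c) for c in class_edges}

# Toy usage with made-up numbers (purely illustrative, not ALICE data):
rng = np.random.default_rng(0)
toy_amplitudes = rng.gamma(shape=2.0, scale=500.0, size=100_000)
cuts = centrality_cuts(toy_amplitudes, [0, 10, 20, 50, 100])
# An event with amplitude above cuts[10] falls in the 0-10% (most central) class.
```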

Alternatively, measuring the energy deposited in the ZDC by the spectator particles in principle allows direct access to the number of participants (all of the nucleons minus the spectators). However, some spectator nucleons are bound into light nuclear fragments that, with a charge-to-mass ratio similar to that of the beam, remain inside the beam-pipe and are therefore undetected by the ZDC. This effect becomes quantitatively important for peripheral events because they have a large number of spectators, so the ZDC cannot be used alone to give a reliable estimate of the number of participants. Consequently, the information from the ZDC needs to be correlated to another quantity that has a monotonic relation with the participants. The ALICE collaboration uses the energy of the secondary particles (essentially photons produced by pion decays) measured by two small electromagnetic calorimeters (ZEM). Centrality classes are defined by cuts on the two-dimensional distribution of the ZDC energy as a function of the ZEM amplitude for the most central events (0–30%) above the point where the correlation between the ZDC and ZEM inverts sign.

So how can the events be partitioned? Should the process be based on 0–1% or 0–10% classes? And what is the best way to estimate the centrality? These questions relate to the issue of centrality resolution. The number of centrality classes that can be defined is connected to the resolution achieved by the centrality estimation. In general, centrality classes are defined so that the separation between the central values of the participant distributions for two adjacent classes is significantly larger than the resolution for the variable used for the classification.

The real resolution

In principle, the resolution is given by the difference between the true centrality and the value estimated using a given method. In reality, the true centrality is not known, so how can it be measured? ALICE tested its procedure on simulations using the event generator HIJING, which is widely used and tested on hadronic processes, together with a full-scale simulation of the detector response based on the GEANT toolkit. In HIJING events, the value of the impact parameter for every event – and hence the true centrality – is known. The full GEANT simulation yields the values of the signals in the detectors for the given event, so an estimate of the centrality can be calculated from these signals using the centrality estimators. The real centrality resolution for a given event is then the difference between the estimated and the true centrality.
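
A minimal sketch of this comparison, assuming arrays of true and estimated centrality (in percent) are already available for the simulated events; the binning and names are illustrative only:

```python
import numpy as np

def centrality_resolution(true_cent, est_cent, bin_width=10.0):
    """RMS of (estimated - true) centrality, in bins of true centrality (percent)."""
    true_cent = np.asarray(true_cent, dtype=float)
    diff = np.asarray(est_cent, dtype=float) - true_cent
    edges = np.arange(0.0, 100.0 + bin_width, bin_width)
    rms = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (true_cent >= lo) & (true_cent < hi)
        rms.append(np.sqrt(np.mean(diff[in_bin] ** 2)) if in_bin.any() else np.nan)
    return edges, np.array(rms)
```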

In the real data, we approximated the true centrality with an iterative procedure, evaluating event-by-event the average centrality measured by all of the estimators. The correlation between the various estimators is excellent, resulting in a high centrality resolution. Since the resolution depends on the rapidity coverage of the detector used, the best result – achieved with the V0 detector, which has the largest pseudo-rapidity coverage in ALICE – ranges from 0.5% in central collisions to 2% in peripheral ones, in agreement with the estimation from simulations. This high resolution is confirmed by the analysis of elliptic flow and two-particle correlations, where the results, which address geometrical aspects of the collisions, show measurable changes even between 1% centrality bins (figure 3).


So much for the experimental classification of the events in percentiles of the hadronic cross-section. This leaves one issue remaining: how to relate the experimental observables (particle multiplicity, zero-degree energy) to the geometry of the collision (impact parameter, Npart, Ncoll). What is the mean number of participants in the 10% most central events?

To answer this question requires a model. HIJING is not used in this case, because the simulated particle multiplicity does not agree with the measured one. Instead ALICE uses a much simpler model, the Glauber model. This is a simple technique, widely used in heavy-ion physics, from the Alternating Gradient Synchrotron at Brookhaven, to CERN’s Super Proton Synchrotron, to Brookhaven’s Relativistic Heavy-Ion Collider. It uses few assumptions to describe heavy-ion collisions and couple the collision geometry to the detector signals. First, the two colliding nuclei are described by a realistic distribution of nucleons inside the nucleus measured in electron-scattering experiments (the Woods-Saxon distribution). Second, the nucleons are assumed to follow straight trajectories. Third, two nucleons from different nuclei are assumed to collide if their distance is less than the distance corresponding to the inelastic nucleon–nucleon cross-section. Last, the same cross-section is used for all successive collisions. The model, which is implemented in a Monte Carlo calculation, takes random samples from a geometrical distribution of the impact parameter and for each collision determines Npart and Ncoll.
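
The sketch below illustrates this kind of Glauber Monte Carlo under the assumptions just listed; the Woods-Saxon parameters and nucleon–nucleon cross-section are indicative values for lead at LHC energies, not necessarily the exact inputs used by ALICE:

```python
import numpy as np

rng = np.random.default_rng(1)

def woods_saxon_nucleus(A=208, R=6.62, a=0.546):
    """Sample A nucleon positions (fm) from a Woods-Saxon radial profile."""
    positions = []
    while len(positions) < A:
        r = rng.uniform(0.0, 3.0 * R)
        # accept-reject on r^2 / (1 + exp((r - R)/a)); 9*R^2 is a loose upper bound
        if rng.uniform() < (r * r / (1.0 + np.exp((r - R) / a))) / (9.0 * R * R):
            cos_t = rng.uniform(-1.0, 1.0)
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            phi = rng.uniform(0.0, 2.0 * np.pi)
            positions.append([r * sin_t * np.cos(phi), r * sin_t * np.sin(phi), r * cos_t])
    return np.array(positions)

def glauber_event(sigma_nn=6.4, b_max=20.0):
    """One Glauber event: returns (b, Npart, Ncoll); sigma_nn in fm^2 (~64 mb)."""
    b = b_max * np.sqrt(rng.uniform())     # impact parameter, dP/db proportional to b
    d2_max = sigma_nn / np.pi              # collide if (transverse distance)^2 < sigma/pi
    nucl_a = woods_saxon_nucleus() + np.array([+b / 2.0, 0.0, 0.0])
    nucl_b = woods_saxon_nucleus() + np.array([-b / 2.0, 0.0, 0.0])
    dx = nucl_a[:, 0][:, None] - nucl_b[:, 0][None, :]
    dy = nucl_a[:, 1][:, None] - nucl_b[:, 1][None, :]
    hits = dx ** 2 + dy ** 2 < d2_max      # straight-line trajectories: z plays no role
    n_coll = int(hits.sum())
    n_part = int(hits.any(axis=1).sum() + hits.any(axis=0).sum())
    return b, n_part, n_coll
```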

The Glauber model can be combined with a simple model for particle production to simulate a multiplicity distribution that is then compared with the experimental one. The particle production is simulated in two steps. Employing a simple parameterization, the number of participants and the number of collisions can be used to determine the number of “ancestors”, i.e. independently emitting sources of particles. In the next step, each ancestor emits particles according to a negative binomial distribution (chosen because it describes particle multiplicity in nucleon–nucleon collisions). The simulated distribution describes up to 90% of the experimental one, as figure 2 shows.
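
A sketch of the two-step production described above, taking Npart and Ncoll from a Glauber calculation such as the previous sketch; the ancestor parameterization and the negative-binomial parameters (f, mu, k) are placeholders rather than the fitted ALICE values:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_multiplicity(n_part, n_coll, f=0.8, mu=25.0, k=1.6):
    """Toy two-component particle production: ancestors emit per an NBD.

    n_ancestors = f*Npart + (1 - f)*Ncoll  (one common parameterization);
    each ancestor emits particles following a negative binomial distribution
    with mean mu and shape parameter k (placeholder values).
    """
    n_ancestors = max(int(round(f * n_part + (1.0 - f) * n_coll)), 0)
    if n_ancestors == 0:
        return 0
    # NBD with mean mu and shape k, generated as a gamma-Poisson mixture
    per_ancestor_mean = rng.gamma(shape=k, scale=mu / k, size=n_ancestors)
    return int(rng.poisson(per_ancestor_mean).sum())
```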

Fitting the measured distribution (e.g. the V0 amplitude) with the distribution simulated using the Glauber model creates a connection between an experimental observable (the V0 amplitude) and the geometrical model of nuclear collisions employed in the model. Since the geometry information (b, Npart, Ncoll) for the simulated distribution is known from the model, the geometrical properties for centrality classes defined by sharp cuts in the simulated multiplicity distribution can be calculated.
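
Continuing the same toy chain, the geometrical properties of a centrality class could then be read off as follows (array names are hypothetical; sim_mult and sim_npart would come from the sketches above):

```python
import numpy as np

def mean_npart_in_class(sim_mult, sim_npart, c_lo, c_hi):
    """Mean Npart in the c_lo-c_hi % class, defined by sharp cuts on the
    simulated multiplicity distribution (0% = most central)."""
    sim_mult = np.asarray(sim_mult, dtype=float)
    hi_cut = np.percentile(sim_mult, 100.0 - c_lo)   # upper multiplicity edge
    lo_cut = np.percentile(sim_mult, 100.0 - c_hi)   # lower multiplicity edge
    in_class = (sim_mult >= lo_cut) & (sim_mult <= hi_cut)
    return float(np.mean(np.asarray(sim_npart)[in_class]))
```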

The high-quality results obtained in the determination of centrality are directly reflected in the analyses that ALICE performs to investigate the properties of the system that strongly depend on its geometry. Elliptic flow, for example, is a fundamental measurement of the degree of collectivity of the system at an early stage of its evolution, since it directly reflects the initial spatial anisotropy, which is largest at the beginning of the evolution. The quality of the centrality determination allows access to the geometrical properties of the system with very high precision. To remove non-flow effects, which are predominantly short-ranged in rapidity, as well as artefacts of track-splitting, two-particle correlations are calculated in 1% centrality bins with a one-unit gap in pseudo-rapidity. Using these correlations, as well as the multi-particle cumulants (4th, 6th and 8th order), ALICE can extract the elliptic-flow coefficient v2 (figure 3), i.e. the second harmonic coefficient of the azimuthal Fourier decomposition of the momentum distribution (ALICE collaboration 2011). Such measurements have allowed ALICE to demonstrate that the hot and dense matter created in heavy-ion collisions at the LHC behaves like a fluid with almost zero viscosity (CERN Courier April 2011 p7) and to pursue further the hydrodynamic features of the quark–gluon plasma that is formed there.
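
For orientation, the azimuthal Fourier decomposition mentioned here is conventionally written as (standard notation, not quoted from the ALICE publication)

```latex
\frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_{n}\cos\!\big(n(\varphi-\Psi_{n})\big),
\qquad v_{2} = \big\langle \cos 2(\varphi-\Psi_{2}) \big\rangle ,
```

where φ is the particle azimuthal angle and Ψn the n-th harmonic symmetry-plane angle.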

The EMC effect still puzzles after 30 years


Contrary to the stereotype, advances in science are not typically about shouting “Eureka!”. Instead, they are about results that make a researcher say, “That’s strange”. This is what happened 30 years ago, when the European Muon collaboration (EMC) at CERN looked at the ratio of their per-nucleon deep-inelastic muon-scattering data on iron to the corresponding data on the much smaller nucleus of deuterium.

The data were plotted as a function of Bjorken-x, which in deep-inelastic scattering is interpreted as the fraction of the nucleon’s momentum carried by the struck quark. The binding energies of nucleons in the nucleus are several orders of magnitude smaller than the momentum transfers of deep-inelastic scattering, so, naively, such a ratio should be unity except for small corrections for the Fermi motion of nucleons in the nucleus. What the EMC experiment discovered was an unexpected downwards slope to the ratio (figure 1) – as revealed in CERN Courier in November 1982 and then published in a refereed journal the following March (Aubert et al. 1983).
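
For orientation, in standard deep-inelastic-scattering notation (given here for reference rather than quoted from the EMC paper) the quantities involved are

```latex
x \;=\; \frac{Q^{2}}{2M\nu},
\qquad
R_{A}(x) \;=\; \frac{F_{2}^{A}(x,Q^{2})/A}{F_{2}^{D}(x,Q^{2})/2},
```

where Q² is the squared four-momentum transfer, ν the energy transfer, M the nucleon mass and F2 the structure function of the nucleus (A) or the deuteron (D); the naive expectation is R_A(x) ≈ 1 apart from Fermi-motion corrections.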


This surprising result was confirmed by many groups, culminating with the high-precision electron- and muon-scattering data from SLAC (Gomez et al. 1994), Fermilab (Adams et al. 1995) and the New Muon collaboration (NMC) at CERN (Amaudruz et al. 1995 and Arneodo et al. 1996). Figure 2 shows representative data. The conclusions from the combined experimental evidence were that the effect had a universal shape; was independent of the squared four-momentum transfer, Q²; increased with nuclear mass number A; and scaled with the average nuclear density.

A simple picture

The primary theoretical interpretation of the EMC effect – the region x > 0.3 – was simple: quarks in nuclei move throughout a larger confinement volume and, as the uncertainty principle implies, they carry less momentum than quarks in free nucleons. The reduction of the ratio at lower x, named the shadowing region, was attributed either to the hadronic structure of the photon or, equivalently, to the overlap in the longitudinal direction of small-x partons from different nuclei. These notions gave rise to a host of models: bound nucleons are larger than free ones; quarks in nuclei move in quark bags with 6, 9 and even up to 3A quarks, where A is the total number of nucleons. More conventional explanations, such as the influence of nuclear binding, enhancement of pion-cloud effects and a nuclear pionic field, were successful in reproducing some of the nuclear deep-inelastic scattering data.


It was even possible to combine different models to produce new ones; this led to a plethora of models that reproduced the data (Geesaman et al. 1995), causing one of the authors of this article to write that “EMC means Everyone’s Model is Cool”. It is interesting to note that none of the earliest models were that concerned with the role of two-nucleon correlations, except in relation to six-quark bags.

The initial excitement was tempered as deep-inelastic scattering became better understood and the data became more precise. Some of the more extreme models were ruled out by their failure to match well known nuclear phenomenology. Moreover, inconsistency with the baryon-momentum sum rules led to the downfall of many other models. Because some of them predicted an enhanced nuclear sea, the nuclear Drell-Yan process was suggested as a way to disentangle the various possible models. In this process, a quark from a proton projectile annihilates with a nuclear antiquark to form a virtual photon, which in turn becomes a leptonic pair (Bickerstaff et al. 1984). The experiment was done and none of the existing models provided an accurate description of both sets of data – a challenge that remains to this day (Alde et al. 1984).

New data

A significant shift in the experimental understanding of the EMC effect occurred when new data on ⁹Be became available (Seely et al. 2009). These data changed the experimental conclusion that the EMC effect follows the average nuclear density and instead suggested that the effect follows local nuclear density. In other words, even in deep-inelastic kinematics, ⁹Be seemed to act like two alpha particles with a single nearly free neutron, rather than like a collection of nucleons whose properties were all modified.

This led experimentalists to ask if the x > 1 scaling plateaux that have been attributed to short-range nucleon–nucleon correlations – a phenomenon that is also associated with high local densities – could be related to the EMC effect. Figure 3 shows the kinematic range of the EMC effect together with the x > 1 short-range correlation (SRC) region. While the dip at x = 1 has been shown to vary rapidly with Q², the EMC effect and the magnitude of the x > 1 plateaux are basically constant within the Q² range of the experimental data. Plotting the slope of the EMC effect, 0.3 < x < 0.7, against the magnitude of the scaling x > 1 plateaux for all of the available data, as shown in figure 4, revealed a striking correlation (Weinstein et al. 2011). This phenomenological relationship has led to renewed interest in understanding how strongly correlated nucleons in the nucleus may be affecting the deep-inelastic results.
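
The “slope of the EMC effect” used in this comparison is simply the result of a straight-line fit to the per-nucleon ratio in the window 0.3 < x < 0.7; a sketch with purely illustrative placeholder numbers (not real data) is shown below.

```python
import numpy as np

def emc_slope(x, ratio, x_lo=0.3, x_hi=0.7):
    """Slope dR/dx of a straight-line fit to the per-nucleon ratio in x_lo < x < x_hi."""
    x, ratio = np.asarray(x, dtype=float), np.asarray(ratio, dtype=float)
    in_window = (x > x_lo) & (x < x_hi)
    slope, _intercept = np.polyfit(x[in_window], ratio[in_window], 1)
    return slope

# Purely illustrative placeholder points (not measured values):
x_vals = np.array([0.35, 0.45, 0.55, 0.65])
r_vals = np.array([0.98, 0.96, 0.94, 0.92])
print(emc_slope(x_vals, r_vals))   # a negative number, as on the ordinate of figure 4
```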

In February 2013, close to the 30th anniversary of the EMC publication, experimentalists and theorists came together at a special workshop at the University of Washington’s Institute for Nuclear Theory to review understanding of the EMC effect, discuss recent advances and plan new experimental and theoretical efforts. In particular, an entire series of EMC and SRC experiments is planned for the new 12 GeV electron beam at Jefferson Lab, and analysis is under way of new Drell-Yan experimental data from Fermilab.

A new life

Although the EMC effect is now 30 years old, the recent experimental results have given new life to this old puzzle; no longer is Every Model Cool. Understanding the EMC effect implies understanding how partons behave in the nuclear medium. It thus has far-reaching consequences not only for the extraction of neutron information from nuclear targets but also for understanding effects such as the NuTeV anomaly or the excesses in the neutrino cross-sections observed by the MiniBooNE experiment.

Group Theory for High-Energy Physicists

By Mohammad Saleem and Muhammad Rafique
CRC Press/Taylor and Francis
Hardback: £44.99


Although group theory has played a significant role in the development of various disciplines of physics, there are few recent books that start from the beginning and then go on to consider applications from the point of view of high-energy physicists. Group Theory for High-Energy Physicists aims to fill that gap. The book first introduces the concept of a group and the characteristics that are imperative for developing group theory as applied to high-energy physics. It then describes group representations and, with a focus on continuous groups, analyses the root structure of important groups and obtains the weights of various representations of these groups. It also explains how symmetry principles associated with group theoretical techniques can be used to interpret experimental results and make predictions. This concise introduction should be accessible to undergraduate and graduate students in physics and mathematics, as well as to researchers in high-energy physics.

Introduction to Mathematical Physics: Methods and Concepts, Second Edition

By Chun Wa Wong
Oxford University Press
Hardback: £45 $84.95


Introduction to Mathematical Physics explains how and why mathematics is needed in the description of physical events in space. Aimed at physics undergraduates, it is a classroom-tested textbook on vector analysis, linear operators, Fourier series and integrals, differential equations, special functions and functions of a complex variable. Strongly correlated with core undergraduate courses on classical and quantum mechanics and electromagnetism, it helps students master these necessary mathematical skills but also contains advanced topics of interest to graduate students. It includes many tables of mathematical formulae and references to useful materials on the internet, as well as short tutorials on basic mathematical topics to help readers refresh their knowledge. An appendix on Mathematica encourages the reader to use computer-aided algebra to solve problems in mathematical physics. A free Instructor’s Solutions Manual is available to instructors who order the book.

Gauge Theories of Gravitation: A Reader with Commentaries

By Milutin Blagojević and Friedrich W Hehl (eds.)
World Scientific
Hardback: £111 $168 S$222


With a foreword by Tom Kibble and commentaries by Milutin Blagojević and Friedrich W Hehl, the aim of this volume is to introduce graduate and advanced undergraduate students of theoretical or mathematical physics – and other interested researchers – to the field of classical gauge theories of gravity. Intended as a guide to the literature in this field, it encourages readers to study the introductory commentaries and become familiar with the basic content of the reprints and the related ideas, before choosing specific reprints and then returning to the text to focus on further topics.

A Unified Grand Tour of Theoretical Physics, Third Edition

By Ian D Lawrie
CRC Press/Taylor and Francis
Paperback: £44.99


A Unified Grand Tour of Theoretical Physics invites readers on a guided exploration of the theoretical ideas that shape contemporary understanding of the physical world at the fundamental level. Its central themes – which include space–time geometry and the general relativistic account of gravity, quantum field theory and the gauge theories of fundamental forces – are developed in explicit mathematical detail, with an emphasis on conceptual understanding. Straightforward treatments of the Standard Model of particle physics and that of cosmology are supplemented with introductory accounts of more speculative theories, including supersymmetry and string theory. This third edition includes a new chapter on quantum gravity and new sections with extended discussions of topics such as the Higgs boson, massive neutrinos, cosmological perturbations, dark energy and dark matter.

Strings, Gauge Fields, and the Geometry Behind the Legacy of Maximilian Kreuzer

By Anton Rebhan, Ludmil Katzarkov, Johanna Knapp, Radoslav Rashkov and Emanuel Scheidegger (eds.)
World Scientific
Hardback: £104
E-book: £135


This book contains invited contributions from collaborators of Maximilian Kreuzer, a well-known string theorist who built a sizeable group at the Vienna University of Technology (TU Vienna) but sadly died in November 2010, aged just 50. Victor Batyrev, Philip Candelas, Michael Douglas, Alexei Morozov, Joseph Polchinski, Peter van Nieuwenhuizen and Peter West are among those giving accounts of Kreuzer’s scientific legacy and contributing original articles. Besides reviews of recent progress in the exploration of string-theory vacua and corresponding mathematical developments, Part I reviews in detail Kreuzer’s important work with Friedemann Brandt and Norbert Dragon on the classification of anomalies in gauge theories. Similarly, Part III contains a user manual for a new, thoroughly revised version of PALP (Package for Analysing Lattice Polytopes, with applications to toric geometry), the software developed by Kreuzer and Harald Skarke at TU Vienna.

Reviews of Accelerator Science and Technology: Volume 4 – Accelerator Applications in Industry and the Environment

By Alexander W Chao and Weiren Chou (eds.)
World Scientific
Hardback: £111
E-book: £144


Of the roughly 30,000 accelerators at work in the world today, the majority are used for applications in industry. This volume of Reviews of Accelerator Science and Technology contains 14 articles on such applications, all by experts in their respective fields. The first eight articles review various applications, from ion-beam analysis to neutron generation, while the next three discuss accelerator technology that has been developed specifically for industry. The twelfth article tackles the challenging subject of future prospects in this rapidly evolving branch of technology. Last, the volume features an article on the success story of CERN by former director-general Herwig Schopper, as well as a tribute to Simon van der Meer, “A modest genius of accelerator science”.

Colliders unite in the Linear Collider Collaboration


The Compact Linear Collider (CLIC) and the International Linear Collider (ILC) – two studies for next-generation projects to complement the LHC – now belong to the same organization. The Linear Collider Collaboration (LCC) was officially launched on 21 February at TRIUMF, Canada’s national laboratory for particle and nuclear physics.

The ILC and CLIC have similar physics goals but use different technologies and are at different stages of readiness. The teams working on them have now united in the new organization to make the best use of the synergies between the two projects and to co-ordinate and advance the global development work for a future linear collider. Lyn Evans, former project leader of the LHC, heads the LCC, while Hitoshi Murayama, director of the Kavli Institute for the Physics and Mathematics of the Universe, is deputy-director.

The LCC has three main sections, reflecting the three areas of research that will continue to be conducted. Mike Harrison of Brookhaven National Laboratory leads the ILC section, Steinar Stapnes of CERN leads the CLIC section and Hitoshi Yamamoto of Tohoku University leads the section for physics and detectors. The Linear Collider Board (LCB), with the University of Tokyo’s Sachio Komamiya at its head, is a new oversight committee for the LCC. Appointed by the International Committee for Future Accelerators, the LCB met for the first time at TRIUMF in February. The ILC’s Global Design Effort and its supervisory organization, the ILC Steering Committee, officially handed over their duties to the LCC and LCB in February but they will continue to work together until the official completion of the Technical Design Report for the ILC.

Both the ILC and CLIC will continue to exist and carry on their R&D activities – but with even more synergy between common areas. These include the detectors and the planning of infrastructure, as well as civil-engineering and accelerator aspects. The projects are at different stages of maturity. The CLIC collaboration published its Conceptual Design Report in 2012 and is scheduled to complete the Technical Design Report, which demonstrates feasibility for construction, in a couple of years.

For the ILC collaboration, which will publish its Technical Design Report in June this year, the main focus is on preparing for possible construction while at the same time further advancing acceleration technologies, industrialization and design optimization. The final version of the report will include a new figure for the projected cost. The current estimate is 7.8 thousand million ILC Units (1 ILC unit is equivalent to US$1 of January 2012), plus an explicit estimate for labour costs averaged over the three regional sites, amounting to 23 million person-hours. With the finalization of the Technical Design Report, the ILC’s Global Design Effort, led by Barry Barish, will formally complete its mandate.

With the discovery of the Higgs-like boson at the LHC, the case for a next-generation collider in the near future has received a boost and researchers are thinking of ways to build the linear collider in stages: first as a so-called Higgs factory for precision studies of the new particle; second at an energy of 500 GeV; and third, at double this energy, to open further possibilities for as-yet undiscovered physics phenomena. Japan has signalled interest in hosting the ILC.

“Now that the LHC has delivered its first and exciting discovery, I am eager to help the next project on its way,” says Evans. “With the strong support the ILC receives from Japan, the LCC may be getting the tunnelling machines out soon for a Higgs factory in Japan while at the same time pushing frontiers in CLIC technology.”

LHC: access required, time estimate about two years


When the LHC and injector beams stopped on 16 February, the following words appeared on LHC Page 1: “No beam for a while. Access required: Time estimate ˜2 years”. This message marked the start of the first long shutdown (LS1). Over the coming years, major maintenance work will be carried out across the whole of CERN’s accelerator chain. Among the many tasks foreseen, more than 10,000 LHC magnet interconnections will be consolidated and the entire ventilation system for the 628-m-circumference Proton Synchrotron will be replaced, as will more than 100 km of cables on the Super Proton Synchrotron. The LHC is scheduled to start up again in 2015, operating at its design energy of 7 TeV per beam, with the rest of the CERN complex restarting in the second half of 2014.

The LHC’s first dedicated proton–lead run came to an end on 10 February, having delivered an integrated luminosity of more than 30 nb⁻¹ to ALICE, ATLAS and CMS and 2.1 nb⁻¹ to LHCb, with the TOTEM, ALFA and LHCf experiments also taking data. The run had ended later than planned because of challenges that had arisen in switching the directions of the two beams; as a result, the 2013 operations were extended slightly to allow four days of proton–proton collisions at 1.38 TeV. To save time, these collisions were performed un-squeezed. After set-up, four fills with around 1300 bunches and a peak luminosity of 1.5 × 10³² cm⁻² s⁻¹ delivered around 5 pb⁻¹ of data to ATLAS and CMS. The requisite luminosity scans were somewhat hampered by technical issues but succeeded in the end, leaving just enough time for a fast turnaround and a short final run at 1.38 TeV for ALFA and TOTEM.

On 14 February, the shift crew dumped the beams from the LHC to bring to an end the machine’s first three-year physics run. Two days of quench tests followed immediately to establish the beam loss required to quench the magnets. Thanks to these tests, it will be possible to set optimum thresholds on the beam-loss monitors when beams circulate again in 2015.

Although there was no beam from 16 February onwards, the LHC stayed cold until 4 March so that powering tests could verify the proper functioning of the LHC’s main magnet (dipole and quadrupole) circuits. At the same time, teams in the CERN Control Centre performed extensive tests of all of the other circuits, up to current levels corresponding to operation with 7 TeV beams. By powering the entire machine and then going sector by sector, the operators managed to perform more than a thousand tests on 540 circuits in just 10 days. Small issues were resolved by immediate interventions and the operators identified a number of circuits that need a more detailed analysis, and possibly an intervention, during LS1.

With powering tests complete, the Electrical Quality Assurance team could test the electrical insulation of each magnet, sector by sector, before the helium was removed and stored. Beginning with sector 5–6, the magnets are now being warmed up carefully and the entire machine should be at room temperature by the end of May.
