ALICE: on the trail of a new state of matter

Summary

After two periods of lead–lead collisions at the LHC, complemented by campaigns of proton–lead collisions, new perspectives are opening up for the understanding of matter at high temperature and density – conditions under which quantum chromodynamics predicts the existence of a quark–gluon plasma. Designed to cope with the high particle densities generated by heavy-ion collisions, the ALICE experiment has provided numerous measurements of the medium produced at the LHC, which are summarized here. What is new is that the large cross-section for so-called hard processes, such as the production of jets and heavy flavours, can be used to “see” inside the medium.

The dump of the lead beam in the early morning of 10 February this year marked the end of a successful and exciting first LHC running phase with heavy-ion beams. It started in November 2010 with the first lead–lead (PbPb) collisions at √sNN = 2.76 TeV per nucleon pair, when in one month of running the machine delivered an integrated luminosity of about 10 μb⁻¹ for each experiment. In the second period a year later, the LHC’s heavy-ion performance exceeded expectations: the instantaneous luminosity reached more than 10²⁶ cm⁻² s⁻¹ and the experiments collected about 10 times more integrated luminosity. A pilot proton–lead (pPb) run at √sNN = 5.02 TeV took place in September 2012, providing enough events for first surprises and publications. A full run followed in February, delivering 30 nb⁻¹ of pPb collisions – precious reference data for the PbPb studies.

The ALICE experiment is optimized to cope with the large particle-densities produced in PbPb collisions and nothing was left unprepared for the first heavy-ion run in 2010. Nevertheless, immediately before the first collisions the tension was palpable until the first event displays appeared (figure 1). The image of the star-like burst with thousands of particles recorded by the time-projection chamber became an emblem for the accomplishment of a collaboration that had worked for 20 years on developing, building and operating the ALICE detector. With the arrival of a wealth of data, a new era for comprehension of the nature of matter at high temperature and density began, where QCD predicts that quark–gluon plasma (QGP) – a de-confined medium of quarks and gluons – exists.

Before the LHC started up, the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven was the most powerful machine of this kind, producing collisions between gold (Au) ions at a maximum centre-of-mass energy of 200 GeV per nucleon pair. After 10 years of analysis – just before the first PbPb data-taking at the LHC – the experimental collaborations at RHIC came to the surprising conclusion that central AuAu collisions create droplets of an almost perfect, dense fluid of partonic matter that is 250,000 times hotter than the core of the Sun. Their results indicate that because of the strength of the colour forces the plasma of partons (quarks and gluons) produced in these collisions has not yet reached its asymptotically gas-like state and, therefore, it has been dubbed strongly interacting QGP (sQGP). This finding raised several questions. What would the newly created medium look like at the LHC? How much denser and hotter would it be? Would it still be a perfect liquid or would it be closer to a weakly coupled gas-like state? How would the abundantly produced hard probes be modified by the medium?

Denser, hotter, bigger

In the most central PbPb collisions at the LHC, the charged-particle density at mid-rapidity amounts to dN/dη ≈ 1600, which is about 2.1 times more per nucleon pair participating in the collision than at RHIC. Since the particles are also, on average, more energetic at the LHC, the transverse-energy density is about 2.5 times higher. This allows a rough estimate of the energy density of the medium that is produced. Assuming the same equilibration time of the plasma at RHIC and the LHC, the energy density has increased at the LHC by at least a factor of three, corresponding to an increase in temperature of more than 30% (CERN Courier June 2011 p17).
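
As a rough cross-check of these numbers, the step from energy density to temperature can be sketched with Stefan–Boltzmann-like scaling, in which the energy density of an ideal ultrarelativistic parton gas grows as the fourth power of the temperature. This is only a back-of-the-envelope illustration, not the full hydrodynamic treatment used in the analyses.

```python
# Back-of-the-envelope sketch: if the energy density scales as T^4
# (ideal ultrarelativistic gas), a factor-three increase in energy
# density corresponds to roughly a 30% increase in temperature.
energy_density_ratio = 3.0                      # LHC / RHIC, as quoted above
temperature_ratio = energy_density_ratio ** 0.25
print(f"T_LHC / T_RHIC ≈ {temperature_ratio:.2f}")   # ≈ 1.32, i.e. >30% hotter
```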

A more accurate thermometer is provided by the spectrum of the thermal photons emitted by the plasma that reach the detector unscathed. Whereas counting inclusive charged particles is a relatively easy task, the thermal photons have to be arduously separated from a large background of photons from meson decays and photons produced in QCD processes with large momentum transfer. The thermal photons appear in the low-energy region of the direct-photon spectrum (photon pT < 2 GeV/c) as an excess above the yield expected from next-to-leading-order QCD and have an exponential shape (figure 2). The inverse slope of this exponential measured by ALICE gives a value for the temperature, T = 304±51 MeV, about 40% higher than at RHIC. In hydrodynamic models, this parameter corresponds to an effective temperature averaged over the time evolution of the reaction. The measured values suggest initial temperatures that are well above the critical temperature of 150–160 MeV.
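
A minimal sketch of the idea behind such a measurement is shown below, using synthetic data points and a purely exponential excess; the real ALICE analysis involves a careful subtraction of decay and prompt-QCD photons, which is not attempted here.

```python
# Illustrative only (not ALICE code): extract an effective temperature as the
# inverse slope of an exponential fit to a low-pT direct-photon excess.
# The data points and uncertainties below are invented for demonstration.
import numpy as np
from scipy.optimize import curve_fit

pt = np.array([0.9, 1.1, 1.3, 1.5, 1.7, 1.9])                        # GeV/c
excess = np.array([2.1e-1, 1.1e-1, 5.6e-2, 2.9e-2, 1.5e-2, 7.8e-3])  # toy yields
errors = 0.15 * excess                                               # assumed 15% errors

def thermal(pt, norm, T):
    """Exponential spectrum; T is the inverse slope in GeV."""
    return norm * np.exp(-pt / T)

popt, pcov = curve_fit(thermal, pt, excess, sigma=errors, p0=(1.0, 0.3))
T_fit, T_err = popt[1], np.sqrt(pcov[1, 1])
print(f"effective temperature ≈ {1000 * T_fit:.0f} ± {1000 * T_err:.0f} MeV")
```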

In the same way that astronomers determine the space–time structure of extended sources using Hanbury Brown–Twiss optical intensity interferometry, heavy-ion physicists use 3D momentum correlation functions of identical bosons to determine the size of the medium produced (the freeze-out volume) and its lifetime. In line with predictions from hydrodynamics, the volume increases between RHIC and the LHC by a factor of two and the system lifetime increases by 30% (CERN Courier May 2011 p6).
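
For reference, a commonly used Gaussian parametrization of the two-particle correlation function (a textbook form, not necessarily the exact fit function used by ALICE) is

$$
C_2(\vec{q}) = 1 + \lambda\,
\exp\!\left(-R_{\mathrm{out}}^{2}q_{\mathrm{out}}^{2}
            -R_{\mathrm{side}}^{2}q_{\mathrm{side}}^{2}
            -R_{\mathrm{long}}^{2}q_{\mathrm{long}}^{2}\right),
$$

where q is the momentum difference of the pair, λ measures the correlation strength and the radii R characterize the size of the emitting region; the product of the three radii tracks the freeze-out volume quoted above.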

Perfect quantum liquids are characterized by a low shear-viscosity to entropy-density ratio, η/s, for which a lower limit of ħ/(4πkB) is postulated. This property is directly related to the ability of the medium to transform spatial anisotropy of the initial energy density into momentum-space anisotropy. Experimentally, the momentum-space anisotropy is quantified by the Fourier decomposition of the distribution in azimuthal angle of the produced particles with respect to the reaction plane. The second Fourier coefficient, v2, is commonly denoted as elliptic flow. With respect to RHIC, v2 is found to increase in mid-central collisions by 30% (CERN Courier April 2011 p7). Calculations based on hydrodynamical models show that the v2 measured at the LHC is consistent with a low η/s, close to or challenging the postulated limit.
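
To make the definition concrete, the sketch below generates particles according to an azimuthal distribution dN/dφ ∝ 1 + 2v2 cos 2(φ − Ψ) and recovers v2 as the average of cos 2(φ − Ψ); the reaction-plane angle and the input v2 are arbitrary toy values, and real analyses must also estimate Ψ and correct for non-flow effects.

```python
# Toy illustration of elliptic flow (not the ALICE analysis chain):
# with a known reaction-plane angle Psi, v2 = < cos 2(phi - Psi) >.
import numpy as np

rng = np.random.default_rng(1)
psi, v2_true = 0.4, 0.10          # hypothetical reaction plane and input anisotropy

# sample dN/dphi ∝ 1 + 2*v2*cos(2*(phi - psi)) by accept-reject
phi = rng.uniform(0.0, 2.0 * np.pi, 200_000)
pdf = 1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi))
phi = phi[rng.uniform(0.0, 1.0 + 2.0 * v2_true, phi.size) < pdf]

v2_measured = np.mean(np.cos(2.0 * (phi - psi)))
print(f"v2 ≈ {v2_measured:.3f} (input {v2_true})")
```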

Further knowledge on the collective behaviour of the medium has been obtained from the spectral analysis of pions, kaons and protons. The so-called blast-wave fit can be used to determine the common radial expansion velocity, ⟨βr⟩. The velocity measured by ALICE comes to about 0.65 and has increased by 10% with respect to RHIC.

In di-hadron azimuthal correlations, elliptic flow manifests as cosine-shaped modulations that extend over large rapidity ranges. At the LHC, for central collisions, more complicated structures become prominent. These can be quantified by higher-order Fourier coefficients. Hydrodynamical models can then relate them to fluctuations of the initial density distribution of the interacting nucleons. Wavy structures that were formerly discussed at RHIC, such as Mach-cones and soft ridges, have now found a simpler explanation. By selecting events with larger than average higher-order Fourier coefficients, it is possible to select and study certain initial-state configurations. “Event shape engineering” has therefore been born.

As discussed above, using hydrodynamical models, the basic parameters of the medium can be extrapolated in a continuous manner between the energies of RHIC and the LHC and they turn out to show a moderate increase. Although this might not seem to be a spectacular discovery, its importance for the field should not be underestimated: it marks a transition from data-driven discoveries to precision measurements that constrain model parameters.

Hard probes

What is new at the LHC is the large cross-section (several orders of magnitude higher with respect to RHIC) for so-called hard processes, e.g. the production of jets and heavy flavour. In these cases, the production is decoupled from the formation of the medium and, therefore, as quasi-external probes traversing the medium they can be used for tomography measurements – in effect, to see inside the medium. Furthermore, they are well calibrated probes because their production rates in the absence of the medium can be calculated using perturbative QCD. Hard probes open a new window for the study of the QGP through high-pT parton and heavy-quark transport coefficients, as well as the possible thermalization and recombination of heavy quarks.

High-pT partons are produced in hard interactions at the early stage of heavy-ion collisions. They are ideal probes because they traverse the medium and their yield and kinematics are influenced by its presence. The ability of a parton to transfer momentum to the medium is particularly interesting. Described by a transport parameter, it is related to the density of colour charges and the coupling of the medium: the stronger the coupling, the larger the transport coefficient and, therefore, the modification of the probe. Energy loss of partons in the medium is caused by multiple elastic-scattering and gluon radiation (jet quenching). This was first observed at RHIC in the suppression of high-pT particles with respect to the appropriately scaled proton–proton (pp) and proton–nucleus (pA) references (the nuclear-modification factor RAA) and from the disappearance of back-to-back particle correlations.
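
The nuclear-modification factor mentioned here has a simple operational definition, sketched below with invented numbers: the per-event yield in PbPb is divided by the pp yield scaled by the average number of binary nucleon–nucleon collisions, so that RAA = 1 means no medium effect beyond collision scaling.

```python
# Schematic R_AA calculation. All yields are placeholders, not measured values;
# <Ncoll> ~ 1600 is only a typical order of magnitude for central PbPb collisions.
import numpy as np

pt_bins    = np.array([5.0, 10.0, 20.0, 40.0])                # GeV/c
yield_pbpb = np.array([4.0e-3, 5.0e-4, 4.0e-5, 3.0e-6])       # per-event yield (made up)
yield_pp   = np.array([2.0e-5, 2.5e-6, 1.6e-7, 1.0e-8])       # per-event yield (made up)
n_coll = 1600.0                                               # assumed <Ncoll>

r_aa = yield_pbpb / (n_coll * yield_pp)
for pt, r in zip(pt_bins, r_aa):
    print(f"pT = {pt:4.0f} GeV/c  ->  R_AA ≈ {r:.2f}")
```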

At the LHC, rates are high at transverse energies where jets can be reconstructed above the fluctuations of the background-energy contribution from the underlying event. In particular, for jet transverse energies ET > 100 GeV, the influence of the underlying event is relatively small, allowing robust jet measurements. The ATLAS and CMS collaborations – whose detectors have almost complete calorimetric coverage – were the first to report direct observation of jet-quenching via the di-jet energy imbalance at the LHC (CERN Courier January/February 2011 p6 and March 2011 p6). However, the measurements of the suppression of inclusive single-particle production show that quenching effects are strongest for intermediate transverse momenta, 4 < pT < 20 GeV/c, corresponding to parton pT values in the range around 6–30 GeV/c (CERN Courier June 2011 p17).

ALICE can approach this region – while introducing the smallest possible bias on the jet fragmentation – by measuring jet fragments down to low pT (pT > 150 MeV/c). Although in jet reconstruction more of the original parton energy is recovered than with single particles, for the most central collisions the observed single-inclusive jet suppression is similar to the one for single hadrons, with a jet RAA of 0.2–0.4 in the range 30 < pT < 100 GeV/c. Furthermore, no indication of energy redistribution within experimental uncertainties is observed from the ratios of jet yields with different cone sizes (CERN Courier June 2013 p8).

The suppression patterns are qualitatively and – to some extent – quantitatively similar for single hadrons and jets. This is best explained by partonic energy loss through radiation mainly outside the cone used by the jet-reconstruction algorithm, followed by in-vacuum (pp-like) fragmentation of the remnant parton. Before the LHC start-up, it was widely believed that jets would be more robust objects, i.e. that jet quenching would soften their fragmentation without changing the total energy inside the cone, so that studies of jet fragmentation would give insight into the details of the energy-loss mechanism. The latter is still true, but the energy lost by the partons has to be searched for at large distances from the jet axis, where the background from the underlying event is large. Detailed studies of the momentum and angular distribution of the radiated energy – which require future higher-statistics jet samples – will provide more detailed information on the nature of the energy-loss mechanisms.

Heavy versus light

At the LHC, high-pT hadron production is dominated by gluon fragmentation. In QCD, quarks have a smaller colour-coupling factor than gluons, so the energy loss for quarks is expected to be smaller than for gluons. In addition, for heavy quarks with pT < mq, small-angle gluon radiation is reduced by the so-called “dead-cone effect”, which further reduces the effect of the medium. ALICE has measured the nuclear-modification factor for the charm mesons D0, D+ and D*+ for 2 < pT < 16 GeV/c (figure 3). For central PbPb collisions, a strong in-medium energy loss is observed, with an RAA of 0.2–0.3 in the range pT > 5 GeV/c. For lower transverse momenta there is a tendency for the suppression of D0 mesons to decrease (CERN Courier June 2012 p15, January/February 2013 p7).

The suppression is almost as large as that observed for charged particles, which are dominated by pions from gluon fragmentation. This observation favours models that explain heavy-quark energy loss by additional mechanisms, such as in-medium hadron formation and dissociation or partial thermalization of heavy quarks through re-scatterings and in-medium resonant interactions. Such a scenario is further corroborated by measurement of the D-meson elliptic-flow coefficient, v2. For semi-central PbPb collisions, a positive flow is observed in the range 2 < pT < 6 GeV/c, indicating that the interactions with the medium transfer information on the azimuthal anisotropy of the system to charm quarks.

The suppression of the J/ψ and other charmonia states, as a result of short-range screening of the strong interaction, was one of the first signals predicted for QGP formation and has been observed both at CERN’s Super Proton Synchrotron and at RHIC. At the LHC, heavy quarks are abundantly produced – about 100 cc pairs per event in central PbPb collisions. If these charm quarks roam freely in the medium and the charm density is high enough, they can recombine to form quarkonia states, competing with the suppression mechanism.

Indeed, in the most central collisions a lower J/ψ suppression than at RHIC is observed. Also, a smaller suppression is observed at low pT compared with high pT and it is lower at mid-rapidity than in the forward direction (CERN Courier March 2012 p14). In line with regeneration models, suppression is reduced in regions where the charm-quark density is highest. In semi-central PbPb collisions, ALICE sees a hint of nonzero elliptic flow of the J/ψ. This also favours a scenario in which a significant fraction of J/ψ particles are produced by regeneration. The significance of these results will be improved with future heavy-ion data-taking.

Surprises from pPb reference data

The analysis of pPb collisions allows the ALICE collaboration to study initial- and final-state effects in cold nuclear matter and to establish a baseline for the interpretation of the heavy-ion results. However, the results from the data taken in the pilot run have already shown that pPb collisions are also good for surprises. First, from the analysis of two-particle angular correlations in high-multiplicity pPb collisions, the CMS collaboration observed a ridge structure that is elongated in the pseudo-rapidity direction (CERN Courier January/February 2013 p9). Using low-multiplicity events as a reference, the ALICE and ATLAS collaborations found that this ridge structure actually has a perfectly symmetrical counterpart, back-to-back in azimuth (CERN Courier March 2013 p7). The amplitude and shape of the observed double-ridge structure are similar to the modulations caused by elliptic flow in PbPb collisions, therefore indicating collective behaviour in pPb. Other models attribute the effect to gluon saturation in the lead nucleus or to parton-induced final-state effects. These effects and their similarity to PbPb phenomena are intriguing. Their further investigation and theoretical interpretation will shed new light on the properties of matter at high temperatures and densities.

If pPb collisions do produce a QGP-like medium, its extension is expected to be much smaller than the one produced in PbPb collisions. However, the relevant quantity is not size but the ratio of the system size to the mean-free path of partons. If it is high enough, hydrodynamic models can explain the observed phenomena. If the observations can be explained by coherent effects between strings formed in different proton–nucleon scatterings, we must understand to what extent these effects contribute also to PbPb collisions. While the LHC takes a pause, the ALICE collaboration is looking forward to more exciting results from the existing data.

AMS-02 provides a precise measure of cosmic rays

More than 100 years have passed since the discovery of cosmic rays by Victor Hess in 1912 and there are still no signs of decreasing interest in the study of the properties of charged leptons, nuclei and photons from outer space. On the contrary, the search for a better understanding and clarification of the long-standing questions – the origin of ultrahigh-energy cosmic rays, the composition as a function of energy, the existence of a maximum energy, the acceleration mechanisms, the propagation and confinement in the Galaxy, the extra-galactic origin, etc. – is more pertinent than ever. In addition, new ambitious experimental initiatives are starting to produce results that could cast light on more recent challenging questions, such as the nature of dark matter, the apparent absence of antimatter in the explored universe and the search for new forms of matter.

The 33rd International Cosmic Ray Conference (ICRC 2013) – The Astroparticle Physics Conference – took place in Rio de Janeiro on 2–9 July and provided a high-profile platform for the presentation of a wealth of results from solar and heliospheric physics, through cosmic-ray physics and gamma-ray astronomy to neutrino astronomy and dark-matter physics. A full session was devoted to the presentation of new results from the Alpha Magnetic Spectrometer, AMS-02. Sponsored by the US Department of Energy and supported financially by the relevant funding and space agencies in Europe and Asia, this experiment was deployed on the International Space Station (ISS) on 19 May 2011 (figure 1). The results, which were presented for the first time at a large international conference, are based on the data collected by AMS-02 during its first two years of operation on the ISS.

AMS experiment

AMS-02 is a large particle detector by space standards, built using the concepts and technologies developed for experiments at particle accelerators but adapted to the extremely hostile environment of space. Measuring 5 × 4 × 3 m³, it weighs 7.5 tonnes. Reliability, performance and redundancy are the key features for the safe and successful operation of this instrument in space.

The main scientific goal is to perform a high-precision, large-statistics and long-duration study of cosmic nuclei (from hydrogen to iron and beyond), elementary charged particles (protons, antiprotons, electrons and positrons) and γ rays. In particular, AMS-02 is designed to measure the energy- and time-dependent fluxes of cosmic nuclei to an unprecedented degree of precision, to understand better the propagation models, the confinement mechanisms of cosmic rays in the Galaxy and the strength of the interactions with interstellar media. A second high-priority research topic is an indirect search for dark-matter signals based on looking at the fluxes of particles such as electrons, positrons, protons, antiprotons and photons.

Another important item on the list of priorities – which will be addressed in future – is the search for cosmic antimatter nuclei. This variety of matter is apparently absent in the region of the universe currently explored but – according to the Big Bang theory – it should have been highly abundant in the early phases of the universe. Last but not least, AMS-02 will explore the possible existence of new phenomena or new forms of matter, such as strangelets, which this state-of-the-art instrument will be in a unique position to unravel.

The AMS-02 detector was designed, built and is now operated by a large international collaboration led by Nobel laureate Samuel C C Ting, involving researchers from institutions in America, Europe and Asia. The detector components were constructed and tested in research centres around the world, with large facilities being built or refurbished for this purpose in China, France, Germany, Italy, Spain, Switzerland and Taiwan. The final assembly took place at CERN, benefiting from the laboratory’s significant expertise and experience in the technologies of detector construction. The instrument was then tested extensively with cosmic rays and particle beams at CERN, in the Maxwell electromagnetic compatibility chamber and the large-space thermal simulator at ESA-ESTEC in Noordwijk, as well as in the large facilities at the NASA Kennedy Space Center in the US.

The construction of AMS-02 has stimulated the development of important and novel technologies in advanced instrumentation. These include the first operation in space of a large two-phase CO2 cooling system for the silicon tracker and the two-gas (Xe-CO2) system for the operation of the transition-radiation detector, as well as the overall thermal system. The latter must protect the experiment from the continual changes of temperature that the detector undergoes at every position on its orbit, which affect various parts of the detector subsystems in a manner that is not easy to reproduce. The use of radiation-tolerant fast electronics, a sophisticated trigger, redundant systems for data acquisition, associated protocols for communications with the NASA on-board hardware and a high-rate downlink system for the real-time transmission of data from AMS-02 to the NASA ground facilities, are a few examples that illustrate the complexity and the kind of challenges that the project has had to meet.

The operation of the Payload Operation and Control Center (POCC) at CERN, 24 hours a day and 365 days a year, in permanent connection with the ISS and the NASA Johnson Space Center, has also been a major endeavour. Fast processing of data on reception at the Science Operation Center at CERN has been a formidable tour de force, resulting in the timely reconstruction of 36.5 × 10⁹ cosmic rays during the period 19 May 2011 – August 2013.

After almost 28 months of operation, AMS-02 – with its 300,000 electronics channels, 650 computers, 1100 thermal sensors and 400 thermostats – has worked flawlessly. To maintain performance and reliability, three space-flight simulators operate continuously at CERN, at the NASA Johnson Space Center and at the NASA Marshall Space Flight Center, where they test and certify the numerous upgrades of the software packages for the on-board computers and the communication interfaces and protocols.

First results

At ICRC 2013, the AMS collaboration presented data on two important areas of cosmic-ray physics. One addresses the fluxes, ratios and anisotropies of leptons, while the other concerns charged cosmic nuclei (protons, helium, boron, carbon). The following presents a brief summary of the results and of some of the most critical experimental challenges.

In the case of electrons and positrons, efficient instrumental handles for the suppression of the dominant backgrounds are: the minimal amount of material in the transition-radiation and time-of-flight detectors; the magnet location, separating the transition-radiation detector and the electromagnetic calorimeter; and the capability to match the value of the particle momentum reconstructed in the nine tracker layers of the silicon spectrometer with the value of the energy of the particle showering in the electromagnetic calorimeter.

The performance of the transition-radiation detector results in a high proton-rejection factor (larger than 10³) at 90% positron efficiency in the rigidity range of interest. The performance of the calorimeter, with its 17 radiation lengths, provides a rejection factor better than 10³ for protons with momenta up to 10³ GeV/c. The combination of the two rejection factors leads to an overall proton rejection of 10⁶ for most of the energy range under study.
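
The arithmetic behind these numbers is simple, as sketched below; the proton-to-positron flux ratio used in the example is only an assumed order of magnitude for this energy range, not an AMS-02 measurement.

```python
# Why a combined rejection of ~10^6 matters: the residual proton contamination
# of the positron sample is roughly (proton/positron flux ratio) / rejection.
trd_rejection  = 1.0e3          # transition-radiation detector (quoted as >10^3)
ecal_rejection = 1.0e3          # electromagnetic calorimeter (quoted as >10^3)
combined = trd_rejection * ecal_rejection        # ~10^6

for flux_ratio in (1.0e3, 1.0e4):                # protons per positron (assumed)
    print(f"p/e+ ≈ {flux_ratio:.0e}  ->  contamination ≈ {flux_ratio / combined:.1%}")
```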

A precision measurement of the positron fraction in primary cosmic rays, based on the sample of 6.8 million positron and electron events in the energy range of 0.5–350 GeV – collected during the initial 18 months of operation on the ISS – was recently published and presented at the conference (Aguilar et al. 2013 and Kounine ICRC 2013). The positron-fraction spectrum (figure 2) does not exhibit fine structure and the highly precise determination shows that the positron fraction steadily increases from 10–250 GeV, while from 20–250 GeV the slope decreases by an order of magnitude. The AMS-02 measurements have extended the energy ranges covered by recent experiments to higher values and reveal a different behaviour in the high-energy region of the spectrum.

AMS-02 has also extended the measurements of the positron spectrum to 350 GeV – that is, above the energy range of determinations by other experiments. The individual electron and positron spectra, with the E³ multiplication factor, and the combined spectrum were presented at the conference (Schael, Bertucci ICRC 2013). Figure 3 shows the electron spectrum, which appears to follow a smooth, slowly falling curve up to 500 GeV. The positron spectrum, by contrast, rises up to 10 GeV, flattens between 10 and 30 GeV and rises again above 30 GeV (figure 4). For the time being, it is not obvious that the models or simple parametric estimations currently used to describe the positron-fraction spectrum can also describe the behaviour of the individual electron and positron spectra.
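
The E³ weighting mentioned above is purely a presentational device: a steeply falling power-law flux spans many decades, and multiplying by E³ flattens it so that changes of spectral shape become visible. The sketch below uses an assumed spectral index for illustration.

```python
# Why lepton spectra are often plotted as E^3 x flux (toy power law, index assumed).
import numpy as np

energy = np.logspace(0, 2.5, 6)          # 1 GeV to ~316 GeV
flux = energy ** -3.2                    # illustrative power-law spectrum

for e, f in zip(energy, flux):
    print(f"E = {e:7.1f} GeV   flux = {f:9.2e}   E^3 x flux = {e**3 * f:6.3f}")
```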

Using a larger data sample, comprising of the order of 9 million electrons and positrons, the collaboration has performed a preliminary measurement of the combined flux of electrons and positrons in the energy range 0.5–700 GeV (Bertucci ICRC 2013). The data do not show significant structures, although a change in the spectral index with increasing lepton energy is clearly observed. However, the positron flux increases with energy and a promising approach to identifying the physics origin of this behaviour lies in determining the size of a possible anisotropy, arising from primary sources, in the arrival directions of positrons and electrons measured in galactic co-ordinates. AMS-02 has obtained a limit on the dipole-anisotropy parameter of d < 0.030 at the 95% confidence level for energies above 16 GeV (Casaus ICRC 2013).

Turning to cosmic nuclei, the first AMS-02 measurements of the proton and helium fluxes were presented at the conference (Haino, Choutko ICRC 2013). The rigidity ranges were 1 GV–1.8 TV for protons and 2 GV–3.2 TV for helium (figures 5 and 6). In both cases, the experiment observed gradual changes of the fluxes owing to solar modulation, as well as drastic changes after large solar flares. Otherwise, the spectra are fairly smooth and do not exhibit the breaks or fine structures reported by other recent experiments.

Boron-to-carbon ratio

The ratio of the boron to carbon fluxes is particularly interesting because it carries important information about the production and propagation of cosmic rays in the Galaxy. Boron nuclei are produced mainly by spallation of heavier primary elements on the interstellar medium, whereas primary cosmic rays – such as carbon and oxygen – are predominantly produced at the source. Precision measurements of the boron-to-carbon ratio therefore provide important input for determining the characteristics of the cosmic-ray sources by deconvoluting the propagation effects from the measured data. The capability of AMS-02 to make multiple independent determinations of the electric charges of the cosmic rays allows a separation of carbon from boron with a contamination of less than 10⁻⁴. Figure 7 presents a preliminary measurement of the boron-to-carbon ratio in the kinetic-energy interval 0.5–670 GeV/n (Oliva ICRC 2013).

For the future

After nearly 28 months of successful operation, the results presented at ICRC 2013 already give a taste of the scientific potential of the AMS-02 experiment. In the near future, the measurements sketched in this article will extend the energy or rigidity coverage and the study of systematic uncertainties will be finalized. The experiment will measure the fluxes of more cosmic nuclei with unprecedented precision to constrain further the size and energy dependence of the underlying background processes.

High on the priority list for AMS-02 is the measurement of the antiproton flux and the antiproton-to-proton ratio – a relevant and most sensitive quantity for disentangling, among the possible sources, those that induce the observed increase of the positron flux with energy. With the growing data sample and a deeper assessment of the systematic uncertainties, the searches for cosmic antinuclei will become extremely important, as will the search for unexpected new signatures.

By the end of the decade AMS-02 will have collected more than 150 × 10⁹ cosmic-ray events. In view of what has been achieved so far, it is reasonable to be fairly confident that this massive amount of new and precise data will contribute significantly to a better understanding of the ever exciting and lively field of cosmic rays.

ATLAS undergoes some delicate gymnastics

The LHC’s Long Shutdown 1 (LS1) is an opportunity that the ATLAS collaboration could not miss to improve the performance of its huge and complex detector. Planning began almost three years ago to be ready for the break and to produce a precise schedule for the multitude of activities that are needed at Point 1 – where ATLAS is located on the LHC. Now, a year after the famous announcement of the discovery of a “Higgs-like boson” on 4 July 2012 and only six months after the start of the shutdown, more than 800 different tasks have already been accomplished in more than 250 work packages. But what is ATLAS doing and why this hectic schedule? The list of activities is long, so only a few examples will be highlighted here.

The inner detector

One of the biggest interventions concerns the insertion of a fourth and innermost layer of the pixel detector – the IBL. The ATLAS pixel detector is the largest pixel-based system at the LHC. With about 80 million pixels, until now it has covered a radius from 12 cm down to 5 cm from the interaction point. At its conception, the collaboration already thought that it could be updated after a few years of operation. An additional layer at a radius of about 3 cm would allow for performance consolidation, in view of the effects of radiation damage to the original innermost layer at 5 cm (the b-layer). The decision to turn this idea into reality was taken in 2008, with the aim of installation around 2016. However, fast progress in preparing the detector and moving the long shutdown to the end of 2012 boosted the idea and the installation goal was moved forward by two years.

Making space

To make life more challenging, the collaboration decided to build the IBL using not only well established planar sensor technology but also novel 3D sensors. The resulting highly innovative detector is a tiny cylinder that is about 3 cm in radius and about 70 cm long but it will provide the ATLAS experiment with another 12 million detection channels. Despite its small dimensions, the entire assembly – including the necessary services – will need an installation tool that is nearly 10 m long. This has led to the so-called “big opening” of the ATLAS detector and the need to lift one of the small muon wheels to the surface.

The “big opening” of ATLAS is a special configuration where at one end of the detector one of the big muon wheels is moved as far as possible towards the wall of the cavern, the 400-tonne endcap toroid is moved laterally towards the surrounding path structure, the small muon wheel is moved as far as the already opened big wheel and then the endcap calorimeter is moved out by about 3 m. But that is not the end of the story. To make more space, the small muon wheel must be lifted to the surface to allow the endcap calorimeter to be moved further out against the big wheels.

This opening up – already foreseen for the installation of the IBL – became more worthwhile when the collaboration decided to use LS1 to repair the pixel detector. During the past three years of operation, the number of pixel modules that have stopped being operational has risen continuously from the original 10–15 up to 88, at a worryingly increasing rate. Back in 2010, the first concerns triggered a closer look at the module failures and it was clear that in most cases the modules were in a good state but that something in the services had failed. This first look was later backed by substantial statistics, as the number of failed modules had reached 88 by mid-2012.

In 2011, the ATLAS pixel community decided to prepare new services for the detector – code-named nSQP for “new service quarter panels”. In January 2013, the collaboration decided to deploy the nSQP not only to fix the failures of the pixel modules and to enhance the future read-out capabilities for two of the three layers but also to ease the task of inserting the IBL into the pixel detector. This decision implied having to extract the pixel detector and take it to the clean-room building on the surface at Point 1 to execute the necessary work. The “big opening” therefore became mandatory.

"Big opening" of ATLAS

The extraction of the pixel detector was an extremely delicate operation but it was performed perfectly and a week ahead of schedule. Work on both the original pixels and the IBL is now in full swing and preparations are under way to insert the upgraded four-layer pixel detector back into ATLAS. The pixel detector will then contain 92 million channels – some 90% of the total number of channels in ATLAS.

But that is not the end of the story for the ATLAS inner detector. Gas leaks appeared last year during operation of the transition radiation tracker (TRT) detector. Profiting from the opening of the inner detector plates to access the pixel detector, a dedicated intervention was performed to cure as many leaks as possible using techniques that are usually deployed in surgery.

Further improvements

Another important improvement for the silicon detectors concerns the cooling. The evaporative cooling system, based on a complex compressor plant, has been satisfactory, even though it has caused a variety of problems and required several interventions. The system allowed operating temperatures to be set to –20 °C, with the possibility of going down to –30 °C, although the lower value has not been used so far because radiation damage to the detector is still in its infancy. However, the compressor plant needed continual attention and maintenance. The decision was therefore taken to build a second plant based on the thermosyphon concept, in which the required pressure is obtained without a compressor, using instead the gravity advantage offered by the 90-m-deep ATLAS cavern. The new plant has been built and is now being commissioned, while the original plant has been refurbished and will serve as a redundant (back-up) system. In addition, the IBL cooling is based on CO2 cooling technology and a new redundant plant is being built to be ready for the IBL operations.

Both the semiconductor tracker and the pixel detector are also being consolidated. Improvements are being made to the back-end read-out electronics to cope with the higher luminosities that will go beyond twice the LHC design luminosity.

Lifting the small muon wheel to the surface – an operation that had never been done before – was a success. The operation was not without difficulties because of the limited amount of space for manoeuvring the 140-tonne object to avoid collisions with other detectors, crates and the walls of the cavern and access shaft. Nevertheless, it was executed perfectly thanks to highly efficient preparation and the skill of the crane drivers and ATLAS engineers, with several dry runs done on the surface. Not to miss the opportunity, the few problematic cathode-strip chambers on the small wheel that was lifted to the surface will be repaired. A specialized tool is being designed and fabricated to perform this operation in the small space that is available between the lifting frame and the detector.

Many other tasks are foreseen for the muon spectrometer. The installation of a final layer of chambers – the endcap extensions – which was staged in 2003 for financial reasons has already been completed. These chambers were installed on one side of the detector during previous mid-winter shutdowns. The installation on the other side has now been completed during the first three months of LS1. In parallel, a big campaign to check for and repair leaks has started on the monitored drift tubes and resistive-plate chambers, with good results so far. As soon as access allows, a few problematic thin-gap chambers on the big wheels will be exchanged. Construction of some 30 new chambers has been under way for a few months and their installation will take place during the coming winter.

At the same time, the ATLAS collaboration is improving the calorimeters. New low-voltage power supplies are being installed for both the liquid-argon and tile calorimeters to give a better performance at higher luminosities and to correct issues that have been encountered during the past three years. In addition, a broad campaign of consolidation of the read-out electronics for the tile calorimeter is ongoing because it is many years since it was constructed. Designing, prototyping, constructing and testing new devices like these has kept the ATLAS calorimeter community busy during the past four years. The results that have been achieved are impressive and life for the calorimeter teams during operation will become much better with these new devices.

Improvements are also under way for the ATLAS forward detectors. The LUCID luminosity monitor is being rebuilt in a simplified way to make it more robust for operations at higher luminosity. All four Roman-pot stations for the absolute luminosity monitor, ALFA, located at 240 m from the centre of ATLAS in the LHC tunnel, will soon be in laboratories on the surface. There they will undergo modifications to implement wake-field suppression measures to mitigate the beam-induced heating suffered during operations in 2012. There are other plans for the beam-conditions monitor, the diamond-beam monitor and the zero-degree calorimeters. The activities are non-stop everywhere.

The infrastructure

All of the above might seem to be an enormous programme but it does not touch on the majority of the effort. The consolidation work spans from the improvements to the evaporative cooling plants that have already been mentioned to all aspects of the electrical infrastructure and more. Here are a few examples from a long list.

Installation of a new uninterruptible power supply is ongoing at Point 1, together with replacement of the existing one. This is to avoid power glitches, which have affected the operation of the ATLAS detector on some occasions. Indeed, the whole electrical installation is being refreshed.

The cryogenic infrastructure is being consolidated and improved to allow completely separate operation of the ATLAS solenoid and toroid magnets. Redundancy is implemented everywhere in the magnet systems to limit downtime. Such downtime has, so far, been small enough to be unnoticeable in ATLAS data-taking but it could create problems in future.

All of the beam pipes will be replaced with new ones. In the inner detector, a new beryllium pipe with a smaller diameter to allow space for the IBL has been constructed and already installed in the IBL support structure. All of the other stainless-steel pipes will be replaced with aluminium ones to reduce the background level everywhere in ATLAS and minimize the adverse effects of activation.

A back-up for the ATLAS cooling towers is being created via a connection to existing cooling towers for the Super Proton Synchrotron. This will allow ATLAS to operate at reduced power, even during maintenance of the main cooling towers. The cooling infrastructure for the counting rooms is also undergoing complete improvement with redundancy measures inserted everywhere. All of these tasks are the result of a robust collaboration between ATLAS and all CERN departments.

LS1 is not, then, a period of rest for the ATLAS collaboration. Many resources are being deployed to consolidate and improve all possible aspects of the detector, with the aim of minimizing downtime and its impact on data-taking efficiency. Additional detectors are being installed to improve ATLAS’s capabilities. Only a few of these have been mentioned here. Others include, for example, even more muon chambers, which are being installed to fill any possible instrumental cracks in the detector.

All of this effort requires the co-ordination and careful planning of a complicated gymnastics of heavy elements in the cavern. ATLAS will be a better detector at the restart of LHC operations, ready to work at higher energies and luminosities for the long period until LS2 – and then the gymnastics will begin again.

Quarks, gluons and sea in Marseilles

The regular “DIS” workshops on Deep-Inelastic Scattering and related subjects usually bring together a unique mix of international communities and cover a spectrum of topics ranging across proton structure, strong interactions and physics at the energy frontier. DIS2013 – the 21st workshop – which took place in the Palais des Congrès in Marseilles earlier this year was no exception. Appropriately, this large scientific event formed part of a rich cultural programme in the city that was associated with its status as European Capital of Culture Marseilles-Provence 2013.

A significant part of the programme was devoted to recent and exciting experimental results, which together with theoretical advances and the outlook for the future created a vibrant scientific atmosphere. The workshop began with a full day of plenary reports on hot topics, followed by two and a half days of parallel sessions that were organized around seven themes: structure functions and parton densities; small-x, diffraction and vector mesons; electroweak physics and beyond the Standard Model; QCD and hadronic final states; heavy flavours; spin physics; future experiments.

Higgs and more

The meeting provided the opportunity to discuss in depth the various connections between strong interactions, proton structure and recent experimental results at the LHC. In particular, the discovery of a Higgs boson and the subsequent studies of its properties attracted a great deal of interest, including from the perspective of the connections with proton structure. A tremendous effort was made in the past year to provide an improved theory, study the constraints from the wealth of new experimental data and adopt a more robust methodology in analyses that determine the proton’s parton distribution functions (PDFs). The PDFs are an essential ingredient for most LHC analyses, from characterization of the Higgs boson to self-consistency tests of the Standard Model. The first “safari” into the new territory at the LHC and the impressive final results with the full data set from Fermilab’s Tevatron have revealed no new phenomena so far. However, it might well be that the search for new physics – which will be re-launched at higher energies during the next LHC run – will be affected by the precision with which the structure of the proton is known.

The most recent experimental results from continuing analysis of data from HERA – the electron–proton collider that ran at DESY during 1992–2007 – were presented. In particular, both the H1 and ZEUS collaborations have now published measurements at high photon virtualities (Q²). While the refined data from HERA form its immensely valuable legacy, the transfer of the baton to the LHC has already begun. A large number of recent results – in particular from the LHC – provide further constraints on the PDFs, such as in the case of final states with weak bosons or top quarks, which are already in the regime of precision measurements with about 1% accuracy. Stimulated by an active Standard Model community with many groups working on the determination of PDFs (such as ABM, MSTW and CTEQ) and by the release of common analysis tools such as HERAFitter, the new measurements from the LHC are rapidly interpreted in terms of valuable PDF constraints, as figure 1 shows. More exclusive final states have the potential to complement inclusive measurements: for instance, measurements of W production in association with a charm quark could shed new light on the strangeness content of the proton. A huge step in the precision of PDF determination – which might be essential to study new physics – complemented by a standalone programme at the energy frontier, would be possible at the proposed Large Hadron Electron Collider (LHeC), which could provide a new opportunity to study Higgs-boson couplings.

(Figure 1: gluon density in the proton.)

The understanding of proton structure would not be complete without understanding its spin. Polarized experiments – including fixed-target DIS experiments at Jefferson Lab and CERN, as well as the polarized proton–proton programme at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) – continue to provide new data and to open new fields. The goal is to understand the parton contributions to the proton’s spin, long considered a “puzzle” because of the unexpected way that it is shared between the quarks – which carry only about a quarter – the gluons and orbital angular momentum. Recent, more precise measurements of W-boson production in polarized proton–proton collisions at RHIC have the potential to constrain further the valence-quark contributions, while semi-inclusive DIS measurements at fixed-target experiments (for instance, using final states with charm mesons) continue to reduce the uncertainty on the gluon contribution. The goal of the spin community – manifest in the project for a polarized Electron-Ion Collider (EIC) – is to produce a 3D picture of the proton with high precision using a large number of observables across an extended phase space.

Impressive precision

The current scientific landscape includes many experiments that are based on hadronic interactions, with the LHC taking these studies to the highest energies. These are reaching impressive, increasing precision across a large phase space, not only in final states with jets but also in more exclusive configurations including photons, weak bosons or tagged heavy flavours. The measurements performed in diffraction – by now a classical laboratory for QCD tests – are also available from the LHC in inclusive and semi-inclusive final states and reinforce the global understanding of the strong interactions. An interesting case concerns double-parton interactions, where the final state originates from not one but two parton–parton collisions – a contribution that in some cases can pollute final-state configurations (including boson or Higgs production). Although the measurements are not yet precise enough to identify kinematical dependencies or parton–parton correlations, they are beginning to unveil this contribution, which may prove in future to be related to profound aspects of the proton structure, such as the generalized parton distributions and the proton spin.

A global picture and complete understanding of the strong force can only emerge by using all of the available configurations and energies. In particular, the measurements of the hadronic final states performed in electron–proton collisions at HERA and the refined measurements at the Tevatron provide an essential testing ground for the increasingly precise calculations. Figure 2 illustrates this statement, presenting measurements of the strong coupling from collider experiments – including the most recent measurements from the LHC.

The high-energy heavy-ion collisions at both RHIC and the LHC have been a constant source of new results and paradigms during the past few years and this proved equally true for the DIS2013 conference. Probes such as mesons or jets “disappear” when high densities of the collision fireball are reached. The set of such probes has been consolidated at the LHC, where the experimental capabilities and large phase space allow further measurements involving strangeness, charm or inclusive particle production. In addition, the recently achieved proton–lead collisions provide new testing grounds for the collective behaviour of the quarks and gluons at high densities.

A total of 300 talks were given covering the seven themes of the workshop, distributed across two and a half days of parallel sessions, a few of which combined different themes. As tradition requires at DIS workshops, the presentations were followed by intense debates on classic and new issues, including a satellite workshop on the HERAFitter project. On the last day, the working group convenors summarized the highlights of the rich scientific programme of the parallel sessions.

The conference ended with a session on future experiments, where together with upgrades of the LHC experiments and other interesting projects related to new capabilities for QCD-related studies (AFTER, CHIC, COMPASS, NA62, nuSTORM, etc.), the two projects for new colliders EIC and LHeC were discussed. Rolf Heuer, CERN’s director-general, presented the recently updated European Strategy for Particle Physics. The programme at the energy frontier with the LHC will be followed for at least 20 years and studies for further projects are ongoing. The conference ended with an inspiring outlook talk by Chris Quigg of Fermilab, with hints of a possible QCD-like walk on the new physics frontier. In the evening, Heuer gave a talk for the general public to an audience of more than 200 people on recent discoveries at the LHC.

In addition to the workshop sessions, participants enjoyed a dinner in the Pharo castle – with a splendid view of the old and new harbours of Marseilles – where they found out why the French national anthem is called La Marseillaise. There was also half a day of free time for most of the participants – except maybe for convenors who had to prepare their summary reports – with two excursions organized at Cassis and in the historic centre of Marseilles.

In summary, the DIS2013 workshop once again allowed an insightful journey around the fundamental links between QCD, proton structure and physics at the energy frontier – an interface that will continue to grow and create new research ideas and projects in the near future. The next – 22nd – DIS workshop will be held in Warsaw in April 2014.

Conference time in Stockholm

When the Swedish warship Vasa capsized in Stockholm harbour on her maiden voyage in 1628, many hearts must have also sunk metaphorically, as they did at CERN in September 2008 when the LHC’s start-up came to an abrupt end. Now, the raised and preserved Vasa is the pride of Stockholm and the LHC – following a successful restart in 2009 – is leading research in particle physics at the high-energy frontier. This year the two icons crossed paths when the International Europhysics Conference on High-Energy Physics, EPS-HEP 2013, took place in Stockholm on 18–24 July, hosted by the KTH (Royal Institute of Technology) and Stockholm University. Latest results from the LHC experiments featured in many of the parallel, plenary and poster sessions – and the 750 or so participants had the opportunity to see the Vasa for themselves at the conference dinner. There was, of course, much more and this report can only touch on some of the highlights.

Coming a year after the first announcement of the discovery of a “Higgs-like” boson on 4 July 2012, the conference was the perfect occasion for a birthday celebration for the new particle. Not only has its identity been more firmly established in the intervening time – it almost certainly is a Higgs boson – but many of its attributes have been measured by the ATLAS and CMS experiments at the LHC, as well as by the CDF and DØ collaborations using data collected at Fermilab’s Tevatron. At 125.5 GeV/c², its mass is known to within 0.5% precision – better than for any quark – and several tests by ATLAS and CMS show that its spin-parity, JP, is compatible with the 0⁺ expected for a Standard Model Higgs boson. These results exclude other models to greater than 95% confidence level (CL), while a new result from DØ rejects a graviton-like 2⁺ at >99.2% CL.

The new boson’s couplings provide a crucial test of whether it is the particle responsible for electroweak-symmetry breaking in the Standard Model. A useful parameterization for this test is the ratio of the observed signal strength to the Standard Model prediction, μ = (σ × BR)/(σ × BR)SM, where σ is the cross-section and BR the branching fraction. The results for the five major decay channels measured so far (γγ, WW*, ZZ*, bb and ττ) are consistent with the expectations for a Standard Model Higgs boson, i.e. μ = 1, to 15% accuracy. Although it is too light to decay to the heaviest quark – top, t – and its antiquark, the new boson can in principle be produced together with a tt pair, so yielding a sixth coupling. While this is a challenging channel, new results from CMS and ATLAS are starting to approach the level of sensitivity for the Standard Model Higgs boson, which bodes well for its future use.
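
A rough feel for how such per-channel signal strengths are combined can be obtained with a simple inverse-variance-weighted average, as sketched below; the channel values and uncertainties are placeholders, not the published ATLAS or CMS numbers, and the real combinations use full likelihoods with correlated systematics.

```python
# Toy combination of per-channel signal strengths mu = (sigma x BR)/(sigma x BR)_SM.
# Values and uncertainties below are invented placeholders.
channels = {
    "gamma gamma": (1.1, 0.3),
    "WW*":         (0.9, 0.3),
    "ZZ*":         (1.0, 0.3),
    "bb":          (1.1, 0.5),
    "tau tau":     (0.9, 0.4),
}

weights = {name: 1.0 / err ** 2 for name, (_, err) in channels.items()}
total_weight = sum(weights.values())
mu_combined = sum(w * channels[name][0] for name, w in weights.items()) / total_weight
mu_error = total_weight ** -0.5

print(f"combined mu ≈ {mu_combined:.2f} ± {mu_error:.2f}")
```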

The mass of the top quark is in fact so large – 173 GeV/c² – that it decays before forming hadrons, making it possible to study the “bare” quark. At the conference, the CMS collaboration announced the first observation, at 6.0σ, of the associated production of a top quark and a W boson, in line with the Standard Model’s prediction. Both ATLAS and CMS had previously found evidence for this process but not at this significance. The DØ collaboration presented its latest results on the lepton-based forward–backward asymmetry in tt production, which had previously indicated some deviation from theory. The new measurement, based on the full data set of 9.7 fb⁻¹ of proton–antiproton collisions at the Tevatron, gives an asymmetry of (4.7 ± 2.3 (stat.) +1.1/−1.4 (syst.))%, which is consistent with predictions from the Standard Model at next-to-leading order.

The study of B hadrons, which contain the next-heaviest quark, b, is one of the aspects of flavour physics that could yield hints of new physics. One of the highlights of the conference was the announcement of the observation of the rare decay mode B0s → μ⁺μ⁻ by both the LHCb and CMS collaborations, at 4.0σ and 4.3σ, respectively. While there had been hopes that this decay channel might open a window on new physics, the long-awaited results align with the predictions of the Standard Model. The BaBar and Belle collaborations also reported on their precise measurements of the decay B → D(*)τντ at SLAC and KEK, respectively, which together disagree with the Standard Model at the 4.3σ level. The results rule out one model that adds a second Higgs doublet to the Standard Model (2HDM type II) but are consistent with a different variant, 2HDM type III – a reminder that the highest energies are not the only place where new physics could emerge.

Precision, precision

Precise measurements require precise predictions for comparison and here theoretical physics has seen a revolution in calculating next-to-leading order (NLO) effects, involving a single loop in the related Feynman diagrams. Rapid progress during the past few years has meant that the experimentalists’ wish-list for QCD calculations at NLO relevant to the LHC is now fulfilled, including such high-multiplicity final states as W + 4 jets and even W + 5 jets. Techniques for calculating loops automatically should in future provide a “do-it-yourself” approach for experimentalists. The new frontier for the theorists, meanwhile, is at next-to-NLO (NNLO), where some measurements – such as pp → tt – are already at an accuracy of a few per cent and some processes – such as pp → γγ – could have large corrections, up to 40–50%. So a new wish-list is forming, which will keep theorists busy while the automatic code takes over at NLO.

With a measurement of the mass for the Higgs boson, small corrections to the theoretical predictions for many measurable quantities – such as the ratio between the masses of the W and the top quark – can now be calculated more precisely. The goal is to see if the Standard Model gives a consistent and coherent picture when everything is put together. The GFitter collaboration of theorists and experimentalists presented its latest global Standard Model fit to electroweak measurements, which includes the legacy both from the experiments at CERN’s Large Electron–Positron Collider and from the SLAC Large Detector, together with the most recent theoretical calculations. The results for 21 parameters show little tension between experiment and the Standard Model, with no discrepancy exceeding 2.5σ, the largest being in the forward–backward asymmetry for bottom quarks.

There is more to research at the LHC than the deep and persistent probing of the Standard Model. The ALICE, LHCb, CMS and ATLAS collaborations presented new results from high-energy lead–lead and proton–lead collisions at the LHC. The most intriguing results come from the analysis of proton–lead collisions and reveal features that previously were seen only in lead–lead collisions, where the hot, dense matter that is created appears to behave like a perfect liquid. The new results could indicate that similar effects occur in proton–lead collisions, even though far fewer protons and neutrons are involved. Other results from ALICE included the observation of higher yields of J/ψ particles in heavy-ion collisions at the LHC than at Brookhaven’s Relativistic Heavy-Ion Collider, even though the densities are much higher at the LHC. The measurements in proton–lead collisions should cast light on this finding by allowing the effects of cold nuclear matter to be disentangled from those of the hot, dense medium.

Supersymmetry and dark matter

The energy frontier of the LHC has long promised the prospect of physics beyond the Standard Model, in particular through evidence for a new symmetry – supersymmetry. The ATLAS and CMS collaborations presented their extensive searches for supersymmetric particles in which they have explored a vast range of masses and other parameters but found nothing. However, assumptions involved in the work so far mean that there are regions of parameter space that remain unexplored. So while supersymmetry may be “under siege”, its survival is certainly still possible. At the same time, creative searches for evidence of extra dimensions and many kinds of “exotics” – such as excited quarks and leptons – have likewise produced no signs of anything new.

Aula Magna lecture theatre

However, evidence that there is almost certainly some kind of new particle comes from the existence of dark, non-hadronic matter in the universe. Latest results from the Planck mission show that this should make up some 26.8% of the universe – about 4% more than previously thought. This drives the search for weakly interacting massive particles (WIMPs) that could constitute dark matter, which is becoming a worldwide effort. Indeed, although the Higgs boson may have been top of the bill for hadron-collider physics, the number of papers with dark matter in the title is now growing faster than the number on the Higgs boson.

While experiments at the LHC look for the production of new kinds of particles with the correct properties to make dark matter, “direct” searches seek evidence of interactions of dark-matter particles in the local galaxy as they pass through highly sensitive detectors on Earth. Such experiments are showing an impressive evolution with time, increasing in sensitivity by about a factor of 10 every two years and now reaching cross-sections down to 10–8 pb. Among the many results presented, an analysis of 140.2 kg-days of data taken with the silicon detectors of the CDMS II experiment revealed three WIMP-candidate events with an expected background of 0.7. A likelihood analysis gives a 0.19% probability for the known-background-only hypothesis.
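As a naive cross-check of these numbers – simple Poisson counting, not the CDMS II likelihood analysis, which also uses the measured event energies and is therefore more discriminating – the chance of three or more events from a background of 0.7 can be estimated as follows:

```python
# Back-of-the-envelope check: probability of observing 3 or more events from a
# Poisson background with mean 0.7. This is counting only; the quoted 0.19%
# comes from a full likelihood that also uses the candidate events' energies.
from math import exp, factorial

mu_bkg, n_obs = 0.7, 3
p_ge_3 = 1.0 - sum(exp(-mu_bkg) * mu_bkg**k / factorial(k) for k in range(n_obs))
print(f"P(N >= 3 | b = 0.7) = {p_ge_3:.3f}")  # ~0.034, i.e. a few per cent
```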

Neutrinos are the one type of known particle that provide a view outside the Standard Model

“Indirect” searches, by contrast, involve looking for signals from dark-matter annihilation in the cosmos. In 2012, an analysis of publicly available data from 43 months of observations with the Fermi Large Area Telescope (LAT) indicated a puzzling signal at 130 GeV, with the interesting possibility that these γ rays could originate from the annihilation of dark-matter particles. A new analysis by the Fermi LAT team of four years’ worth of data gives preliminary indications of an effect with a local significance of 3.35σ, but the global significance is less than 2σ. The HESS II experiment is currently accumulating data and might soon be able to cross-check these results.

With their small but nonzero masses and consequent oscillations from one flavour to another, neutrinos are the one type of known particle that provide a view outside the Standard Model. At the conference, the T2K collaboration announced the first definitive observation, at 7.5σ, of the transition νμ → νe in the νμ beam that travels 295 km from the Japan Proton Accelerator Research Complex (J-PARC) to the Super-Kamiokande detector. Meanwhile, the Double Chooz experiment, which studies νe produced in a nuclear reactor, has refined its measurement of θ13, one of the parameters characterizing neutrino oscillations, by using two independent methods that allow much better control of the backgrounds. The GERDA collaboration uses yet another means to investigate whether neutrinos are their own antiparticles, by searching for the neutrinoless double-beta decay of the isotope 76Ge in a detector at the INFN Gran Sasso National Laboratory. The experiment has completed its first phase and finds no sign of this process, setting the world’s best lower limit on the half-life at 2.1 × 1025 years.

On the other side of the world, deep in the ice beneath the South Pole, the IceCube collaboration has recently observed oscillations of neutrinos produced in the atmosphere. More exciting, arguably, is the detection of 28 extremely energetic neutrinos – including two with energies above 1 PeV – but the evidence is not yet sufficient to claim observation of neutrinos of extraterrestrial origin.

Towards the future

In addition to the sessions on the latest results, others looked to the continuing health of the field with presentations of studies on novel ideas for future particle accelerators and detection techniques. These topics also featured in the special session for the European Committee for Future Accelerators, which looked at future developments in the context of the update of the European Strategy for Particle Physics. A range of experiments at particle accelerators currently takes place on two frontiers – high energy and high intensity. Progress in probing physics that lies at the limit of these experiments will come both from upgrades of existing machines and at future facilities. These will rely on new ideas being investigated in current accelerator R&D and will also require novel particle detectors that can exploit the higher energies and intensities.

Paris Sphicas and Peter Higgs

For example, two proposals for new neutrino facilities would allow deeper studies of neutrinos – including the possibility of CP violation, which could cast light on the dominance of matter over antimatter in the universe. The Long-Baseline Neutrino Experiment (LBNE) would create a beam of high-energy νμ at Fermilab and detect the appearance of νe with a massive detector that is located 1300 km away at the Sanford Underground Research Facility. A test set-up, LBNE10, has received funding approval. A complementary approach providing low-energy neutrinos is proposed for the European Spallation Source, which is currently under construction in Lund. This will be a powerful source of neutrons that could also be used to generate the world’s most intense neutrino beam.

The LHC was first discussed in the 1980s, more than 25 years before the machine produced its first collisions. Looking to the long-term future, other accelerators are now on the drawing board. One possible option is the International Linear Collider, currently being evaluated for construction in Japan. Another option is to create a large circular electron–positron collider, 80–100 km in circumference, to produce Higgs bosons for precision studies.

The main physics highlights of the conference were reflected in the 2013 EPS-HEP prizes, awarded in the traditional manner at the start of the plenary sessions. The EPS-HEP prize honoured both ATLAS and CMS – for the discovery of the new boson – and three of their pioneering leaders (Michel Della Negra, Peter Jenni and Tejinder Virdee). François Englert and Peter Higgs were there to present this major prize and took part later in a press conference together with the prize winners. Following the ceremony, Higgs gave a talk, “Ancestry of a New Boson”, in which he recounted what led to his paper of 1964 and also cast light on why his name became attached to the now-famous particle. Other prizes acknowledged the measurement of the all-flavour neutrino flux from the Sun, as well as the observation of the rare decay B0s → μμ, work in 4D field theories and outstanding contributions to outreach. In a later session, a prize sponsored by Elsevier was awarded for the best four posters out of the 130 that were presented by young researchers in the dedicated poster sessions.

To close the conference, Nobel Laureate Gerardus ’t Hooft presented his outlook for the field. This followed the conference summary by Sergio Bertolucci, CERN’s director for research and computing, in which he also thanked the organizers for the “beautiful venue, the fantastic weather and the perfect organization” and acknowledged the excellent presentations from the younger members of the community. The baton now passes to the organizing committees of the next EPS-HEP conference, which will take place in Vienna on 22–29 July 2015.

• This article has touched on only some of the physics highlights of the conference. For all of the talks, see http://eps-hep2013.eu/.

Neutral currents: A perfect experimental discovery

In a seminar at CERN on 19 July 1973, Paul Musset of the Gargamelle collaboration presented the first direct evidence of weak neutral currents. They had discovered events in which a neutrino scattered from a hadron (proton or neutron) without turning into a muon (figure 1) – the signature of a hadronic weak neutral current. In addition they had one leptonic event characterized by a single electron track (figure 2). A month later, Gerald Myatt presented the results on a global stage at the 6th International Symposium on Electron and Photon Interactions at High Energies in Bonn. By then, two papers detailing the discovery were in the offices of Physics Letters and were published together on 3 September. A few days later, the results were presented in Aix-en-Provence at the International Europhysics Conference on High-Energy Particle Physics, where they were aired as part of a large programme of events for the public.

Earlier in May, Luciano Maiani was with Nicola Cabibbo at their university in Rome when Ettore Fiorini visited, bringing news of what the Gargamelle collaboration had found in photographs of neutrino interactions in the huge heavy-liquid bubble chamber at CERN. The main problem facing the collaboration was to be certain that the events were from neutrinos and not from neutrons that were liberated in interactions in the material surrounding the bubble chamber (CERN Courier September 2009 p25). Fiorini described how the researchers had overcome this in their analysis, one of the important factors being the size of Gargamelle. “He explained that the secret was the volume,” Maiani recalls, “which could kill the neutron background.” Maiani at least was convinced that the collaboration had observed neutral currents. It was a turning point along the road to today’s Standard Model of particles and their interactions, he says.

Weak neutral currents, which involve no exchange of electric charge between the particles concerned, are the manifestation of the exchange of the neutral vector boson, Z, which mediates the weak interaction together with the charged bosons, W±. The discovery of these neutral currents in 1973 was crucial experimental support for the unification of electromagnetic and weak interactions in electroweak theory. This theory – for which Sheldon Glashow, Abdus Salam and Steven Weinberg received the Nobel Prize in Physics in 1979 – became one of the pillars of the Standard Model.

For Maiani, the theoretical steps began in 1962 with his colleague Cabibbo’s work that restored universality in weak interactions. The problem that Cabibbo had resolved concerned an observed difference in the strength of weak decays of strange particles compared with muons and neutrons. His solution, formulated before the proposal of quarks, was based on a weak current parameterized by a single angle – later known as the Cabibbo angle.

During the next 10 years, not only did the concept of quarks as fundamental particles emerge but other elements of today’s Standard Model developed, too. In 1970, for example, Maiani, Glashow and Iliopoulos put forward a model that involved a fourth quark, charm, to deal correctly with divergences in weak-interaction theory. Their idea was based on a simple analogy between the weak hadronic and leptonic currents. As their paper stated, the model featured “a remarkable symmetry between leptons and quarks” – and this brought the neutral currents of the electroweak unification of Weinberg and Salam into play in the quark sector. One important implication was a large suppression of strangeness-changing neutral currents through what became known as the GIM mechanism.

Maiani now says that at this point no one was talking in terms of a standard theory, even though many of the elements were there – charm, intermediate vector bosons and the Brout-Englert-Higgs mechanism for electroweak-symmetry breaking. However, perceptions began to change around 1972 with the work of Gerardus ‘t Hooft and Martinus Veltman, who showed that electroweak theory could be self-consistent through renormalization (CERN Courier December 2009 p30). After this leap forward in theory, the observations in Gargamelle provided a similar breakthrough on the experimental front. “At the start of the decade, people did not generally believe in a standard theory even though theory had done everything. The neutral-current signals changed that,” Maiani recalls. “From then on, particle physics had to test the standard theory.”

A cascade of discoveries followed the observation of neutral currents, with the J/ψ in 1974 and open charm in 1976 through to the W and Z vector bosons in 1983. The discovery of a Higgs boson in 2012 at CERN finally closed the cycle, providing observation of the key player in the Brout-Englert-Higgs mechanism, which gives mass to the W and Z bosons of the weak interactions, while leaving the photon of electromagnetism massless. In a happy symmetry, the latest results on this first fundamental scalar boson were recent highlights at the 2013 Lepton–Photon Conference and at the International Europhysics Conference on High-Energy Particle Physics, EPS-HEP 2013 – the direct descendants of the meetings in Bonn and Aix-en-Provence where the discovery of neutral currents was first aired 40 years ago.

“We are now stressing the discovery of a Higgs boson,” says Maiani, “but in 1973 the mystery was: will it all work?” Looking back to the first observation of neutral currents, he describes it as “a perfect experimental discovery”. “It was a beautiful experimental development, totally independent of theory,” he continues. “It arrived at exactly the right moment, when people realized that there was a respectable theory.” That summer 40 years ago also saw the emergence of quantum chromodynamics (CERN Courier January/February 2013 p24), which set up the theory for strong interactions. The Standard Model had arrived.•

For more detailed accounts by key members of the Gargamelle collaboration, see the articles by Don Perkins (in the commemorative issue for Willibald Jentschke 2003 p15) and Dieter Haidt (CERN Courier October 2004 p21).

Physicists meet the public at Aix

During the week of the Aix Conference more attention than usual was given to the need for communication with non-physicists. A plenary session was held on “Popularizing High Energy Physics” and on several evenings “La Physique dans la Rue” events were organized in the town centre.

One evening saw a more classical presentation of information with talks by Louis Leprince-Ringuet (on the beauties of pure research), Bernard Gregory (on the role of fundamental science and its pioneering role in international collaboration) and Valentine Telegdi (on the intricate subject of neutral currents). More than 600 people heard these talks, no doubt attracted particularly by the well known television personality of Leprince-Ringuet.

CERN Courier October 1973 pp297–298 (extract).

AIDA boosts detector development

Conceptual structure of a pixel detector

Research in high-energy physics at particle accelerators requires highly complex detectors to observe the particles and study their behaviour. In the EU-supported project on Advanced European Infrastructure for Detectors at Accelerators (AIDA), more than 80 institutes from 23 European countries have joined forces to boost detector development for future particle accelerators in line with the European Strategy for Particle Physics. These include the planned upgrade of the LHC, as well as new linear colliders and facilities for neutrino and flavour physics. To fulfil its aims, AIDA is divided into three main activities: networking, joint research and transnational access, all of which are progressing well two years after the project’s launch.

Networking

AIDA’s networking activities fall into three work packages (WPs): the development of common software tools (WP2); microelectronics and detector/electronics integration (WP3); and relations with industry (WP4).

Building on and extending existing software and tools, the WP2 network is creating a generic geometry toolkit for particle physics together with tools for detector-independent reconstruction and alignment. The design of the toolkit is shaped by the experience gained with detector-description systems implemented for the LHC experiments – in particular LHCb – as well as by lessons learnt from various implementations of geometry-description tools that have been developed for the linear-collider community. In this context, the Software Development for Experiments and LHCb Computing groups at CERN have been working together to develop a new generation of software for geometry modellers. These are used to describe the geometry and material composition of the detectors and as the basis for tracking particles through the various detector layers.

Enabling the community to access the most advanced semiconductor technologies is an important aim for AIDA

This work uses the geometrical models in Geant4 and ROOT to describe the experimental set-ups in simulation or reconstruction programmes and involves the implementation of geometrical solid primitives as building blocks for the description of complex detector arrangements. These include a large collection of 3D primitives, ranging from simple shapes such as boxes, tubes or cones to more complex ones, as well as their Boolean combinations. Some 70–80% of the effort spent on code maintenance in the geometry modeller is devoted to improving the implementation of these primitives. To reduce the effort required for support and maintenance and to converge on a unique solution based on high-quality code, the AIDA initiative has started a project to create a “unified-solids library” of the geometrical primitives.
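To illustrate the general idea of a small set of solid primitives combined through Boolean operations – this is a toy sketch, not the actual ROOT, Geant4 or unified-solids interfaces, whose class names and methods differ – a geometry description could be caricatured as follows:

```python
# Toy illustration of primitives plus Boolean combinations. All class and method
# names are invented for this sketch and do not correspond to any real library.
from dataclasses import dataclass

class Solid:
    def inside(self, x, y, z):
        raise NotImplementedError
    def __or__(self, other):   # Boolean union
        return BooleanSolid(self, other, lambda a, b: a or b)
    def __sub__(self, other):  # Boolean subtraction
        return BooleanSolid(self, other, lambda a, b: a and not b)

@dataclass
class Box(Solid):
    dx: float  # half-lengths
    dy: float
    dz: float
    def inside(self, x, y, z):
        return abs(x) <= self.dx and abs(y) <= self.dy and abs(z) <= self.dz

@dataclass
class Tube(Solid):
    rmin: float
    rmax: float
    dz: float
    def inside(self, x, y, z):
        r2 = x * x + y * y
        return self.rmin**2 <= r2 <= self.rmax**2 and abs(z) <= self.dz

class BooleanSolid(Solid):
    def __init__(self, a, b, op):
        self.a, self.b, self.op = a, b, op
    def inside(self, x, y, z):
        return self.op(self.a.inside(x, y, z), self.b.inside(x, y, z))

# A crude "barrel layer": a tube wall with a box-shaped service gap cut out.
layer = Tube(rmin=40.0, rmax=42.0, dz=300.0) - Box(dx=5.0, dy=50.0, dz=300.0)
print(layer.inside(41.0, 0.0, 0.0))  # True: in the tube wall, outside the gap
print(layer.inside(0.0, 41.0, 0.0))  # False: this point falls in the cut-out
```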

Enabling the community to access the most advanced semiconductor technologies – from nanoscale CMOS to innovative interconnection processes – is an important aim for AIDA. One new technique is 3D integration, which has been developed by the microelectronic industry to overcome limitations of high-frequency microprocessors and high-capacity memories. It involves fabricating devices based on two or more active layers that are bonded together, with vertical interconnections ensuring the communication between them and the external world. The WP3 networking activity is studying 3D integration to design novel tracking and vertexing detectors based on high-granularity pixel sensors.

Interesting results have already emerged from studies with the FE-Ix series of CMOS chips that the ATLAS collaboration has developed for the read-out of high-resistivity pixel sensors – 3D processing is currently in progress on FE-I4 chips. Now, some groups are evaluating the possibility of developing new electronic read-out chips in advanced CMOS technologies, such as 65 nm and of using these chips in a 3D process with high-density interconnections at the pixel level. Once the feasibility of such a device is demonstrated, physicists should be able to design a pixel detector with highly aggressive and intelligent architectures for sensing, analogue and digital processing, storage and data transmission (figure 1).

The development of detectors using breakthrough technologies calls for the involvement of hi-tech industry. The WP4 networking activity aims to increase industrial involvement in key detector developments in AIDA and to provide follow-up long after completion of the project. To this end, it has developed the concept of workshops tailored to maximize the attendees’ benefits while also strengthening relations with European industry, including small and medium-sized enterprises (SMEs). The approach is to organize “matching events” that address technologies of high relevance for detector systems and gather key experts from industry and academia with a view to establishing high-quality partnerships. WP4 is also developing a tool called “collaboration spotting”, which aims to monitor, through publications and patents, the industrial and academic organizations that are active in the technologies under focus at a workshop and to identify the key players. The tool was used successfully to invite European companies – including SMEs – to attend the workshop on advanced interconnections for chip packaging in future detectors that took place in April at the Laboratori Nazionali di Frascati of INFN.

Test beams and telescopes

The development, design and construction of detectors for particle-physics experiments are closely linked with the availability of test beams where prototypes can be validated under realistic conditions or production modules can undergo calibration. Through its transnational access and joint research activities, AIDA is not only supporting test-beam facilities and corresponding infrastructures at CERN, DESY and Frascati but is also extending them with new infrastructures. Various sub-tasks cover the detector activities for the LHC and linear collider, as well as a neutrino activity, where a new low-energy beam is being designed at CERN, together with prototype detectors.

One of the highlights of WP8 is the excellent progress made towards two new major irradiation facilities at CERN

One of the highlights of WP8 is the excellent progress made towards two new major irradiation facilities at CERN. These are essential for the selection and qualification of materials, components and full detectors operating in the harsh radiation environments of future experiments. AIDA has strongly supported the initiatives to construct GIF++ – a powerful γ irradiation facility combined with a test beam in the North Area – and EAIRRAD, which will be a powerful proton and mixed-field irradiation facility in the East Area. AIDA is contributing to both projects with common user-infrastructure as well as design and construction support. The aim is to start commissioning and operation of both facilities following the LS1 shutdown of CERN’s accelerator complex.

The current shutdown of the test beams at CERN during LS1 has resulted in a huge increase in demand for test beams at the DESY laboratory. The DESY II synchrotron is used mainly as a pre-accelerator for the X-ray source PETRA III but it also delivers electron or positron beams produced at a fixed carbon-fibre target to as many as three test-beam areas. Its ease of use makes the DESY test beam an excellent facility for prototype testing because this typically requires frequent access to the beam area. In 2013 alone, 45 groups from more than 30 countries with about 200 users have already accessed the DESY test beams. Many of them received travel support from the AIDA Transnational Access Funds and so far AIDA funding has enabled a total of 130 people to participate in test-beam campaigns. The many groups using the beams include those from the ALICE, ATLAS, Belle II, CALICE, CLIC, CMS, Compass, LHCb, LCTPC and Mu3e collaborations.

Combined beam-telescope

About half of the groups using the test beam at DESY have taken advantage of a beam telescope to provide precise measurements of particle tracks. The EUDET project – AIDA’s predecessor in the previous EU framework programme (FP6) – provided the first beam telescope to serve a large user community, which was aimed at detector R&D for an international linear collider. For more than five years, this telescope, which was based on Minimum Ionizing Monolithic Active pixel Sensors (MIMOSA), served a large number of groups. Several copies were made – a good indication of success – and AIDA is now providing continued support for the community that uses these telescopes. It is also extending its support to the TimePix telescope developed by institutes involved in the LHCb experiment.

The core of AIDA’s involvement lies in the upgrade and extension of the telescope. For many users who work on LHC applications, a precise reference position is not enough. They also need to know the exact time of arrival of the particle, but it is difficult to find a single system that can provide both position and time at the required precision. Devices with a fast response tend to be less precise in the spatial domain or put too much material in the path of the particle. So AIDA combines two technologies: the thin MIMOSA sensors, with their excellent spatial resolution, provide the position, while the ATLAS FEI4 detectors provide time information with the desired LHC structure.

The first beam test in 2012 with a combined MIMOSA-FEI4 telescope was an important breakthrough. Figure 2 shows the components involved in the set-up in the DESY beam. Charged particles from the accelerator – electrons in this case – first traverse three read-out planes of the MIMOSA telescope, followed by the device under test, then the second triplet of MIMOSA planes and then the ATLAS-FEI4 arm. The DEPFET pixel-detector international collaboration was the first group to use the telescope, so bringing together within a metre pixel detectors from three major R&D collaborations.

While combining the precise time information from the ATLAS-FEI4 detector with the excellent spatial resolution of MIMOSA provides the best of both worlds, there is an additional advantage: the FEI4 chip has a self-triggering capability because it can issue a trigger signal based on the response of the pixels. Overlaying the response of the FEI4 pixel matrix with a programmable mask and feeding the resulting signal into the trigger logic allows triggering on a small area and is more flexible than a traditional trigger based on scintillators. To change the trigger definition, all that is needed is to upload a new mask to the device. This turns out to be a useful feature if the prototypes under test cover a small area.
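The mask-based trigger amounts to overlaying the hit matrix with a programmable pattern and firing when a hit falls inside the enabled region. The sketch below illustrates only that logic: the matrix dimensions match the FE-I4 (336 × 80 pixels), but the window, threshold and function names are invented for this example.

```python
import numpy as np

# Sketch of the self-triggering idea: a programmable mask enables a small region
# of an FE-I4-like pixel matrix and a trigger fires when a hit lands inside it.
N_ROWS, N_COLS = 336, 80                 # FE-I4 matrix size
mask = np.zeros((N_ROWS, N_COLS), dtype=bool)
mask[150:186, 30:50] = True              # enable a window matching the device under test

def triggers(hit_map: np.ndarray, mask: np.ndarray, min_hits: int = 1) -> bool:
    """Return True if at least min_hits hits fall inside the enabled region."""
    return int(np.count_nonzero(hit_map & mask)) >= min_hits

# Example event: one hit inside the window, one outside.
hits = np.zeros((N_ROWS, N_COLS), dtype=bool)
hits[160, 35] = True
hits[10, 5] = True
print(triggers(hits, mask))              # True -> issue a trigger signal
```

Changing the trigger region is then just a matter of writing a new mask, which is the flexibility described above.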

CALICE tungsten calorimeter

Calorimeter development in AIDA WP9 is mainly motivated by experiments at possible future electron–positron colliders, as defined in the International Linear Collider and Compact Linear Collider studies. These will demand extremely high-performance calorimetry, which is best achieved using a finely segmented system that reconstructs events using the so-called particle-flow approach to allow the precise reconstruction of jet energies. The technique works best with an optimal combination of tracking and calorimeter information and has already been applied successfully in the CMS experiment. Reconstructing each particle individually requires fine cell granularity in 3D and has spurred the development of novel detection technologies, such as silicon photo-multipliers (SiPMs) mounted on small scintillator tiles or strips, gaseous detectors (micro mesh or resistive plate chambers) with 2D read-out segmentation and large-area arrays of silicon pads.

After tests of sensors developed by the CALICE collaboration in a tungsten stack at CERN (figure 3) – in particular to verify the neutron and timing response at high energy – the focus is now on the realization of fully technological prototypes. These include power-pulsed embedded data-acquisition chips requested for the particle-flow-optimized detectors for a future linear collider and they address all of the practical challenges of highly granular devices – compactness, integration, cooling and in situ calibration. Six layers (256 channels each) of a fine granularity (5 × 5 mm2) silicon-tungsten electromagnetic calorimeter are being tested in electron beams at DESY this July (figure 4). At the same time, the commissioning of full-featured scintillator hadron calorimeter units (140 channels each) is progressing at a steady pace. A precision tungsten structure and read-out chips are also being prepared for the forward calorimeters to test the radiation-hard sensors produced by the FCAL R&D collaboration.

Five scintillator HCAL units

The philosophy behind AIDA is to bring together institutes to solve common problems so that once the problem is solved, the solution can be made available to the entire community. Two years on from the project’s start – and halfway through its four-year lifetime – the highlights described here, from software toolkits to a beam-telescope infrastructure to academia-industry matching, illustrate well the progress that is being made. Ensuring the user support of all equipment in the long term will be the main task in a new proposal to be submitted next year to the EC’s Horizon 2020 programme. New innovative activities to be included will be discussed during the autumn within the community at large.

Machine protection: the key to safe operation

The combination of high intensity and high energy that characterizes the nominal beam in the LHC leads to a stored energy of 362 MJ in each ring. This is more than two orders of magnitude larger than in any previous accelerator – a large step that is highlighted in the comparisons shown in figure 1. An uncontrolled beam loss at the LHC could cause major damage to accelerator equipment. Indeed, recent simulations that couple energy-deposition and hydrodynamic codes show that the nominal LHC beam could drill a hole through the full length of a 20-m-long copper block.
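The 362 MJ figure follows directly from the nominal beam parameters (2808 bunches of 1.15 × 1011 protons at 7 TeV per proton). A quick back-of-the-envelope check:

```python
# Check of the 362 MJ stored energy, from the nominal LHC beam parameters.
E_PROTON_TEV = 7.0
N_BUNCHES = 2808
PROTONS_PER_BUNCH = 1.15e11
EV_TO_J = 1.602176634e-19

protons = N_BUNCHES * PROTONS_PER_BUNCH
stored_energy_J = protons * E_PROTON_TEV * 1e12 * EV_TO_J
print(f"stored energy per beam ~ {stored_energy_J / 1e6:.0f} MJ")  # ~362 MJ
```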

Safe operation of the LHC relies on a complex system of equipment protection – the machine protection system (MPS). Early detection of failures within the equipment and active monitoring of the beam parameters with fast and reliable beam instrumentation is required throughout the entire cycle, from injection to collisions. Once a failure is detected the information is transmitted to the beam-interlock system that triggers the LHC beam-dumping system. It is essential that the beams are always properly extracted from the accelerator via a 700-m-long transfer line into large graphite dump blocks, because these are the only elements of the LHC that can withstand the impact of the full beam. Figure 2 shows the simulated impact of a 7 TeV beam on the dump block.

A variety of systems

There are several general requirements for the MPS. Its top priority is to protect the accelerator equipment from beam damage, while its second priority is to prevent the superconducting magnets from quenching. At the same time, it should also protect the beam – that is, the protection systems should dump the beam only when necessary so that the LHC’s availability is not compromised. Last, the MPS must provide evidence from beam aborts. When there are failures, the so-called post-mortem system provides complete and coherent diagnostics data. These are needed to reconstruct the sequence of events accurately, to understand the root cause of the failure and to assess whether the protection systems functioned correctly.

Protection of the LHC relies on a variety of systems with strong interdependency – these include the collimators and beam-loss monitors (BLMs) and the beam controls, as well as the beam injection, extraction and dumping systems. The strategy for machine protection, which involves all of these, rests on several basic principles:

• Definition of the machine aperture by the collimator jaws, with BLMs close to the collimators and the superconducting magnets. In general, particles lost from the beam will hit collimators first and not delicate equipment such as superconducting magnets or the LHC experiments.

• Early detection of failures within the equipment that controls the beams, to generate a beam-dump request before the beam is affected.

• Active monitoring with fast and reliable beam instrumentation, to detect abnormal beam conditions and rapidly generate a beam-dump request. This can happen within as little as half a turn of the beam round the machine (40 μs); see the short arithmetic sketch after this list.

• Reliable transmission of a beam-dump request to the beam-dumping system by a distributed interlock system. Fail-safe logic is used for all interlocks: an active signal is required for operation, so the absence of the signal is interpreted as a beam-dump request or an injection inhibit.

• Reliable operation of the beam-dumping system on receipt of a dump request or internal-fault detection, to extract the beams safely onto the external dump blocks.

• Passive protection by beam absorbers and collimators for specific failure cases.

• Redundancy in the protection system so that failures can be detected by more than one system. Particularly high standards for safety and reliability are applied in the design of the core protection systems.
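For reference, the roughly 40 μs quoted in the third bullet corresponds to about half a revolution of the 26.659-km ring for particles travelling essentially at the speed of light:

```python
# Arithmetic behind the ~40 microsecond figure: one revolution of the 26.659 km
# ring at (essentially) the speed of light, and half of it.
C_LIGHT = 299_792_458.0        # m/s
CIRCUMFERENCE_M = 26_659.0     # m

t_turn = CIRCUMFERENCE_M / C_LIGHT
print(f"one turn  ~ {t_turn * 1e6:.1f} us")       # ~88.9 us
print(f"half turn ~ {t_turn / 2 * 1e6:.1f} us")   # ~44.5 us, i.e. of order 40 us
```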

Many types of failure are possible with a system as large and complex as the LHC. From the point of view of machine protection, the timescale is one of the most important characteristics of a failure because it determines how the MPS responds.

The fastest and most dangerous failures occur on the timescale of a single turn or less. These events may occur, for example, because of failures during beam injection or beam extraction. The probability for such failures is minimized by designing the systems for high reliability and by interlocking the kicker magnets as soon as they are not needed. However, despite all of these design precautions, failures such as incorrect firing of the kicker magnets at injection or extraction cannot be excluded. In these cases, active protection based on the detection of a fault and an appropriate reaction is not possible because the failure occurs on a timescale that is smaller than the minimum time that it would take to detect it and dump the beam. Protection from these specific failures therefore relies on passive protection with beam absorbers and collimators that must be correctly positioned close to the beam to capture the particles that are deflected accidentally.

Since the injection process is one of the most delicate procedures, a great deal of care has been taken to ensure that only a beam with low intensity – which is highly unlikely to damage equipment – can be injected into an LHC ring where no beam is already circulating. High-intensity beam can be injected only into a ring where a minimum amount of beam is present. This guarantees that conditions are acceptable for injection.
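The injection-permit rule described here amounts to a simple piece of interlock logic. The sketch below is only an illustration of that logic – the intensity limits are invented, and the real system is implemented in dedicated, fail-safe hardware:

```python
# Minimal sketch of the injection-permit rule. Thresholds are invented for
# illustration; the operational interlock uses defined limits and hardware logic.
PROBE_LIMIT = 1e10         # "low intensity" beam, safe even if lost (assumed value)
MIN_CIRCULATING = 1e9      # beam that must already circulate for high-intensity injection

def injection_permitted(injected_intensity: float, circulating_intensity: float) -> bool:
    if injected_intensity <= PROBE_LIMIT:
        return True                                   # probe beam: always allowed
    return circulating_intensity >= MIN_CIRCULATING   # high intensity: ring must not be empty

print(injection_permitted(5e9, 0.0))     # True: low-intensity probe into an empty ring
print(injection_permitted(2e13, 0.0))    # False: high intensity into an empty ring
print(injection_permitted(2e13, 5e12))   # True: beam already circulating
```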

The LHC is equipped with around 4000 BLMs distributed along its circumference to protect all elements against excessive beam loss

The majority of equipment failures, however, lead to beam “instabilities” – i.e. fast movements of the orbit or growth in beam size – that must be detected on a timescale of 1 ms or more. Protection against such events relies on fast monitoring of the beam’s position and of beam loss. The LHC is equipped with around 4000 BLMs distributed along its circumference to protect all elements against excessive beam loss. Equipment monitoring – e.g. quench detectors and monitors for failures of magnet powering – provides redundancy for the most critical failure scenarios.

Last, on the longest timescale there will be unavoidable beam losses around the LHC machine during all of the phases of normal operation. Most of these losses will be captured in the collimation sections, where the beam losses and heat load at collimators are monitored. If the losses or the heat load become unacceptably high, the beam is dumped.

Operational experience

Figure 3 shows the evolution of the peak energy stored in each LHC beam between 2010 and 2012. The 2010 run was the main commissioning and learning year for the LHC and the associated MPSs. Experience had to be gained with all of the MPS sub-systems and thresholds for failure detection – e.g. beam-loss thresholds – had to be adjusted based on operational experience. In the summer of 2010, the LHC was operated at a stored energy of around 1–2 MJ – similar to the level of CERN’s Super Proton Synchrotron and Fermilab’s Tevatron – to gain experience with beams that could already create significant damage. A core team of MPS experts monitored the subsequent intensity ramps closely, with bunch spacings of 150 ns, 75 ns and 50 ns. Checklists were completed for each intensity level to document the subsystem status and to record observations. Approval to proceed to the next intensity stage was given only when all of the issues had been resolved. As experience was gained, the increments in intensity became larger and faster to execute. By mid-2012, a maximum stored energy of 140 MJ had been reached at 4 TeV per beam.

One worry with so many superconducting magnets in the LHC concerned quenches induced by uncontrolled beam losses. However, the rate was difficult to estimate before the machine began operation because it depended on a number of factors, including the performance of the large and complex collimation system. Fortunately, not a single magnet quench was observed during normal operation with circulating beams of 3.5 TeV and 4 TeV. This is a result of the excellent performance of the MPS, the collimation system and the outstanding stability and reproducibility of the machine.

Nevertheless, there were other – unexpected – effects. In the summer of 2010, during the intensity ramp-up to stored energies of 1 MJ, fast beam-loss events with timescales of 1 ms or less were observed for the first time in the LHC’s arcs. It rapidly became evident that dust particles were interacting with the beam, and the events were nicknamed unidentified falling objects (UFOs). The rate of these UFOs increased steadily with beam intensity. Each year, the beams were dumped about 20 times when the losses induced by the interaction of the beams with the dust particles exceeded the loss thresholds. For the LHC injection kickers – where a significant number of UFOs were observed – dust particles could clearly be identified on the surface of the ceramic vacuum chamber. Kickers with better surface cleanliness will replace the existing ones during the present long shutdown. Nevertheless, UFOs remain a potential threat to the operational efficiency of the LHC at 7 TeV per beam.

All of the beam-dump events were meticulously analysed and validated by the operation crews and experts

The LHC’s MPS performed remarkably well from 2010 to 2013, thanks to the thoroughness and commitment of the operation crews and the MPS experts. Around 1500 beam dumps were executed correctly above the injection energy. All of the beam-dump events were meticulously analysed and validated by the operation crews and experts. This information has been stored in a knowledge database to assess possible long-term improvements of the machine protection and equipment systems. As experience grew, an increasing number of failures were captured before their effects on the particle beams became visible – i.e. before the beam position changed or beam losses were observed.

During the whole period, no evidence of a major loophole or uncovered risk in the protection architecture emerged, although unexpected failure modes were occasionally identified and mitigated. However, approximately 14% of the 1500 beam dumps were initiated by the failure of an element of the MPS – a “false” dump. So, despite the high dependability of the MPS during these first operational years, it will be essential to remain vigilant in the future as more emphasis is placed on increasing the LHC’s availability for physics.

Steve Myers and the LHC: an unexpected journey

Happiness as the LHC

The origins of the LHC trace from the early 1980s, in the days when construction of the tunnel for the Large Electron–Positron (LEP) collider was just getting under way. In 1983, Steve Myers was given an unexpected opportunity to travel to the US and participate in discussions on future proton colliders. He recalls: “None of the more senior accelerator physicists was available, so I got the job.” This journey, it turned out, was to be the start of his long relationship with the LHC.

Myers appreciated the significance for CERN of the discussions in the US: “We knew this was going to be the future competition and I wanted to understand it extremely well.” So he readied himself thoroughly by studying everything on the subject that he could. “With the catalyst that I had to prepare myself for the meeting, I looked at all aspects of it,” he adds. After returning to CERN, he thought about the concept of a proton collider in the LEP tunnel and wrote up his calculations, together with Wolfgang Schnell. “Wolfgang and I had many discussions and then we had a very good paper,” he says.

The paper (LEP Note 440) provided estimates for the design of a proton collider in the LEP tunnel and was the first document to bring all of the ideas together. It raised many of the points that were subsequently part of the LHC design: 8 TeV beam energy, beam–beam limitation (arguing the case for a twin-ring accelerator), twin-bore magnets and the need for magnet development, problems with pile-up (multiple collisions per bunch-crossing) and impedance limitations.

After Myers’ initial investigations, the time was ripe to develop active interest in a future hadron collider at CERN

After Myers’ initial investigations, the time was ripe to develop active interest in a future hadron collider at CERN. A dedicated study group was established in late 1983 and the significant Lausanne workshop took place the following year, bringing experimental physicists together with accelerator experts to discuss the feasibility of the potential LHC. Then began the detailed preparation of the project design.

In the meantime in the US, the Superconducting Super Collider (SSC) project had been approved. Myers was on the accelerator physics subcommittee for both of the major US Department of Energy reviews of the SSC, in 1986 and 1990. He recalls that the committee recommended a number of essential improvements to the proposed design specification, which ultimately resulted in spiralling costs, contributing to the eventual cancellation of the project. “The project parameters got changed, the budget went up and they got scrapped in the end.”

The LHC design, being constrained by the size of the LEP tunnel, could not compete with the SSC in terms of energy. Strategically, however, the LHC proposal compensated for the energy difference between the machines by claiming a factor-10 higher luminosity – an argument that was pushed hard by Carlo Rubbia. “We went for 1034 and nobody thought we could do it, including ourselves! But we had to say it, otherwise we weren’t competitive,” Myers says, looking back. It now gives Myers enormous satisfaction to see that the LHC performance in the first run achieved a peak stable luminosity of 7.73 × 1033 cm–2 s–1, while running at low energy. He adds confidently: “We will do 1034 and much more.”

The decision to use a twin-ring construction for the LHC was of central importance because separate rings allow the number of bunches in the beam to be increased dramatically. To date, the LHC has been running with 1380 bunches and is designed to use twice that number. For comparison, Myers adds: “The best we ever did with LEP was 16 bunches. The ratio of the number of bunches is effectively the ratio of the luminosities.”

Design details

LEP Note

At CERN, it was difficult to make significant progress with the LHC design while manpower and resources were focused on running LEP. Things took off after the closure of LEP in 2000, when there was a major redeployment of staff onto the LHC project and detailed operational design of the machine got under way. The LHC team, led by Lyn Evans, had three departments headed by Philippe Lebrun (magnets, cryogenics and vacuum), Paulo Ciriani (infrastructure and technical services) and Myers (accelerator physics, beam diagnostics, controls, injection, extraction and beam dump, machine protection, radio frequency and power supplies).

Myers makes a typical understatement when asked about the challenges of managing a project of this size: “You do your planning on a regular basis.” This attitude provides the flexibility to exploit delays in the project in a positive way. “Every cloud has a silver lining,” he comments, illustrating his point with the stark image of thousands of magnets sitting in car parks around CERN. A delay that was caused by bad welds in the cryogenic system gave the magnet evaluation group the benefit of extra time to analyse individual magnet characteristics in detail. The magnets were then situated around the ring so that any higher-order field component in one is compensated by its neighbour, therefore minimizing nonlinear dynamic effects. Myers believes that is one of the reasons the machine has been so forgiving with the beam optics: “You spend millions getting the higher-order fields down, so you don’t have nonlinear motion and what was done by the magnet sorting gained us a significant factor on top of that.”

When asked about the key moments in his journey with the LHC, he is clear: “The big highlight for us is when the beam goes all of the way round both rings. Then you know you’re in business; you know you can do things.” To that end, he paid close attention to the potential showstoppers: “The polarities of thousands of magnets and power supplies had to be checked and we had to make sure there were no obstacles in the path of the beam.” During the phase of systematically evaluating the polarities, it turned out that only about half were right first time. There were systematic problems to correct and even differing wiring conventions to address. In addition, a design fault in more than 3000 plug-in modules meant that they did not expand correctly when the LHC was warmed up. This was a potential source of beam-path obstacles and was methodically fixed. These stories illustrate the high level of attention to detail that was necessary for the successful switch-on of the LHC on 10 September 2008.

The low point of Myers’ experience was, of course, the LHC accident on 19 September 2008, which occurred only a matter of hours after he was nominated director of accelerators and technology. The incident triggered a shutdown of more than a year for repairs and an exhaustive analysis of what had gone wrong. During this time, an unprecedented amount of effort was invested in improvements to quality assurance and machine protection. One of the most important consequences was the development of the state-of-the-art magnet protection system, which is more technically advanced than was possible at the time of the LHC design. The outcome is a machine that is extremely robust and whose behaviour is understood by the operations team.

Steve Myers

In November 2009 the LHC was ready for testing once again. The first task was to ramp up the beam energy from the injection energy of 0.45 TeV per beam delivered by the Super Proton Synchrotron. The process is complicated in the early stages by the behaviour of the superconducting magnets, but the operations team succeeded in reaching 1.18 TeV per beam, establishing the LHC as the highest-energy collider ever built. By the end of March 2010, the first collisions at 7 TeV had been made and from that point on the aim was to increase the collision rate by introducing more bunches with more protons per bunch and by squeezing the beams tighter at the interaction points. Every stage of this process was meticulously planned and carefully introduced, only going ahead when the machine-protection team was completely satisfied.

In November 2009, when the LHC was ready to start up, both the machine and its experiments were thoroughly prepared for the physics programme ahead. The result was a spectacular level of productivity, leading to the series of announcements that culminated in the discovery of a Higgs boson. By the end of 2011 the LHC had surpassed its design luminosity for running with 3.5 TeV beams and the ATLAS and CMS experiments had seen the first hints of a new particle. The excitement was mounting and so was the pressure to generate as much data as possible. At the start of 2012, given that no magnet quenches had occurred while running with 3.5 TeV beams, it was considered safe to increase the beam energy to 4 TeV. With a collision rate of 20 MHz and levels of pile-up reaching 45, the experiments were successfully handling an almost overwhelming amount of data. Myers finds this an amazing achievement, as he says, “nobody thought we could handle the pile-up,” when the LHC was first proposed. He views the subsequent discovery announcement at CERN on 4 July 2012 as one of the most exciting moments of his career and, indeed, in the history of particle physics.

Reflecting on his journey with the LHC, Myers is keen to emphasize the importance of the people involved in its development, as well as the historical context in which it happened. In his early days at CERN in the 1970s, he was working with the Intersecting Storage Rings (ISR), which he calls “one of the best machines of its time”. As a result, “I knew protons extremely well,” he says. The experience he gained in those years has, in turn, contributed to his work on the LHC.

In the following years of building and operating LEP – as the world’s largest accelerator – many young engineers developed their expertise, just as Myers had on the ISR. “I think that’s why it worked so well,” he says, “because these guys came in as young graduates, not knowing anything about accelerators and we trained them all and they became the real experts, in the same way as I did on the ISR.” He sums up the value of this continuum of young people coming into CERN and becoming the next generation of experts: “That for me is what CERN is all about.”

The collimation system: defence against beam loss

Multi-stage cleaning

Ideally, a storage ring like the LHC would never lose particles: the beam lifetime would be infinite. However, a number of processes will always lead to losses from the beam. The manipulations needed to prepare the beams for collision – such as injection, the energy ramp and “squeeze” – all entail unavoidable beam losses, as do the all-important collisions for physics. These losses generally become greater as the beam current and the luminosity are increased. In addition, the LHC’s superconducting environment demands an efficient beam-loss cleaning to avoid quenches from uncontrolled losses – the nominal stored beam energy of 362 MJ is more than a billion times larger than the typical quench limits.

The tight control of beam losses is the main purpose of the collimation system. Movable collimators define aperture restrictions for the circulating beam and intercept particles on large-amplitude trajectories that could otherwise be lost in the magnets. The collimators therefore represent the LHC’s defence against unavoidable beam losses. Their primary role is to clean away the beam halo while maintaining losses at sensitive locations below safe limits. The system is designed to ensure that no more than a few 0.01% of the energy lost from the beam is deposited in the cold magnets. As the closest elements to the circulating beams, the collimators provide passive machine protection against irregular fast losses and failures. They also control the distribution of losses around the ring by ensuring that the largest activation occurs at optimized locations. Collimators are also used to minimize background in the experiments.

The LHC collimation system provides multi-stage cleaning where primary, secondary and tertiary collimators and absorbers are used to reduce the population of halo particles to tolerable levels (figure 1). Robust carbon-based and non-robust but high-absorption metallic materials are used for different purposes. Collimators are installed around the LHC in seven out of the eight insertion regions (between the arcs), at optimal longitudinal positions and for various transverse rotation angles. The collimator jaws are set at different distances from the circulating beams, respecting the optimum setting hierarchy required to ensure that the system provides the required cleaning and protection functionalities.

The design was optimized using state-of-the-art numerical-simulation programs

The detailed system design was the outcome of a multi-parameter optimization that took into account nuclear-physics processes in the jaws, robustness against the worst anticipated beam accidents, collimation-cleaning efficiency, radiation impact and machine impedance. The result is the largest and most advanced cleaning system ever built for a particle accelerator. It consists of 84 two-sided movable collimators of various designs and materials. Including the injection-protection collimators, there are a total of 396 degrees of freedom, because each collimator jaw has two stepping motors. By contrast, the collimation system of the Tevatron at Fermilab had fewer than 30 degrees of freedom for collimator positions.

The design was optimized using state-of-the-art numerical-simulation programs. These were based on a detailed model of all of the magnetic elements for particle tracking and the vacuum pipe apertures, with a longitudinal resolution of 0.1 m along the 27-km-long rings. They also involved routines for proton-halo generation and transport, as well as aperture checks and proton–matter interactions. These simulations require high statistics to achieve accurate estimates of collimation cleaning. A typical simulation run involves tracking some 20–60 million primary halo protons for 200 LHC turns – equivalent to monitoring a single proton travelling a distance of 0.03 light-years. Several runs are needed to study the system in different conditions. Additional complex energy-deposition and thermo-mechanical finite-element computations are then used to establish heat loads in magnets, radiation doses and collimator structural behaviour for various loss scenarios. Such a highly demanding simulation process was possible only as a result of computing power developed over recent years.
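The quoted equivalence can be checked with a line of arithmetic, assuming 60 million tracked protons, 200 turns and the 26.659-km circumference:

```python
# Rough check of the "0.03 light-years" equivalence quoted above.
LIGHT_YEAR_M = 9.461e15
CIRCUMFERENCE_M = 26_659.0

distance_m = 60e6 * 200 * CIRCUMFERENCE_M
print(f"{distance_m / LIGHT_YEAR_M:.3f} light-years")  # ~0.034
```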

The backbone of the collimation system is located at two warm insertion regions (IRs): the momentum cleaning at IR3 and betatron cleaning at IR7, which comprise 9 and 19 movable collimators per beam, respectively. Robust primary and secondary collimators made of a carbon-fibre composite define the momentum and betatron cuts for the beam halo. In 2012, in IR7 they were at ±4.3–6.3σ (with σ being the nominal standard deviation of the beam profile in the transverse plane) from the circulating 140 MJ beams, which passed through collimator apertures as small as 2.1 mm at a rate of around 11,000 times per second.
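Two of the numbers in this paragraph can be cross-checked from the ring geometry – roughly, and assuming the 2.1 mm gap corresponds to the ±4.3σ primary-collimator setting:

```python
# Consistency checks: the revolution frequency of the ring and the beam sigma
# implied by a 2.1 mm gap at +/-4.3 sigma (an approximate reading of the text).
C_LIGHT = 299_792_458.0
CIRCUMFERENCE_M = 26_659.0

f_rev = C_LIGHT / CIRCUMFERENCE_M
print(f"revolution frequency ~ {f_rev:,.0f} per second")   # ~11,245, i.e. ~11,000

gap_mm, n_sigma = 2.1, 4.3
sigma_mm = gap_mm / (2 * n_sigma)
print(f"implied beam sigma ~ {sigma_mm * 1e3:.0f} micrometres")  # ~240 um, of order 200 um
```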

Additional tungsten absorbers protect the superconducting magnets downstream of the warm insertions. While these are more efficient in catching hadronic and electromagnetic showers, they are also more fragile against beam losses, so they are retracted further from the beam orbit. Further local protection is provided for the experiments in IR1, IR2, IR5 and IR8: tungsten collimators shield the inner triplet magnets that otherwise would be exposed to beam losses because they are the magnets with the tightest aperture restrictions in the LHC in collision conditions. Injection and dump protection elements are installed in IR2, IR8 and IR6. The collimation system must provide continuous cleaning and protection during all stages of beam operation: injection, ramp, squeeze and physics.

An LHC collimator

An LHC collimator consists of two jaws that define a slit for the beam, effectively constraining the beam halo from both sides (figure 2). These jaws are enclosed in a vacuum tank that can be rotated in the transverse plane to intercept the halo, whether it is horizontal, vertical or skew. Precise sensors monitor the jaw positions and collimator gaps. Temperature sensors are also mounted on the jaws. All of these critical parameters are connected to the beam-interlock system and trigger a beam dump if potentially dangerous conditions are detected.
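
Schematically, the interlock logic described here boils down to checking each monitored quantity against an allowed window and requesting a beam dump as soon as any check fails. The following sketch is purely illustrative; the data structure, names and thresholds are invented and do not reflect the real control-system interface.

```python
from dataclasses import dataclass

@dataclass
class CollimatorStatus:
    gap_mm: float              # measured gap between the two jaws
    jaw_temperature_c: float   # hottest jaw temperature sensor

def dump_requested(status: CollimatorStatus,
                   gap_limits_mm=(2.0, 60.0),
                   max_temperature_c=50.0) -> bool:
    """Return True if any monitored quantity leaves its allowed window
    (thresholds here are invented for illustration)."""
    gap_ok = gap_limits_mm[0] <= status.gap_mm <= gap_limits_mm[1]
    temperature_ok = status.jaw_temperature_c < max_temperature_c
    return not (gap_ok and temperature_ok)
```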

At the LHC’s top energy, a beam size of less than 200 μm requires that the collimators act as high-precision devices. The correct system functionality relies on establishing the collimator hierarchy with position accuracies within a fraction of the beam size. Collimator movements around the ring must also be synchronized to better than 20 ms to achieve good relative positioning of devices during transient phases of the operational cycle. A unique feature of the control system is that the stepping motors can be driven according to arbitrary functions of time, synchronously with other accelerator systems such as power converters and radio-frequency cavities during the ramp and squeeze.
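
The "arbitrary functions of time" can be pictured as pre-computed gap-versus-time tables that each collimator follows synchronously with the ramp and squeeze. A minimal sketch, with invented numbers:

```python
import numpy as np

# Invented gap-versus-time table for one collimator during the energy ramp:
# the commanded gap shrinks as the beam size shrinks with increasing energy.
time_s = np.array([0.0, 300.0, 600.0, 900.0])   # points along the ramp
gap_mm = np.array([8.0, 5.0, 3.0, 2.1])         # commanded collimator gap

def commanded_gap(t: float) -> float:
    """Interpolate the gap function at time t, as a low-level control loop
    would when driving the stepping motors synchronously with the ramp."""
    return float(np.interp(t, time_s, gap_mm))

print(commanded_gap(450.0))   # gap commanded half-way through the ramp
```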

These requirements place unprecedented constraints on the mechanical design, which is optimized to ensure good flatness along the 1-m-long jaw, even under extreme conditions. Extensive measurements were performed during prototyping and production, both for quality assurance and to obtain all of the required position calibrations. The collimator design has the critical feature that it is possible to measure a gap outside the beam vacuum that is directly related to the collimation gap seen by the beam. Some non-conformities in jaw flatness could not be avoided and were addressed by installing the affected jaws at locations of larger β functions (therefore larger beam size), in a way that is not critical for the overall performance.
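
The reason a larger β function relaxes the flatness requirement is that the local beam size scales as σ = √(βε), where ε is the geometric emittance (the normalized emittance divided by the relativistic γ). An illustration with assumed, typical-order values:

```python
import math

# Illustrative beam-size estimate: sigma = sqrt(beta * epsilon_geometric),
# with epsilon_geometric = epsilon_normalized / gamma (assumed typical values).
epsilon_normalized = 3.5e-6          # m rad, assumed normalized emittance
gamma = 4000.0 / 0.938               # relativistic gamma at 4 TeV

def beam_size_m(beta_m: float) -> float:
    return math.sqrt(beta_m * epsilon_normalized / gamma)

print(beam_size_m(50.0), beam_size_m(200.0))   # a 4x larger beta doubles sigma
```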

Set-up and performance

The first step in collimation set-up is to adjust the collimators to the stored beam position. There are unavoidable uncertainties in the beam orbit and collimator alignment in the tunnel, so a beam-based alignment procedure has been established to set the jaws precisely around the beam orbit. The primary collimators are used to create reference cuts in phase space. Then all other jaws are moved symmetrically round the beam until they touch the reference beam halo. The results of this halo-based set-up provide information on the beam positions and sizes at each collimator. The theoretical target settings for the various collimators are determined from simulations to protect the available machine aperture. The beam-based alignment results are then used to generate appropriate setting functions for the collimator positions throughout the operational cycle. For each LHC fill, the system requires some 450 setting functions versus time, 1200 discrete set points and about 10,000 critical threshold settings versus time. Another 600 functions are used as redundant gap thresholds for different beam energies and optics configurations.
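
In pseudocode, the halo-based alignment of a single jaw amounts to stepping it towards the beam until the local beam-loss signal spikes, indicating that the jaw has touched the reference halo cut by the primary collimators. This is a schematic sketch only; the two callback functions are hypothetical, not part of any real control interface.

```python
def align_jaw(move_jaw_towards_beam, read_loss_signal,
              start_position_mm: float, step_mm: float = 0.005,
              spike_factor: float = 5.0) -> float:
    """
    Step one jaw towards the beam until the local beam-loss signal spikes,
    i.e. the jaw has touched the reference halo cut by the primary collimators.
    `move_jaw_towards_beam` and `read_loss_signal` are hypothetical callbacks.
    """
    baseline = read_loss_signal()
    position_mm = start_position_mm
    while read_loss_signal() < spike_factor * baseline:
        move_jaw_towards_beam(step_mm)
        position_mm -= step_mm
    return position_mm
```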

This complex system worked well during the first LHC run, with a minimal number of spurious errors and failures, showing that the choice of hardware and controls is fully appropriate for the challenging accelerator environment of the LHC. Collimator alignment and the handling of complex settings have always been major concerns for the operation of the large and distributed LHC collimation system. The experience accumulated in the first run indicates that these critical aspects have been addressed successfully.

Beam losses

The effect of the cleaning provided by the LHC collimation system is always visible in the control room. Unavoidable beam losses occur continuously at the primary collimators and can be observed online by the operations team as the largest loss spikes on the fixed display showing the beam losses around the ring. The local leakage to cold magnets is in most cases below 10–5 of the peak losses, with a few isolated loss locations around IR7 where it reaches levels of up to a few 10–4 (figure 3). So far, this excellent performance has ensured quench-free operation, even in cases of extreme beam losses from circulating beams. Moreover, this was achieved throughout the year with only one collimator alignment in IR3 and IR7, thanks to the remarkable stability of the machine and of the collimator settings.
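
The leakage figures quoted here correspond to normalizing each local loss reading to the highest losses, recorded at the primary collimators. A minimal sketch of that normalization, with invented readings:

```python
# Invented loss-monitor readings (arbitrary units) for a handful of locations.
blm_readings = {
    "IR7 primary collimators": 1.0e4,
    "IR7 dispersion-suppressor magnet (cold)": 2.0,
    "arc cold magnet": 0.05,
}

peak = max(blm_readings.values())
leakage = {location: signal / peak for location, signal in blm_readings.items()}
print(leakage)   # cold-magnet leakage of order 1e-5 to a few 1e-4 of the peak
```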

However, collimators in the interaction regions required regular setting up for each new machine configuration that was requested for the experiments. Eighteen of these collimators are being upgraded in the current long shutdown to reduce the time spent on alignment: the new tertiary collimator design has integrated beam-position monitors to enable a fast alignment without dedicated beam-based alignment fills. This upgrade will also eventually contribute to improving the peak luminosity performance by reducing further the colliding beam sizes, thanks to better control of the beam orbit next to the inner triplet.
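
With pick-ups embedded in the jaws, centring no longer requires touching the halo: the jaws can be placed symmetrically around the beam position interpolated from the two in-jaw monitors. A hypothetical sketch of that centring step:

```python
def centred_jaw_positions(bpm_left_mm: float, bpm_right_mm: float,
                          half_gap_mm: float) -> tuple[float, float]:
    """
    Place the two jaws symmetrically around the beam position interpolated
    from the pick-ups embedded in each jaw (hypothetical sign convention:
    all positions measured on one common transverse axis).
    """
    beam_centre_mm = 0.5 * (bpm_left_mm + bpm_right_mm)
    return beam_centre_mm + half_gap_mm, beam_centre_mm - half_gap_mm
```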

The LHC collimation-system performance is validated after set-up with provoked beam losses, induced deliberately by driving transverse beam instabilities. Beam-loss monitors then record data at 3600 locations around the ring. Because these losses occur under controlled conditions, they can be compared in detail with simulations. As predicted, performance is limited by a few isolated loss locations, namely the IR7 dispersion-suppressor magnets, which catch particles that have lost energy in single-diffractive scattering at the primary collimators. This limitation of the system will be addressed in future upgrades, in particular for the High Luminosity LHC era.

The first three-year operational run has shown that the LHC’s precise and complex collimation system works at the expected high performance, reaching unprecedented levels of cleaning efficiency. The system has shown excellent stability: the machine was regularly operated with stored beam energies of more than 140 MJ, with no loss-induced quenches of superconducting magnets. This excellent performance was among the major contributors to the rapid commissioning of high-intensity beams at the LHC as well as to the squeezing of 4 TeV beams to 60 cm at collision points – a crucial aspect of the successful operation in 2012 that led to the discovery of a Higgs boson.

• The success of the collimation system during the first years of LHC operation was the result of the efforts of the many motivated people involved in this project from different CERN departments and from external collaborators. All of these people, and Ralph Assmann, who led the project until 2012, are gratefully acknowledged.
