At the recent Moriond Electroweak (EW) conference at La Thuile, the LHCb collaboration presented an updated angular analysis of the decay B → K*0 μ+μ– using the experiment’s full data set from the LHC’s Run 1 (LHCb Collaboration 2015). This is an update of an earlier measurement based on the 2011 data alone, which showed a significant discrepancy in one angular observable (referred to as P5′) compared with predictions from the Standard Model. Because the discrepancy could be interpreted as a sign of physics beyond the Standard Model, it provoked considerable discussion within the particle-physics community, and the update with the full Run 1 sample has been eagerly awaited.
The decay of a B meson (containing a b quark and a d quark) into a K*0 meson (s and d) and a pair of muons is quite a rare process, occurring around once for every million B meson decays. At quark level, the decay involves a change of the quark flavour, b → s, without any change in charge. Such flavour-changing neutral processes are forbidden at the lowest perturbative order in the Standard Model, and come from higher-order loop processes involving virtual W bosons. In many extensions of the Standard Model, new particles can also contribute to the decay, leading to an enhancement or (through interference) a suppression in the rate of the decay. The contributions from new particles beyond the Standard Model can also change the angular distributions of the kaon and pion from the K*0 decay, and of the muons.
The analysis shown at Moriond, which is the first by any experiment to explore the full angular distribution of the decay, confirms the discrepancy seen in the 2011 data. At low dimuon masses, there is poor agreement between the current Standard Model predictions and the data for the P5′ observable. The two measurements in the range 4 < q² < 8 GeV²/c⁴ are both 2.9σ from the Standard Model calculation (see figure).
Two invited theory talks followed LHCb’s presentation at Moriond. Both speakers were able to give an initial interpretation of the results, and found a consistent picture (see, for example, Straub and Altmannshofer 2015). A model-independent analysis favours a best-fit point that is about 4σ from the current Standard Model predictions.
It is, however, still too soon to claim evidence of new particles. The major challenge in interpreting the results lies in separating the interesting physics from poorly known QCD effects, which could be larger than first expected and hence responsible for the discrepancy. No matter the cause of the anomaly, there will need to be some rethinking of the current understanding of the B → K*0 μ+μ– decay.
Measurements of the differential cross-section in proton–proton (pp) or proton–antiproton (pp̄) scattering have generally proved consistent with a pure exponential dependence at low values of the square of the four-momentum transfer, |t|. However, slight deviations have been observed, notably in elastic pp and pp̄ scattering at the Intersecting Storage Rings at CERN. Now, the TOTEM experiment has made a precision measurement of elastic pp scattering at the LHC, and finds that the data exclude a purely exponential behaviour of the cross-section at low |t| at a total energy of 8 TeV in the centre of mass.
The TOTEM experiment, which co-inhabits point 5 on the LHC with CMS, includes a system of Roman Pots, which allow detectors to be brought close to the beam so as to intercept particles scattered at very small angles to the beam. The Roman Pots are in two stations on opposite sides of interaction point 5, and each station is equipped with detectors at both 214 m and 220 m from the interaction point. The detectors consist of stacks of silicon-strip sensors, specially designed to have a narrow insensitive region, of a few tens of micrometres, along the edge that faces the beam (CERN Courier September 2009 p19).
TOTEM collected the data during a special run at the LHC in July 2012, in which the Roman Pots were brought in to a distance of only 9.5 times the transverse beam size. During 11 hours of data taking, the experiment amassed 7.2 million tagged elastic events at a collision energy of 8 TeV. The large data set has allowed a precise measurement of the elastic pp cross-section, with both statistical and systematic uncertainties below 1%, except for the overall normalization. As a result of this precision, TOTEM is able to exclude a purely exponential differential cross-section in the range 0.027 < |t| < 0.2 GeV², with a significance greater than 7σ. In contrast, parameterizations with either quadratic or cubic polynomials in the exponent are compatible with the data.
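To see how a fit can discriminate a pure exponential from one with a quadratic term in the exponent, here is a minimal sketch with synthetic numbers (the slope B and curvature C below are placeholder assumptions, not TOTEM's fitted values): after a straight-line fit to log(dσ/dt), any genuine curvature survives in the residuals.

```python
import math

# Least-squares straight line through (xs, ys); returns (intercept, slope).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
          / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

ts = [0.027 + i * (0.2 - 0.027) / 50 for i in range(51)]   # |t| in GeV^2
B, C = 20.0, 2.3                       # assumed illustrative values
log_dsdt = [-B * t + C * t * t for t in ts]   # log(dsigma/dt), arb. norm.

a, b = fit_line(ts, log_dsdt)          # best pure-exponential hypothesis
residuals = [y - (a + b * t) for t, y in zip(ts, log_dsdt)]
# A pure exponential would leave flat residuals; the quadratic term shows
# up as a parabola in |t|, which is what separates the two hypotheses.
print(max(residuals) - min(residuals))
```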
The IceCube Neutrino Observatory has measured neutrino oscillations via atmospheric muon-neutrino disappearance. This opens up new possibilities for particle physics with the experiment at the South Pole that was originally designed to detect neutrinos from distant cosmic sources.
IceCube records more than 100,000 atmospheric neutrinos a year, most of them muon neutrinos, and its sub-detector DeepCore allows the detection of neutrinos with energies from 100 GeV down to 10 GeV. These lower-energy neutrinos are key to IceCube’s oscillation studies. Based on current best-fit oscillation parameters, IceCube should see fewer muon neutrinos at energies around 25 GeV reaching the detector after passing through the Earth. Using data taken between May 2011 and April 2014, the analysis selected muon-neutrino candidates in DeepCore with energies in the region of 6–56 GeV. The detector surrounding DeepCore was used as a veto to suppress the atmospheric muon background. Nearly 5200 neutrino candidates were found, compared with the 6800 or so expected in the non-oscillation scenario. The reconstructed energy and arrival direction for these events were used to obtain values for the neutrino-oscillation parameters, Δm²₃₂ = 2.72 +0.19/−0.20 × 10⁻³ eV² and sin²θ₂₃ = 0.53 +0.09/−0.12. These results are compatible with, and comparable in precision to, those of dedicated oscillation experiments.
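The disappearance around 25 GeV can be illustrated with the standard two-flavour survival probability, using the best-fit parameters quoted above; the Earth-diameter baseline for a vertically up-going neutrino is an assumption made for this sketch.

```python
import math

dm2 = 2.72e-3       # eV^2, best-fit from the text
s2theta = 0.53      # sin^2(theta_23), best-fit from the text
L = 12742.0         # km, Earth diameter (assumed vertical trajectory)
E = 25.0            # GeV

# Standard two-flavour vacuum formula:
# P(nu_mu -> nu_mu) = 1 - sin^2(2*theta) * sin^2(1.267 * dm2 * L / E)
sin2_2theta = 4.0 * s2theta * (1.0 - s2theta)
phase = 1.267 * dm2 * L / E
p_survive = 1.0 - sin2_2theta * math.sin(phase) ** 2
print(round(p_survive, 3))   # strong suppression, consistent with the dip
```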
The collaboration is currently planning the Precision IceCube Next Generation Upgrade (PINGU), in which a much higher density of optical modules in the whole central region will reduce the energy threshold to a few giga-electron-volts. By carefully measuring how the coherent forward scattering of neutrinos off electrons in the Earth (the Mikheyev–Smirnov–Wolfenstein effect) modifies the oscillations, PINGU should allow determination of the neutrino-mass hierarchy – that is, of which neutrino mass state is heaviest.
The long-baseline neutrino experiment formerly known as LBNE has a new name: Deep Underground Neutrino Experiment (DUNE). Served by an intense neutrino beam from Fermilab’s Long Baseline Neutrino Facility, DUNE will have near detectors at Fermilab and four 10-kt far detectors at the Sanford Underground Research Facility in South Dakota. In March, the DUNE collaboration – now with more than 700 scientists from 148 institutions in 23 countries – elected two new spokespersons: André Rubbia from ETH Zurich, and Mark Thomson from the University of Cambridge. One will serve as spokesperson for two years, the other for three years, to provide continuity in leadership.
As many as 340 physicists, engineers, science managers and journalists gathered in Washington DC for the first annual meeting of the global Future Circular Collider (FCC) study. The FCC week covered all aspects of the study – designs of 100-km hadron and lepton colliders, infrastructures, technology R&D, experiments and physics.
The meeting began with an exciting presentation by US congressman Bill Foster, who recalled the history of the LHC as well as the former design studies for a Very Large Hadron Collider. A special session on Thursday was devoted to the experience with the US LHC Accelerator Research Program (LARP), to the US particle-physics strategy, and US R&D activities in high-field magnets and superconducting RF. A well-attended industrial exhibition and a complementary “industry fast-track” session were focused on Nb3Sn and high-temperature superconductor development.
James Siegrist from the US Department of Energy (DOE) pointed the way for aligning the high-field magnet R&D efforts at the four leading US magnet laboratories (Brookhaven, Fermilab, Berkeley Lab and the National High Magnetic Field Laboratory) with the goals of the FCC study. An implementation plan for joint magnet R&D will be composed in the near future. Discussions with further US institutes and universities are ongoing, and within the coming months several other DOE laboratories should join the FCC collaboration. A first US demonstrator magnet could be ready as early as 2016.
A total of 51 institutes have joined the FCC collaboration since February 2014, and the FCC study has been recognized by the European Commission (EC). Through the EuroCirCol project within the HORIZON2020 programme, the EC will fund R&D by 16 beneficiaries – including KEK in Japan – on the core components of the hadron collider. The four key themes addressed by EuroCirCol are the FCC-hh arc design (led by CEA Saclay), the interaction-region design (John Adams Institute), the cryo-beam-vacuum system (CELLS consortium), and the high-field magnet design (CERN). On the last day of the FCC week, the first meeting of the FCC International Collaboration was held. Leonid Rivkin was confirmed as chair of the board, with a mandate consistent with the production of the Conceptual Design Report, that is, to the end of 2018.
The next FCC Week will be held in Rome on 11–15 April 2016.
• The FCC Week in Washington was jointly organized by CERN and the US DOE, with support from the IEEE Council of Superconductivity. More than a third of the participants (120) came from the US. CERN (93), Germany (20), China (16), UK (16), Italy (12), France (11), Russia (11), Japan (10), Switzerland (10) and Spain (6) were also strongly represented. For further information, visit cern.ch/fccw2015.
The quest for new heavy chemical elements is the subject of intense research, as the synthesis and identification of these new elements fill up empty boxes in the familiar Periodic Table. The measurement of their properties for a proper classification in the table has proved challenging, because the isotopes of these elements are short-lived and new methods must be devised to cope with synthesis rates that yield only one atom at a time. Now, an international team led by researchers from the Japanese Atomic Energy Agency (JAEA) in Tokai has developed an elegant experimental strategy to measure the first ionization potential of the heaviest actinide, lawrencium (atomic number, Z = 103).
Using a new surface ion source (figure 1) and a mass-separated beam, the team’s measurement of 4.96±0.08 eV – published recently in Nature (Sato et al. 2015) – agrees perfectly with state-of-the-art quantum chemical calculations that include relativistic effects, which play an increasingly important role in this region of the Periodic Table. The result confirms the extremely low binding energy of the outermost valence electron in this element, therefore confirming its position as the last element in the actinide series. This is in line with the concept of heavier homologues of the lanthanide rare earths, which was introduced by Glenn Seaborg in the 1940s.
In the investigations at JAEA the researchers have exploited the isotope-separation online (ISOL) technique, which has been used for nuclear-physics studies at CERN’s ISOLDE facility since the 1960s. The technique has now been adapted to perform ionization studies with the one-atom-at-a-time rates that are accessible for studies of lawrencium. A new surface-ion source was developed and calibrated with a series of lanthanide isotopes of known ionization potentials. The ionization probability of the mass-separated lawrencium could then be exploited to determine its ionization potential using the calibration master curve.
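The master-curve idea can be sketched as follows; the source temperature, work function and calibration ionization potentials below are invented for illustration, not the JAEA values. Surface ionization follows the Saha–Langmuir law, so the logarithm of the ionization efficiency is linear in the ionization potential (IP), and a line fitted to calibration species can be inverted for an unknown element.

```python
import math

kB, T, phi = 8.617e-5, 2700.0, 5.0   # eV/K; assumed temperature (K) and
                                     # effective work function (eV)

def efficiency(ip):
    """Saha-Langmuir ion-to-neutral ratio (statistical weights set to 1)."""
    return math.exp((phi - ip) / (kB * T))

# Hypothetical calibration species with known IPs (eV); their efficiencies
# are generated from the model itself for this self-contained sketch.
calib_ip = [5.43, 5.58, 5.94, 6.02]
calib_ln = [math.log(efficiency(ip)) for ip in calib_ip]

# Master curve: least-squares line ln(eff) = m*IP + b, then invert it.
n = len(calib_ip)
mx, my = sum(calib_ip) / n, sum(calib_ln) / n
m = sum((x - mx) * (y - my) for x, y in zip(calib_ip, calib_ln)) \
  / sum((x - mx) ** 2 for x in calib_ip)
b = my - m * mx

eff_lr = efficiency(4.96)            # stand-in for the measured Lr efficiency
ip_lr = (math.log(eff_lr) - b) / m   # read the IP off the master curve
print(round(ip_lr, 2))
```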
The special position of lawrencium in the Periodic Table has made it a focus of questions about the influence of relativistic effects, and of efforts to determine the properties that confirm its position as the last actinide. The two aspects most frequently addressed have concerned its ground-state electronic configuration and the value of its first ionization potential.
Relativistic effects strongly affect the electron configurations of the heaviest elements. In the actinides, the relativistic expansion of the 5f orbital contributes to the actinide contraction – the regular decrease in the ionic radii with increasing Z. Together with direct relativistic effects on the 7s and 7p₁/₂ orbitals, this influences the binding energies of valence electrons and the energetic ordering of the electron configurations. However, it is difficult to measure the energy levels of the heaviest actinides with Z > 100 by a spectroscopic method because these elements are not available in a weighable amount.
The ground-state electronic configuration of lawrencium (Lr) is expected to be [Rn]5f¹⁴7s²7p₁/₂. This is different from that of its homologue in the lanthanide series, lutetium, which is [Xe]4f¹⁴6s²5d. The reason for this change is the stabilization by strong relativistic effects of the 7p₁/₂ orbital of Lr below the 6d orbital. Lr, therefore, is anticipated to be the first element with a 7p₁/₂ orbital in its electronic ground state. As the measurement of the ionization potential directly reflects the binding energy of a valence electron under the influence of relativistic effects, its experimental determination provides direct information on the energetics of the electronic orbitals of Lr, including relativistic effects, and a test for modern theories. However, this measurement cannot answer questions about the electronic configuration itself. Nevertheless, as figure 2 shows, the experimental result is in excellent agreement with a new theoretical calculation that includes these effects and favours the [Rn]5f¹⁴7s²7p₁/₂ ground-state configuration.
Astronomers using observations from the NASA/ESA Hubble Space Telescope and NASA’s Chandra X-ray Observatory have studied how dark matter in clusters of galaxies behaves when the clusters collide. The results confirm the distinct existence of dark matter with high significance, and show that dark matter interacts with itself even less than thought previously.
Although there is more dark matter than visible matter in the universe, dark matter remains extremely elusive and is, most likely, in a form outside of the Standard Model of particle physics. Dark matter does not reflect, absorb or emit light, making it transparent. The presence of a massive clump of dark matter can be probed only by its gravitational distortion of space–time, which bends the light path in its vicinity. This weak gravitational-lensing effect distorts the shape of background galaxies, making it possible to infer the spatial distribution of dark matter (CERN Courier January/February 2007 p11).
Collisions between clusters of galaxies provide a way to estimate the interaction of dark matter with itself. The “bullet cluster” is a prime example of such a collision, showing that while the hot gas is slowed down by ram pressure, the motion of both the dark matter and galaxies seems to be unaltered by the event (CERN Courier October 2006 p9). It constrains the self-interaction cross-section of dark matter per unit mass to σDM/m < 1.25 cm²/g (68% CL). To tighten this constraint further, a group of astronomers led by David Harvey – affiliated to both the École Polytechnique Fédérale de Lausanne (EPFL) and the University of Edinburgh – studied a sample of 72 mergers identified in 30 colliding systems, with archival observations by Hubble in the visible range and by Chandra in X-rays.
The team determined the central position of the hot gas glowing in X-rays, the galaxies and dark matter in each of the 72 collisions. The researchers assume that the direction of motion is given by the line connecting the location of the gas and of the galaxies, and then measure the position of the dark-matter component, both parallel and perpendicular to this direction. The latter serves as a check, and is found to be consistent with zero on average, as expected. Along the line of motion, the distribution of the offsets between dark matter and gas is found to be inconsistent (at 7.6σ) with the hypothesis that dark matter does not exist, i.e. that all of the cluster’s mass – except only about 3% in the form of stars in galaxies – is co-spatial with the hot gas. This rules out dark-matter alternatives such as modified Newtonian dynamics (MOND).
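The decomposition described above amounts to projecting the dark-matter offset onto the gas-to-galaxies direction; here is a toy sketch with made-up coordinates, not real cluster data.

```python
import math

# Hypothetical sky positions in arbitrary units (illustrative only).
gas, galaxies, dark = (0.0, 0.0), (1.0, 0.0), (0.8, 0.1)

# Unit vector along the inferred direction of motion (gas -> galaxies).
dx, dy = galaxies[0] - gas[0], galaxies[1] - gas[1]
norm = math.hypot(dx, dy)
ux, uy = dx / norm, dy / norm

# Dark-matter offset from the gas, split along / perpendicular to motion.
ox, oy = dark[0] - gas[0], dark[1] - gas[1]
parallel = ox * ux + oy * uy        # lags the galaxies if a drag force acts
perpendicular = -ox * uy + oy * ux  # consistency check, ~0 on average
print(parallel, perpendicular)
```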
More interestingly, the ratio of dark matter and gas offsets from galaxies is a dimensionless measure of the drag force acting on dark matter. The authors of the study measured an average value of –0.04±0.07 (68% CL), which they translate to an upper limit of σDM/m < 0.47 cm²/g (95% CL) on the momentum transfer cross-section of dark matter. They note that this result rules out parts of the hidden-sector dark-matter models that predict σDM/m = 1 barn/GeV = 0.6 cm²/g, which is similar to nuclear cross-sections in the Standard Model. Such a high coupling in the dark sector would not have been in conflict with the orders-of-magnitude lower coupling between dark matter and Standard Model particles, which is at most of the order of picobarns.
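The quoted equivalence 1 barn/GeV ≈ 0.6 cm²/g is a straightforward unit conversion, using 1 barn = 10⁻²⁴ cm² and 1 GeV/c² ≈ 1.783 × 10⁻²⁴ g:

```python
# Convert a cross-section per mass from barn/GeV to cm^2/g.
barn_cm2 = 1e-24          # 1 barn in cm^2
gev_g = 1.783e-24         # 1 GeV/c^2 in grams
sigma_over_m = barn_cm2 / gev_g
print(round(sigma_over_m, 2))   # ~0.56 cm^2/g, i.e. ~0.6 as quoted
```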
An era came to an end on 30 September 2014, when the National Synchrotron Light Source (NSLS) ended its last run and dumped its last beam after more than 30 years of operation at Brookhaven National Laboratory. NSLS was the first of the modern synchrotron light sources, and had an enormous impact on synchrotron-light-based science during the past decades. It contributed a wealth of pioneering scientific results, including work that resulted in two Nobel prizes. The following day, 1 October, a new era began for Brookhaven, with the start-up of the new facility, NSLS-II, which is designed to provide the brightest beams ever produced by a synchrotron light source.
The mission for a follow-up to NSLS was to provide a factor of 10 more flux and up to four orders of magnitude more brightness relative to the earlier machine (where brightness is defined as the number of photons per second divided by the beam cross-section and the divergence at the emission points, integrated over a narrow bandwidth of 1%). It was to be capable of achieving energy resolution of a fraction of a milli-electron-volt and spatial resolution on the nanometre scale. This ambition was acknowledged in 2005, when NSLS-II received CD-0, the first of five “critical decisions” for the construction of any new science facility funded by the US Department of Energy (DOE). The new light source was to enable novel science opportunities in all fields of synchrotron-radiation-based science, and would allow experiments that were not possible at any of the other facilities at that time. The project went swiftly through the design and R&D phase with critical decisions CD-1 and CD-2, and in June 2009 CD-3 was approved, allowing construction of the facility to begin.
The NSLS-II electron storage ring consists of 30 double-bend achromats (DBA) separated by 15 long (9.3 m) and 15 short (6.6 m) straight sections for insertion devices, which are the source of ultra-bright synchrotron radiation. The ring is designed for a beam energy of 3 GeV. To achieve the desired high brightness based on a horizontal beam emittance of εx = 0.8 π nrad m, it has a large circumference of 792 m. The bending magnets are fairly long (2.69 m) and weak (0.4 T). These design choices have two advantages. They allow the design of a stable lattice with a beam emittance close to the DBA minimum emittance, and at the same time, the energy radiated in the bending magnets is fairly moderate (283 keV per turn per electron). This makes it efficient to double the radiation-damping rate, and therefore reduce the beam emittance by a factor of two, by means of six 3.4-m-long damping wigglers with a peak field of 1.85 T.
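The quoted energy loss per turn can be cross-checked with the textbook synchrotron-radiation formula; the small difference from 283 keV reflects the exact dipole geometry rather than the round numbers used here.

```python
# Energy radiated per turn by an electron: U0 [keV] = 88.46 * E^4 / rho,
# with E in GeV and the bending radius rho in metres (rho from E = 0.2998*B*rho).
E = 3.0                     # beam energy, GeV (from the text)
B = 0.4                     # dipole field, T (from the text)
rho = E / (0.2998 * B)      # bending radius, ~25 m
U0 = 88.46 * E ** 4 / rho   # energy loss per turn, keV
print(round(U0, 1))         # within a few per cent of the quoted 283 keV
```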
NSLS-II has a conventional system of electromagnets for bending, focusing and nonlinear corrections. However, the field quality of these magnets is pushed beyond what has been achieved previously (ΔB/B = 10⁻⁵–10⁻⁴ at r = 25 mm). Further, the alignment of the magnetic centres with respect to each other is held to unprecedentedly small tolerances, with rms values of less than 10 μm.
The other critical parameter for high-brightness performance is the beam current of 500 mA. High beam current is obtained with an accelerating structure based on two single-cell 500-MHz superconducting cavities of the type known as CESR-B. This RF system offers advantages for beam stability because the structures exhibit weak parasitic RF modes and are superior for suppressing beam-loading effects.
In addition, beyond-state-of-the-art instrumentation is required to control the stability of the beam orbit, given the small beam sizes (σy = 3 μm at the insertion devices). Therefore, both a novel beam-position-monitor system with a resolution and stability of better than 200 nm and a fast orbit-feedback system have been designed and implemented. These will limit the motion of the beam orbit to within 10% of the (vertical) beam size for frequencies up to 1 kHz.
The vacuum system is made of extruded, keyhole-shaped aluminium. The antechamber houses two non-evaporable getter strips for distributed pumping. The girder system is designed for high thermal stability and to avoid amplification of mechanical vibrations below 30 Hz.
All of the electronics and power supplies are located on the tunnel roof and are housed in sealed air-cooled racks, protecting the sensitive equipment from dust, temperature fluctuations, humidity and leaking cooling water. This protection is a major element of the strategy to achieve high operational reliability for the more than 1000 magnet power supplies, the beam-position monitors, controls and vacuum-control equipment. The facility aims for a reliability greater than 95% once its operation is matured fully.
The NSLS-II injector consists of a 200-MeV S-band linac, which feeds the 3-GeV combined-function booster synchrotron for on-energy injection in “top-off” mode, where frequent injection maintains the beam current. The booster synchrotron was designed and built by the Budker Institute of Nuclear Physics in Novosibirsk, and installed in collaboration with NSLS-II staff.
The civil construction with the accelerator tunnels and the ring-shaped experimental floor was completed in 2012. Installation of the accelerator components, which started in 2011, was completed in 2013.
The linac was commissioned as early as April 2012, the booster synchrotron followed in December 2013, and storage-ring commissioning took place soon after, in April 2014. The commissioning time for the entire complex was remarkably short, and the superb robustness and reproducibility of the machine are demonstrated by the fact that restarts are possible only a few hours after shutdowns.
The summer of 2014 saw the installation of the first NSLS-II insertion devices. Three pairs of 3.4-m-long damping wigglers with peak fields of 1.85 T not only provide a factor of two in emittance reduction by enhanced radiation damping, they are also powerful sources (195 kW at a beam current of 500 mA) of photons up to energies of 100 keV. The workhorses of NSLS-II are in-vacuum undulators with a period of 20–23 mm and an extremely small gap height of 5 mm. Four such devices up to 3 m in length are part of the initial installation. There is also a pair of 2-m-long elliptical polarizing undulators (EPUs). The insertion devices were commissioned with their corresponding front-end systems during autumn 2014.
An initial suite of six beamlines is also part of the scope of the NSLS-II project. These beamlines are based on state-of-the-art – or beyond – beamline technology. They cover a range of synchrotron-light experimental techniques, including powder diffraction (XPD), coherent hard X-ray scattering (CHX), nano-focus imaging (HNX), inelastic X-ray scattering with extreme energy resolution < 1 meV (IXS), X-ray spectroscopy (SRX) and coherent soft X-ray scattering (CSX). All of these beamlines have started technical commissioning. The first light emitted by the NSLS-II EPU was observed on 23 October in the CSX beamline, followed by similar events for the other beamlines.
At the same time that the science commissioning of the existing beamlines at NSLS-II is taking place, nine further insertion-device beamlines are under construction. The first three, known as the ABBIX beamlines, are scheduled to start up in the spring of 2016. They are specialized for biological research. The other six insertion-device beamlines – the so-called “NEXT” beamlines – are planned to start up the following autumn. Finally, there is an ongoing programme that consists of reusing NSLS equipment and integrating it into five new beamlines (NxtGen) that will receive bending-magnet radiation. As the field of the NSLS-II dipole magnets is weak, some of the source points are equipped with a wavelength-shifter consisting of a three-pole wiggler with 1.2 T peak field.
A number of non-Brookhaven institutions have responded positively to the opportunity to work with NSLS-II, and they will develop five additional beamlines in collaboration with NSLS-II staff. Therefore by 2018, NSLS-II will run with 27 beamlines and will have recovered from the reduction in the scientific programme between the shutdown of NSLS and the development period of the NSLS-II user facility. In its final configuration, the NSLS-II facility will host more than 60 beamlines.
The construction of NSLS-II within budget ($912 million) and to schedule is the result of excellent teamwork between scientists, engineers and technicians. In a ceremony on 6 February, the US secretary of energy, Ernest Moniz, dedicated the new facility. The first science results from NSLS-II were reported as early as March (Wang et al. 2015), and the science programme will start for most beamlines in the summer. The bright future of the NSLS-II era has begun.
For the past two years, teams from the CMS collaboration, many from distant countries, have been hard at work at LHC point 5 at Cessy in France. Their goal – to ensure that the CMS detector will be able to handle the improved performance of the LHC when it starts operations at higher energy and luminosity. More than 60,000 visitors to the CMS underground experimental cavern during the first long shutdown (LS1) witnessed scenes of intense and spectacular activity – from movements of the 1500-tonne endcap modules to the installation of the delicate pixel tracker, only the size of a portable toolbox but containing almost 70 million active sensors.
This endeavour involved planning for a huge programme of work (CERN Courier April 2013 p17). Since LS1 began, more than 1000 separate work packages have been carried out, ranging from the repairs and maintenance required after three years of operation during the LHC’s Run 1, through consolidation work for a long-term future, to the installation of completely new detector systems as well as the extension of existing ones. In addition to the many CMS teams involved, the programme relied on the strong general support and substantial direct contributions from physics and technical departments at CERN. This article, by no means exhaustive, aims to provide some insight into LS1 as it happened at point 5.
An early start
Vital contributions started as early as 2009, well before LS1 began. One example is the refurbishment by CERN’s General Services and Physics Departments of building 904 on the Prévessin site, to provide 2000 m2 of detector-assembly laboratories, which were used for the new parts of the muon detector. Another is the creation by CMS (mainly through contracts managed by CERN’s Engineering Department) of the Operational Support Centre in the surface-assembly building at point 5. This centre incorporates work areas for all of the CMS systems that had to be brought to the surface during LS1, and includes a cold-storage, cold-maintenance facility where the pixel tracker was kept until the new beampipe was fitted. There is also a workshop area suitable for modifying elements activated by collision products, which, as the LS1 story progressed, provided useful flexibility for dealing with unexpected work.
The highest-priority objective for CMS during LS1 was to operate the tracker cold. The silicon sensors of this innermost subdetector, which surrounds the LHC beampipe, must endure a flux of more than 10⁹ particles a second, and cannot be completely replaced until about a decade from now. The damaging effects of this particle flux, sustained over many years of operation, can be mitigated by operating the sensor system at a temperature that is 20–30 °C lower than the few degrees above zero used so far. Alongside modifications to allow delivery of the coolant at much lower temperatures, a new system of humidity control had to be introduced to prevent condensation and icing. This involved sealing the tracker envelope, while making provision for a flow of up to 400 m³/h of dry gas. The system installed by CMS is a novel one at CERN: it dries air and then optionally removes oxygen via filtering membranes. The first full-scale tests took place at the end of 2013, and there was great satisfaction when an operating temperature of –20 °C was achieved stably.
However, as one challenge faded, a new one emerged immediately. On warming up, tell-tale drips of water were visible coming from the insulated bundles of pipework carrying the coolant into the detector – indication that air at room temperature and humidity had been reaching the cold pipes inside the system and forming ice. Fortunately, tests soon showed that an additional flow of dry air, injected separately into the pipework bundles, would suppress this problem. Responding to CMS’s request for help, the Engineering Department recently delivered a new dry-air plant that will make humidity suppression in the cooling distribution feasible on a routine basis, with a comfortable margin in capacity.
Another high-priority project for LS1 involved the muon detectors. A fourth triggering and measurement station in each of the endcaps was incorporated into the original CMS design, but it was not considered essential for initial operation. These stations are now needed to increase the power to discriminate between interesting low-momentum muons originating from the collision (e.g. potentially from a Higgs-boson decay) and fake muon signatures caused by backgrounds. Seventy-two new cathode-strip chambers (CSCs) and 144 new resistive-plate chambers (RPCs) were assembled across a three-year period by a typical CMS multinational team from institutes in Belgium, Bulgaria, China, Colombia, Egypt, Georgia, India, Italy, Korea, Mexico, Pakistan, Russia and the US, as well as from CERN. They were then installed as superposed layers of CSCs and RPCs on the two existing discs at the ends of the steel yoke that forms the structural backbone of CMS. Teams worked on the installation and commissioning in two major bursts of activity, matching the periods when the required detector configuration was available, and completing the job in late spring 2014.
A further improvement of the endcap muon system was achieved by installing new on-chamber electronics boards in the first, innermost layer of the CSCs to withstand the higher luminosity, while reusing the older electronics in one of the new fourth layers, where it is easier to cope with the collision rate. Here again, the unexpected had to be dealt with. One of the two layers had just been re-installed after months of re-fitting work, when tests revealed a potential instability caused by the accidental omission of a tiny passive electronic component. It was considered too risky to leave this uncorrected, so the installation teams had to go into full reverse. Working late into the evenings and at weekends to avoid interfering with previously scheduled activities, they partially extracted all 36 chambers, corrected the fault, put them back in place and re-commissioned them.
No part of the detector escaped the attention of the upgrade and maintenance teams. The modular structure of CMS, which can be separated into 13 major slices, was fully exploited to allow simultaneous activity, with as many as eight mobile work platforms frequently in use to give access to different slices and different parts of their 14 m diameter. Multiple maintenance interventions on the five barrel-yoke wheels restored the number of working channels to 99.7% – a figure not seen since 2009, just after installation. Similar interventions on the CSC and RPC stations on the endcap disks were also successful, with the few per cent that had degraded over the past few years restored completely. In addition, to improve maintainability, some key on-board electronics from the barrel part of the muon system was moved from the underground experimental cavern to the neighbouring service cavern, where it will now remain accessible during LHC operation. All of the photo-transducers and much of the on-detector electronics of the hadron calorimeter (HCAL) are to be replaced over the next few years, and a substantial part of this work was completed during LS1. In particular, photo-transducers of a new type were installed in the outer barrel and forward parts of the system, which will lead to an immediate improvement in performance.
The need for some work streams was completely unforeseen until revealed by routine inspection. The most notable example was the discovery of a charred feed-through connector serving the environmental-screen heaters of one of the two preshower systems for the electromagnetic calorimeter (ECAL). Full diagnosis (under-rated capacitors) and subsequent repair of both preshower systems required their removal to the surface, where a semi-clean lab was created at short notice within the Operational Support Centre. The repairs and re-installation were a complete success, and the preshower system has been re-commissioned recently at its planned operating temperature of –8 °C.
The CMS consolidation programme also had to prepare the infrastructure of the experiment – originally designed for a 10-year operating lifetime – for running well into the 2030s. LHC operating periods lasting around three years will be interleaved with substantial shutdowns of one to two years in length. Moreover, the rate of proton–proton collisions will be five times higher, and the integrated number of collisions (ultimately) 10 times higher, than the original design goal.
Key adaptations were made during LS1 to address redundancy in the power and cryogenics systems, to extend the predicted lifetime of the one-of-a-kind CMS magnet. Further measures for protection against power glitches were implemented through an extension of the detector’s short-term uninterruptible power supply. Changes to the detector cooling included modifications for greater capacity and redundancy, as well as the addition of a new system in preparation for the upcoming upgrade of the pixel tracker, based on two-phase (evaporating liquid) carbon dioxide. This technology, new for CMS, involved the installation of precision-built concentric vacuum-insulated feed and return lines – difficult-to-modify structures that have to be made extremely accurately to ensure proper integration with the constricted channels that feed services into the apparatus. These changes presented challenges for the CMS Integration Office, where the “compact” in CMS was defended vigorously every day in computer models and then in the caverns.
New detectors were not the only large-scale additions to CMS. The most massive change to the structure of the experiment was the addition of the new 125-tonne shielding discs – yoke endcap disc four (YE4) – installed outside of the fourth endcap muon station at either end of the detector. Each shielding disc, 14 m in diameter but only 125 mm thick, was made of 12 iron sector casings. Following manufacture and pre-assembly tests in Pakistan, these discs, whose design and preparation took five years, were disassembled for shipping to CERN and then re-assembled on the Meyrin site, where they were filled with a special dense (haematite) shielding concrete, mixed for this specific application by CERN’s civil engineers. Loaded with a small percentage of boron, this concoction will act as a “sponge” to soak up many of the low-energy neutrons that give unwanted hits in the detector, and whose numbers will increase as the LHC beam intensities get higher.
The YE4 discs, transported in sectors to point 5, were the first slices of CMS to be assembled underground – all of the existing major elements had been pre-assembled on the surface and lowered into the underground cavern in sequence (CERN Courier July/August 2006 p28). In the original concept, the YE4 discs could be separated from the supporting YE3 only by driving the whole endcap system back to the cavern headwall, where YE4 could be unhooked and supported. Because all of the other slices of the CMS “swiss roll” can be displaced from one another to give access to the detectors sandwiched in between, it was decided late in the project – in fact, after assembly had already started – to equip each YE4 shielding disc with air pads and a system of electric screw-jacks. This would allow the YE4 disc to separate from the supporting neighbour disc (YE3) by up to 3.7 m without the necessity to move it to the headwall – a major operation. In fact, one so-called “push-back system” was used immediately after assembly of the YE4 disc, to permit installation of RPCs with the endcaps partially closed. This maintained the rapid-access modularity that was a core feature of the CMS design (CERN Courier October 2008 p48).
The final change was at the heart of CMS, in preparation for the installation during the LHC’s year-end technical stop of 2016–2017 of an upgraded pixel tracker – the closest physics detector to the collision point. The 0.8-mm-thick central beampipe used during Run 1, with an outer diameter of 59.6 mm, was replaced by a similar one of 45-mm outer diameter and, like the first one, made of beryllium, to be as transparent as possible to particles emanating from the LHC collisions. The narrower beampipe will allow the first layer of the new pixel tracker to be closer to the collision point than before. This geometrical improvement, combined with an additional fourth layer of sensors, will upgrade the tracker’s ability to resolve where a charged particle originated. When running under conditions of high pile-up in Run 2 and Run 3 – that is, with many more protons colliding every time counter-rotating bunches meet at the centre of CMS – the disentangling of which tracks belong to which collision vertices will be crucial for most physics analyses.
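The geometrical gain can be illustrated with a simple sketch: for a straight track measured in two layers and extrapolated back to the beamline, the impact-parameter uncertainty shrinks as the inner layer moves closer to the collision point. The radii and hit resolution below are illustrative assumptions, not CMS design figures.

```python
# Illustrative only: straight-line extrapolation of a track from hits at
# radii r1 < r2 down to the beamline. The example radii (mm) and hit
# resolution (um) are assumptions, not CMS design figures.
def d0_resolution(r1_mm, r2_mm, sigma_hit_um):
    """Transverse impact-parameter resolution from a two-layer fit."""
    return sigma_hit_um * (r1_mm**2 + r2_mm**2) ** 0.5 / (r2_mm - r1_mm)

# Moving the innermost layer closer to the beam (say 44 mm -> 30 mm,
# with the outer layer fixed) visibly tightens the extrapolation:
old = d0_resolution(44, 70, 10)
new = d0_resolution(30, 70, 10)
print(round(old, 1), round(new, 1))  # -> 31.8 19.0
```

The same lever-arm argument is why the extra fourth layer helps: more measured points over a longer radial span further constrain the track before extrapolation.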
The delicate operations of removing and replacing the beampipe – requiring the detector to be open fully – are possible only in a long shutdown. The new beampipe, designed jointly with CERN’s Technology Department, which procured and prepared it on behalf of CMS, was installed in June 2014. Its installation was followed immediately by vacuum pumping, combined with heating (“bake-out”) to more than 200 °C, to expel gas molecules attached to the chamber walls. This ensured that the operating pressure of around 10⁻¹⁰ mbar would be possible – and it was achieved eventually. Following the bake-out of the new central beampipe, several mechanical tests were made to ensure that the upgraded pixel tracker can be installed in the limited time window that will be available in 2016–2017.
It is probable that a proverb exists in every language and culture involved in CMS, warning against relaxing before the job is finished. In mid-August 2014, the end of the LS1 project seemed to be on the horizon. The beampipe bake-out was being completed and preparations for the pixel tracker’s re-installation were underway, so many team members took the opportunity for a quick summer holiday. Then, their mobile phones began to buzz with reports of the first indications of a severe fault found in pre-installation tests of the barrel pixel system, which had been removed only to allow the change of beampipe. About 25% (around 50) of the modules in one quadrant were not responding. By the end of August, the half-shell containing the faulty quadrant had been transported to its makers at the Paul Scherrer Institute (PSI) for detailed investigation.
On 5 September, the diagnostics revealed that the reason for failure was electro-migration-induced shorts between adjacent bond pads of the high-density interconnect – a flexible, complex, multilayer printed circuit used to extract the signals. An investigation showed that the most likely origin was a brief and inadvertent lapse in humidity-control protocols in the course of routine calibration exercises many months earlier, when the pixel system was up in the surface laboratory. By 18 September, a comprehensive strategy of replacement and repair had been worked out by the PSI team. Because this required purchasing new components and restarting the production of detector modules, the revised schedule foresaw the detector being back at CERN by the end of November, with installation planned for around 8 December, almost exactly two months later than intended originally.
A new end game
At this late stage, with insufficient contingency remaining in the baseline schedule to accommodate the delay, it was decided to radically change the end-game sequence of the shutdown. Instead of waiting for the repair of the pixel tracker, CMS was closed immediately to conduct a short magnet test, to identify any problems that otherwise would not have appeared until the final closure for beam. After finishing the remaining work on the bulkhead seal that allows the tracker to be operated cold, this sequence of closing the detector, testing the magnet and then re-opening CMS became the critical path for two months, with the remaining upgrade activity being postponed or re-arranged around the new schedule. The new sequence implied unexpectedly tight deadlines for several teams – particularly those working on the magnet and the forward region – and a massive extra workload for the heavy-engineering team. The additional closing and opening sequence required 36 single movements of heavy discs, and 16 insertions and removals of the heavy-raiser platforms that support the forward calorimeters at beam height. A concerted and exceptional effort resulted in the magnet yoke being closed by mid-October, and both forward regions being closed and ready for magnetic field by 6 November.
The following day, the magnet was ramped to 1 T and then discharged. This sequence allowed yoke elements to settle, and also verified that the control and safety systems performed as expected. By 10 November, enough liquid helium had been accumulated for 36 hours of operation at full field, and the test programme resumed. However, at 2.4 T, the main elevator providing underground access stopped working, because field-sensitive floor-level sensors had been installed mistakenly during routine maintenance. After reducing the field temporarily to allow personnel to leave the underground areas, the ramp-up continued, reaching the working value of 3.8 T at around 7.00 p.m., demonstrating that the magnet’s upgraded power and cryogenics system worked well. Despite the rapid endcap-yoke closure with only approximate axial alignment, the movements under the magnetic forces of the endcap discs (including the new YE4s) and the forward systems were well within the ranges observed previously, although specific movements occurred at different field values. The new beampipe support system and the new phototransducers of the HCAL and beam-halo monitors were shown to be tolerant to the magnetic field. Most importantly, the environmental seal around the tracker and the new dry-gas injection system functioned well enough in the magnetic field to allow tracker operation at –20 °C. The top-priority task of LS1 could therefore be declared a success.
Following this, the opening of the detector was a race against time to meet the target of installing the barrel and forward pixel trackers, and enclosing them in a stable environment before CERN’s 2014 end-of-year closure. This was achieved successfully, providing a fortuitous “dry run” of what will have to be done during the year-end stop of 2016–2017, when the new pixel tracker will be installed. Following a thorough check and pre-calibration of the pixel system, the last new elements of CMS in the LS1 project – upgraded beam monitors and the innovative pixel luminosity telescope (CERN Courier March 2015 p6) – were installed by the end of the first week of February 2015.
The closing of the experiment, just in time for first beam in 2015, brought the saga of LS1 to a happy ending. It is time to celebrate with the collaboration teams, contractors and CERN technical groups, who have all contributed to the successful outcome. The imminent start of Run 2 now raises the exciting prospect of new physics, but behind the scenes preparations for the next CMS shutdown adventure have already begun.
It is nearly two years since the beams in the LHC were switched off and Long Shutdown 1 (LS1) began. Since then, a myriad of scientists and engineers have been repairing and consolidating the accelerator and the experiments for running at the unprecedented energy of 13 TeV (or 6.5 TeV/beam) – almost twice that of 2012.
In terms of installation work, ALICE is now complete. The remaining five super modules of the transition radiation detector (TRD), which were missing in Run 1, have been produced and installed. At the same time, the low-voltage distribution system for the TRD was re-worked to eliminate intermittent overheating problems that were experienced during the previous operational phase. On the read-out side, the data transmission over the optical links was upgraded to double the throughput to 4 GB/s. The TRD pre-trigger system used in Run 1 – a separate, minimum-bias trigger derived from the ALICE veto (V0) and start-counter (T0) detectors – was replaced by a new, ultrafast (425 ns) level-0 trigger featuring a complete veto and “busy” logic within the ALICE central trigger processor (CTP). This implementation required the relocation of racks hosting the V0 and T0 front-end cards to reduce cable delays to the CTP, together with optimization of the V0 front-end firmware for faster generation of time hits in minimum-bias triggers.
The ALICE electromagnetic calorimeter system was augmented with the installation of eight (six full-size and two one-third-size) super modules of the brand new dijet calorimeter (DCal). This now sits back-to-back with the existing electromagnetic calorimeter (EMCal), and brings the total azimuthal calorimeter coverage to 174° – that is, 107° (EMCal) plus 67° (DCal). One module of the photon spectrometer calorimeter (PHOS) was added to the pre-existing three modules and equipped with one charged-particle veto (CPV) detector module. The CPV is based on multiwire proportional chambers with pad read-out, and is designed to suppress the detection of charged hadrons in the PHOS calorimeter.
The overall PHOS/DCal set-up is located in the bottom part of the ALICE detector, and is now held in place by a completely new support structure. During LS1, the read-out electronics of the three calorimeters was fully upgraded from serial to parallel links, to allow operation at a 48 kHz lead–lead interaction rate with a minimum-bias trigger. The PHOS level-0 and level-1 trigger electronics was also upgraded, the latter being interfaced with the neighbouring DCal modules. This will allow the DCal/PHOS system to be used as a single calorimeter able to produce both shower and jet triggers from its full acceptance.
The gas mixture of the ALICE time-projection chamber (TPC) was changed from Ne(90):CO2(10) to Ar(90):CO2(10), to allow for a more stable response to the high particle fluxes generated during proton–lead and lead–lead running without significant degradation of momentum resolution at the lowest transverse momenta. The read-out electronics for the TPC chambers was fully redesigned, doubling the data lines and introducing more field-programmable gate-array (FPGA) capacity for faster processing and online noise removal. One of the 18 TPC sectors (on one side) is already instrumented with a pre-production series of the new read-out cards, to allow for commissioning before operation with the first proton beams in Run 2. The remaining boards are being produced and will be installed on the TPC during the first LHC Technical Stop (TS1). The increased read-out speed will be exploited fully during the four weeks of lead collisions foreseen for mid November 2015. For lead running, ALICE will operate mainly with minimum-bias triggers at a collision rate of 8 kHz or higher, which will produce a track load in the TPC equivalent to operation at 700 kHz in proton running.
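The equivalence quoted above can be unpacked with a line of arithmetic: at equal TPC track load per unit time, the ratio of collision rates equals the inverse ratio of per-event track multiplicities, so the article's numbers imply how many pp events' worth of tracks each minimum-bias Pb–Pb event carries.

```python
# Unpacking the article's equivalence: equal TPC track load per second means
# rate_pp * tracks_per_pp_event == rate_pbpb * tracks_per_pbpb_event,
# so the implied per-event multiplicity ratio is the inverse rate ratio.
rate_pbpb_hz = 8_000    # minimum-bias Pb-Pb collision rate from the text
rate_pp_hz = 700_000    # pp rate quoted as giving an equivalent track load
mult_ratio = rate_pp_hz / rate_pbpb_hz
print(mult_ratio)  # -> 87.5: each Pb-Pb event carries ~90x the pp track count
```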
LS1 has also seen the design and installation of a new subsystem – the ALICE diffractive (AD) detector. This consists of two double layers of scintillation counters placed far from the interaction region on both sides, one in the ALICE cavern (at z = 16 m) and one in the LHC tunnel (at z = –19 m). The AD photomultiplier tubes are all accessible from the ALICE cavern, and the collected light is transported via clear optical fibres.
The ALICE muon chambers (MCH) underwent a major hardware consolidation of the low-voltage system in which the bus bars were fully re-soldered to minimize the effects of spurious chamber occupancies. The muon trigger (MTR) gas-distribution system was switched to closed-loop operation, and the gas inlet and outlet “beaks” were replaced with flexible material to avoid cracking from mechanical stress. One of the MTR resistive-plate chambers was instrumented with a pre-production front-end card being developed for the upgrade programme in LS2.
The increased read-out rates of the TPC and TRD have been matched by a complete upgrade (replacement) of both the data-acquisition (DAQ) and high-level trigger (HLT) computer clusters. In addition, the DAQ and HLT read-out/receiver cards have been redesigned, and now feature higher-density parallel optical connectivity on a PCIe-bus interface and a common FPGA design. The ALICE CTP board was also fully redesigned to double the number of trigger classes (logic combinations of primary inputs from trigger detectors) from 50 to 100, and to handle the new, faster level-0 trigger architecture developed to increase the efficiency of the TRD minimum-bias inspection.
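The notion of a trigger class as a "logic combination of primary inputs" can be sketched as a bitmask evaluation: each trigger detector's signal occupies one bit, and a class fires when its required combination of bits is present. The input names below are illustrative, not the actual CTP input list.

```python
# Sketch of "trigger classes as logic combinations of primary inputs":
# each input occupies one bit of an event mask, and a class is an AND
# over its required bits. Input names here are illustrative only.
INPUTS = {"V0A": 1 << 0, "V0C": 1 << 1, "T0": 1 << 2, "EMC_L0": 1 << 3}

def class_fires(event_mask, required):
    # build the mask of required inputs, then check all are present
    need = 0
    for name in required:
        need |= INPUTS[name]
    return event_mask & need == need

event = INPUTS["V0A"] | INPUTS["V0C"]       # a minimum-bias-like event
print(class_fires(event, ["V0A", "V0C"]))   # -> True  (coincidence class)
print(class_fires(event, ["EMC_L0"]))       # -> False (calorimeter class)
```

Doubling the number of classes from 50 to 100 in this picture simply means the CTP can evaluate twice as many such combinations in parallel for every bunch crossing.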
Regarding data-taking operations, a full optimization of the DAQ and HLT sequences was performed with the aim of maximizing the running efficiency. All of the detector-initialization procedures were analysed to identify and eliminate bottlenecks, to speed up the start- and end-of-run phases. In addition, an in-run recovery protocol was implemented on both the DAQ/HLT/CTP and the detector sides to allow, in case of hiccups, on-the-fly front-end resets and reconfiguration without the need to stop the ongoing run. The ALICE HLT software framework was in turn modified to discard any possible incomplete events originating during online detector recovery. At the detector level, the leakage of “busy time” between the central barrel and muon-arm read-out detectors has been minimized by implementing multievent buffers on the shared trigger detectors. In addition, the central barrel and the muon-arm triggers can now be paused independently to allow for the execution of the in-run recovery.
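The benefit of the multievent buffers can be seen in a minimal sketch: a detector with such a buffer asserts "busy" only when the buffer is full, rather than after every accepted trigger, so its busy time no longer leaks into every read-out cycle of its neighbours. The buffer depth and interface below are illustrative assumptions.

```python
# Minimal sketch (depth and interface are illustrative) of why a multievent
# buffer reduces shared "busy" time: the detector vetoes new triggers only
# when the buffer is full, not after every single accepted event.
class MultiEventBuffer:
    def __init__(self, depth):
        self.depth = depth
        self.events = []

    def busy(self):
        return len(self.events) >= self.depth

    def accept(self, event):
        if self.busy():
            return False          # trigger vetoed: back-pressure to the CTP
        self.events.append(event)
        return True

    def read_out(self):
        # drain the oldest buffered event, freeing a slot
        return self.events.pop(0) if self.events else None

buf = MultiEventBuffer(depth=4)
accepted = [buf.accept(n) for n in range(5)]
print(accepted)     # -> [True, True, True, True, False]
buf.read_out()      # reading one event out clears the busy condition
print(buf.busy())   # -> False
```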
Towards routine running
The ALICE control room was renovated completely during LS1, with the removal of the internal walls to create an ergonomic open space with 29 universal workstations. Desks in the front rows face 11 extra-large-format LED screens displaying the LHC and ALICE controls and status. They are reserved for the shift crew and the run-co-ordination team. Four concentric lateral rows of desks are reserved for the work of detector experts. The new ALICE Run Control Centre also includes an access ramp for personnel with reduced mobility. In addition, there are three large windows – one of which can be transformed into a semi-transparent, back-lit touchscreen – for the best visitor experience with minimal disturbance to the ALICE operators.
Following the detector installations and interventions on almost all of the components of the hardware, electronics, and supporting systems, the ALICE teams began an early integration campaign at the end of 2014, allowing the ALICE detector to start routine cosmic running with most of the central-barrel detectors by the end of December. The first weeks of 2015 have seen intensive work on performing track alignment of the central-barrel detectors using cosmic muons under different magnetic-field settings. Hence, ALICE’s solenoid magnet has also been extensively tested – together with the dipole magnet in the muon arm – after almost two years of inactivity. Various special runs, such as TPC and TRD krypton calibrations, have been performed, producing a spectacular 5 PB of raw data in a single week, and providing a challenging stress test for the online systems.
The ALICE detector is located at point 2 of the LHC, and the end of the TI2 transfer line – which injects beam 1 (the clockwise beam) into the LHC from the Super Proton Synchrotron (SPS) – is 300 m from the interaction region. This set-up implies additional vacuum equipment and protection collimators close (80 m) to the ALICE cavern, which are a potential source of background interactions. The LHC teams have refurbished most of these components during LS1 to improve the background conditions during proton operations in Run 2.
ALICE took data during the injection tests in early March, when beam from the SPS was injected into the LHC and dumped halfway along the ring (CERN Courier April 2015 p5). The tests also produced so-called beam-splash events on the SPS beam dump and the TI2 collimator, which were used by ALICE to perform the time alignment for the trigger detectors and to calibrate the beam-monitoring system. The splash events were recorded using all of the ALICE detectors that could be operated safely in such conditions, including the muon arm.
The LHC sector tests mark the beginning of Run 2. The ALICE collaboration plans to exploit fully the first weeks of LHC running with proton collisions at a luminosity of about 10³¹ Hz/cm². The aim will be to collect rare triggers and switch to a different trigger strategy (an optimized balance of minimum bias and rare triggers) when the LHC finally moves to operation with a proton bunch separation of 25 ns.
Control of ALICE’s operating luminosity during the 25 ns phase will be challenging, because the experiment has to operate with very intense beam currents but relatively low luminosity in the interaction region. This requires using online systems to monitor the luminous beam region continuously, to control its transverse size and ensure proper feedback to the LHC operators. At the same time, optimized trigger algorithms will be employed to reduce the fraction of pile-up events in the detector.
The higher energy of proton collisions of Run 2 will result in a significant increase in the cross-sections for hard probes, and the long-awaited first lead–lead run after LS1 will see ALICE operating at a luminosity of 10²⁷ Hz/cm². However, the ALICE collaboration is already looking into the future with its upgrade plans for LS2, focusing on physics channels that do not exhibit hardware trigger signatures in a high-multiplicity environment like that in lead–lead collisions. At the current event storage rate of 0.5 kHz, the foreseen boost of luminosity from the present 10²⁷ Hz/cm² to more than 6 × 10²⁷ Hz/cm² will increase the collected statistics by a factor of 100. This will require free-running data acquisition and storage of the full data stream to tape for offline analysis.
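The factor of 100 follows from simple rate arithmetic once a hadronic Pb–Pb cross-section is assumed: at the upgraded luminosity the interaction rate far exceeds the present 0.5 kHz storage rate, so recording every collision multiplies the sample. The ~8 b cross-section below is an illustrative assumption, not a figure from the article.

```python
# Rough arithmetic behind the factor-100 gain. The Pb-Pb hadronic
# cross-section of ~8 b is an assumed illustrative value, not from the text.
sigma_cm2 = 8.0 * 1e-24             # ~8 barn expressed in cm^2
lumi = 6e27                         # cm^-2 s^-1, the post-LS2 target
interaction_rate_hz = sigma_cm2 * lumi
stored_now_hz = 500                 # present 0.5 kHz event storage rate
print(interaction_rate_hz / stored_now_hz)  # -> 96.0, i.e. roughly x100
```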
In this way, the LS2 upgrades will allow ALICE to exploit the full potential of the LHC for a complete characterization of quark–gluon plasma through measurements of unprecedented precision.