
First measurement of ionization potential casts light on ‘last’ actinide


The quest for new heavy chemical elements is the subject of intense research, as the synthesis and identification of these new elements fill up empty boxes in the familiar Periodic Table. The measurement of their properties for a proper classification in the table has proved challenging, because the isotopes of these elements are short-lived and new methods must be devised to cope with synthesis rates that yield only one atom at a time. Now, an international team led by researchers from the Japan Atomic Energy Agency (JAEA) in Tokai has developed an elegant experimental strategy to measure the first ionization potential of the heaviest actinide, lawrencium (atomic number Z = 103).

Using a new surface ion source (figure 1) and a mass-separated beam, the team’s measurement of 4.96±0.08 eV – published recently in Nature (Sato et al. 2015) – agrees perfectly with state-of-the-art quantum-chemical calculations that include relativistic effects, which play an increasingly important role in this region of the Periodic Table. The result confirms the extremely low binding energy of the outermost valence electron in this element, thereby establishing its position as the last element in the actinide series. This is in line with the concept of heavier homologues of the lanthanide rare earths, which was introduced by Glenn Seaborg in the 1940s.

In the investigations at JAEA the researchers have exploited the isotope-separation online (ISOL) technique, which has been used for nuclear-physics studies at CERN’s ISOLDE facility since the 1960s. The technique has now been adapted to perform ionization studies with the one-atom-at-a-time rates that are accessible for studies of lawrencium. A new surface-ion source was developed and calibrated with a series of lanthanide isotopes of known ionization potentials. The ionization probability of the mass-separated lawrencium could then be exploited to determine its ionization potential using the calibration master curve.
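The master-curve idea can be sketched numerically. For a surface ion source, the Saha–Langmuir law makes the ionization efficiency fall off roughly exponentially with the ionization potential (IP), so log-efficiency versus IP is close to a straight line. The following Python sketch uses invented calibration numbers – not the JAEA data – purely to illustrate how known lanthanide points define a curve that can then be inverted for an unknown species:

```python
import math

# Illustration of a calibration master curve for surface ionization
# (hypothetical numbers, NOT the JAEA analysis): the Saha-Langmuir law
# implies log10(efficiency) is roughly linear in the ionization potential.

# Invented calibration points: (IP in eV, measured ionization efficiency)
# for lanthanide tracers with well-known ionization potentials.
calib = [(5.43, 0.30), (5.58, 0.22), (5.94, 0.11), (6.15, 0.07)]

# Least-squares fit of log10(eff) = a - b*IP, done by hand.
xs = [ip for ip, _ in calib]
ys = [math.log10(eff) for _, eff in calib]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = -sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar + b * xbar

def ip_from_efficiency(eff):
    """Invert the master curve: measured efficiency -> ionization potential."""
    return (a - math.log10(eff)) / b

# A hypothetical measured efficiency for the unknown species then maps
# directly onto an ionization potential:
print(round(ip_from_efficiency(0.36), 2))  # prints 5.34 for these invented numbers
```

In the real experiment the curve is built from several lanthanide isotopes and the one-atom-at-a-time lawrencium rate enters only through its measured ionization probability.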


The special position of lawrencium in the Periodic Table has made it a focus of questions about the influence of relativistic effects, and of efforts to determine the properties that confirm its position as the last actinide. The two aspects most frequently addressed have concerned its ground-state electronic configuration and the value of its first ionization potential.

Relativistic effects strongly affect the electron configurations of the heaviest elements. In the actinides, the relativistic expansion of the 5f orbital contributes to the actinide contraction – the regular decrease in the ionic radii with increasing Z. Together with direct relativistic effects on the 7s and 7p₁/₂ orbitals, this influences the binding energies of valence electrons and the energetic ordering of the electron configurations. However, it is difficult to measure the energy levels of the heaviest actinides with Z > 100 by a spectroscopic method because these elements are not available in a weighable amount.

The ground-state electronic configuration of lawrencium (Lr) is expected to be [Rn]5f¹⁴7s²7p₁/₂. This is different from that of its homologue in the lanthanide series, lutetium, which is [Xe]4f¹⁴6s²5d. The reason for this change is the stabilization, by strong relativistic effects, of the 7p₁/₂ orbital of Lr below the 6d orbital. Lr, therefore, is anticipated to be the first element with a 7p₁/₂ orbital in its electronic ground state. As the measurement of the ionization potential directly reflects the binding energy of a valence electron under the influence of relativistic effects, its experimental determination provides direct information on the energetics of the electronic orbitals of Lr, including relativistic effects, and a test for modern theories. However, this measurement cannot answer questions about the electronic configuration itself. Nevertheless, as figure 2 shows, the experimental result is in excellent agreement with a new theoretical calculation that includes these effects and favours the [Rn]5f¹⁴7s²7p₁/₂ ground-state configuration.

Dark-matter self-interactions are weak

Astronomers using observations from the NASA/ESA Hubble Space Telescope and NASA’s Chandra X-ray Observatory have studied how dark matter in clusters of galaxies behaves when the clusters collide. The results confirm the distinct existence of dark matter with high significance, and show that dark matter interacts with itself even less than thought previously.

Although there is more dark matter than visible matter in the universe, dark matter remains extremely elusive and is, most likely, in a form outside of the Standard Model of particle physics. Dark matter does not reflect, absorb or emit light, making it transparent. The presence of a massive clump of dark matter can be probed only by its gravitational distortion of space–time, which bends the light path in its vicinity. This weak gravitational-lensing effect distorts the shape of background galaxies, making it possible to infer the spatial distribution of dark matter (CERN Courier January/February 2007 p11).

Collisions between clusters of galaxies provide a way to estimate the interaction of dark matter with itself. The “bullet cluster” is a prime example of such a collision, showing that while the hot gas is slowed down by ram pressure, the motion of both the dark matter and the galaxies seems to be unaltered by the event (CERN Courier October 2006 p9). It constrains the self-interaction cross-section of dark matter per unit mass to σ_DM/m < 1.25 cm²/g (68% CL). To tighten this constraint further, a group of astronomers led by David Harvey – affiliated to both the École Polytechnique Fédérale de Lausanne (EPFL) and the University of Edinburgh – studied a sample of 72 mergers identified in 30 colliding systems, with archival observations by Hubble in the visible range and by Chandra in X-rays.

The team determined the central position of the hot gas glowing in X-rays, of the galaxies and of the dark matter in each of the 72 collisions. The researchers assume that the direction of motion is given by the line connecting the location of the gas and of the galaxies, and then measure the position of the dark-matter component, both parallel and perpendicular to this direction. The latter serves as a check, and is found to be consistent with zero on average, as expected. Along the line of motion, the distribution of the offsets between dark matter and gas is found to be inconsistent (at 7.6σ) with the hypothesis that dark matter does not exist, i.e. that all of the cluster’s mass – apart from the roughly 3% in the form of stars in galaxies – is co-spatial with the hot gas. This rules out dark-matter alternatives such as modified Newtonian dynamics (MOND).

More interestingly, the ratio of the dark-matter and gas offsets from the galaxies is a dimensionless measure of the drag force acting on dark matter. The authors of the study measured an average value of –0.04±0.07 (68% CL), which they translate into an upper limit of σ_DM/m < 0.47 cm²/g (95% CL) on the momentum-transfer cross-section of dark matter. They note that this result rules out parts of the hidden-sector dark-matter models that predict σ_DM/m = 1 barn/GeV = 0.6 cm²/g, which is similar to nuclear cross-sections in the Standard Model. Such a strong self-coupling in the dark sector would not have conflicted with the orders-of-magnitude weaker coupling between dark matter and Standard Model particles, which is at most of the order of picobarns.
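The equivalence quoted between barn/GeV and cm²/g is a straightforward unit conversion; a short sketch for readers who want to check it (constants from standard tables):

```python
# Cross-check of the unit conversion quoted in the text:
# a cross-section per mass of 1 barn/GeV expressed in cm^2/g.
BARN_CM2 = 1e-24      # 1 barn in cm^2
GEV_G = 1.7827e-24    # mass of 1 GeV/c^2 in grams

sigma_over_m = BARN_CM2 / GEV_G   # cm^2 per gram
print(round(sigma_over_m, 2))     # prints 0.56, i.e. roughly the quoted 0.6 cm^2/g
```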

Brookhaven ushers in a new bright era

An era came to an end on 30 September 2014, when the National Synchrotron Light Source (NSLS) ended its last run and dumped its last beam after more than 30 years of operation at Brookhaven National Laboratory. NSLS was the first of the modern synchrotron light sources, and had an enormous impact on synchrotron-light-based science during the past decades. It contributed a wealth of pioneering scientific results, including work that resulted in two Nobel prizes. The following day, 1 October, a new era began for Brookhaven, with the start-up of the new facility, NSLS-II, which is designed to provide the brightest beams ever produced by a synchrotron light source.

The mission for a follow-up to NSLS was to provide a factor of 10 more flux and up to four orders of magnitude more brightness relative to the earlier machine (where brightness is defined as the number of photons per second divided by the beam cross-section and the divergence at the emission points, integrated over a narrow bandwidth of 1%). It was to be capable of achieving energy resolution of a fraction of a milli-electron-volt and spatial resolution on the nanometre scale. This ambition was acknowledged in 2005, when NSLS-II received CD-0, the first of five “critical decisions” for the construction of any new science facility funded by the US Department of Energy (DOE). The new light source was to enable novel science opportunities in all fields of synchrotron-radiation-based science, and would allow experiments that were not possible at any of the other facilities at that time. The project went swiftly through the design and R&D phase with critical decisions CD-1 and CD-2, and in June 2009 CD-3 was approved, allowing construction of the facility to begin.

The NSLS-II electron storage ring consists of 30 double-bend achromats (DBA) separated by 15 long (9.3 m) and 15 short (6.6 m) straight sections for insertion devices, which are the source of ultra-bright synchrotron radiation. The ring is designed for a beam energy of 3 GeV. To achieve the desired high brightness based on a horizontal beam emittance of ε_x = 0.8 π nrad m, it has a large circumference of 792 m. The bending magnets are fairly long (2.69 m) and weak (0.4 T). These design choices have two advantages. They allow the design of a stable lattice with a beam emittance close to the DBA minimum emittance, and at the same time, the energy radiated in the bending magnets is fairly moderate (283 keV per electron per turn). This allows an efficient doubling of the radiation-damping rate, and therefore a reduction of the beam emittance by a factor of two, through the use of six 3.4-m-long damping wigglers with a peak field of 1.85 T.
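These numbers can be cross-checked with the textbook expression for the energy an electron radiates per turn in the bending magnets, U₀ [keV] ≈ 88.46 E⁴/ρ, with E in GeV and ρ in metres. A quick sketch using the round numbers from the text (the small difference from the quoted 283 keV comes from the exact bending radius):

```python
# Rough cross-check of the bending-magnet energy loss per turn for
# electrons: U0 [keV] = 88.46 * E^4 [GeV] / rho [m].
E = 3.0   # beam energy, GeV (from the text)
B = 0.4   # dipole field, T (from the text)

rho = E / (0.2998 * B)    # bending radius in metres, ~25 m
U0 = 88.46 * E**4 / rho   # keV radiated per electron per turn

print(round(rho, 1), round(U0))  # prints 25.0 286 -- close to the quoted 283 keV
```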

NSLS-II has a conventional system of electromagnets for bending, focusing and nonlinear corrections. However, the field quality of these magnets is pushed beyond what has been achieved previously (ΔB/B = 10⁻⁵–10⁻⁴ at r = 25 mm). Further, the alignment of the magnetic centres with respect to each other is held to unprecedentedly small tolerances, with rms values of less than 10 μm.

The other critical parameter for high-brightness performance is the beam current of 500 mA. High beam current is obtained with an accelerating structure based on two single-cell 500-MHz superconducting cavities of the type known as CESR-B. This RF system offers advantages for beam stability because the structures exhibit weak parasitic RF modes and are superior for suppressing beam-loading effects.
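As a rough consistency check on these parameters, the RF frequency and the ring circumference fix the harmonic number h – the number of RF buckets – via h = f_RF·C/c. With the round numbers quoted in the text:

```python
# Harmonic number h = f_RF * C / c, using the round numbers from the text.
C_RING = 792.0       # ring circumference, m
F_RF = 500e6         # RF frequency, Hz (quoted as a round 500 MHz)
C_LIGHT = 2.998e8    # speed of light, m/s

h = F_RF * C_RING / C_LIGHT
print(round(h))  # ~1321; the exact RF frequency is tuned so that h is an integer
```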

In addition, beyond-state-of-the-art instrumentation is required to control the orbit stability of the beam with its small beam sizes (σ_y = 3 μm at the insertion devices). Therefore, both a novel beam-position monitor system with a resolution and stability of less than 200 nm and a fast orbit-feedback system have been designed and implemented. These will limit the motion of the beam orbit to within 10% of the (vertical) beam size for frequencies up to 1 kHz.

The vacuum system is made of extruded, keyhole-shaped aluminium. The antechamber houses two non-evaporable getter strips for distributed pumping. The girder system is designed for high thermal stability and to avoid amplification of mechanical vibrations below 30 Hz.

All of the electronics and power supplies are located on the tunnel roof and are housed in sealed air-cooled racks, protecting the sensitive equipment from dust, temperature fluctuations, humidity and leaking cooling water. This protection is a major element of the strategy to achieve high operational reliability for the more than 1000 magnet power supplies, the beam-position monitors, controls and vacuum-control equipment. The facility aims for a reliability greater than 95% once its operation is matured fully.

The NSLS-II injector consists of a 200-MeV S-band linac, which feeds the 3-GeV combined-function booster synchrotron for on-energy injection in “top-off” mode, where frequent injection maintains the beam current. The booster synchrotron was designed and built by the Budker Institute of Nuclear Physics in Novosibirsk, and installed in collaboration with NSLS-II staff.

The civil construction with the accelerator tunnels and the ring-shaped experimental floor was completed in 2012. Installation of the accelerator components, which started in 2011, was completed in 2013.

The commissioning of the linac was possible as early as April 2012, and commissioning of the booster synchrotron followed in December 2013. Storage-ring commissioning took place soon after, in April 2014. The commissioning time for the entire complex was remarkably short, and the superb robustness and reproducibility of the machine are demonstrated by the fact that restarts are possible only a few hours after shutdowns.

The summer of 2014 saw the installation of the first NSLS-II insertion devices. Three pairs of 3.4-m-long damping wigglers with peak fields of 1.85 T not only provide a factor of two in emittance reduction by enhanced radiation damping, they are also powerful sources (195 kW at a beam current of 500 mA) of photons up to energies of 100 keV. The workhorses of NSLS-II are in-vacuum undulators with a period of 20–23 mm and an extremely small gap height of 5 mm. Four such devices up to 3 m in length are part of the initial installation. There is also a pair of 2-m-long elliptical polarizing undulators (EPUs). The insertion devices were commissioned with their corresponding front-end systems during autumn 2014.

An initial suite of six beamlines is also part of the scope of the NSLS-II project. These beamlines are based on state-of-the-art – or beyond – beamline technology. They cover a range of synchrotron-light experimental techniques, including powder diffraction (XPD), coherent hard X-ray scattering (CHX), nano-focus imaging (HNX), inelastic X-ray scattering with extreme energy resolution < 1 meV (IXS), X-ray spectroscopy (SRX) and coherent soft X-ray scattering (CSX). All of these beamlines have started technical commissioning. The first light emitted by the NSLS-II EPU was observed on 23 October in the CSX beamline, followed by similar events for the other beamlines.

While the science commissioning of the existing beamlines at NSLS-II is taking place, nine further insertion-device beamlines are under construction. The first three, known as the ABBIX beamlines, are scheduled to start up in the spring of 2016. They are specialized for biological research. The other six insertion-device beamlines – the so-called “NEXT” beamlines – are planned to start up the following autumn. Finally, there is an ongoing programme that consists of reusing NSLS equipment and integrating it into five new beamlines (NxtGen) that will receive bending-magnet radiation. As the field of the NSLS-II dipole magnets is weak, some of the source points are equipped with a wavelength shifter consisting of a three-pole wiggler with a 1.2 T peak field.

A number of non-Brookhaven institutions have responded positively to the opportunity to work with NSLS-II, and they will develop five additional beamlines in collaboration with NSLS-II staff. Therefore by 2018, NSLS-II will run with 27 beamlines and will have recovered from the reduction in the scientific programme between the shutdown of NSLS and the development period of the NSLS-II user facility. In its final configuration, the NSLS-II facility will host more than 60 beamlines.

The construction of NSLS-II within budget ($912 million) and to schedule is the result of excellent teamwork between scientists, engineers and technicians. In a ceremony on 6 February, the US secretary of energy, Ernest Moniz, dedicated the new facility. The first science results from NSLS-II were reported as early as March (Wang et al. 2015), and the science programme will start for most beamlines in the summer. The bright future of the NSLS-II era has begun.

• NSLS-II was constructed under DOE contract No. DE-AC02-98CH10886. For further information, visit www.bnl.gov/ps/nsls2/about-NSLS-II.php.

Chronicles of CMS: the saga of LS1

For the past two years, teams from the CMS collaboration, many from distant countries, have been hard at work at LHC point 5 at Cessy in France. Their goal – to ensure that the CMS detector will be able to handle the improved performance of the LHC when it starts operations at higher energy and luminosity. More than 60,000 visitors to the CMS underground experimental cavern during the first long shutdown (LS1) witnessed scenes of intense and spectacular activity – from movements of the 1500-tonne endcap modules to the installation of the delicate pixel tracker, only the size of a portable toolbox but containing almost 70 million active sensors.

This endeavour involved planning for a huge programme of work (CERN Courier April 2013 p17). Since LS1 began, more than 1000 separate work packages have been carried out, ranging from the repairs and maintenance required after three years of operation during the LHC’s Run 1, through consolidation work for a long-term future, to the installation of completely new detector systems as well as the extension of existing ones. In addition to the many CMS teams involved, the programme relied on the strong general support and substantial direct contributions from physics and technical departments at CERN. This article, by no means exhaustive, aims to provide some insight into LS1 as it happened at point 5.

An early start

Vital contributions started as early as 2009, well before LS1 began. One example is the refurbishment by CERN’s General Services and Physics Departments of building 904 on the Prévessin site, to provide 2000 m2 of detector-assembly laboratories, which were used for the new parts of the muon detector. Another is the creation by CMS (mainly through contracts managed by CERN’s Engineering Department) of the Operational Support Centre in the surface-assembly building at point 5. This centre incorporates work areas for all of the CMS systems that had to be brought to the surface during LS1, and includes a cold-storage, cold-maintenance facility where the pixel tracker was kept until the new beampipe was fitted. There is also a workshop area suitable for modifying elements activated by collision products, which, as the LS1 story progressed, provided useful flexibility for dealing with unexpected work.

The highest-priority objective for CMS during LS1 was to operate the tracker cold

The highest-priority objective for CMS during LS1 was to operate the tracker cold. The silicon sensors of this innermost subdetector, which surrounds the LHC beampipe, must endure more than 10⁹ particles a second passing through them, and cannot be completely replaced until about a decade from now. The damaging effects of this particle flux, sustained over many years of operation, can be mitigated by operating the sensor system at a temperature that is 20–30 °C lower than the few degrees above zero used so far. Alongside modifications to allow delivery of the coolant at much lower temperatures, a new system of humidity control had to be introduced to prevent condensation and icing. This involved sealing the tracker envelope, while making provision for a flow of up to 400 m³/h of dry gas. The system installed by CMS is a novel one at CERN: it dries air and then optionally removes oxygen via filtering membranes. The first full-scale tests took place at the end of 2013, and there was great satisfaction when an operating temperature of –20 °C was achieved stably.
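The need for the sealed envelope and dry-gas flush can be illustrated with a dew-point estimate: any surface at –20 °C exposed to ordinary room air sits far below the air's dew point, so water condenses and freezes on it. The sketch below uses the standard Magnus approximation and assumed room conditions (the 22 °C / 40% figures are illustrative, not CMS measurements):

```python
import math

# Dew point from temperature and relative humidity, using the Magnus
# approximation for the saturation vapour pressure of water.
def dew_point_c(t_c, rh_percent):
    """Dew point in deg C for air at t_c deg C and rh_percent humidity."""
    gamma = math.log(rh_percent / 100.0) + 17.62 * t_c / (243.12 + t_c)
    return 243.12 * gamma / (17.62 - gamma)

# Assumed (illustrative) room air: 22 C at 40% relative humidity.
td = dew_point_c(22.0, 40.0)
print(round(td, 1))  # prints 7.8 -- a dew point far above the -20 C coolant
```

Since the dew point of typical hall air is around +8 °C, every cold surface must either be sealed off or bathed in gas dried to a dew point well below –20 °C.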

However, as one challenge faded, a new one emerged immediately. On warming up, tell-tale drips of water were visible coming from the insulated bundles of pipework carrying the coolant into the detector – an indication that air at room temperature and humidity had been reaching the cold pipes inside the system and forming ice. Fortunately, tests soon showed that an additional flow of dry air, injected separately into the pipework bundles, would suppress this problem. Responding to CMS’s request for help, the Engineering Department recently delivered a new dry-air plant that will make humidity suppression in the cooling distribution feasible on a routine basis, with a comfortable margin in capacity.

Another high-priority project for LS1 involved the muon detectors. A fourth triggering and measurement station in each of the endcaps was incorporated into the original CMS design, but it was not considered essential for initial operation. These stations are now needed to increase the power to discriminate between interesting low-momentum muons originating from the collision (e.g. potentially from a Higgs-boson decay) and fake muon signatures caused by backgrounds. Seventy-two new cathode-strip chambers (CSCs) and 144 new resistive-plate chambers (RPCs) were assembled across a three-year period by a typical CMS multinational team from institutes in Belgium, Bulgaria, China, Colombia, Egypt, Georgia, India, Italy, Korea, Mexico, Pakistan, Russia and the US, as well as from CERN. They were then installed as superposed layers of CSCs and RPCs on the two existing discs at the ends of the steel yoke that forms the structural backbone of CMS. Teams worked on the installation and commissioning in two major bursts of activity, matching the periods when the required detector configuration was available, and completing the job in late spring 2014.

A further improvement of the endcap muon system was achieved by installing new on-chamber electronics boards in the first, innermost layer of the CSCs to withstand the higher luminosity, while reusing the older electronics in one of the new fourth layers, where it is easier to cope with the collision rate. Here again, the unexpected had to be dealt with. One of the two layers had just been re-installed after months of re-fitting work, when tests revealed a potential instability caused by the accidental omission of a tiny passive electronic component. It was considered too risky to leave this uncorrected, so the installation teams had to go into full reverse. Working late into the evenings and at weekends to avoid interfering with previously scheduled activities, they partially extracted all 36 chambers, corrected the fault, put them back in place and re-commissioned them.

No part of the detector escaped the attention of the upgrade and maintenance teams. The modular structure of CMS, which can be separated into 13 major slices, was fully exploited to allow simultaneous activity, with as many as eight mobile work platforms frequently in use to give access to different slices and different parts of their 14 m diameter. Multiple maintenance interventions on the five barrel-yoke wheels restored the number of working channels to 99.7% – a figure not seen since 2009, just after installation. Similar interventions on the CSC and RPC stations on the endcap disks were also successful, with the few per cent that had degraded over the past few years restored completely. In addition, to improve maintainability, some key on-board electronics from the barrel part of the muon system was moved from the underground experimental cavern to the neighbouring service cavern, where it will now remain accessible during LHC operation. All of the photo-transducers and much of the on-detector electronics of the hadron calorimeter (HCAL) are to be replaced over the next few years, and a substantial part of this work was completed during LS1. In particular, photo-transducers of a new type were installed in the outer barrel and forward parts of the system, which will lead to an immediate improvement in performance.

The rate of proton–proton collisions will be five times higher

The need for some work streams was completely unforeseen until revealed by routine inspection. The most notable example was the discovery of a charred feed-through connector serving the environmental-screen heaters of one of the two preshower systems for the electromagnetic calorimeter (ECAL). Full diagnosis (under-rated capacitors) and subsequent repair of both preshower systems required their removal to the surface, where a semi-clean lab was created at short notice within the Operational Support Centre. The repairs and re-installation were a complete success, and the preshower system has been re-commissioned recently at its planned operating temperature of –8 °C.

The CMS consolidation programme had also to prepare the infrastructure of the experiment – originally designed for a 10-year operating lifetime – for running well into the 2030s. LHC operating periods lasting around three years will be interleaved with substantial shutdowns of one to two years in length. Moreover, the rate of proton–proton collisions will be five times higher, and the integrated number of collisions (ultimately) 10 times higher, than the original design goal.

Key adaptations were made during LS1 to address redundancy in the power and cryogenics systems, to extend the predicted lifetime of the one-of-a-kind CMS magnet. Further measures for protection against power glitches were implemented through an extension of the detector’s short-term uninterruptible power supply. Changes to the detector cooling included modifications for greater capacity and redundancy, as well as the addition of a new system in preparation for the upcoming upgrade of the pixel tracker, based on two-phase (evaporating liquid) carbon dioxide. This technology, new for CMS, involved the installation of precision-built concentric vacuum-insulated feed and return lines – difficult-to-modify structures that have to be made extremely accurately to ensure proper integration with the constricted channels that feed services into the apparatus. These changes presented challenges for the CMS Integration Office, where the “compact” in CMS was defended vigorously every day in computer models and then in the caverns.

The most massive change to the structure of the experiment was the addition of the new 125-tonne shielding discs

New detectors were not the only large-scale additions to CMS. The most massive change to the structure of the experiment was the addition of the new 125-tonne shielding discs – yoke endcap disc four (YE4) – installed outside of the fourth endcap muon station at either end of the detector. Each shielding disc, 14 m in diameter but only 125 mm thick, was made of 12 iron sector casings. Following manufacture and pre-assembly tests in Pakistan, these discs, whose design and preparation took five years, were disassembled for shipping to CERN and then re-assembled on the Meyrin site, where they were filled with a special dense (haematite) shielding concrete, mixed for this specific application by CERN’s civil engineers. Loaded with a small percentage of boron, this concoction will act as a “sponge” to soak up many of the low-energy neutrons that give unwanted hits in the detector, and whose numbers will increase as the LHC beam intensities get higher.

The YE4 discs, transported in sectors to point 5, were the first slices of CMS to be assembled underground – all of the existing major elements had been pre-assembled on the surface and lowered into the underground cavern in sequence (CERN Courier July/August 2006 p28). In the original concept, the YE4 discs could be separated from the supporting YE3 only by driving the whole endcap system back to the cavern headwall, where YE4 could be unhooked and supported. Because all of the other slices of the CMS “swiss roll” can be displaced from one another to give access to the detectors sandwiched in between, it was decided late in the project – in fact, after assembly had already started – to equip each YE4 shielding disc with air pads and a system of electric screw-jacks. This would allow the YE4 disc to separate from the supporting neighbour disc (YE3) by up to 3.7 m without the necessity to move it to the headwall – a major operation. In fact, one so-called “push-back system” was used immediately after assembly of the YE4 disc, to permit installation of RPCs with the endcaps partially closed. This maintained the rapid-access modularity that was a core feature of the CMS design (CERN Courier October 2008 p48).

The final change was at the heart of CMS, in preparation for the installation during the LHC’s year-end technical stop of 2016–2017 of an upgraded pixel tracker – the closest physics detector to the collision point. The 0.8-mm-thick central beampipe used during Run 1, with an outer diameter of 59.6 mm, was replaced by a similar one of 45-mm outer diameter and, like the first one, made of beryllium, to be as transparent as possible to particles emanating from the LHC collisions. The narrower beampipe will allow the first layer of the new pixel tracker to be closer to the collision point than before. This geometrical improvement, combined with an additional fourth layer of sensors, will upgrade the tracker’s ability to resolve where a charged particle originated. When running under conditions of high pile-up in Run 2 and Run 3 – that is, with many more protons colliding every time counter-rotating bunches meet at the centre of CMS – the disentangling of which tracks belong to which collision vertices will be crucial for most physics analyses.

The delicate operations of removing and replacing the beampipe – requiring the detector to be open fully – are possible only in a long shutdown. The new beampipe, designed jointly with CERN’s Technology Department, which procured and prepared it on behalf of CMS, was installed in June 2014. Its installation was followed immediately by vacuum pumping, combined with heating (“bake-out”) to more than 200 °C, to expel gas molecules attached to the chamber walls. This ensured that the operating pressure of around 10⁻¹⁰ mbar would be possible – and achieved eventually. Following the bake-out of the new central beampipe, several mechanical tests were made to ensure that the upgraded pixel tracker can be installed in the limited time window that will be available in 2016–2017.

It is probable that a proverb exists in every language and culture involved in CMS, warning against relaxing before the job is finished. In mid-August 2014, the end of the LS1 project seemed to be on the horizon. The beampipe bake-out was being completed and preparations for the pixel tracker’s re-installation were underway, so many team members took the opportunity for a quick summer holiday. Then, their mobile phones began to buzz with reports of the first indications of a severe fault found in pre-installation tests of the barrel pixel system, which had been removed only to allow the change of beampipe. About 25% (around 50) of the modules in one quadrant were not responding. By the end of August, the half-shell containing the faulty quadrant had been transported to its makers at the Paul Scherrer Institute (PSI) for detailed investigation.

On 5 September, the diagnostics revealed that the reason for failure was electro-migration-induced shorts between adjacent bond pads of the high-density interconnect – a flexible, complex, multilayer printed circuit used to extract the signals. An investigation showed that the most likely origin was a brief and inadvertent lapse in humidity-control protocols in the course of routine calibration exercises many months earlier, when the pixel system was up in the surface laboratory. By 18 September, a comprehensive strategy of replacement and repair had been worked out by the PSI team. Because this required purchasing new components and restarting the production of detector modules, the revised schedule foresaw the detector being back at CERN by the end of November, with installation planned for around 8 December, almost exactly two months later than intended originally.

A new end game

At this late stage, with insufficient contingency remaining in the baseline schedule to accommodate the delay, it was decided to change radically the end-game sequence of the shutdown. Instead of waiting for the repair of the pixel tracker, CMS was closed immediately to conduct a short magnet test, to identify any problems that otherwise would not have appeared until the final closure for beam. After finishing the remaining work on the bulkhead seal that allows the tracker to be operated cold, this sequence of closing the detector, testing the magnet and then re-opening CMS became the critical path for two months, with the remaining upgrade activity being postponed or re-arranged around the new schedule. The new sequence implied unexpectedly tight deadlines for several teams – particularly those working on the magnet and the forward region – and a massive extra workload for the heavy-engineering team. The additional closing and opening sequence required 36 single movements of heavy discs, and 16 insertions and removals of the heavy-raiser platforms that support the forward calorimeters at beam height. A concerted and exceptional effort resulted in the magnet yoke being closed by mid-October, and both forward regions being closed and ready for magnetic field by 6 November.


The following day, the magnet was ramped to 1 T and then discharged. This sequence allowed yoke elements to settle, and also verified that the control and safety systems performed as expected. By 10 November, enough liquid helium had been accumulated for 36 hours of operation at full field, and the test programme resumed. However, at 2.4 T, the main elevator providing underground access stopped working, owing to some field-sensitive floor-level sensors having been installed mistakenly during routine maintenance. After reducing the field temporarily to allow personnel to leave the underground areas, the ramp-up continued, reaching the working value of 3.8 T at around 7.00 p.m., demonstrating that the magnet’s upgraded power and cryogenics system worked well. Despite the rapid endcap-yoke closure with only approximate axial alignment, the movements under the magnetic forces of the endcap discs (including the new YE4s) and the forward systems were well within the ranges observed previously, although specific movements occurred at different field values. The new beampipe support system and the new phototransducers of the HCAL and beam-halo monitors were shown to be tolerant to the magnetic field. Most importantly, the environmental seal around the tracker and the new dry-gas injection system functioned well enough in the magnetic field to allow tracker operation at –20 °C. The top-priority task of LS1 could therefore be declared a success.

Following this, the opening of the detector was a race against time to meet the target of installing the barrel and forward pixel trackers, and enclosing them in a stable environment before CERN’s 2014 end-of-year closure. This was achieved successfully, providing a fortuitous “dry run” of what will have to be done during the year-end stop of 2016–2017, when the new pixel tracker will be installed. Following a thorough check and pre-calibration of the pixel system, the last new elements of CMS in the LS1 project – upgraded beam monitors and the innovative pixel luminosity telescope (CERN Courier March 2015 p6) – were installed by the end of the first week of February 2015.

The closing of the experiment, just in time for first beam in 2015, brought the saga of LS1 to a happy ending. It is time to celebrate with the collaboration teams, contractors and CERN technical groups, who have all contributed to the successful outcome. The imminent start of Run 2 now raises the exciting prospect of new physics, but behind the scenes preparations for the next CMS shutdown adventure have already begun.

ALICE: from LS1 to readiness for Run 2


It is nearly two years since the beams in the LHC were switched off and Long Shutdown 1 (LS1) began. Since then, a myriad of scientists and engineers have been repairing and consolidating the accelerator and the experiments for running at the unprecedented energy of 13 TeV (or 6.5 TeV/beam) – almost twice that of 2012.

In terms of installation work, ALICE is now complete. The remaining five super modules of the transition radiation detector (TRD), which were missing in Run 1, have been produced and installed. At the same time, the low-voltage distribution system for the TRD was re-worked to eliminate intermittent overheating problems that were experienced during the previous operational phase. On the read-out side, the data transmission over the optical links was upgraded to double the throughput to 4 GB/s. The TRD pre-trigger system used in Run 1 – a separate, minimum-bias trigger derived from the ALICE veto (V0) and start-counter (T0) detectors – was replaced by a new, ultrafast (425 ns) level-0 trigger featuring a complete veto and “busy” logic within the ALICE central trigger processor (CTP). This implementation required the relocation of racks hosting the V0 and T0 front-end cards to reduce cable delays to the CTP, together with optimization of the V0 front-end firmware for faster generation of time hits in minimum-bias triggers.

The ALICE electromagnetic calorimeter system was augmented with the installation of eight (six full-size and two one-third-size) super modules of the brand new dijet calorimeter (DCal). This now sits back-to-back with the existing electromagnetic calorimeter (EMCal), and brings the total azimuthal calorimeter coverage to 174° – that is, 107° (EMCal) plus 67° (DCal). One module of the photon spectrometer calorimeter (PHOS) was added to the pre-existing three modules and equipped with one charged-particle veto (CPV) detector module. The CPV is based on multiwire proportional chambers with pad read-out, and is designed to suppress the detection of charged hadrons in the PHOS calorimeter.

The overall PHOS/DCal set-up is located in the bottom part of the ALICE detector, and is now held in place by a completely new support structure. During LS1, the read-out electronics of the three calorimeters was fully upgraded from serial to parallel links, to allow operation at a 48 kHz lead–lead interaction rate with a minimum-bias trigger. The PHOS level-0 and level-1 trigger electronics was also upgraded, the latter being interfaced with the neighbouring DCal modules. This will allow the DCal/PHOS system to be used as a single calorimeter able to produce both shower and jet triggers from its full acceptance.

The gas mixture of the ALICE time-projection chamber (TPC) was changed from Ne(90):CO2(10) to Ar(90):CO2(10), to allow for a more stable response to the high particle fluxes generated during proton–lead and lead–lead running without significant degradation of momentum resolution at the lowest transverse momenta. The read-out electronics for the TPC chambers was fully redesigned, doubling the data lines and introducing more field-programmable gate-array (FPGA) capacity for faster processing and online noise removal. One of the 18 TPC sectors (on one side) is already instrumented with a pre-production series of the new read-out cards, to allow for commissioning before operation with the first proton beams in Run 2. The remaining boards are being produced and will be installed on the TPC during the first LHC Technical Stop (TS1). The increased read-out speed will be exploited fully during the four weeks of lead collisions foreseen for mid-November 2015. For lead running, ALICE will operate mainly with minimum-bias triggers at a collision rate of 8 kHz or higher, which will produce a track load in the TPC equivalent to operation at 700 kHz in proton running.

LS1 has also seen the design and installation of a new subsystem – the ALICE diffractive (AD) detector. This consists of two double layers of scintillation counters placed far from the interaction region on both sides, one in the ALICE cavern (at z = 16 m) and one in the LHC tunnel (at z = –19 m). The AD photomultiplier tubes are all accessible from the ALICE cavern, and the collected light is transported via clear optical fibres.

The ALICE muon chambers (MCH) underwent a major hardware consolidation of the low-voltage system in which the bus bars were fully re-soldered to minimize the effects of spurious chamber occupancies. The muon trigger (MTR) gas-distribution system was switched to closed-loop operation, and the gas inlet and outlet “beaks” were replaced with flexible material to avoid cracking from mechanical stress. One of the MTR resistive-plate chambers was instrumented with a pre-production front-end card being developed for the upgrade programme in LS2.

The increased read-out rates of the TPC and TRD have been matched by a complete upgrade (replacement) of both the data-acquisition (DAQ) and high-level trigger (HLT) computer clusters. In addition, the DAQ and HLT read-out/receiver cards have been redesigned, and now feature higher-density parallel optical connectivity on a PCIe-bus interface and a common FPGA design. The ALICE CTP board was also fully redesigned to double the number of trigger classes (logic combinations of primary inputs from trigger detectors) from 50 to 100, and to handle the new, faster level-0 trigger architecture developed to increase the efficiency of the TRD minimum-bias inspection.

Regarding data-taking operations, a full optimization of the DAQ and HLT sequences was performed with the aim of maximizing the running efficiency. All of the detector-initialization procedures were analysed to identify and eliminate bottlenecks, to speed up the start- and end-of-run phases. In addition, an in-run recovery protocol was implemented on both the DAQ/HLT/CTP and the detector sides to allow, in case of hiccups, on-the-fly front-end resets and reconfiguration without the need to stop the ongoing run. The ALICE HLT software framework was in turn modified to discard any possible incomplete events originating during online detector recovery. At the detector level, the leakage of “busy time” between the central barrel and muon-arm read-out detectors has been minimized by implementing multievent buffers on the shared trigger detectors. In addition, the central barrel and the muon-arm triggers can now be paused independently to allow for the execution of the in-run recovery.

Towards routine running

The ALICE control room was renovated completely during LS1, with the removal of the internal walls to create an ergonomic open space with 29 universal workstations. Desks in the front rows face 11 extra-large-format LED screens displaying the LHC and ALICE controls and status. They are reserved for the shift crew and the run-co-ordination team. Four concentric lateral rows of desks are reserved for the work of detector experts. The new ALICE Run Control Centre also includes an access ramp for personnel with reduced mobility. In addition, there are three large windows – one of which can be transformed into a semi-transparent, back-lit touchscreen – for the best visitor experience with minimal disturbance to the ALICE operators.

Following the detector installations and interventions on almost all of the components of the hardware, electronics, and supporting systems, the ALICE teams began an early integration campaign at the end of 2014, allowing the ALICE detector to start routine cosmic running with most of the central-barrel detectors by the end of December. The first weeks of 2015 have seen intensive work on performing track alignment of the central-barrel detectors using cosmic muons under different magnetic-field settings. Hence, ALICE’s solenoid magnet has also been extensively tested – together with the dipole magnet in the muon arm – after almost two years of inactivity. Various special runs, such as TPC and TRD krypton calibrations, have been performed, producing a spectacular 5 PB of raw data in a single week, and providing a challenging stress test for the online systems.

The ALICE detector is located at point 2 of the LHC, and the end of the TI2 transfer line – which injects beam 1 (the clockwise beam) into the LHC from the Super Proton Synchrotron (SPS) – is 300 m from the interaction region. This set-up implies additional vacuum equipment and protection collimators close (80 m) to the ALICE cavern, which are a potential source of background interactions. The LHC teams have refurbished most of these components during LS1 to improve the background conditions during proton operations in Run 2.

ALICE took data during the injection tests in early March when beam from the SPS was injected into the LHC and dumped halfway along the ring (CERN Courier April 2015 p5). The tests also produced so-called beam-splash events on the SPS beam dump and the TI2 collimator, which were used by ALICE to perform the time alignment for the trigger detectors and to calibrate the beam-monitoring system. The splash events were recorded using all of the ALICE detectors that could be operated safely in such conditions, including the muon arm.

The LHC sector tests mark the beginning of Run 2. The ALICE collaboration plans to exploit fully the first weeks of LHC running with proton collisions at a luminosity of about 10³¹ Hz/cm². The aim will be to collect rare triggers and switch to a different trigger strategy (an optimized balance of minimum bias and rare triggers) when the LHC finally moves to operation with a proton bunch separation of 25 ns.

Control of ALICE’s operating luminosity during the 25 ns phase will be challenging, because the experiment has to operate with very intense beam currents but relatively low luminosity in the interaction region. This requires using online systems to monitor the luminous beam region continuously, to control its transverse size and ensure proper feedback to the LHC operators. At the same time, optimized trigger algorithms will be employed to reduce the fraction of pile-up events in the detector.

The higher energy of proton collisions of Run 2 will result in a significant increase in the cross-sections for hard probes, and the long-awaited first lead–lead run after LS1 will see ALICE operating at a luminosity of 10²⁷ Hz/cm². However, the ALICE collaboration is already looking into the future with its upgrade plans for LS2, focusing on physics channels that do not exhibit hardware trigger signatures in a high-multiplicity environment like that in lead–lead collisions. At the current event storage rate of 0.5 kHz, the foreseen boost of luminosity from the present 10²⁷ Hz/cm² to more than 6 × 10²⁷ Hz/cm² will increase the collected statistics by a factor of 100. This will require free-running data acquisition and storage of the full data stream to tape for offline analysis.
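The factor of 100 can be checked with simple rate arithmetic. The sketch below assumes an inelastic lead–lead cross-section of roughly 8 b, a figure not stated in the text:

```latex
% Interaction rate at the upgraded luminosity (assuming sigma_inel ~ 8 b):
R = \sigma_{\mathrm{inel}}\,\mathcal{L}
  \approx \left(8\times10^{-24}\,\mathrm{cm^{2}}\right)
          \left(6\times10^{27}\,\mathrm{cm^{-2}\,s^{-1}}\right)
  \approx 5\times10^{4}\,\mathrm{Hz} \simeq 50\,\mathrm{kHz}.
% Recording every interaction in free-running mode, rather than
% storing events at 0.5 kHz, gains a factor of
\frac{50\,\mathrm{kHz}}{0.5\,\mathrm{kHz}} = 100.
```

In other words, the gain comes almost entirely from removing the event-storage bottleneck, not from the luminosity increase alone.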

In this way, the LS2 upgrades will allow ALICE to exploit the full potential of the LHC for a complete characterization of quark–gluon plasma through measurements of unprecedented precision.

Gauge Theories of the Strong, Weak, and Electromagnetic Interactions (2nd edition)

By Chris Quigg
Princeton University Press
Hardback: £52.00 $75.00
Also available as an e-book, and at the CERN bookshop


The answer lies in the second edition of Chris Quigg’s Gauge Theories of the Strong, Weak, and Electromagnetic Interactions. By a remarkable coincidence, this substantially revised volume fills in much of what the “gifted amateur” wants to know about how QFT is applied in traditional particle physics. It is hard to find words to describe Quigg’s clean, high-quality work; as an author he is a virtuoso performer. He takes the reader through the Standard Model of particle physics to the first steps beyond it, showing the most important insights, describing open questions and proposing original literature and further reading. He has designed or collected many insightful figures that illustrate beautifully the intriguing properties of the Standard Model.

However, it’s hard for me personally to end the review on this high note, since research in the field of gauge theories of strong interactions does not end with the perturbative processes. Over the past 30 years, a vast new area has opened up with many fundamental insights. These connect to the QCD vacuum structure, the Hagedorn temperature and colour deconfinement as encapsulated in the new buzzword – quark–gluon plasma, the strongly interacting, colour-charged many-body state of quarks and gluons. Moreover, there is a wealth of numerical lattice results that accompany these developments.

I find no key word for this in the index of Quigg’s book, although there is mention of “confinement” (p336ff). On page 340, a one-phrase summary mentions the temperature of a chiral-symmetry-restoring transition (from what to what is not stated) that characterizes the lattice-QCD results seen in figure 8.47 on p342. This single phrase is all that describes what is, by my estimate, 20% of the experimental work at CERN of the past 25 years, and the majority of particle physics at Brookhaven for the past 15 years. In this section I also read how vacuum dielectric properties relate to confinement. I know this argument from Kenneth Wilson, as refined and elaborated on by TD Lee, and from the lattice-QCD work initiated by Michael Creutz at Brookhaven, yet Quigg attributes it to an Abelian-interaction model that I had not thought workable.

The author, renowned for his work addressing two-particle interactions, represents in his book the traditional particle-physics programme as continued today at Fermilab, where the novel area of QCD many-body physics is not on the research menu, though it has come of age at CERN and Brookhaven. One can argue that this new science is not “particle physics” – but it is definitely part of “gauge theories of strong interactions”, words embedded in the title of Quigg’s book. Thus, quark–gluon plasma, vacuum structure and confinement glare brightly by their absence in this volume.

Looking again at both books it is remarkable how complementary they are for a CERN Courier reader. These are two excellent texts and together they cover most of modern QFT and its application in particle physics in 1000 pages at an affordable cost. I strongly recommend both, individually or as a set. As noted, however, the reader who purchases these two volumes may need a third one covering the new physics of deconfinement, QCD vacuum and thermal quarks and gluons – the quark–gluon plasma.

Neutrinos in High Energy and Astroparticle Physics

By José W F Valle and Jorge C Romão
Wiley-VCH
Paperback: £75 €90
Also available at the CERN bookshop


Neutrinos have kept particle physicists excited for at least the past 20 years. After they were finally proved to be massive, the two mass-squared differences and all three mixing angles have now been determined – the last remaining angle, θ₁₃, in 2012 by the three reactor experiments Daya Bay, RENO and Double Chooz. As neutrino masses are expected to be linked intimately to physics beyond the Standard Model that can be probed at the LHC, and as neutrinos are about to start a “second career” as astrophysical probes, it seems a perfect time to publish a new textbook on the elusive particle. The authors José Valle and Jorge Romão are leading protagonists in the field who have devoted most of their careers to the puzzling neutrino. In this new book they share their experience of many years at the forefront of research.

They begin with a brief historical introduction, before reviewing the Standard Model and its problems and discussing the quantization of massive neutral leptons. The next three chapters deal with neutrino oscillations and absolute neutrino masses – the mass being one of the fundamental properties of neutrinos that is still unknown. Here the authors give a detailed discussion of the lepton-mixing matrix – the basic tool to describe oscillations – and seesaw models of various types. An interesting aspect is the thorough discussion of what could be called “Majorananess” and its relation to neutrino masses, lepton-number violation and neutrinoless double beta decay – for example, in the paragraphs dealing with the Majorana–Dirac confusion and black-box theorems, a point that is rarely covered in textbooks and often results in confusion.

Next, the book discusses how neutrino masses are implemented in the Standard Model’s SU(2) × U(1) gauge theory and the relationship to Higgs physics. This is followed by a detailed treatment of neutrinos and physics beyond the Standard Model (supersymmetry, unification and the flavour problem), which constitutes almost half of the entire book. Here the text exhibits its particular strength – also in comparison to the competing books by Carlo Giunti and Chung Kim, and by Vernon Barger, Danny Marfatia and Kerry Whisnant, both of which concentrate more on neutrino oscillation phenomenology – by discussing exhaustively how neutrino physics is linked to beyond-the-Standard-Model phenomenology, such as lepton-flavour violation or collider processes. The inclusion of a detailed discussion of these topics is a good choice and it makes the book valuable as a textbook, although it does make this part rather long and encyclopedic. Another strong point is the focus on model building. For example, the book discusses in detail the challenges in flavour-symmetry model building to accommodate a non-zero θ₁₃, and the deviation of the lepton-mixing matrix from the simple tri-bimaximal form.

The authors end with a brief chapter on cosmology, concentrating mainly on dark matter and its connection to neutrinos. While this chapter obviously cannot replace a dedicated introduction to cosmology, a few more details, such as an introduction to the Friedmann equation, would have been helpful here. In general, the treatment of astroparticle physics is shorter than expected from the title of the book. For example, the detection of extragalactic neutrinos at IceCube is not covered – indeed, IceCube is only mentioned in passing as an experiment that is sensitive to the indirect detection of dark matter. Leptogenesis and supernova neutrinos are likewise mentioned only briefly.

The book mainly serves as a detailed and concise, thorough and pedagogical introduction to the relationship of neutrinos to physics beyond the Standard Model, and in particular the related particle-physics phenomenology. This subject is highly topical and will be more so in the years to come. As such, Neutrinos in High Energy and Astroparticle Physics does an excellent job and belongs on the bookshelf of every graduate student and researcher who is seriously interested in this interdisciplinary and increasingly important topic.

Canonical Quantum Gravity: Fundamentals and Recent Developments

By Francesco Cianfrani et al
World Scientific
Hardback: £84
E-book: £63
Also available at the CERN bookshop


This book aims to present a pedagogical and self-consistent treatment of the canonical approach to quantum gravity, starting from its original formulation to the most recent developments in the field. It begins with an introduction to the formalism and concepts of general relativity, the standard cosmological model and the inflationary mechanism. After presenting the Lagrangian approach to the Einsteinian theory, the basic concepts of the canonical approach to quantum mechanics are provided, focusing on the formulations relevant for canonical quantum gravity. Different formulations are then compared, leading to a consistent picture of canonical quantum cosmology.

Quantum Field Theory for the Gifted Amateur

By Tom Lancaster and Stephen J Blundell
Oxford University Press
Hardback: £65 $110
Paperback: £29.99 $49.95
Also available as an e-book, and at the CERN bookshop


Many readers of CERN Courier will already have several introductions to quantum field theory (QFT) on their shelves. Indeed, it might seem that another book on this topic has missed its century – but that is not quite true. Tom Lancaster and Stephen Blundell offer a response to a frequently posed question: what should I read and study to learn QFT? Before this text, it was hard to name a contemporary book suited to self-study – that is, to learning with occasional guidance from an adviser, but outside a classroom setting. Now, in this book I find a treasury of contemporary material presented concisely and lucidly in a format that I can recommend for independent study.

Quantum Field Theory for the Gifted Amateur is in my opinion a good investment, although of course one cannot squeeze all of QFT into 500 pages. Specifically, this is not a book about strong interactions; QCD is not in the book, not a word. Reading page 308 at the end of subsection 34.4 one might expect that some aspects of quarks and asymptotic freedom would appear late in chapter 46, but they do not. I found the word “quark” once – on page 308 – but as far as I can tell, “gluon” did not make its way at all into the part on “Some applications from the world of particle physics.”

If you are a curious amateur and hear about, for example, “Majorana” (p444ff) or perhaps “vacuum instability” (p457ff, done nicely) or “chiral symmetry” (p322ff), you can start self-study of these topics by reading these pages. However, it’s a little odd that although important current content is set up, it is not always followed with a full explanation. In these examples, oscillation into a different flavour is given just one phrase, on p449.

Some interesting topics – such as “coherent states” – are described in depth, but others central to QFT merit more words. For example, figure 41.6 is presented in the margin to explain how QED vacuum polarization works, illustrating equations 41.18–41.20. The figure gives the impression that the QED vacuum-polarization effect decreases the Coulomb–Maxwell potential strength, while the equations and subsequent discussion correctly show that the observed vacuum-polarization effect in atoms adds attraction to electron binding. The reader should be given an explanation of the subtle point that reconciles the intuitive impression from the figure with the equations.

Despite these issues, I believe that this volume offers an attractive, new “rock and roll” approach, filling a large void in the spectrum of QFT books, so my strong positive recommendation stands. The question that the reader of these lines will now have in mind is how to mitigate the absence of some material.

The LHC: a machine in training

After the long maintenance and consolidation campaign carried out during the first long shutdown, LS1, the early part of 2015 has been dominated by tests and magnet training to prepare the LHC for a collision energy of 13 TeV. With all of the hardware and software systems to be checked, a total of more than 10,000 test steps needed to be performed and analysed on the LHC’s magnet circuits.

The LHC’s backbone consists of 1232 superconducting dipole magnets with a field of up to 8.33 T operating in superfluid helium at 1.9 K, together with more than 500 superconducting quadrupole magnets operating at 4.2 K or 1.9 K. Many other superconducting and normal resistive magnets are used to allow the correction of all beam parameters, bringing the total number of magnets to more than 10,000. About 1700 power converters are necessary to feed the superconducting circuits.

The dipole magnets in the first of the LHC’s eight sectors were trained successfully to nominal current in December, and training continued throughout the first three months of 2015. Although all of the dipole magnets were tested individually before installation, they had to be trained together in the tunnel up to 10,980 A, the current that corresponds to a beam energy of 6.5 TeV.

Training involves repeated quenches before a superconducting magnet reaches its target magnetic field. The quenches are caused by the sudden release of electromechanical stress, which produces a local increase in temperature that triggers a change from the superconducting to the resistive state. The entire coil is then warmed up and cooled down again – for the LHC dipoles, this can take several hours. The magnet protection system is crucial for detecting a quench and safely extracting the energy stored in the circuits – about 1 GJ per dipole circuit at nominal current.
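The quoted 1 GJ can be checked with the stored-energy formula for an inductor. The sketch below assumes a per-dipole inductance of roughly 0.1 H (so about 15 H for a series circuit of 154 dipoles) and a nominal current of about 11.85 kA; both figures are illustrative estimates, not values given in the text:

```latex
% Energy stored in one dipole circuit (assumed L and I values):
E = \tfrac{1}{2} L I^{2}
  \approx \tfrac{1}{2}\,\left(154 \times 0.1\,\mathrm{H}\right)
          \left(1.185\times10^{4}\,\mathrm{A}\right)^{2}
  \approx 1.1\times10^{9}\,\mathrm{J} \approx 1\,\mathrm{GJ}.
```

This is why the energy-extraction systems matter: on a quench, the full gigajoule must be dumped safely into external resistors rather than into the quenching coil.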

The typical time needed to commission a dipole circuit fully is of the order of three to five weeks, and all of the interlock and protection systems have to be tested, both before and while ramping up the current in steps. By mid-February, the dipole circuits in three sectors had been trained to the level equivalent to 6.5 TeV, with the total number of quenches confirming the initial prediction of about 100 quenches for all of the dipoles in the machine. By early March, four sectors were fully trained for 6.5-TeV operation, with a fifth well into its training programme.


On the weekend of 7–8 March, operators performed injection tests with beams of protons being sent part way around the LHC. Beam 1 passed through the ALICE detector up to point 3 of the LHC, where it was dumped on a collimator, and beam 2 went through the LHCb detector up to the beam dump at point 6. The team recorded various parameters, including the timings of the injection kickers and the beam trajectory in the injection lines and LHC beam pipe.

The ALICE and LHCb collaborations prepared their experiments to receive pulses of particles and recorded “splash” events as the particles travelled through their detectors. LHCb used the tests to commission the detector and the data-acquisition system, as well as to perform detector studies and alignments of the different sub-detectors. The ALICE collaboration meanwhile used muons originating from the Super Proton Synchrotron beam dump for timing studies of the trigger and to align the muon spectrometer.

If commissioning remains on schedule, the LHC should restart towards the end of March, with first collisions at 13 TeV in late May/early June.
