The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory has shattered its own record for producing polarized-proton collisions at 200 GeV collision energy. In the experimental run currently underway, accelerator physicists are delivering 1.2 × 10¹² collisions per week – more than double the number routinely achieved in 2012, the last run dedicated to polarized-proton experiments at this collision energy.
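For a rough sense of scale – purely as an illustrative sketch, not a figure from the RHIC team – the quoted weekly total can be converted into an average collision rate:

```python
# Illustrative sketch: convert the quoted weekly collision total into an
# average rate (assumes collisions are delivered uniformly over the week).
collisions_per_week = 1.2e12
seconds_per_week = 7 * 24 * 3600          # 604,800 s
average_rate_hz = collisions_per_week / seconds_per_week
print(f"Average collision rate ≈ {average_rate_hz:.1e} Hz")   # ≈ 2.0e6 Hz, about 2 MHz
```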
The achievement is, in part, the result of a method called “electron lensing”, which uses negatively charged electrons to compensate for the tendency of the positively charged protons in one circulating beam to repel the like-charged protons in the other beam when the two oppositely directed beams pass through one another in the collider. In 2012, these beam–beam interactions limited the ability to produce high collision rates, so the RHIC team commissioned electron lenses and a new lattice to mitigate the beam–beam effect. RHIC is now the first collider to use electron lenses for head-on beam–beam compensation. The team also upgraded the source that produces the polarized protons to generate and feed more particles into the circulating beams, and made other improvements in the accelerator chain to achieve higher luminosity.
With new luminosity records for collisions of gold beams, plus the first-ever head-on collisions of gold with helium-3, 2014 proved to be an exceptional year for RHIC. Now, the collider is on track towards another year of record performance, and research teams are looking forward to a wealth of new insights from the data to come.
Construction of new panels of the pixel detector. The pixel detector is the innermost of ATLAS’s many layers, lying closest to the interaction point where particle collisions occur.
View of the ATLAS calorimeters from below as they were being moved to their final position before the detector closed for the LHC’s second run. Calorimeters measure energy carried by neutral and charged particles.
The ATLAS team watches as the first part of the Insertable B-Layer (IBL), a new component of the pixel subdetector, enters its support tube. The IBL was installed in May 2014, becoming the innermost layer of ATLAS’s inner detector region. It will provide an additional measurement point for tracking particles; a point this close to the collision vertex significantly improves precision.
An ATLAS member vacuums the different sectors inside the 7000 tonne detector. Before the toroid magnets can be turned on for tests, the detector must be thoroughly cleaned. In December 2014, 110 ATLAS members worked in 10 different shifts for five days, cleaning and inspecting the detector and the cavern that houses it, to make sure that no object, however minuscule, had been left behind during the months of upgrade and maintenance.
A thin gap chamber on one of the big wheels being replaced. The big wheels are the final layer of the muon spectrometer, which identifies muons and measures their momenta as they pass through the ATLAS detector. The muon spectrometer is the outermost component of the 25-m tall and 46-m long ATLAS detector.
ATLAS physicists Vincent Hedberg, left, and Giulio Avoni glue optical fibres for the construction of the LUCID calibration system. LUCID is a detector that will help ATLAS continue to measure luminosity with very high precision during the increased collision rates and increased energy expected in the next LHC run.
The vacuum group’s team members lead the installation of LUCID and the LHC beam pipe. The beam pipe delivers the proton–proton collisions to the heart of the detector.
Raphaël Vuillermet, the technical co-ordination team’s engineer, supervises the separation of the muon spectrometer’s big wheels from the cavern balcony. There are four moveable big wheels at each end of the ATLAS detector, each measuring 23 m in diameter. The wheels are separated to access the interior of the muon stations to change faulty chambers.
Members of the ATLAS muon team inspect the monitored drift tubes of the muon spectrometer before the shielding that encircles the beam pipe, where collisions occur, is installed. The shielding is designed to maintain the integrity of the beam and to protect the sensitive components of the detector near the beamline.
The SESAME project – the Synchrotron-light for Experimental Science and Applications in the Middle East – passed an important milestone at the beginning of April, with the complete assembly and successful testing at CERN of the first of 16 magnetic cells for the electron storage ring.
Under construction in Jordan, SESAME is a unique joint venture that brings together scientists from its members: Bahrain, Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, the Palestinian Authority and Turkey. The light source consists of an injector – comprising a 20-MeV microtron and an 800-MeV booster synchrotron – which feeds a 2.5-GeV electron storage ring. CERN is responsible for the magnets of the storage ring and their powering scheme under CESSAMag – a project funded largely by the European Commission. Within the project, CERN has been collaborating with SESAME and the ALBA Synchrotron to design, test and characterize the components of the magnetic system.
The SESAME storage ring is built up from 16 magnetic cells, which make up the periodic structure of the machine, together with insertion regions where special synchrotron radiation can be produced. Each of the periodic cells consists of one bending magnet (a combined function dipole–quadrupole), two focusing and two defocusing magnets (quadrupoles) and four combined sextupole corrector magnets (including orbit and coupling correction). Orders were placed in the UK for the dipoles, in Spain and Turkey for the quadrupoles, and in France, Cyprus and Pakistan for the sextupoles. Italy, Israel and Switzerland are providing the power-supply components, and Iran, Pakistan and Turkey are providing additional in-kind support to CERN in the form of material and personnel.
The integration tests at CERN, which were carried out together with colleagues from SESAME, aimed to assemble a full periodic cell of the machine. Besides the magnets themselves, this involved the girder support structure as well as the vacuum chamber through which the electron beam will pass. The success of the tests demonstrates that these subsystems work together as foreseen.
Production of the magnets and their powering scheme is now in full swing. After acceptance tests and integration for the powering, the components will be shipped in batches to Jordan, where installation and commissioning of the storage ring is planned for 2016, followed by start-up the same year. The SESAME injector, which includes a booster synchrotron, is already operational.
An era came to an end on 30 September 2014, when the National Synchrotron Light Source (NSLS) ended its last run and dumped its last beam after more than 30 years of operation at Brookhaven National Laboratory. NSLS was the first of the modern synchrotron light sources, and had an enormous impact on synchrotron-light-based science during the past decades. It contributed a wealth of pioneering scientific results, including work that resulted in two Nobel prizes. The following day, 1 October, a new era began for Brookhaven, with the start-up of the new facility, NSLS-II, which is designed to provide the brightest beams ever produced by a synchrotron light source.
The mission for a follow-up to NSLS was to provide a factor of 10 more flux and up to four orders of magnitude more brightness relative to the earlier machine (where brightness is defined as the number of photons per second divided by the beam cross-section and the divergence at the emission points, integrated over a narrow bandwidth of 1%). It was to be capable of achieving energy resolution of a fraction of a milli-electron-volt and spatial resolution on the nanometre scale. This ambition was acknowledged in 2005, when NSLS-II received CD-0, the first of five “critical decisions” for the construction of any new science facility funded by the US Department of Energy (DOE). The new light source was to enable novel science opportunities in all fields of synchrotron-radiation-based science, and would allow experiments that were not possible at any of the other facilities at that time. The project went swiftly through the design and R&D phase with critical decisions CD-1 and CD-2, and in June 2009 CD-3 was approved, allowing construction of the facility to begin.
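The verbal definition of brightness given above can be restated compactly; the following is a hedged schematic form (normalisation conventions vary between facilities, and the bandwidth is quoted in the text as 1%):

```latex
% Schematic restatement of the brightness definition described in the text:
% photon flux within the quoted bandwidth, per unit source area and per unit
% solid angle of emission at the source point.
B \;\sim\;
\frac{\text{photons per second (within the bandwidth)}}
     {(\text{source area})\times(\text{solid angle of emission})}
```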
The NSLS-II electron storage ring consists of 30 double-bend achromats (DBA) separated by 15 long (9.3 m) and 15 short (6.6 m) straight sections for insertion devices, which are the source of ultra-bright synchrotron radiation. The ring is designed for a beam energy of 3 GeV. To achieve the desired high brightness based on a horizontal beam emittance of ε_x = 0.8 π nrad m, it has a large circumference of 792 m. The bending magnets are fairly long (2.69 m) and weak (0.4 T). These design choices have two advantages. They allow the design of a stable lattice with a beam emittance close to the DBA minimum emittance, and at the same time, the energy radiated in the bending magnets is fairly moderate (283 keV per turn per electron). This allows an efficient doubling of the radiation-damping rate, and therefore a reduction of the beam emittance by a factor of two, by the use of six 3.4-m-long damping wigglers with a peak field of 1.85 T.
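As a rough cross-check – an illustrative sketch, not a calculation from the NSLS-II design report – the quoted energy loss per turn follows from the standard isomagnetic formula U₀[keV] ≈ 88.5 E⁴[GeV]/ρ[m], with the bending radius inferred from the quoted field and beam energy:

```python
# Illustrative cross-check of the quoted ~283 keV per turn.
# Assumptions: isomagnetic ring, standard synchrotron-radiation formula
# U0 ≈ 88.5 * E^4 / rho (E in GeV, rho in m, U0 in keV).
E_GeV = 3.0                              # beam energy
B_T = 0.4                                # bending-magnet field quoted above
rho_m = E_GeV / (0.2998 * B_T)           # bending radius, ≈ 25 m
U0_keV = 88.5 * E_GeV**4 / rho_m
print(f"rho ≈ {rho_m:.1f} m, U0 ≈ {U0_keV:.0f} keV per turn")  # ≈ 287 keV, close to the quoted 283 keV
```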
NSLS-II has a conventional system of electromagnets for bending, focusing and nonlinear corrections. However, the field quality of these magnets is pushed beyond what has been achieved previously (ΔB/B = 10⁻⁵–10⁻⁴ at r = 25 mm). Further, the alignment of the magnetic centres with respect to each other is held to unprecedentedly small tolerances, with rms values of less than 10 μm.
The other critical parameter for high-brightness performance is the beam current of 500 mA. High beam current is obtained with an accelerating structure based on two single-cell 500-MHz superconducting cavities of the type known as CESR-B. This RF system offers advantages for beam stability because the structures exhibit weak parasitic RF modes and are superior for suppressing beam-loading effects.
In addition, beyond-state-of-the-art instrumentation is required to control the orbital stability of the beam with its small beam sizes (σ_y = 3 μm at the insertion devices). Therefore, both a novel beam-position monitor system with a resolution and stability of less than 200 nm and a fast orbit-feedback system have been designed and implemented. These will limit the motion of the beam orbit to within 10% of the (vertical) beam size for frequencies up to 1 kHz.
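A quick arithmetic check – an illustrative sketch, not part of the design documentation – shows how the quoted BPM performance sits relative to the orbit-stability goal:

```python
# Illustrative check: the orbit should stay within 10% of the vertical beam
# size, and the quoted BPM resolution sits below that tolerance.
sigma_y_um = 3.0                                  # vertical beam size at the insertion devices
orbit_tolerance_nm = 0.10 * sigma_y_um * 1000     # 10% of sigma_y = 300 nm
bpm_resolution_nm = 200                           # quoted BPM resolution and stability
print(orbit_tolerance_nm, bpm_resolution_nm < orbit_tolerance_nm)   # 300.0 True
```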
The vacuum system is made of extruded, keyhole-shaped aluminium. The antechamber houses two non-evaporable getter strips for distributed pumping. The girder system is designed for high thermal stability and to avoid amplification of mechanical vibrations below 30 Hz.
All of the electronics and power supplies are located on the tunnel roof and are housed in sealed air-cooled racks, protecting the sensitive equipment from dust, temperature fluctuations, humidity and leaking cooling water. This protection is a major element of the strategy to achieve high operational reliability for the more than 1000 magnet power supplies, the beam-position monitors, controls and vacuum-control equipment. The facility aims for a reliability greater than 95% once its operation is matured fully.
The NSLS-II injector consists of a 200-MeV S-band linac, which feeds the 3-GeV combined-function booster synchrotron for on-energy injection in “top-off” mode, where frequent injection maintains the beam current. The booster synchrotron was designed and built by the Budker Institute of Nuclear Physics in Novosibirsk, and installed in collaboration with NSLS-II staff.
The civil construction with the accelerator tunnels and the ring-shaped experimental floor was completed in 2012. Installation of the accelerator components, which started in 2011, was completed in 2013.
The linac was commissioned as early as April 2012, and commissioning of the booster synchrotron followed in December 2013. Storage-ring commissioning took place soon after, in April 2014. The commissioning time for the entire complex was remarkably short, and the superb robustness and reproducibility of the machine are demonstrated by the fact that restarts are possible only a few hours after shutdowns.
The summer of 2014 saw the installation of the first NSLS-II insertion devices. Three pairs of 3.4-m-long damping wigglers with peak fields of 1.85 T not only provide a factor of two in emittance reduction by enhanced radiation damping, they are also powerful sources (195 kW at a beam current of 500 mA) of photons up to energies of 100 keV. The workhorses of NSLS-II are in-vacuum undulators with a period of 20–23 mm and an extremely small gap height of 5 mm. Four such devices up to 3 m in length are part of the initial installation. There is also a pair of 2-m-long elliptical polarizing undulators (EPUs). The insertion devices were commissioned with their corresponding front-end systems during autumn 2014.
An initial suite of six beamlines is also part of the scope of the NSLS-II project. These beamlines are based on state-of-the-art – or beyond – beamline technology. They cover a range of synchrotron-light experimental techniques, including powder diffraction (XPD), coherent hard X-ray scattering (CHX), nano-focus imaging (HNX), inelastic X-ray scattering with extreme energy resolution < 1 meV (IXS), X-ray spectroscopy (SRX) and coherent soft X-ray scattering (CSX). All of these beamlines have started technical commissioning. The first light emitted by the NSLS-II EPU was observed on 23 October in the CSX beamline, followed by similar events for the other beamlines.
At the same time that the science commissioning of the existing beamlines at NSLS-II is taking place, nine further insertion-device beamlines are under construction. The first three, known as the ABBIX beamlines, are scheduled to start up in the spring of 2016. They are specialized for biological research. The other six insertion-device beamlines – the so-called “NEXT” beamlines – are planned to start up the following autumn. Finally, there is an ongoing programme that consists of reusing NSLS equipment and integrating it into five new beamlines (NxtGen) that will receive bending-magnet radiation. As the field of the NSLS-II dipole magnets is weak, some of the source points are equipped with a wavelength shifter consisting of a three-pole wiggler with a 1.2 T peak field.
A number of non-Brookhaven institutions have responded positively to the opportunity to work with NSLS-II, and they will develop five additional beamlines in collaboration with NSLS-II staff. Therefore by 2018, NSLS-II will run with 27 beamlines and will have recovered from the reduction in the scientific programme between the shutdown of NSLS and the development period of the NSLS-II user facility. In its final configuration, the NSLS-II facility will host more than 60 beamlines.
The construction of NSLS-II within budget ($912 million) and to schedule is the result of excellent teamwork between scientists, engineers and technicians. In a ceremony on 6 February, the US secretary of energy, Ernest Moniz, dedicated the new facility. The first science results from NSLS-II were reported as early as March (Wang et al. 2015), and the science programme will start for most beamlines in the summer. The bright future of the NSLS-II era has begun.
For the past two years, teams from the CMS collaboration, many from distant countries, have been hard at work at LHC point 5 at Cessy in France. Their goal – to ensure that the CMS detector will be able to handle the improved performance of the LHC when it starts operations at higher energy and luminosity. More than 60,000 visitors to the CMS underground experimental cavern during the first long shutdown (LS1) witnessed scenes of intense and spectacular activity – from movements of the 1500-tonne endcap modules to the installation of the delicate pixel tracker, only the size of a portable toolbox but containing almost 70 million active sensors.
This endeavour involved planning for a huge programme of work (CERN Courier April 2013 p17). Since LS1 began, more than 1000 separate work packages have been carried out, ranging from the repairs and maintenance required after three years of operation during the LHC’s Run 1, through consolidation work for a long-term future, to the installation of completely new detector systems as well as the extension of existing ones. In addition to the many CMS teams involved, the programme relied on the strong general support and substantial direct contributions from physics and technical departments at CERN. This article, by no means exhaustive, aims to provide some insight into LS1 as it happened at point 5.
An early start
Vital contributions started as early as 2009, well before LS1 began. One example is the refurbishment by CERN’s General Services and Physics Departments of building 904 on the Prévessin site, to provide 2000 m2 of detector-assembly laboratories, which were used for the new parts of the muon detector. Another is the creation by CMS (mainly through contracts managed by CERN’s Engineering Department) of the Operational Support Centre in the surface-assembly building at point 5. This centre incorporates work areas for all of the CMS systems that had to be brought to the surface during LS1, and includes a cold-storage, cold-maintenance facility where the pixel tracker was kept until the new beampipe was fitted. There is also a workshop area suitable for modifying elements activated by collision products, which, as the LS1 story progressed, provided useful flexibility for dealing with unexpected work.
The highest-priority objective for CMS during LS1 was to operate the tracker cold. The silicon sensors of this innermost subdetector, which surrounds the LHC beampipe, must endure more than 10⁹ particles a second passing through them, and cannot be completely replaced until about a decade from now. The damaging effects of this particle flux, sustained over many years of operation, can be mitigated by operating the sensor system at a temperature that is 20–30 °C lower than the few degrees above zero used so far. Alongside modifications to allow delivery of the coolant at much lower temperatures, a new system of humidity control had to be introduced to prevent condensation and icing. This involved sealing the tracker envelope, while making provision for a flow of up to 400 m³/h of dry gas. The system installed by CMS is a novel one at CERN: it dries air and then optionally removes oxygen via filtering membranes. The first full-scale tests took place at the end of 2013, and there was great satisfaction when an operating temperature of –20 °C was achieved stably.
However, as one challenge faded, a new one emerged immediately. On warming up, tell-tale drips of water were visible coming from the insulated bundles of pipework carrying the coolant into the detector – an indication that air at room temperature and humidity had been reaching the cold pipes inside the system and forming ice. Fortunately, tests soon showed that an additional flow of dry air, injected separately into the pipework bundles, would suppress this problem. Responding to CMS’s request for help, the Engineering Department recently delivered a new dry-air plant that will make humidity suppression in the cooling distribution feasible on a routine basis, with a comfortable margin in capacity.
Another high-priority project for LS1 involved the muon detectors. A fourth triggering and measurement station in each of the endcaps was incorporated into the original CMS design, but it was not considered essential for initial operation. These stations are now needed to increase the power to discriminate between interesting low-momentum muons originating from the collision (e.g. potentially from a Higgs-boson decay) and fake muon signatures caused by backgrounds. Seventy-two new cathode-strip chambers (CSCs) and 144 new resistive-plate chambers (RPCs) were assembled across a three-year period by a typical CMS multinational team from institutes in Belgium, Bulgaria, China, Colombia, Egypt, Georgia, India, Italy, Korea, Mexico, Pakistan, Russia and the US, as well as from CERN. They were then installed as superposed layers of CSCs and RPCs on the two existing discs at the ends of the steel yoke that forms the structural backbone of CMS. Teams worked on the installation and commissioning in two major bursts of activity, matching the periods when the required detector configuration was available, and completing the job in late spring 2014.
A further improvement of the endcap muon system was achieved by installing new on-chamber electronics boards in the first, innermost layer of the CSCs to withstand the higher luminosity, while reusing the older electronics in one of the new fourth layers, where it is easier to cope with the collision rate. Here again, the unexpected had to be dealt with. One of the two layers had just been re-installed after months of re-fitting work, when tests revealed a potential instability caused by the accidental omission of a tiny passive electronic component. Leaving this uncorrected was considered too risky, so the installation teams had to go into full reverse. Working late into the evenings and at weekends to avoid interfering with previously scheduled activities, they partially extracted all 36 chambers, corrected the fault, put them back in place and re-commissioned them.
No part of the detector escaped the attention of the upgrade and maintenance teams. The modular structure of CMS, which can be separated into 13 major slices, was fully exploited to allow simultaneous activity, with as many as eight mobile work platforms frequently in use to give access to different slices and different parts of their 14 m diameter. Multiple maintenance interventions on the five barrel-yoke wheels restored the fraction of working channels to 99.7% – a figure not seen since 2009, just after installation. Similar interventions on the CSC and RPC stations on the endcap disks were also successful, with the few per cent that had degraded over the past few years restored completely. In addition, to improve maintainability, some key on-board electronics from the barrel part of the muon system was moved from the underground experimental cavern to the neighbouring service cavern, where it will now remain accessible during LHC operation. All of the photo-transducers and much of the on-detector electronics of the hadron calorimeter (HCAL) are to be replaced over the next few years, and a substantial part of this work was completed during LS1. In particular, photo-transducers of a new type were installed in the outer barrel and forward parts of the system, which will lead to an immediate improvement in performance.
The need for some work streams was completely unforeseen until revealed by routine inspection. The most notable example was the discovery of a charred feed-through connector serving the environmental-screen heaters of one of the two preshower systems for the electromagnetic calorimeter (ECAL). Full diagnosis (under-rated capacitors) and subsequent repair of both preshower systems required their removal to the surface, where a semi-clean lab was created at short notice within the Operational Support Centre. The repairs and re-installation were a complete success, and the preshower system has been re-commissioned recently at its planned operating temperature of –8 °C.
The CMS consolidation programme had also to prepare the infrastructure of the experiment – originally designed for a 10-year operating lifetime – for running well into the 2030s. LHC operating periods lasting around three years will be interleaved with substantial shutdowns of one to two years in length. Moreover, the rate of proton–proton collisions will be five times higher, and the integrated number of collisions (ultimately) 10 times higher, than the original design goal.
Key adaptations were made during LS1 to address redundancy in the power and cryogenics systems, to extend the predicted lifetime of the one-of-a-kind CMS magnet. Further measures for protection against power glitches were implemented through an extension of the detector’s short-term uninterruptible power supply. Changes to the detector cooling included modifications for greater capacity and redundancy, as well as the addition of a new system in preparation for the upcoming upgrade of the pixel tracker, based on two-phase (evaporating liquid) carbon dioxide. This technology, new for CMS, involved the installation of precision-built concentric vacuum-insulated feed and return lines – difficult-to-modify structures that have to be made extremely accurately to ensure proper integration with the constricted channels that feed services into the apparatus. These changes presented challenges for the CMS Integration Office, where the “compact” in CMS was defended vigorously every day in computer models and then in the caverns.
New detectors were not the only large-scale additions to CMS. The most massive change to the structure of the experiment was the addition of the new 125-tonne shielding discs – yoke endcap disc four (YE4) – installed outside of the fourth endcap muon station at either end of the detector. Each shielding disc, 14 m in diameter but only 125 mm thick, was made of 12 iron sector casings. Following manufacture and pre-assembly tests in Pakistan, these discs, whose design and preparation took five years, were disassembled for shipping to CERN and then re-assembled on the Meyrin site, where they were filled with a special dense (haematite) shielding concrete, mixed for this specific application by CERN’s civil engineers. Loaded with a small percentage of boron, this concoction will act as a “sponge” to soak up many of the low-energy neutrons that give unwanted hits in the detector, and whose numbers will increase as the LHC beam intensities get higher.
The YE4 discs, transported in sectors to point 5, were the first slices of CMS to be assembled underground – all of the existing major elements had been pre-assembled on the surface and lowered into the underground cavern in sequence (CERN Courier July/August 2006 p28). In the original concept, the YE4 discs could be separated from the supporting YE3 only by driving the whole endcap system back to the cavern headwall, where YE4 could be unhooked and supported. Because all of the other slices of the CMS “swiss roll” can be displaced from one another to give access to the detectors sandwiched in between, it was decided late in the project – in fact, after assembly had already started – to equip each YE4 shielding disc with air pads and a system of electric screw-jacks. This would allow the YE4 disc to separate from the supporting neighbour disc (YE3) by up to 3.7 m without the necessity to move it to the headwall – a major operation. In fact, one so-called “push-back system” was used immediately after assembly of the YE4 disc, to permit installation of RPCs with the endcaps partially closed. This maintained the rapid-access modularity that was a core feature of the CMS design (CERN Courier October 2008 p48).
The final change was at the heart of CMS, in preparation for the installation during the LHC’s year-end technical stop of 2016–2017 of an upgraded pixel tracker – the closest physics detector to the collision point. The 0.8-mm-thick central beampipe used during Run 1, with an outer diameter of 59.6 mm, was replaced by a similar one of 45-mm outer diameter and, like the first one, made of beryllium, to be as transparent as possible to particles emanating from the LHC collisions. The narrower beampipe will allow the first layer of the new pixel tracker to be closer to the collision point than before. This geometrical improvement, combined with an additional fourth layer of sensors, will upgrade the tracker’s ability to resolve where a charged particle originated. When running under conditions of high pile-up in Run 2 and Run 3 – that is, with many more protons colliding every time counter-rotating bunches meet at the centre of CMS – the disentangling of which tracks belong to which collision vertices will be crucial for most physics analyses.
The delicate operations of removing and replacing the beampipe – requiring the detector to be open fully – are possible only in a long shutdown. The new beampipe, designed jointly with CERN’s Technology Department, which procured and prepared it on behalf of CMS, was installed in June 2014. Its installation was followed immediately by vacuum pumping, combined with heating (“bake-out”) to more than 200 °C, to expel gas molecules attached to the chamber walls. This ensured that the operating pressure of around 10⁻¹⁰ mbar would be possible – and achieved eventually. Following the bake-out of the new central beampipe, several mechanical tests were made to ensure that the upgraded pixel tracker can be installed in the limited time window that will be available in 2016–2017.
It is probable that a proverb exists in every language and culture involved in CMS, warning against relaxing before the job is finished. In mid-August 2014, the end of the LS1 project seemed to be on the horizon. The beampipe bake-out was being completed and preparations for the pixel tracker’s re-installation were underway, so many team members took the opportunity for a quick summer holiday. Then, their mobile phones began to buzz with reports of the first indications of a severe fault found in pre-installation tests of the barrel pixel system, which had been removed only to allow the change of beampipe. About 25% (around 50) of the modules in one quadrant were not responding. By the end of August, the half-shell containing the faulty quadrant had been transported to its makers at the Paul Scherrer Institute (PSI) for detailed investigation.
On 5 September, the diagnostics revealed that the reason for failure was electro-migration-induced shorts between adjacent bond pads of the high-density interconnect – a flexible, complex, multilayer printed circuit used to extract the signals. An investigation showed that the most likely origin was a brief and inadvertent lapse in humidity-control protocols in the course of routine calibration exercises many months earlier, when the pixel system was up in the surface laboratory. By 18 September, a comprehensive strategy of replacement and repair had been worked out by the PSI team. Because this required purchasing new components and restarting the production of detector modules, the revised schedule foresaw the detector being back at CERN by the end of November, with installation planned for around 8 December, almost exactly two months later than intended originally.
A new end game
At this late stage, with insufficient contingency remaining in the baseline schedule to accommodate the delay, it was decided to change radically the end-game sequence of the shutdown. Instead of waiting for the repair of the pixel tracker, CMS was closed immediately to conduct a short magnet-test, to identify any problems that otherwise would not have appeared until the final closure for beam. After finishing the remaining work on the bulkhead seal that allows the tracker to be operated cold, this sequence of closing the detector, testing the magnet and then re-opening CMS became the critical path for two months, with the remaining upgrade activity being postponed or re-arranged around the new schedule. The new sequence implied unexpected tight deadlines for several teams – particularly those working on the magnet and the forward region – and a massive extra workload for the heavy-engineering team. The additional closing and opening sequence required 36 single movements of heavy discs, and 16 insertions and removals of the heavy-raiser platforms that support the forward calorimeters at beam height. A concerted and exceptional effort resulted in the magnet yoke being closed by mid-October, and both forward regions being closed and ready for magnetic field by 6 November.
The following day, the magnet was ramped to 1 T and then discharged. This sequence allowed yoke elements to settle, and also verified that the control and safety systems performed as expected. By 10 November, enough liquid helium had been accumulated for 36 hours of operation at full field, and the test programme resumed. However, at 2.4 T, the main elevator providing underground access stopped working, owing to some field-sensitive floor-level sensors having been installed mistakenly during routine maintenance. After reducing the field temporarily to allow personnel to leave the underground areas, the ramp-up continued, reaching the working value of 3.8 T at around 7.00 p.m., demonstrating that the magnet’s upgraded power and cryogenics system worked well. Despite the rapid endcap-yoke closure with only approximate axial alignment, the movements under the magnetic forces of the endcap discs (including the new YE4s) and the forward systems were well within the ranges observed previously, although specific movements occurred at different field values. The new beampipe support system and the new phototransducers of the HCAL and beam-halo monitors were shown to be tolerant to the magnetic field. Most importantly, the environmental seal around the tracker and the new dry-gas injection system functioned well enough in the magnetic field to allow tracker operation at –20 °C. The top-priority task of LS1 could therefore be declared a success.
Following this, the opening of the detector was a race against time to meet the target of installing the barrel and forward pixel trackers, and enclosing them in a stable environment before CERN’s 2014 end-of-year closure. This was achieved successfully, providing a fortuitous “dry run” of what will have to be done during the year-end stop of 2016–2017, when the new pixel tracker will be installed. Following a thorough check and pre-calibration of the pixel system, the last new elements of CMS in the LS1 project – upgraded beam monitors and the innovative pixel luminosity telescope (CERN Courier March 2015 p6) – were installed by the end of the first week of February 2015.
The closing of the experiment, just in time for first beam in 2015, brought the saga of LS1 to a happy ending. It is time to celebrate with the collaboration teams, contractors and CERN technical groups, who have all contributed to the successful outcome. The imminent start of Run 2 now raises the exciting prospect of new physics, but behind the scenes preparations for the next CMS shutdown adventure have already begun.
After the long maintenance and consolidation campaign carried out during the first long shutdown, LS1, the early part of 2015 has been dominated by tests and magnet training to prepare the LHC for a collision energy of 13 TeV. With all of the hardware and software systems to be checked, a total of more than 10,000 test steps needed to be performed and analysed on the LHC’s magnet circuits.
The LHC’s backbone consists of 1232 superconducting dipole magnets with a field of up to 8.33 T operating in superfluid helium at 1.9 K, together with more than 500 superconducting quadrupole magnets operating at 4.2 K or 1.9 K. Many other superconducting and normal resistive magnets are used to allow the correction of all beam parameters, bringing the total number of magnets to more than 10,000. About 1700 power converters are necessary to feed the superconducting circuits.
The dipole magnets in the first of the LHC’s eight sectors were trained successfully to nominal current in December, and training continued throughout the first three months of 2015. Although all of the dipole magnets were tested individually before installation, they had to be trained together in the tunnel up to 10,980 A, the current that corresponds to a beam energy of 6.5 TeV.
Training involves repeated quenches before a superconducting magnet reaches its target magnetic field. The quenches are caused by the sudden release of electromechanical stresses and a local increase in temperature that triggers a change from the superconducting to the resistive state. The entire coil is then warmed up and cooled down again – for the LHC dipoles, this might take several hours. The magnet protection system is crucial for detecting a quench and safely extracting the energy stored in the circuits – about 1 GJ per dipole circuit at nominal current.
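To see where the figure of roughly 1 GJ comes from, here is a minimal sketch assuming a per-dipole inductance of about 0.1 H (an approximate, commonly quoted value for the LHC main dipoles, not a number given in this article) and 154 dipoles per circuit:

```python
# Illustrative estimate of the magnetic energy stored in one main dipole circuit.
# Assumptions (not from the article): ~0.1 H per dipole, 154 dipoles in series
# per circuit, and a current of about 11 kA for 6.5 TeV operation.
L_per_dipole_H = 0.1
n_dipoles = 154
current_A = 11_000
stored_energy_J = 0.5 * (L_per_dipole_H * n_dipoles) * current_A**2
print(f"Stored energy ≈ {stored_energy_J / 1e9:.2f} GJ")   # ≈ 0.9 GJ, i.e. of order 1 GJ
```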
The typical time needed to commission a dipole circuit fully is of the order of three to five weeks, and all of the interlock and protection systems have to be tested, both before and while ramping up the current in steps. By mid-February, the dipole circuits in three sectors had been trained to the level equivalent to 6.5 TeV, with the total number of quenches confirming the initial prediction of about 100 quenches for all of the dipoles in the machine. By early March, four sectors were fully trained for 6.5-TeV operation, with a fifth well into its training programme.
On the weekend of 7–8 March, operators performed injection tests with beams of protons being sent part way around the LHC. Beam 1 passed through the ALICE detector up to point 3 of the LHC, where it was dumped on a collimator, and beam 2 went through the LHCb detector up to the beam dump at point 6. The team recorded various parameters, including the timings of the injection kickers and the beam trajectory in the injection lines and LHC beam pipe.
The ALICE and LHCb collaborations prepared their experiments to receive pulses of particles and recorded “splash” events as the particles travelled through their detectors. LHCb used the tests to commission the detector and the data-acquisition system, as well as to perform detector studies and alignments of the different sub-detectors. The ALICE collaboration meanwhile used muons originating from the Super Proton Synchrotron beam dump for timing studies of the trigger and to align the muon spectrometer.
If commissioning remains on schedule, the LHC should restart towards the end of March, with first collisions at 13 TeV in late May/early June.
On 13 January, less than three weeks after being launched into space, the NUCLEON satellite experiment was switched on to collect its first cosmic-ray events. Orbiting the Earth on board the RESURS-P No.2 satellite, NUCLEON has been designed to investigate directly the energy spectrum of cosmic-ray nuclei and their chemical composition from 100 GeV to 1000 TeV (10¹¹–10¹⁵ eV), as well as the cosmic-ray electron spectrum from 20 GeV to 3 TeV. It is well known that the region of the “knee” – 10¹⁴–10¹⁶ eV – is crucial for understanding the origin of cosmic rays, as well as their acceleration and propagation in the Galaxy.
NUCLEON has been produced by a collaboration between the Skobeltsyn Institute of Nuclear Physics of Moscow State University (SINP MSU) as the main partner, together with the Joint Institute for Nuclear Research (JINR) and other Russian scientific and industrial centres. It consists of silicon and scintillator detectors, a carbon target, a tungsten γ-converter and a small electromagnetic calorimeter.
The charge-detection system, which consists of four thin detector layers of 1.5 × 1.5 cm silicon pads, is located in front of the carbon target. It is designed for precision measurement of the charge of the primary particle.
A new technique, based on the generalized kinematical method developed for emulsions, is used to measure the cosmic-ray energy. Avoiding the use of heavy absorbers, the Kinematic Lightweight Energy Meter (KLEM) technique gives an energy resolution of 70% or better, according to simulations. Placed just behind the target, this energy-measurement system consists of silicon microstrip layers with tungsten layers to convert secondary γ-rays to electron–positron pairs. This significantly increases the number of secondary particles and therefore improves the accuracy of the energy determination for a primary particle.
The small electromagnetic calorimeter (six tungsten/silicon-microstrip layers measuring 180 × 180 mm and weighing about 60 kg, owing to satellite limitations) has a thickness of 12 radiation lengths, and will measure the primary cosmic-ray energy for some of the events. The effective geometric factor is more than 0.2 m² sr for the full detector and close to 0.1 m² sr for the calorimeter. The NUCLEON device must allow separation of the electromagnetic and hadronic cosmic-ray components at a rejection level of better than 1 in 10³ for the events in the calorimeter aperture.
The design, production and tests of the trigger system were JINR’s responsibility. The system consists of six multistrip scintillator layers to select useful events by measuring the multiplicity of charged particles crossing the trigger planes. The two-level trigger system has a duplicated structure for reliability, and will provide more than 10⁸ events with energy above 10¹¹ eV during the planned five years of data taking.
The NUCLEON prototypes were tested many times at CERN’s Super Proton Synchrotron (SPS) with high-energy electron, hadron and heavy-ion beams. The last test at CERN, which took place in 2013 on the H2 beamline with heavy ions, was dedicated to testing NUCLEON’s charge-measurement system. The results showed that it provides a charge resolution better than 0.3 charge units in the region up to atomic number Z = 30 (figure 2). The Z < 5 beam particles were suppressed by the NUCLEON trigger system.
In 2013, NUCLEON was installed on the RESURS-P No. 2 satellite platform for combined tests at the Samara-PROGRESS space-qualification workshop, some 1000 km southeast of Moscow. The combined NUCLEON tests continued in 2014 at the Baikonur spaceport, in conjunction with the satellite and the Soyuz-2.1b rocket, before the successful launch on 26 December. The satellite is now in a Sun-synchronous orbit with inclination 97.276° and a mean altitude of 475 km. The total weight of the NUCLEON apparatus is 375 kg, with a power consumption of 175 W.
The flight tests of the NUCLEON detector continued during January and February, and the NUCLEON team hopes to present preliminary results at the summer conferences this year. The next step after this experiment will be the High-Energy cosmic-Ray Observatory (HERO), to study high-energy primary cosmic rays from space. The first HERO prototype is to be tested at the SPS in autumn.
On 31 December, commissioning of the Taiwan Photon Source (TPS) at the National Synchrotron Radiation Research Center (NSRRC) brought 2014 to a close on a highly successful note as a 3 GeV electron beam circulated in the new storage ring for the first time. A month later, the TPS was inaugurated in a ceremony that officially marked the end of the 10-year journey since the project was proposed in 2004, the past five years being dedicated to the design, development, construction and installation of the storage ring.
The new photon source is based on a 3 GeV electron accelerator consisting of a low-emittance synchrotron storage ring 518.4 m in circumference and a booster ring (CERN Courier June 2010 p16). The two rings are designed in a concentric fashion and housed in a doughnut-shaped building next to a smaller circular building where the Taiwan Light Source (TLS), the first NSRRC accelerator, sits (see cover). The TLS and the new TPS will together serve scientists worldwide whose experiments require photons ranging from infrared radiation to hard X-rays with energies above 10 keV.
Four-stage commissioning
The task of commissioning the TPS comprised four major stages involving: the linac system plus the transportation of the electron beam from the linac to the booster ring; the booster ring; the transportation of the electron beam from the booster ring to the storage ring; and, finally, the storage ring. Following the commissioning of the linac system in May 2011, the acceptance tests of key TPS subsystems progressed one after the other over the next three years. The 700 W liquid-helium cryogenic system, beam-position monitor electronics, power supplies for quadrupole and sextupole magnets, and two sets of 2 m-long in-vacuum undulators completed their acceptance tests in 2012. Two modules of superconducting cavities passed their 300 kW high-power tests. The welding, assembly and baking of the 14 m-long vacuum chambers designed and manufactured by in-house engineers were completed in 2013. Then, once the installation of piping and cable trays had begun, the power supply and other utilities were brought in, and set-up could start on the booster ring and subsystems in the storage ring.
The installation schedule was also determined by the availability of magnets. By April 2014, 80% of the 800 magnets had been installed in the TPS tunnel, allowing completion of the accelerator installation in July. Following the final alignment of each component, preparation for the integration tests of the complete TPS system in the pre-commissioning phase was then fully under way by autumn.
The performance tests and system integration of the 14 subsystems in the pre-commissioning stage started in August. By 12 December, the TPS team had begun commissioning the booster ring. The electron beam was accelerated to 3 GeV on 16 December and the booster’s efficiency reached more than 60% a day later. Commissioning of the storage ring began on 29 December. On the next day, the team injected the electrons for the first time and the beam completed one cycle. A 3 GeV electron beam with a stored current of 1 mA was then achieved, and the first synchrotron light was observed in the early afternoon of 31 December. The stored current reached 5 mA a few hours later, just before the shutdown for the New Year holiday. As of the second week of February 2015, the TPS stored beam current had increased to 50 mA.
The US$230 million project (excluding the NSRRC staff wages) involved more than 145 full-time staff members in design and construction. As with any other multi-million-dollar, large-scale project, reaching “first light” required ingenious problem solving and use of resources. Following the groundbreaking ceremony in February 2010 and six months of preparing the land for construction, the TPS project was on a fast track. Pressures came from the worldwide financial crisis, devaluation of the domestic currency, reduction of the initially approved funding, attrition of young engineers who were recruited by high-tech industries once they had been trained with special skills, and bargaining with vendors. In addition, the stringent project requirements left little room for even small deviations from the delivery timetable or system specifications, which could have allowed budget re-adjustments.
To meet its mandate on time, the project placed reliance and pressure on experienced staff members. Indeed, more than half of the TPS team and the supporting advisers had participated in the construction of the TLS in the 1980s. During construction of the TPS, alongside the in-house team were advisers from all over the world whose expertise played an important role in problem solving. In addition, seven intensive review meetings took place, conducted by the Machine Advisory Committee.
From the land preparation in 2010 onwards, the civil-construction team faced daily challenges. For example, at the heart of the Hsinchu Science Park, the TPS site is surrounded by heavy traffic, 24 hours a day, all year round. To eliminate the impact of vibration from all possible sources, the 20 m wide concrete floor of the accelerator tunnel is 1.6 m thick. Indeed, the building overall can resist an earthquake acceleration of 0.45 g, which is higher than the Safe Shutdown Earthquake criteria for US nuclear power plants required by the US Nuclear Regulatory Commission.
The civil engineering took an unexpected turn at the very start when a deep trench of soft soil, garbage and rotting plants was uncovered 14 m under the foundations. The 100 m long trench was estimated to be 10 m wide and nearly 10 m thick. The solution was to fill the trench with a customized lightweight concrete with the hardness and geological characteristics of the neighbouring foundations. The delay in construction caused by clearing out the soft soil led to installation of the first accelerator components inside the TPS shielding walls in a dusty, unfinished building with no air conditioning. The harsh working environment in summer, with temperatures sometimes reaching 38 °C, made the technological challenges seem almost easy.
Technology transfer
The ultra-high-vacuum system was designed and manufactured by NSRRC scientists and engineers, who also trained local manufacturers in the special welding technique, the clean-room setup, and processing in an oil-free environment. This transfer of technology is helping the factories to undertake work involving the extensive use of lightweight aluminium alloy in the aviation industry. During the integration tests, the magnetic permeability of the booster-ring vacuum chambers, although custom-built for the TPS, proved not to meet the required standard. The elliptical chambers were removed immediately to undergo demagnetization heat-treatment in a furnace heated to 1050 °C. For the 2 m long components, this annealing took place in a local factory, while shorter components were treated at the NSRRC. The whole system was back online after only three weeks – with an unexpected benefit. After the annealing process, the relative magnetic permeability of the stainless-steel vacuum chambers reached 1.002, lower than the specification of 1.01 currently adopted at light-source facilities worldwide.
The power supplies of the booster dipole magnets were produced abroad and had several problems. These included protection circuits that overheated to the extent that a fire broke out, causing the system to shut down during initial integration tests in August. As the vendor could not schedule a support engineer to arrive on site before late November, the NSRRC engineers instead quickly implemented a reliable solution themselves and resumed the integration process in about a week. The power supplies for the quadrupole and sextupole magnets of the storage ring were co-produced by the NSRRC and a domestic manufacturer, and deliver a current of 250 A, stable to less than 2.5 mA. Technology transfer from the NSRRC to the manufacturer on the design and production of this precise power supply is another byproduct of the TPS project.
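Expressed as a relative figure – a quick illustrative calculation, not from the article – the quoted stability corresponds to roughly one part in 10⁵:

```python
# Illustrative calculation: relative current stability of the storage-ring
# quadrupole/sextupole power supplies, from the figures quoted above.
stability_A = 2.5e-3     # quoted stability, in amperes
current_A = 250.0        # nominal output current
print(f"Relative stability ≈ {stability_A / current_A:.0e}")   # ≈ 1e-05
```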
24-hour shifts
Ahead of completion of the TPS booster ring, the linac was commissioned at a full-scale test site built as an addition to the original civil-construction plan (CERN Courier July/August 2011 p11). The task of disassembling the linac, moving it to the TPS booster ring, re-assembling it and testing it again was not part of the initial plan in 2009. The relocation process nearly doubled the effort and work time. As a result, the four-member NSRRC linac team had to work 24-hour shifts to keep to the schedule and budget – saving the US$700,000 in disassembly and re-assembly fees that would have been charged had this been carried out by the original manufacturer. After the linac had been relocated, the offsite test facility was transformed into a test site for the High-Brightness Injector Group.
Initially, the TPS design included four superconducting radiofrequency (SRF) modules based on the 500 MHz modules designed and manufactured at KEK in Japan for the KEKB storage ring. However, after the worldwide financial crisis in 2008 caused the cost of materials to soar by nearly 30%, the number of SRF modules was reduced to three and the specification for the stored electron beam was reduced from 400 mA to 300 mA. But collaboration with KEK and technology transfer on a higher-order-mode-damped SRF cavity for high-intensity storage rings have allowed the team at NSRRC to modify the TPS cavity to produce higher operational power and enable a stored electron beam of up to 500 mA – better, that is, than the original specification. (Meanwhile, the first phase of commissioning in December used three conventional five-cell cavities from the former PETRA collider at DESY – one for the booster and two for the storage ring – which had been purchased from DESY and refurbished by the NSRRC SRF team.)
The TPS accelerator uses more than 800 magnets designed by the NSRRC magnet group, which were contracted to manufacturers in New Zealand and Denmark for mass production. To control the electron beam’s orbit as defined by the specification, the magnetic pole surfaces must be machined to an accuracy of less than 0.01 mm. At the time, the New Zealand factory was also producing complicated and highly accurate magnets for the NSLS-II accelerator at Brookhaven National Laboratory. To prevent delays in delivering the TPS magnets – a possible result of limited factory resources being shared by two large accelerator projects – the NSRRC assigned staff members to stay at the overseas factory to perform on-site inspection and testing at the production line. Any product that failed to meet the specification was returned to the production line immediately. The manufacturer in New Zealand also constructed a laboratory that simulated the indoor environment of the TPS with a constant ambient temperature. Once the magnets reached an equilibrium temperature corresponding to a room temperature of 25°C in the controlled laboratory, various tests were conducted.
Like the linac, the TPS cryogenic system was commissioned at a separate, specially constructed test site. The helium cryogenic plant was disassembled and reinstalled inside the TPS storage ring in March 2014, followed by two months of function tests. With the liquid-nitrogen tanks situated at the northeast corner, outside and above the TPS building, feeding the TPS cooling system – which stretches several hundred metres – is a complex operation. It needs to maintain a smooth, continuous flow, without triggering any part of the system to shut down because of fluctuations in the coolant temperature or pressure. The cold test and the heat-load test of the liquid-helium transfer line are scheduled to finish by the end of March 2015, so that the liquid-helium supply will be ready for the SRF cavities early in April.
Since both the civil engineering and the construction of the accelerator itself proceeded in parallel, the TPS team needed to conduct acceptance tests of most subsystems off-site, owing to the limited space on the compact NSRRC campus. When all of the components began to arrive at the yet-to-be-completed storage ring, the installation schedule was planned mainly according to the availability of magnets. This led to a two-step installation plan. In the first half of the ring, bare girders were set up first, followed by the installation of the magnets as they were delivered and then the vacuum chambers. For the second half of the ring, girders with pre-mounted magnets were installed, followed by the vacuum chambers. This allowed error-sorting with the beam-dynamics model to take place before finalizing the layout of the magnets for the minimum impact on the beam orbit. Afterwards, the final alignment of each component and tests of the integrated hardware were carried out in readiness for the commissioning phase.
Like other large-scale projects, leadership played a critical role in the success of completing the TPS construction to budget and on schedule. Given the government budget mechanism and the political atmosphere created by the worldwide economic turmoil over the past decade, leaders of the TPS project were frequently second-guessed on every major decision. Only by having the knowledge of a top physicist, the mindset of a peacemaker, the sharp sense of an investment banker and the quality of a versatile politician, were the project leaders able to guide the team to focus unwaveringly on the ultimate goal and turn each crisis into an opportunity.
On 12 January, after 23 months of hard work involving around 1000 people each day, the key to the LHC was symbolically handed back to the operations team. The team will now perform tests on the machine in preparation for the restart this spring.
Tests include training the LHC’s superconducting dipole magnets to the current level needed for 6.5 TeV beam energy. The main dipole circuit of a given sector is ramped up until a quench of a single dipole occurs. The quench-protection system then swings into action, energy is extracted from the circuit, and the current is ramped down. After careful analysis, the exercise is repeated. On the next ramp, the magnet that quenched should hold the current (i.e. is trained), while at a higher current another of the 154 dipoles in the circuit quenches. For 2015, the target current is 11,080 A for operation at 6.5 TeV (with some margin). Sector 6-7 was brought to this level successfully at the end of 2014, having taken 20 training quenches to get there. Getting all eight sectors to this level will be an important milestone.
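To make the training protocol concrete, here is a purely illustrative Python sketch of the loop described above: ramp the circuit until its weakest dipole quenches, extract the energy, analyse, and ramp again, with the quenched magnet holding a little more current on the next attempt. The starting thresholds and the gain per quench are invented for the example; they are not LHC parameters or LHC software.

```python
import random

TARGET_A = 11080          # 2015 target current for 6.5 TeV operation (with margin)
N_DIPOLES = 154           # dipoles in the main circuit of one sector

# Hypothetical starting quench thresholds and per-quench improvement (illustrative only).
thresholds = [random.uniform(10500, 11300) for _ in range(N_DIPOLES)]
GAIN_PER_QUENCH = 150     # assumed current gain (A) each time a magnet re-trains

quenches = 0
while True:
    # Ramp the whole circuit: it can only reach the current of the weakest magnet.
    weakest = min(range(N_DIPOLES), key=lambda i: thresholds[i])
    if thresholds[weakest] >= TARGET_A:
        break                               # every dipole now holds the target current
    # That magnet quenches; energy is extracted and the current ramped down.
    quenches += 1
    thresholds[weakest] += GAIN_PER_QUENCH  # after analysis, it should hold more current next time

print(f"Sector trained to {TARGET_A} A after {quenches} quenches")
```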
The next big step is the first sector test, in which beam would enter the LHC for the first time since February 2013. The aim is to send single bunches from the Super Proton Synchrotron into the LHC through the injection regions at points 2 and 8 for a single pass through the available downstream sectors. This will allow testing of synchronization, the injection system, beam instrumentation, magnet settings, machine aperture and the beam dump.
A full circuit of the machine with beam and the start of beam commissioning are foreseen for March. It should then take about two months to re-commission the operational cycle, commission the beam-based systems (transverse feedback, RF, injection, beam dump system, beam instrumentation, power converters, orbit and tune feedbacks, etc) and commission and test the machine-protection system to re-establish the high level of protection required. This will open the way for the first collisions of stable beams at 6.5 TeV – foreseen currently for May – initially with a low number of bunches.
On 26 January, the CMS collaboration installed their new Pixel Luminosity Telescope (PLT). Designed with LHC Run 2 in mind, the PLT uses radiation-hard CMS pixel sensors to provide near-instantaneous readings of the per-bunch luminosity – thereby helping the LHC operators to deliver the maximum useful luminosity to CMS. The PLT comprises two arrays of eight small-angle telescopes situated on either side of the CMS interaction point. Each telescope sits only 1 cm away from the CMS beam pipe, where it uses three planes of pixel sensors to make its own independent measurement of the luminosity.
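The general principle behind such a rate-based luminometer can be illustrated with a toy calculation: coincidences counted in the telescope planes give the mean number of visible interactions per bunch crossing, which is converted to luminosity using the revolution frequency and a visible cross-section obtained from a dedicated calibration. The sketch below is a minimal illustration of that relation only; the cross-section and counts are assumed numbers, not CMS calibration values.

```python
# Toy conversion of per-bunch coincidence counts into instantaneous luminosity.
F_REV = 11245.0        # LHC revolution frequency in Hz
SIGMA_VIS = 2.0e-27    # assumed visible cross-section in cm^2 (illustrative, ~2 mb)

def per_bunch_luminosity(coincidences: int, crossings: int) -> float:
    """Return luminosity (cm^-2 s^-1) for one colliding bunch pair."""
    mu_vis = coincidences / crossings          # mean visible interactions per crossing
    return mu_vis * F_REV / SIGMA_VIS

# Example: 4500 three-fold coincidences recorded over one million bunch crossings.
print(f"{per_bunch_luminosity(4500, 1_000_000):.2e} cm^-2 s^-1")
```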
The Linac Coherent Light Source (LCLS) at SLAC produced its first laser-like X-ray pulses in April 2009. The unique and potentially transformative characteristics of the LCLS beam – in particular, the short femtosecond pulse lengths and the large numbers of photons per pulse (see The LCLS XFEL below) – have created whole new fields, especially in the study of biological materials. X-ray diffraction on nanocrystals, for example, reveals 3D structures at atomic resolution, and allows pump-probe analysis of functional changes in the crystallized molecules. New modalities of X-ray solution scattering include wide-angle scattering, which provides detailed pictures from pump-probe experiments, and fluctuational solution scattering, where the X-ray pulse freezes the rotation of the molecules in the beam, resulting in a rich, 2D scattering pattern. Even the determination of the structure of single particles is possible. This article focuses on examples from crystallography and time-resolved solution scattering.
An important example from crystallography concerns the structure of protein molecules. As a reminder, protein molecules, which are encoded in our genes, are linear polymers of the 20 naturally occurring amino-acid monomers. Proteins contain hundreds or thousands of amino acids and carry out most functions within cells or organs. They catalyse chemical reactions; act as motors in a variety of contexts; control the flow of substances into and out of cells; and mediate signalling processes. Knowledge of their atomic structures lies at the heart of mechanistic understanding in modern biology.
Serial femtosecond crystallography (SFX) provides a method of studying the structure of proteins. In SFX, still X-ray photographs are obtained from a stream of nanocrystals, each crystal being illuminated by a single pulse of a few femtoseconds’ duration. At the LCLS, the 10¹² photons per pulse can produce observable diffraction from a protein crystal much smaller than 1 μm³. Critically, a 10 fs pulse scatters from a specimen before radiation damage takes place, thereby eliminating such damage as an experimental issue. Figure 1 shows a typical SFX set-up for crystals of membrane proteins. The X-ray beam, in yellow, illuminates a stream of crystals, shown in the inset, carried in a thin jet of highly viscous lipidic cubic phase (LCP). The high-pressure system that creates the jet is on the left. The rate of LCP flow is well matched to the 120 Hz arrival rate of the X-ray pulses, so little material is wasted between shots. In the ideal case, each X-ray pulse scatters from a single crystal in the LCP flow. For soluble proteins, a jet of aqueous buffer replaces the LCP.
AT1R is found at the surface of vascular cells and serves as the principal regulator of blood pressure (figure 3). Although several AT1R blockers (ARBs) have been developed as anti-hypertensive drugs, structural knowledge of how they bind to AT1R has been lacking, owing mainly to the difficulty of growing high-quality crystals for structure determination. Using SFX at the LCLS, Vadim Cherezov and colleagues have determined the room-temperature crystal structure of human AT1R in complex with its selective receptor-blocker ZD7155 at 2.9 Å resolution (Zhang et al. 2015). The structure of the AT1R–ZD7155 complex reveals key features of AT1R and the critical interactions for ZD7155 binding. Docking simulations, which predict the binding orientation of clinically used ARBs on the AT1R structure, further elucidated both the common and the distinct binding modes of these anti-hypertensive drugs. The results provide fundamental insights into the AT1R structure–function relationship and into structure-based drug design.
In solution scattering, an X-ray beam illuminates a volume of solution containing a large number of the particles of interest, creating a diffraction pattern. Because the experiment averages across many rotating molecules, the observed pattern is circularly symmetric and can be encapsulated by a radial intensity curve, I(q), where q = 4πsinθ/λ and 2θ is the scattering angle. The data are therefore essentially one-dimensional (figure 4b). The I(q) curves are quite smooth and can be well described by a modest number of parameters. They have traditionally been analysed to yield a few important physical characteristics of the scattering particle, such as its molecular mass and radius of gyration. Synchrotrons have enabled new classes of solution-scattering experiments, and the advent of XFEL sources is already providing further expansion of the methodology.
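As a concrete illustration of how such one-dimensional data are reduced to a physical parameter, the sketch below estimates a particle’s radius of gyration from an I(q) curve using the standard Guinier approximation, I(q) ≈ I(0) exp(−q²Rg²/3), valid at small q (roughly qRg < 1.3). The synthetic data and the chosen Rg are assumptions made purely for the example.

```python
import numpy as np

# Synthetic I(q) for a particle with Rg = 2.5 nm (assumption for the example).
q = np.linspace(0.05, 0.5, 50)                  # scattering vector in nm^-1
true_rg = 2.5
intensity = 1e6 * np.exp(-(q * true_rg) ** 2 / 3)

# Guinier analysis: ln I(q) is linear in q^2 at small q, with slope -Rg^2/3.
mask = q * true_rg < 1.3                        # restrict the fit to the Guinier regime
slope, _ = np.polyfit(q[mask] ** 2, np.log(intensity[mask]), 1)
rg = np.sqrt(-3 * slope)
print(f"Estimated radius of gyration: {rg:.2f} nm")
```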
Chasing the protein quake
An elegant example of time-resolved wide-angle X-ray scattering (WAXS) at the LCLS comes from a group led by Richard Neutze at the University of Gothenburg (Arnlund et al. 2014), which used multi-photon absorption to trigger an extremely rapid structural perturbation in the photosynthetic reaction centre from Blastochloris viridis, a photosynthetic purple non-sulphur bacterium. The group followed the progress of this perturbation using time-resolved WAXS. Appearing with a time constant of a few picoseconds, the perturbation decays with a 10 ps time constant and, importantly, precedes the propagation of heat through the protein.
The photosynthetic reaction centre faces unique problems of energy management. The energy of a single photon of green light is approximately equal to the activation energy for unfolding the protein molecule. In the photosynthetic complex, photons are absorbed by light-harvesting antennae and then rapidly funnelled to the reaction centre through specialized channels. The hypothesis is that any excess energy deposited in the protein is dissipated, before damage can be done, by a process named a “protein quake” – a nanoscale analogue of waves spreading away from the epicentre of an earthquake.
The experiments performed at the coherent X-ray imaging (CXI) station at the LCLS used micro-jet injection of solubilized protein samples. An 800 nm laser pulse of 500 fs duration illuminating the sample was calibrated so that a heating signal could be observed in the difference between the WAXS spectra with and without the laser illumination (figure 5a). The XFEL was operated to produce 40 fs pulses at 120 Hz, and illuminated and dark samples were interleaved, each at 60 Hz. The team calibrated the delay time between the laser and XFEL pulses to within 5 ps, and collected scattering patterns across a series of 41 time delays to a maximum of 100 ps. Figure 5b shows the curves indicating the difference in scattering between activated and dark molecules that were generated at each time point.
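A minimal sketch of the bookkeeping behind such difference curves, under assumed array names and shapes: shots tagged as laser-on or laser-off (the two interleaved 60 Hz streams) are averaged separately, and the dark average is subtracted from the illuminated average at each nominal time delay to give ΔI(q, t). This is an illustration of the generic procedure, not the authors’ analysis code.

```python
import numpy as np

def difference_curves(I, laser_on, delay_index, n_delays):
    """Average light and dark shots per delay and return dI(q, t).

    I           : (n_shots, n_q) radially averaged scattering patterns
    laser_on    : (n_shots,) bool, True for optically pumped shots
    delay_index : (n_shots,) int, nominal delay bin of each shot
    """
    n_q = I.shape[1]
    dark = I[~laser_on].mean(axis=0)                # global dark (unilluminated) average
    dI = np.zeros((n_delays, n_q))
    for t in range(n_delays):
        sel = laser_on & (delay_index == t)
        dI[t] = I[sel].mean(axis=0) - dark          # light minus dark at this delay
    return dI
```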
The results from this study rely on knowing the equilibrium molecular structure of the complex. Molecular-dynamics (MD) simulations and modelling play a key role in interpreting the data and in developing an understanding of the “quake”. A combination of MD simulations of heat deposition and flow in the molecule, together with spectral decomposition of the time-resolved difference-scattering curves, provides a strong basis for a detailed understanding of energy propagation in the system. Because the light pulse was tuned to the frequency of the photosystem’s antennae, cofactors (molecules within the photosynthetic complex) were heated almost instantaneously to a few thousand kelvin, before decaying with a half-life of about 7 ps through heat flow to the remainder of the protein. Principal-component analysis also revealed oscillations in the range q = 0.2–0.9 nm⁻¹, corresponding to a crystallographic resolution of 31–7 nm, which are signatures of structural changes in the protein. The higher-angle scattering – corresponding to the thermal motion – extends to a resolution of a few angstroms, with a time resolution extending down to a picosecond. This study not only illustrates the rapid evolution of the technology and experimental prowess of the field, but also brings them to bear on a problem that makes clear the biological relevance of extremely rapid dynamics.
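One common way to carry out the spectral decomposition mentioned above is a singular-value decomposition (equivalently, a principal-component analysis) of the matrix of difference curves, which separates a small number of basis patterns in q from their time courses. The sketch below shows that generic step only, taking as input the ΔI(q, t) array from the previous sketch; it is not the authors’ analysis code.

```python
import numpy as np

def decompose(dI, n_components=3):
    """Singular-value decomposition of difference curves dI with shape (n_delays, n_q).

    Returns the leading time courses and their corresponding basis patterns in q.
    """
    U, s, Vt = np.linalg.svd(dI - dI.mean(axis=0), full_matrices=False)
    time_courses = U[:, :n_components] * s[:n_components]   # amplitude of each component vs delay
    basis_patterns = Vt[:n_components]                       # corresponding patterns in q
    return time_courses, basis_patterns
```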
Effective single-particle imaging (SPI) would eliminate the need for crystallization, and would open new horizons in structure determination. It is an arena in which electron microscopy is making great strides, and where XFELs face great challenges. Simulations have demonstrated the real possibility of recovering structures from many thousands of weak X-ray snapshots of molecules in random orientation. However, it has become clear, as actual experiments are carried out, that there are profound difficulties with collecting high-resolution data – at present the best resolution in 2D snapshot images is about 20 nm. A recent workshop on single-particle imaging at SLAC identified a number of sources of artifacts including complex detector nonlinearities, scattering from apertures, scattering from solvent, and shot-to-shot variation in beam intensity and position. In addition, the current capability to hit a single molecule with a pulse reliably is quite limited. Serious technical progress at XFEL beamlines will be necessary before the promise of SPI at XFELs is realized fully.
Currently, the only operational XFEL facilities are the SPring-8 Angstrom Compact free-electron LAser (SACLA) at RIKEN in Japan (CERN Courier July/August 2011 p9) and the LCLS in the US, so competition for beamtime is intense. Within the next few years, the worldwide capacity to carry out XFEL experiments will increase dramatically. In 2017, the European XFEL will come online in Hamburg, providing a pulse rate of 27 kHz, compared with the 120 Hz rate at the LCLS. At about the same time, facilities at the Paul Scherrer Institute in Switzerland and at the Pohang Accelerator Laboratory in South Korea will produce first light. In addition, the technologies for performing and analysing experiments are improving rapidly. It seems more than fair to anticipate rapid growth in crystallography, molecular movies and other exciting experimental methods.
The LCLS XFEL
Hard X-ray free-electron lasers (XFELs) are derived from the undulator platform commonly used in synchrotron X-ray sources around the world. In the figure, (a) shows the undulator lattice, which comprises a series of alternating pairs of magnetic north and south poles defining a gap through which electron bunches travel. The undulator at the LCLS is 60 m long, compared with about 3 m for a synchrotron device. The bunches experience an alternating force normal to the magnetic field in the gap, transforming their linear path into a low-amplitude cosine trajectory.
In the reference frame of the electron bunch, the radiation that each electron emits has a wavelength equal to the spacing of the undulator magnets (a few centimetres) divided by the square of the relativistic factor γ = E/(mec²), where me is the electron rest mass (see below). Each electron interacts both with the radiation emitted by the electrons preceding it in the bunch, and with the magnetic field within the undulator. Initially, the N electrons in the bunch have random phases (see figure, (b)), so the radiated power is proportional to N.
As the bunch advances through the undulator, it breaks up into a series of microbunches of electrons separated by the wavelength of the emitted radiation. Without going into detail, this microbunching arises from a Lorentz force on each electron in the direction of propagation, generated by the interaction of the undulator field with the (small) component of the electron velocity perpendicular to the direction of propagation. This force tends to push the electrons towards positions at the peaks of the emitted radiation. All electrons within a single microbunch radiate coherently, and the radiation from one microbunch is also coherent with that from the next, being separated by a single wavelength. The power in the radiated field is therefore proportional to N².
The process of microbunching can be viewed as a resonance process, for which the following undulator equation describes the conditions for operation at wavelength λ.
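In its standard on-axis form – with λu the undulator period, γ the relativistic factor defined above, and K the dimensionless undulator parameter set by the peak magnetic field and the period – the undulator equation reads:

\lambda = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right)

For illustration, taking numbers of the order of those used at the LCLS – an undulator period of 3 cm, K ≈ 3.5 and a 13.6 GeV electron beam (γ ≈ 2.7 × 10⁴) – gives λ ≈ 1.5 Å, a hard X-ray wavelength.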
The tables, above, show typical operating conditions for the CXI beamline at the LCLS. The values represent only a small subset of the possible operating conditions. Note the small source size, the short pulse duration and the large number of photons per pulse.