On 12 January, after 23 months of hard work involving around 1000 people each day, the key to the LHC was symbolically handed back to the operations team. The team will now perform tests on the machine in preparation for the restart this spring.
Tests include training the LHC’s superconducting dipole magnets to the current level needed for 6.5 TeV beam energy. The main dipole circuit of a given sector is ramped up until a quench of a single dipole occurs. The quench-protection system then swings into action, energy is extracted from the circuit, and the current is ramped down. After careful analysis, the exercise is repeated. On the next ramp, the magnet that quenched should hold the current (i.e. is trained), while at a higher current another of the 154 dipoles in the circuit quenches. For 2015, the target current is 11,080 A for operation at 6.5 TeV (with some margin). Sector 6-7 was brought to this level successfully at the end of 2014, having taken 20 training quenches to get there. Getting all eight sectors to this level will be an important milestone.
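The training procedure lends itself to a toy simulation. The sketch below is a minimal illustration, not the real quench statistics: the 154-dipole circuit size and the 11,080 A target come from the text above, while the starting quench currents and the per-quench gains are invented.

```python
import random

# Toy model of dipole "training": a circuit ramp stops at the first quench,
# that magnet's quench current rises slightly, and the ramp is repeated
# until every dipole holds the target current. Initial quench currents and
# per-quench improvements are invented; only the 154-dipole circuit size
# and the 11,080 A target come from the text.
TARGET_A = 11_080
N_DIPOLES = 154

random.seed(1)
# Hypothetical initial quench currents, somewhat below the target.
quench_current = [random.uniform(10_600, 11_050) for _ in range(N_DIPOLES)]

quenches = 0
while min(quench_current) < TARGET_A:
    quenches += 1
    weakest = min(range(N_DIPOLES), key=quench_current.__getitem__)
    # The magnet that quenched "trains": its quench current rises by a
    # small, random amount before the next ramp.
    quench_current[weakest] += random.uniform(20, 80)

print(f"Toy sector trained after {quenches} quenches")
```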
The next big step is the first sector test, in which beam would enter the LHC for the first time since February 2013. The aim is to send single bunches from the Super Proton Synchrotron into the LHC through the injection regions at points 2 and 8 for a single pass through the available downstream sectors. This will allow testing of synchronization, the injection system, beam instrumentation, magnet settings, machine aperture and the beam dump.
A full circuit of the machine with beam and the start of beam commissioning are foreseen for March. It should then take about two months to re-commission the operational cycle, commission the beam-based systems (transverse feedback, RF, injection, beam dump system, beam instrumentation, power converters, orbit and tune feedbacks, etc) and commission and test the machine-protection system to re-establish the high level of protection required. This will open the way for the first collisions of stable beams at 6.5 TeV – foreseen currently for May – initially with a low number of bunches.
On 26 January, the CMS collaboration installed their new Pixel Luminosity Telescope (PLT). Designed with LHC Run 2 in mind, the PLT uses radiation-hard CMS pixel sensors to provide near-instantaneous readings of the per-bunch luminosity – thereby helping LHC operators to provide the maximum useful luminosity to CMS. The PLT comprises two arrays of eight small-angle telescopes situated on either side of the CMS interaction point. Each telescope sits only 1 cm away from the CMS beam pipe, where it uses three planes of pixel sensors to make independent measurements of the luminosity.
The discovery of high-energy astrophysical neutrinos initially announced by IceCube in 2013 provided an added boost to the planning for new, larger facilities that could study the signal in detail and identify its origins. Three large projects – KM3NeT in the Mediterranean Sea, IceCube-Gen2 at the South Pole and the Gigaton Volume Detector (GVD) in Lake Baikal – are already working together in the framework of the Global Neutrino Network (CERN Courier December 2014 p11).
In December, RWTH Aachen University hosted a workshop on these projects and their low-energy sub-detectors, ORCA and PINGU, which aim to determine the neutrino-mass hierarchy through precision measurements of atmospheric-neutrino oscillations. Some 80 participants from 11 countries came to discuss visionary strategies for detector optimization and technological aspects common to the high-energy neutrino telescopes.
Photodetection techniques, as well as trigger and readout strategies, formed one particular focus. All of the detectors are based on optical modules consisting of photomultiplier tubes (PMTs) housed in a pressure-resistant glass vessel together with their digitization and readout electronics. Representatives of the experiments shared their experiences of the development, in situ performance and mass production of the different designs. While the baseline design for IceCube-Gen2 follows the proven IceCube modules closely, KM3NeT has successfully deployed and operated prototypes of a new design consisting of 31 3-inch PMTs housed in a single glass sphere, which offer superior timing and intrinsic directional information. Adaptation of this technology for IceCube is under investigation.
New and innovative designs for optical modules were also reviewed, for example a large-area sensor employing wavelength-shifting and light-guiding techniques to collect photons in the blue and UV range and guide them to a small-diameter low-noise PMT. Presentations from Hamamatsu Photonics and Nautilus Marine Service on the latest developments in photosensors and glass housings, respectively, complemented the other talks nicely.
In addition, discussions centred on auxiliary science projects that can be carried out at the planned infrastructures. These can serve as a test bed for completely new detection technologies, such as acoustic neutrino detection, which is possible in water and ice, or radio neutrino detection, which is limited to ice as the target medium. Furthermore, IceCube-Gen2 at the South Pole offers the unique possibility of installing detectors on the surface above the deep-ice telescope, with the latter acting as a detector for high-energy muons from cosmic-ray-induced extensive air showers. Indeed, interest in cosmic-ray detectors on top of an extended IceCube telescope reaches beyond the communities of the three big projects.
The second focus of the workshop addressed the physics potential of cosmic-ray detection on the multi-kilometre scale, and especially the use of a surface array as an air-shower veto for the detection of astrophysical neutrinos from the southern sky at the South Pole. The rationale for surface-veto techniques is that the main background to extraterrestrial neutrinos from the upper hemisphere consists of muons and neutrinos produced in the Earth’s atmosphere. These particles are correlated with extensive air showers, which can be tagged by a surface array. While upward-going neutrinos have to traverse the entire Earth and are absorbed above energies of some 100 TeV, downward-going neutrinos do not suffer from absorption. A surface veto is therefore especially powerful for catching larger numbers of cosmic neutrinos at the very highest energies.
The capabilities of these surface extensions together with deep-ice components will be evaluated in the near future. Presentations at the workshop on various detection techniques – such as charged-particle detectors, imaging air-Cherenkov telescopes and Cherenkov timing arrays – allowed detailed comparisons of their capabilities. Parameters of interest are the duty cycle, the energy threshold and the cost of construction and installation. The development of different detectors for applications in harsh environments is already under way, and the first prototypes are scheduled to be tested in 2015.
• The Detector Design and Technology for Next Generation Neutrino Observatories workshop was supported by the Helmholtz Alliance for Astroparticle Physics (HAP), RWTH Aachen University, and Hamamatsu Photonics. For more information, visit hap2014.physik.rwth-aachen.de.
Quarkonia – charm or beauty quark–antiquark bound states – are prototypes of elementary systems governed by the strong force. Owing to the large masses and small velocities of the quarks, their mutual interaction becomes simpler to describe, thereby opening unique insights into the mechanism of strong interactions. For decades, research into quarkonium production in hadron collisions has been hampered by anomalies and puzzles in theoretical calculations and experimental results, so that, until recently, the studies were stuck in a validation phase. Now, new CMS data are enabling a breakthrough by accomplishing cross-section measurements for quarkonium production that reach unprecedentedly high values of transverse momentum (pT).
The latest and most persistent “quarkonium puzzle”, lasting for more than 10 years, was theory’s seeming inability to reproduce simultaneously the quarkonium yields and polarizations observed in hadronic interactions. Polarization is particularly sensitive to the mechanism of quark–antiquark (qq̄) bound-state formation, because it reveals the quantum properties of the pre-resonance qq̄ pair. For example, if a ³S₁ bound state (J/ψ or Υ) is measured to be unpolarized (isotropic decay distribution), the straightforward interpretation is that it evolved from an initial coloured ¹S₀ qq̄ configuration. To extract this information from differential cross-section measurements requires an additional layer of interpretation, based on perturbative calculations of the pre-resonance qq̄ kinematics in the laboratory reference frame. The fragility of this additional step would reveal itself, a posteriori, as the cause of the puzzle.
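For orientation, the polarization referred to here is extracted from the decay-angle distribution of the leptons from the quarkonium decay. A standard one-parameter form, quoted from general quarkonium phenomenology rather than from this article, is:

```latex
% Polar-angle distribution of the decay leptons in a chosen polarization frame.
%   \lambda_\theta = 0  : isotropic decay, i.e. an unpolarized state
%   \lambda_\theta = +1 : fully transverse polarization
%   \lambda_\theta = -1 : fully longitudinal polarization
\frac{\mathrm{d}N}{\mathrm{d}\cos\theta} \propto 1 + \lambda_\theta \cos^{2}\theta
```

In this language, an isotropic decay distribution corresponds to λθ = 0.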
In recent years, CMS provided the first unambiguous evidence that the decays of ³S₁ bottomonia (Υ(1,2,3S)) and charmonia (J/ψ, ψ(2S)) are always approximately isotropic (CMS Collaboration 2013): the pre-resonance qq̄ is a ¹S₀ state neutralizing its colour into the final ³S₁ bound state. This contradicted the idea that quarkonium states are produced mainly from a transversely polarized gluon (coloured ³S₁ pre-resonance), as deduced traditionally from cross-section measurements. After having exposed the polarization problem with high-precision measurements, CMS is now providing the key to its clarification.
The new cross-section measurements allow a theory/data comparison at large values of the ratio pT/mass, where perturbative calculations are more reliable. First attempts to do so, not yet exploiting the exceptional high-pT reach of the newest data, were revealing. With theory calculations restricted to their region of validity, the cross-section measurements are actually found to agree with the polarization data, indicating that bound-state formation through a coloured ¹S₀ pre-resonance is dominant (G Bodwin et al. 2014, K-T Chao et al. 2012, P Faccioli et al. 2014).
Heading towards the solution of a decades-long puzzle, what of the fundamental question: how do quarks and antiquarks interact to form bound states? Future analyses will disclose the complete hierarchy of transitions from pre-resonances with different quantum properties to the family of observed bound states, providing a set of “Kepler” laws for the long-distance interactions between quark and antiquark.
New results from the ALICE collaboration are providing additional data to test ideas about how particles are produced out of the quark–gluon plasma (QGP) created in heavy-ion collisions at the LHC.
Experiments at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) observed an enhancement in pT-dependent baryon/meson ratios – specifically the p/π and Λ/K0S ratios – for central nucleus–nucleus (AA) collisions in comparison with proton–proton (pp) collisions, where particle production is assumed to be dominated by parton fragmentation. In addition, constituent-quark scaling was observed in the elliptic-flow parameter, v2, measured in AA collisions. To interpret these observations, the coalescence of quarks was suggested as an additional particle-production mechanism. The coalescence (or recombination) model postulates that three quarks must come together to form a baryon, while a quark and an antiquark must coalesce to form a meson. The pT and v2 of the particle created are then the sums of the respective values of its constituent quarks. Coalescence models therefore generally predict differences between the pT spectra of baryons and mesons, predominantly in the range 2 < pT < 5 GeV/c, where the enhancement in the baryon/meson ratio has been measured.
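The scaling implied by this picture can be made concrete: if v2,q(pT) is the flow of a single quark, coalescence gives v2,h(pT) = nq v2,q(pT/nq) for a hadron of nq constituents. The sketch below applies this quark-number scaling to an invented quark-level curve; the functional form and all numbers are illustrative only.

```python
import math

def v2_quark(pt):
    """Hypothetical constituent-quark elliptic flow: rises, then saturates."""
    return 0.08 * math.tanh(pt / 1.0)

def v2_hadron(pt, n_quarks):
    # Coalescence: a hadron's pT and v2 are the sums of its constituents',
    # giving quark-number scaling v2_h(pT) = n_q * v2_q(pT / n_q).
    return n_quarks * v2_quark(pt / n_quarks)

for pt in (1.0, 2.0, 3.0, 4.0):
    meson = v2_hadron(pt, 2)   # quark + antiquark
    baryon = v2_hadron(pt, 3)  # three quarks
    # Dividing out n_q collapses both species onto the same quark-level curve,
    # which is the constituent-quark scaling observed at RHIC.
    print(f"pT={pt:.1f} GeV/c  meson v2={meson:.3f}  baryon v2={baryon:.3f}  "
          f"scaled: {meson/2:.3f} vs {baryon/3:.3f}")
```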
While a similar enhancement in the p/π and Λ/K0S ratios is observed at the LHC, the mass scaling of v2 is not, calling into question the importance of the coalescence mechanism. The observed particle pT spectra reflect the dynamics of the expanding QGP created in local thermal equilibrium, which confers on the final-state particles a common radial velocity independent of their mass, but a different momentum (hydrodynamic flow). The resulting blue shift in the pT spectrum therefore scales with particle mass, and is observed as a rise in the p/π and Λ/K0S ratios at low pT (see figure). In such a hydrodynamic description, particles with the same mass have pT spectra with similar shapes, independent of their quark content. The particular shape of the baryon/meson ratio observed in AA collisions therefore reflects the relative importance of hydrodynamic flow, parton fragmentation and quark coalescence. However, for the p/π and Λ/K0S ratios, the particles in the numerator and denominator differ in both mass and (anti)quark content, so coalescence and hydrodynamic effects cannot be disentangled. To test the role of coalescence further, it is instructive to conduct this study using a baryon and a meson that have similar masses.
Fortunately, nature provides two such particles: the proton, a baryon with mass 938 MeV/c², and the φ meson, which has a mass of 1019 MeV/c². If protons and φ mesons are produced predominantly through coalescence, their pT spectra will have different shapes. Hydrodynamic models alone would predict pT spectra with similar shapes owing to the small mass difference (less than 9%), implying a p/φ ratio that is constant with pT.
For peripheral lead–lead collisions, where the small volume of the quark–gluon plasma reduces the influence of collective hydrodynamic motion on the pT spectra, the p/φ ratio has a strong dependence on pT, similar to that observed for pp collisions. In contrast, as the figure shows, in central lead–lead collisions – where the volume of the QGP produced is largest – the p/φ ratio has a very different pT dependence, and is constant within its uncertainties for pT < 4 GeV/c. The data therefore indicate that hydrodynamics is the leading contribution to particle pT spectra in central lead–lead collisions at LHC energies, and it does not seem necessary to invoke coalescence models.
In the coming year, the ALICE collaboration will measure a larger number of collisions at a higher energy. This will allow a more precise study of both the pT spectra and elliptic-flow parameters of the proton and φ meson, and will allow tighter constraints to be placed on theoretical models of particle production in heavy-ion collisions.
There is evidence for dark matter from many astronomical observations, yet so far, dark matter has not been seen in particle-physics experiments, and there is no evidence for non-gravitational interactions between dark matter and Standard Model particles. If such interactions exist, dark-matter particles could be produced in proton–proton collisions at the LHC. The dark matter would travel unseen through the ATLAS detector, but often one or more Standard Model particles would accompany it, either produced by the dark-matter interaction or radiated from the colliding partons. Observed particles with a large imbalance of momentum in the transverse plane of the detector could therefore signal the production of dark matter.
Because radiation from the colliding partons is most likely a jet, the “monojet” channel is a powerful search for dark matter. The ATLAS collaboration now has a new result in this channel and, while it shows no evidence for dark-matter production at the LHC, it sets significantly improved limits on the possible rate for a variety of interactions. The reach of this analysis depends strongly on a precise determination of the background from Z bosons decaying to neutrinos at large boson transverse momentum. By deriving this background from data samples of W and Z bosons decaying to charged leptons, the analysis achieves a total background uncertainty of 3–14%, depending on the transverse momentum.
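The data-driven technique described above is essentially a transfer-factor measurement. The sketch below shows the arithmetic on invented yields; it is a schematic of the general method under stated assumptions, not of the ATLAS analysis itself.

```python
# Schematic transfer-factor estimate of the Z -> nu nu background in a
# monojet signal region (SR), using a Z -> mu mu control region (CR).
# All yields below are invented for illustration.

n_data_cr = 1450.0   # Z -> mu mu candidates observed in the CR
n_non_z_cr = 60.0    # estimated non-Z contamination in the CR
n_mc_sr = 5200.0     # simulated Z -> nu nu yield in the SR
n_mc_cr = 1300.0     # simulated Z -> mu mu yield in the CR

# Transfer factor taken from simulation; luminosity and many detector
# uncertainties cancel in this ratio, which is why the data-driven
# approach can reach a few-per-cent total background uncertainty.
transfer = n_mc_sr / n_mc_cr
n_znunu_sr = (n_data_cr - n_non_z_cr) * transfer

# Statistical-only error propagation from the control-region counts.
rel_stat = (n_data_cr ** 0.5) / (n_data_cr - n_non_z_cr)
print(f"Estimated Z->nunu in SR: {n_znunu_sr:.0f} "
      f"+- {n_znunu_sr * rel_stat:.0f} (stat)")
```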
To compare with non-collider searches for weakly interacting massive particle (WIMP) dark matter, the limits from this analysis have been translated via an effective field theory into upper limits on WIMP–nucleon scattering or on WIMP annihilation cross-sections. When the WIMP mass is much smaller than several hundred giga-electron-volts – the kinematic and trigger thresholds used in the analysis – the collider results are approximately independent of the WIMP mass. Therefore, the results play an important role in constraining light dark matter for several types of spin-independent scattering interactions (see figure). Moreover, collider results are insensitive to the Lorentz structure of the interaction. The results shown on spin-dependent interactions are comparable to the spin-independent results and significantly stronger than those of other types of experiments.
The effective theory is a useful and general way to relate collider results to other dark-matter experiments, but it cannot always be employed safely. One advantage of the searches at the LHC is that partons can collide with enough energy to resolve the mediating interaction directly, opening complementary ways to study it. In this situation, the effective theory breaks down, and simplified models specifying an explicit mediating particle are more appropriate.
The new ATLAS monojet result is sensitive to dark-matter production rates where both effective theory and simplified-model viewpoints are worthwhile. In general, for large couplings of the mediating particles to dark matter and quarks, the mediators are heavy enough to employ the effective theory, whereas for couplings of order unity the mediating particles are too light and the effective theory is an incomplete description of the interaction. The figures use two types of dashed lines to depict the separate ATLAS limits calculated for these two cases. In both, the calculation removes the portion of the signal cross-section that depends on the internal structure of the mediator, recovering a well-defined and general but conservative limit from the effective theory. In addition, the new result presents constraints on dark-matter production within one possible simplified model, where the mediator of the interaction is a Z’-like boson.
While the monojet analysis is generally the most powerful search when the accompanying Standard Model particle is radiated from the colliding partons, ATLAS has also employed other Standard Model particles in similar searches. They are especially important when these particles arise from the dark-matter interaction itself. Taken together, ATLAS has established a broad and robust programme of dark-matter searches that will continue to grow with the upcoming data-taking.
On 10 January, the ground-breaking ceremony for the Jiangmen Underground Neutrino Observatory (JUNO) took place in Jiangmen City, Guangdong Province, China. More than 300 scientists and officials from China and other countries attended and witnessed this historical moment.
JUNO is the second China-based neutrino project, following the Daya Bay Reactor experiment, and is designed to determine the neutrino mass-hierarchy via precision measurements of the reactor-neutrino energy spectrum. The experiment is scheduled to start data-taking in 2020 and is expected to operate for at least 20 years. The neutrino detector, which is the experiment’s core component, will be the world’s largest and highest-precision liquid scintillator detector.
After the determination of the θ₁₃ mixing angle by Daya Bay and other experiments, the next challenge for the international neutrino community is to determine the neutrino-mass hierarchy. Sensitivity studies show that the preferred distance between the experiment and the reactors is 50–55 km. Jinji Town, the detector site chosen for the JUNO experiment, is 53 km from both the Yangjiang and Taishan Nuclear Power Plants, which will provide a total thermal power of 35.8 GW. By 2020, this effective power will be the highest in the world.
The JUNO international collaboration, established on 28 July 2014, already consists of more than 300 members from 45 institutions in nine countries and regions, and more than 10 institutions from five countries are planning to join.
The High Energy Stereoscopic System (HESS) has discovered three extremely luminous gamma-ray sources in the Large Magellanic Cloud (LMC) – a dwarf galaxy orbiting the Milky Way about 170,000 light-years away. The three objects are all exceptional: they comprise the most powerful supernova remnant and pulsar-wind nebula, as well as a superbubble – a new class of source in very high-energy (VHE) gamma rays.
The HESS array of telescopes, located in Namibia, observes flashes of Cherenkov light emitted by particle showers triggered by incident gamma rays in the upper atmosphere (CERN Courier January/February 2005 p30). This technique is sensitive to gamma rays at energies of tera-electron-volts – photons typically a thousand times more energetic than those observed by the Fermi Gamma-ray Space Telescope in the giga-electron-volt range (CERN Courier November 2008 p13). These high-energy photons are emitted by extremely energetic particles interacting with matter or radiation. They are therefore the best tracers of cosmic accelerators such as supernova remnants and pulsar-wind nebulae – two different types of remnant from the evolution of massive stars.
Resolving individual sources in a galaxy outside of the Milky Way is a new breakthrough for Cherenkov-telescope astronomy. HESS performed a deep observation of the largest star-forming region within the LMC, known as the Tarantula Nebula (Picture of the month, CERN Courier June 2012 p12). The 210 hours of observation yielded the discovery of the three extremely energetic objects.
One of the new sources is the superbubble 30 Dor C. It is the first time that a superbubble has been detected in the VHE regime, demonstrating that such objects are a source of highly energetic particles. With a diameter of 270 light-years, 30 Dor C is the largest-known X-ray-emitting shell, and appears to have been blown by several supernovae and strong stellar winds from massive stars. The detection by HESS is important because it shows that superbubbles are viable sources of galactic cosmic rays, complementary to individual supernova remnants (CERN Courier April 2013 p12).
Another source detected by HESS is the pulsar-wind nebula N 157B. This kind of nebula is formed by the wind of ultra-relativistic particles blown by a pulsar – a highly magnetized, rapidly spinning neutron star. The most famous is the Crab Nebula, one of the brightest sources in the gamma-ray sky (CERN Courier November 2008 p11). N 157B is similar, but outshines the Crab Nebula by an order of magnitude in VHE gamma rays, owing to a lower magnetic field and a stronger radiation field from neighbouring star-forming regions.
The third object is the supernova remnant N 132D, which is already known as a bright object in the radio and infrared wavebands. Although it is between 2500 and 6000 years old, it still outshines the strongest supernova remnants in the Galaxy in the VHE regime. Surprisingly, the remnant of the bright supernova SN 1987A – which exploded in the LMC 28 years ago – was not detected by HESS, in contrast to theoretical predictions. The current study published in Science shows the LMC to be a prime target for even deeper observations with the new HESS II 28-m telescope and the future Cherenkov Telescope Array (CERN Courier July/August 2012 p28).
The Linac Coherent Light Source (LCLS) at SLAC produced its first laser-like X-ray pulses in April 2009. The unique and potentially transformative characteristics of the LCLS beam – in particular, the short femtosecond pulse lengths and the large numbers of photons per pulse (see The LCLS XFEL below) – have created whole new fields, especially in the study of biological materials. X-ray diffraction on nanocrystals, for example, reveals 3D structures at atomic resolution, and allows pump-probe analysis of functional changes in the crystallized molecules. New modalities of X-ray solution scattering include wide-angle scattering, which provides detailed pictures from pump-probe experiments, and fluctuational solution scattering, where the X-ray pulse freezes the rotation of the molecules in the beam, resulting in a rich, 2D scattering pattern. Even the determination of the structure of single particles is possible. This article focuses on examples from crystallography and time-resolved solution scattering.
An important example from crystallography concerns the structure of protein molecules. As a reminder, protein molecules, which are encoded in our genes, are linear polymers of the 20 naturally occurring amino-acid monomers. Proteins contain hundreds or thousands of amino acids and carry out most functions within cells or organs. They catalyse chemical reactions; act as motors in a variety of contexts; control the flow of substances into and out of cells; and mediate signalling processes. Knowledge of their atomic structures lies at the heart of mechanistic understanding in modern biology.
Serial femtosecond crystallography (SFX) provides a method of studying the structure of proteins. In SFX, still X-ray photographs are obtained from a stream of nanocrystals, each crystal being illuminated by a single pulse lasting a few femtoseconds. At the LCLS, the 10¹² photons per pulse can produce observable diffraction from a protein crystal much smaller than 1 μm³. Critically, a 10 fs pulse will scatter from a specimen before radiation damage takes place, thereby eliminating such damage as an experimental issue. Figure 1 shows a typical SFX set-up for crystals of membrane proteins. The X-ray beam, shown in yellow, illuminates a stream of crystals, shown in the inset, carried in a thin stream of highly viscous lipidic cubic phase (LCP). The high-pressure system that creates the jet is on the left. The rate of LCP flow is well matched to the 120 Hz arrival rate of the X-ray pulses, so not much material is wasted between shots. In the ideal case, each X-ray pulse scatters from a single crystal in the LCP flow. For soluble proteins, a jet of aqueous buffer replaces the LCP.
The angiotensin II type 1 receptor (AT1R) is found at the surface of vascular cells and serves as the principal regulator of blood pressure (figure 3). Although several AT1R blockers (ARBs) have been developed as anti-hypertensive drugs, structural knowledge of their binding to AT1R has been lacking, owing mainly to the difficulty of growing high-quality crystals for structure determination. Using SFX at the LCLS, Vadim Cherezov and colleagues have successfully determined the room-temperature crystal structure of human AT1R in complex with its selective blocker ZD7155 at 2.9 Å resolution (Zhang et al. 2015). The structure of the AT1R–ZD7155 complex reveals key features of AT1R and the critical interactions for ZD7155 binding. Docking simulations, which predict the binding orientation of clinically used ARBs onto the AT1R structure, further elucidated both the common and distinct binding modes of these anti-hypertensive drugs. The results provide fundamental insights into the AT1R structure–function relationship and into structure-based drug design.
In solution scattering, an X-ray beam illuminates a volume of solution containing a large number of the particles of interest, creating a diffraction pattern. Because the experiment averages across many rotating molecules, the observed pattern is circularly symmetric and can be encapsulated by a radial intensity curve, I(q), where q = 4πsinθ/λ and 2θ is the scattering angle. The data are therefore essentially one-dimensional (figure 4b). The I(q) curves are quite smooth and can be well described by a modest number of parameters. They have traditionally been analysed to yield a few important physical characteristics of the scattering particle, such as its molecular mass and radius of gyration. Synchrotrons have enabled new classes of solution-scattering experiments, and the advent of XFEL sources is already providing further expansion of the methodology.
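One classical output of the I(q) curve is the radius of gyration, obtained from the Guinier approximation I(q) ≈ I(0) exp(−q²Rg²/3), valid at small qRg. The sketch below fits synthetic (not measured) data this way; all numbers are invented for illustration.

```python
import numpy as np

# Synthetic Guinier-regime data for a particle with Rg = 2.5 nm.
rg_true = 2.5                           # nm
q = np.linspace(0.05, 0.5, 40)          # nm^-1, chosen so q*Rg stays small
i0 = 1e6
intensity = i0 * np.exp(-(q * rg_true) ** 2 / 3.0)
rng = np.random.default_rng(0)
intensity *= rng.normal(1.0, 0.01, q.size)   # 1% multiplicative noise

# Guinier fit: ln I(q) = ln I(0) - (Rg^2 / 3) q^2 is linear in q^2,
# so a straight-line fit of ln I against q^2 yields Rg from the slope.
slope, ln_i0 = np.polyfit(q ** 2, np.log(intensity), 1)
rg_fit = np.sqrt(-3.0 * slope)
print(f"fitted Rg = {rg_fit:.2f} nm (true {rg_true} nm), "
      f"I(0) = {np.exp(ln_i0):.3g}")
```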
Chasing the protein quake
An elegant example of time-resolved wide-angle X-ray scattering (WAXS) at the LCLS comes from a group led by Richard Neutze at the University of Gothenburg (Arnlund et al. 2014), which used multi-photon absorption to trigger an extremely rapid structural perturbation in the photosynthetic reaction centre from Blastochloris viridis, a purple non-sulphur bacterium. The group followed the progress of this perturbation using time-resolved WAXS. Appearing with a time constant of a few picoseconds, the perturbation falls away with a 10 ps time constant and, importantly, precedes the propagation of heat through the protein.
The photosynthetic reaction centre faces unique problems of energy management. The energy of a single photon of green light is approximately equal to the activation energy for the unfolding of the protein molecule. In the photosynthetic complex, photons are absorbed by light-harvesting antennae and then rapidly funnelled to the reaction centre through specialized channels. The hypothesis is that any excess energy deposited in the protein is dissipated, before damage can be done, by a process named a “protein quake” – a nanoscale analogue of waves spreading away from the epicentre of an earthquake.
The experiments performed at the coherent X-ray imaging (CXI) station at the LCLS used micro-jet injection of solubilized protein samples. An 800 nm laser pulse of 500 fs duration illuminating the sample was calibrated so that a heating signal could be observed in the difference between the WAXS spectra with and without the laser illumination (figure 5a). The XFEL was operated to produce 40 fs pulses at 120 Hz, and illuminated and dark samples were interleaved, each at 60 Hz. The team calibrated the delay time between the laser and XFEL pulses to within 5 ps, and collected scattering patterns across a series of 41 time delays to a maximum of 100 ps. Figure 5b shows the curves indicating the difference in scattering between activated and dark molecules that were generated at each time point.
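In analysis, the interleaving described above reduces to averaging the laser-on and laser-off patterns recorded at a given nominal delay and subtracting them. The sketch below illustrates this bookkeeping on synthetic data; the array shapes, noise level and signal model are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n_q, n_shots = 200, 600                 # radial bins, X-ray shots per delay
q = np.linspace(0.1, 2.0, n_q)          # nm^-1

# Synthetic per-shot I(q): a static term plus, for laser-on shots only,
# a small pump-induced change buried in shot-to-shot noise.
static = 1.0 / (1.0 + q ** 2)
pump_signal = 0.002 * np.sin(3.0 * q)
laser_on = np.arange(n_shots) % 2 == 0  # on/off interleaved, each at half rate

frames = static + rng.normal(0.0, 0.01, (n_shots, n_q))
frames[laser_on] += pump_signal

# Difference scattering at this delay: <I_on>(q) - <I_off>(q).
delta_i = frames[laser_on].mean(axis=0) - frames[~laser_on].mean(axis=0)
print("recovered peak amplitude:", delta_i.max())   # ~0.002
```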
The results from this study rely on knowing the equilibrium molecular structure of the complex. Molecular-dynamics (MD) simulations and modelling play a key role in interpreting the data and developing an understanding of the “quake”. A combination of MD simulations of heat deposition and flow in the molecule and spectral decomposition of the time-resolved difference scattering curves provides a strong basis for a detailed understanding of energy propagation in the system. Because the light pulse was tuned to the frequency of the photosystem’s antennae, cofactors (molecules within the photosynthetic complex) were heated almost instantaneously to a few thousand kelvin, before decaying with a half-life of about 7 ps through heat flow to the remainder of the protein. Principal-component analysis also revealed oscillations in the range q = 0.2–0.9 nm⁻¹, corresponding to a crystallographic resolution of 31–7 nm, which are signatures of structural changes in the protein. The higher-angle scattering – corresponding to the heat motion – extends to a resolution of a few ångströms, with a time resolution extending down to a picosecond. This study not only illustrates the rapid evolution of the technology and experimental prowess of the field, but also brings it to bear on a problem that makes clear the biological relevance of extremely rapid dynamics.
Effective single-particle imaging (SPI) would eliminate the need for crystallization, and would open new horizons in structure determination. It is an arena in which electron microscopy is making great strides, and where XFELs face great challenges. Simulations have demonstrated the real possibility of recovering structures from many thousands of weak X-ray snapshots of molecules in random orientation. However, it has become clear, as actual experiments are carried out, that there are profound difficulties in collecting high-resolution data – at present the best resolution in 2D snapshot images is about 20 nm. A recent workshop on single-particle imaging at SLAC identified a number of sources of artefacts, including complex detector nonlinearities, scattering from apertures, scattering from solvent, and shot-to-shot variation in beam intensity and position. In addition, the current capability to hit a single molecule reliably with a pulse is quite limited. Serious technical progress at XFEL beamlines will be necessary before the promise of SPI at XFELs is fully realized.
Currently, the only operational XFEL facilities are the SPring-8 Angstrom Compact free-electron LAser (SACLA) at RIKEN in Japan (CERN Courier July/August 2011 p9) and the LCLS in the US, so competition for beam time is intense. Within the next few years, the worldwide capacity to carry out XFEL experiments will increase dramatically. In 2017, the European XFEL will come online in Hamburg, providing a pulse rate of 27 kHz compared with the 120 Hz rate at the LCLS. At about the same time, facilities at the Paul Scherrer Institute in Switzerland and at the Pohang Accelerator Laboratory in South Korea will produce first light. In addition, the technologies for performing and analysing experiments are improving rapidly. It seems more than fair to anticipate rapid growth in crystallography, molecular movies and other exciting experimental methods.
The LCLS XFEL
Hard X-ray free-electron lasers (XFELs) are derived from the undulator platform commonly used in synchrotron X-ray sources around the world. In the figure, (a) shows the undulator lattice, which comprises a series of alternating pairs of magnetic north and south poles defining a gap through which electron bunches travel. The undulator at the LCLS is 60 m long, compared with about 3 m for a synchrotron device. The bunches experience an alternating force normal to the magnetic field in the gap, transforming their linear path into a low-amplitude cosine trajectory.
In the reference frame of the electron bunch, the radiation that each electron emits has a wavelength equal to the spacing of the undulator magnets (a few centimetres) divided by the square of the relativistic factor γ = E/mₑc² (see below). Each electron interacts both with the radiation emitted by electrons preceding it in the bunch, and with the magnetic field within the undulator. Initially, the N electrons in the bunch have random phases (see figure, (b)), so that the radiated power is proportional to N.
As the bunch advances through the undulator, it breaks up into a series of microbunches of electrons separated by the wavelength of the emitted radiation. Without going into detail, this microbunching arises from a Lorentz force on each electron in the direction of propagation, generated by the interaction of the undulator field and the (small) component of the electron velocity perpendicular to the direction of propagation. This force tends to push the electrons towards positions at the peaks of the emitted radiation. All electrons within a single microbunch radiate coherently, and the radiation from one microbunch is also coherent with that from the next, being separated by a single wavelength. Therefore, the power in the radiated field is proportional to N².
The process of microbunching can be viewed as a resonance process, for which the undulator equation describes the condition for operation at wavelength λ: λ = (λᵤ/2γ²)(1 + K²/2 + γ²θ²), where λᵤ is the undulator period, K is the dimensionless undulator-strength parameter and θ is the angle of observation relative to the beam axis.
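Plugging representative numbers into the on-axis (θ = 0) undulator equation shows how a multi-GeV beam and a centimetre-scale undulator period reach ångström wavelengths. The parameter values below are round, LCLS-like numbers chosen for illustration, not official machine settings.

```python
import math

# On-axis undulator equation: lambda = (lambda_u / 2 gamma^2) (1 + K^2 / 2).
# Round, LCLS-like numbers for illustration only.
E_GeV = 13.6         # electron beam energy
lambda_u = 0.03      # undulator period, m
K = 3.5              # undulator strength parameter

mec2_GeV = 0.000511  # electron rest energy
gamma = E_GeV / mec2_GeV
wavelength = lambda_u / (2.0 * gamma ** 2) * (1.0 + K ** 2 / 2.0)

# Each microbunch radiates coherently, so power scales as N^2 rather than
# N -- the origin of the enormous photons-per-pulse figures quoted above.
print(f"gamma = {gamma:.0f}, lambda = {wavelength * 1e10:.2f} angstrom")
```

With these inputs the result is about 1.5 Å, i.e. hard X-rays.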
The tables, above, show typical operating conditions for the CXI beamline at the LCLS. The values represent only a small subset of the possible operating conditions. Note the small source size, the short pulse duration and the large number of photons per pulse.
Even though HERA – the only electron–proton collider built so far – stopped running in mid-2007, analyses of the vast amounts of data from the Hermes, H1 and ZEUS experiments continue to produce important and high-impact measurements relevant to spin physics, the structure of the proton and other areas of QCD. Special efforts have been made to ensure that these unique data are safely preserved for future analyses for at least the next 10 years, within the framework of the Data Preservation in High-Energy Physics collaboration (CERN Courier May 2009 p21).
In November 2014, DESY hosted a workshop on “Future Physics with the HERA Data for Current and Planned Experiments”, to pull together experts and ask questions about what the HERA data still have to say and how they are relevant to other facilities. The aim was, in effect, to create a list of subjects that are still to be investigated or exploited fully. Across two days, almost 30 presentations and lively discussions occupied around 70 participants, both experimentalists and theorists, from across the globe.
The most recent results from the collaborations and a perspective from theory were presented first in a special HERA symposium, starting with a presentation on recent results from Hermes by Charlotte Van Hulse of the University of the Basque Country. She highlighted the semi-inclusive deep-inelastic scattering (DIS) data collected on a transversely polarized hydrogen target, which provide access to various transverse-momentum-dependent parton distribution functions (PDFs) that are sensitive to correlations between quark spin, proton spin, and the transverse momentum of quarks and/or of final-state hadrons.
Two talks followed that showed results from H1 and ZEUS, the first on proton structure by Aharon Levy of Tel Aviv University and the second on diffraction and hadronic final states by Alice Valkárová of Charles University, Prague. All of the measurements of inclusive DIS from H1 and ZEUS have been combined recently and QCD fits to these data have been performed, providing a new set of PDFs of the proton (see figure). The 15 years of data taking at HERA have culminated in a combination of 3000 data points, and their impact on knowledge of the structure of the proton will last for years to come. Also, recent jet measurements at HERA have enabled the strong coupling constant to be extracted with an experimental precision of <1%. This has been achieved through the simultaneous measurement of inclusive-jet, dijet and trijet cross-sections.
Providing a theoretical perspective, Robert Thorne of University College London discussed the contribution that data from HERA have made to the understanding of electroweak physics, physics beyond the Standard Model and, in particular, QCD and the structure of the proton. With a crowded auditorium, the symposium went significantly over time because the results shown provoked much discussion that continued into the evening.
The workshop started with general talks from Elke Aschenauer of Brookhaven and Hannes Jung of DESY, both of whom highlighted the need to measure particle production – either inclusively or, even better, by tagging specific particle species – in electron–proton (ep) scattering, differential in four kinematic quantities. Such detailed measurements can be useful in model building and in tuning Monte Carlo simulations, but they can also pin down the transverse-momentum distributions of partons, which are more commonly considered in spin physics. Jung also stressed the contribution that HERA data can make to understanding the nature of multi-parton interactions, by virtue of the unique ability to contrast events in which the colliding photon is either point-like or hadron-like, thereby turning multi-parton interactions “off” and “on”, respectively, within the same experimental set-up.
Monte Carlo simulations need to be updated to include the more advanced models of the underlying event in ep scattering, as has been done for pp interactions. Simon Plätzer of the Institute for Particle Physics Phenomenology, Durham, discussed recent advances in the HERWIG++ event generator to include ep processes. He emphasized that the program is ready for comparison with DIS processes, even including next-to-leading-order matrix elements. Alongside a personal perspective, Achim Geiser of DESY provided an extensive list of topics yet to be covered, which anyone interested could consult to see what most excites them.
Given some tension seen between theory and the HERA inclusive data at low photon virtualities, Q², and low Bjorken-x, as presented by Levy, several talks, including those by Joachim Bartels of Hamburg University and Amanda Cooper-Sarkar of Oxford University, discussed this region as an avenue for future work. Clearly a joint H1 and ZEUS extraction of the longitudinal structure function, FL, is needed. Also, the more precise combined data sets now available demand a phenomenological analysis in which the proton structure function, F2, is parameterized in terms of x⁻λ, where the behaviour of the exponent λ could reveal information on the Pomeron and on the applicability of parton-evolution schemes to describe the structure of the proton.
A highlight of the workshop was the status of next-to-next-to-leading-order (NNLO) QCD predictions for jet production at HERA, presented by Thomas Gehrmann of Zurich University. Such predictions will allow more precise comparisons with data and, for example, reduced uncertainties on extractions of PDFs and the strong coupling constant. The full prediction of, and comparison with data for, dijet production in DIS will be the first NNLO final-state calculation at HERA, and is expected during 2015.
In a wide-ranging talk on diffractive processes, Marta Ruspa of the University of Piemonte Orientale highlighted the crucial questions still to be answered, which relate to the consistency and combination of the H1 and ZEUS data for measurements of inclusive diffraction in DIS. These data allow the extraction of diffractive PDFs (DPDFs) in analogy to the conventional PDFs for inclusive DIS. Using DPDFs, and because factorization should hold, predictions can be made for other processes. The experimental results on whether factorization holds are not conclusive, however, and further investigation of the HERA data would help to clarify this issue and give a better understanding of the mechanism at the LHC, as Bartels indicated.
Ronan McNulty of University College Dublin discussed the overlaps in physics from HERA and the LHCb experiment at CERN, in particular the complementary information on extraction of proton PDFs and the measurement of vector-meson production, particularly J/ψ production, and its sensitivity to the gluon distribution in the proton. Similarly, Sasha Glazov of DESY proposed ideas for common HERA–LHC analyses in the area of PDF extractions and jet physics, where HERA has particularly precise measurements.
Alessandro Bacchetta of the University of Pavia and Emanuele Nocera of the University of Genoa highlighted the many pioneering measurements made by the Hermes collaboration in mapping out the helicity and 3D structure of the proton. Open issues include the strange-quark spin content of the proton and the electroweak structure functions, among others. The speakers also discussed how the final Hermes analyses, together with results from experiments such as COMPASS at CERN and others at Jefferson Lab, and from a future electron–ion collider, could lead to a more complete picture of the structure of the proton.
In a presentation that was relatively technical but very important for this workshop, Dirk Kruecker of DESY outlined the status of the long-term and safe preservation of the HERA data. To ensure that the most is made of this data legacy, the collaborations are open to people outside of the traditional institutes who are interested in analysing the data. To gain access to the data and work on a publication along with a collaboration, interested people should contact the respective spokesperson. A summary document of the workshop will be published in early 2015, and should act as a useful reference for anyone interested in future analyses with the HERA data.
A summary talk by John Dainton of the University of Liverpool provided a thought-provoking and entertaining résumé of some of the highlights of HERA physics, and how they relate to other facilities and fit into the broad context of particle physics. After two intense days, the talks and discussions gave the workshop delegates renewed vigour with which to exploit the HERA data fully during the years to come, and push back the understanding of a rich and wide variety of QCD processes, such as the nature of diffraction and the structure of the proton.
To maintain scientific progress and exploit its full capacity, the LHC will need to operate at higher luminosity. Like shining a brighter light on an object, this will allow more accurate measurements of new particles, the observation of rarer processes and an extended discovery reach for rare events at the high-energy frontier. The High-Luminosity LHC (HL-LHC) project began in 2011 as a conceptual study under the framework of a European Union (EU) grant, with the aim of increasing the luminosity by a factor of 5–10 beyond the original design value and providing 3000 fb⁻¹ in 10 to 12 years.
Two years later, CERN Council recognized the project as the top priority for CERN and for Europe (CERN Courier July/August 2013 p9), and then confirmed its priority status in CERN’s scientific and financial programme in 2014 by approving the laboratory’s medium-term plan for 2014–2019. Since this approval, new activities have started up to deliver key technologies that are needed for the upgrade. The latest results and recommendations by the various reviews that took place in 2014 were the main topics for discussion at the 4th Joint HiLumi LHC/LARP Annual Meeting, which was hosted by KEK in Tsukuba in November.
The latest updates
The event began with plenary sessions where members of the collaboration management – from CERN, KEK, the US LHC Accelerator Research Program (LARP) and the US Department of Energy – gave invited talks. The first plenary session closed with an update on the status of HL-LHC by the project leader, CERN’s Lucio Rossi, who also officially announced the new HL-LHC timeline. The plenary was followed by expert talks on residual dose-rate studies, layout and integration, optics and operation modes and progress on cooling, quench and assembly (together known as QXF). Akira Yamamoto of KEK presented the important results and recommendations of the recent superconducting cable review.
There were invited talks on the LHC Injectors Upgrade (LIU) by project leader Malika Meddahi from CERN, and on the outcomes of the 2nd ECFA HL-LHC Experiments Workshop held in October – an indication of the close collaboration with the experimentalists. One of the highlights of the plenaries was the status update on the Preliminary Design Report – the main deliverable of the project, which is to be published soon. There were three days of parallel sessions reviewing the progress in design and R&D in the various work packages – named in terms of activities – both with and without EU funding.
Refined optics and layout of the high-luminosity insertions have been provided by the activity on accelerator physics and performance, in collaboration with the other work packages. This new baseline takes into account the updated design of the magnets (in particular those of the matching section), the results of the energy deposition and collimation studies, and the constraints resulting from the integration of the components in the tunnel. The work towards the definition of the specifications for the magnets and their field quality has progressed, with an emphasis on the matching section for which a first iteration based on the requirements resulting from studies of beam dynamics has been completed. The outcomes include an updated impedance model of the LHC and a preliminary estimate of the resulting intensity limits and beam–beam effects. The studies confirmed the need for low-impedance collimators. In addition, an updated set of beam parameters consistent through all of the injectors and the LHC has been defined in collaboration with the LIU team.
The main efforts of the activity on magnets for insertion regions (IRs) in the past 18 months focused on the exploration of different options for the layout of the interaction region. The main parameters of the magnet lattice, such as operational field/gradients, apertures, lengths and magnet technology, have been chosen as a result of the worldwide collaboration, including US LARP and KEK. A baseline for the layout of the new interaction region is one of the main results of this work. There is now a coherent layout, agreed with the beam dynamics, energy deposition, cooling and vacuum teams, covering the whole interaction region.
The engineering design of most of the IR magnets has now started and the first hardware tests are expected in 2015. There was also good news from the quench-protection side, which can meet all of the key requirements based on the results from tests performed on the magnets. In addition, there is a solution for cooling the inner triplet (IT) quadrupoles and the separation dipole, D1. It relies on two heat exchangers for the IT quadrupole/orbit correctors assembly, with a separate system for the D1 dipole and the high-order corrector magnets. Besides these results, considerable effort was devoted to selecting the technologies and the design for the other magnets required in the lattice, namely the orbit correctors, the high-order correctors and the recombination dipole, D2.
The crab-cavities activity delivered designs for three prototype crab cavities, based on four-rod, RF-dipole (RFD) and double quarter-wave (DQW) structures. They were all tested successfully against the design gradient, albeit with higher-than-expected surface resistance. Further design improvements to the initial prototypes were made to comply with the strict requirements for higher-order-mode damping, while maintaining the deflecting-field performance. There was significant progress on the engineering design of the dressed cavities and the two-cavity cryomodule conceptual design for tests at CERN’s Super Proton Synchrotron (SPS).
Full design studies, including thermal and mechanical analysis, were done for all three cavities, culminating in a major international design review where the three designs were assessed by a panel of independent leading superconducting RF experts. As an outcome of this review, the activity will focus the design effort for the SPS beam tests on the RFD and DQW cavities, with development of the four-rod cavity continuing at a lower priority and not foreseen for the SPS tests. A key milestone – to freeze the cavity designs and interfaces – has also been met. In addition, a detailed road map to follow the fabrication and installation in the SPS has been prepared to meet the deadline of the extended year-end technical stop of 2016–2017.
The wrap-up talk on the IR-collimation activity also reviewed the work of related non-EU-funded work packages, namely machine protection (WP7), energy deposition and absorber co-ordination (WP10), and beam transfer and kickers (WP14). The activity has reached several significant milestones, following the recommendations of the collimation-project external review, which took place in spring 2013. Highlights include important progress towards the finalization of the layouts for the IR collimation. A solid baseline solution has been proposed for the two most challenging cleaning requirements: proton losses around the betatron-cleaning insertion and losses from ion collisions. The solution is based on new collimators – the target collimator long dispersion suppressor, or TCLD – to be integrated into the cold dispersion suppressors. Thanks to the use of shorter 11 T dipoles that will replace the existing 15-m-long dipoles, there will be sufficient space for the installation of warm collimators between two cold magnets. This collimation solution is elegant and modular because it can be applied, in principle, at any “old” dipole location. As one of the most challenging and urgent upgrades for the high-luminosity era, solid baselines for the collimation upgrade in the dispersion suppressors around IR7 and IR2 were also defined. In addition, simulations have continued for advanced collimation layouts in the matching sections of IR1 and IR5, significantly improving the cleaning of collision “debris” downstream of the high-luminosity experiments.
The cold-powering activity has achieved a world-record current of 20 kA at 24 K in an electrical transmission line consisting of two 20-m-long MgB2 superconducting cables. Another achievement was the novel design of the part of the cold-powering system that transfers the current from room temperature to the superconducting link. Following further elaboration, this was adopted as the baseline. The idea is that high-temperature superconducting (HTS) current leads will be modular components, connected via a flexible HTS cable to a compact cryostat where the electrical joints between the HTS and MgB2 parts of the superconducting link are made. Simulation studies were also made to evaluate the electromagnetic and thermal behaviour of the MgB2 cables contained in the cold mass of the superconducting link, under static and transient conditions.
The final configuration has tens of high-current cables packed in a compact envelope to transfer a total current of about 150 kA feeding different magnet circuits. Cryogenic-flow schemes were also elaborated for the cold-powering systems at points 7, 1 and 5 on the LHC. An experimental study performed in the 20-m-long superconducting line at CERN was launched to understand quench propagation in the MgB2 superconducting cables operated in helium gas. In addition, integration studies of the cold-powering systems in the LHC were also done, with priority given to the system at point 7.
The meeting also covered updates on other topics such as machine protection, cryogenics, vacuum and beam instrumentation. Delicate arbitration took place between the needs of crab-cavity tests in the SPS at long straight section 4 and the requirements for the continuing study and tests of electron-cloud mitigation of those working on vacuum aspects (see Old machine to validate new technology below).
Summaries of the EU-funded work packages closed the meeting, showing “excellent technical progress thanks to the hard and smart work of many, including senior and junior”, as project leader Rossi concluded in his wrap-up talk.
Upcoming meetings will be the LARP/HiLumi LHC meeting on 11–13 May at Fermilab and the final FP7 HiLumi LHC/LARP collaboration meeting on 26–30 October at CERN. As a contribution to the UNESCO International Year of Light, special events celebrating this occasion will be organized by HL-LHC throughout the year – see cern.ch/go/light. (See also Viewpoint.)
Old machine to validate new technology
Crab cavities have never been tested on hadron beams. So for the recently selected HL-LHC crab cavities (RFD and DQW, see main text), tests in the SPS beam are considered to be crucial. The goals are to validate the cavities with beam in terms of, for example, electric field, ramping, RF controls and impedance, and to study other parameters such as cavity transparency, RF noise, emittance growth and nonlinearities.
Long straight section 4 (LSS4) of the SPS already has a cold section, which was set up for the cold-bore experiment (COLDEX). Originally designed to measure synchrotron-radiation-induced gas release, COLDEX has become a key tool for evaluating electron-cloud effects. It mimics the cold bore and beam screen of the LHC for electron-cloud studies. Installed in the bypass line of the beam pipe, COLDEX is assembled on a moving table so that beam can pass either through the experiment during machine-development runs or through the standard SPS beam pipe during normal operation. It has been running since the SPS started up again last year after the first long shutdown, providing key information on new materials and technology to reduce or suppress the severe electron-cloud effects that would otherwise be detrimental to LHC beams with 25 ns bunch spacing – as planned for Run 2.
Naturally, SPS LSS4 would be the right place to put the crab-cavity prototypes for the beam test. The goal was originally to install them during the extended year-end technical stop of 2016–2017, to validate the cavities in 2017, the year in which series construction must be launched. However, installing the cavities together with their powering and cryogenic infrastructure in an access time of 11–12 weeks is a real challenge. So at the meeting in Tsukuba, the idea of bringing forward part of the installation to 2015–2016 was discussed. However, in view of the severe electron-cloud effects that were computed in 2014 for LHC beam at high intensity, and the consequent need for a longer and deeper study to validate various solutions, COLDEX needs to run beyond 2015.
So what other options are there for testing the crab cavities? A preliminary look at possible locations for an additional cold section in the SPS led to LSS5. This would result in having two permanent “super” facilities to test equipment with proton beams. The hope is that these facilities would not only be available for testing crab cavities for the HL-LHC project, but would also provide a world facility for testing superconducting RF-accelerating structures in intense, high-energy proton beams. With the installation of adequate cryogenics and power infrastructure, a facility in LSS5 could further evolve and possibly also allow tests of beam damage and other beam effects for future superconducting magnets, for example for the Future Circular Collider study (CERN Courier April 2014 p16). This new idea raises many questions, but the experts are confident that these can be solved with suitable design and imagination.