The COMPASS experiment at CERN has made the first precise measurement of the polarizability of the pion – the lightest composite particle built from quarks. The result confirms the expectation from the low-energy expansion of QCD – the quantum field theory of the strong interaction between quarks – but is at variance with the previously published values, which overestimated the pion polarizability by more than a factor of two.
Every composite system made from charged particles can be polarized by an external electromagnetic field, which acts to separate positive and negative charges. The size of this charge separation – the induced dipole moment – is related to the external field by the polarizability. As a measure of the response of a complex system to an external force, polarizability is directly related to the system’s stiffness against deformation, and hence to the binding force between the constituents.
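As a reminder of the convention behind the numbers quoted below (a standard definition, not specific to COMPASS), the induced dipole moment grows linearly with the applied field, with the electric polarizability as the constant of proportionality; in the Gaussian units used for pion polarizabilities it has the dimensions of a volume:

\[
  \vec{d} \;=\; \bar{\alpha}_\pi \,\vec{E},
  \qquad
  [\,\bar{\alpha}_\pi\,] = \mathrm{fm}^3 .
\]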
The pion, made up of a quark and an antiquark, is the lightest object bound by the strong force, and has a size of about 0.6 × 10⁻¹⁵ m (0.6 fm). To observe a measurable effect, the particle must therefore be subjected to electric fields of the order of 100 kV across its diameter – that is, about 10¹⁸ V/cm. To achieve this, the COMPASS experiment made use of the electric field around nuclei. To high-energy pions, this field appears as a source of (almost) real photons, off which the incident pions scatter. Such pion–photon Compton scattering, also known as the Primakoff mechanism, was explored in the early 1980s in an experiment at Serpukhov, but the small data sample led to only an imprecise value for the polarizability of (6.8 ± 1.4 (stat.) ± 1.2 (syst.)) × 10⁻⁴ fm³, where the systematic uncertainty was presumably underestimated.
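As a back-of-the-envelope check (ours, not a statement from the COMPASS paper), the quoted field strength follows from dividing the potential difference by the pion’s diameter:

\[
  E \;\sim\; \frac{100\ \mathrm{kV}}{0.6\ \mathrm{fm}}
    \;=\; \frac{10^{5}\ \mathrm{V}}{0.6\times10^{-15}\ \mathrm{m}}
    \;\approx\; 1.7\times10^{20}\ \mathrm{V/m}
    \;\approx\; 2\times10^{18}\ \mathrm{V/cm}.
\]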
COMPASS has now performed a modern Primakoff experiment, using a 190 GeV pion beam from the Super Proton Synchrotron at CERN directed at a nickel target. Importantly, COMPASS was also able to use muons, which are point-like and hence non-deformable, to calibrate the experiment. The Compton π⁻γ → π⁻γ scattering is extracted from the reaction π⁻Ni → π⁻γNi by selecting events from the Coulomb peak at small momentum transfer. From the analysis of a sample of 63,000 events, the collaboration obtained a value for the pion electric polarizability of (2.0 ± 0.6 (stat.) ± 0.7 (syst.)) × 10⁻⁴ fm³ – that is, about 2 × 10⁻⁴ of the pion’s volume. This value is in good agreement with theoretical calculations in low-energy QCD, thereby resolving a long-standing discrepancy between these calculations and previous experimental determinations of the polarizability.
Although this measurement is the first to allow such a self-calibration, its precision does not yet match the quoted uncertainty of the theoretical calculations. With more data already recorded, the COMPASS collaboration expects to improve on this result by a significant factor in the near future, and thereby probe further a benchmark calculation of non-perturbative QCD.
On 20 December 2013, the UN General Assembly proclaimed 2015 as the International Year of Light and Light-based Technologies (IYL 2015). The aim is to raise awareness about how these technologies provide solutions to global challenges in energy, education, agriculture and health.
In its quest to “see” the fundamental structure of matter, high-energy particle physics goes beyond the wavelengths of light to the wavelengths of particle beams. Over the years, developments in the accelerators that create those beams have led to new ways of producing light that have a big impact on other disciplines.
To celebrate the IYL 2015, this issue of CERN Courier looks at how brilliant, accelerator-based X-ray free-electron lasers are enabling exciting new studies in biology (see XFELs in the study of biological structure). Meanwhile, as Lucio Rossi points out in Viewpoint, accelerators provide the finest form of “light”, and experiments can now “see” down to distances as small as 10⁻²⁰ m (see Viewpoint). The High-Luminosity LHC project (see A luminous future for the LHC) will allow CERN’s collider to cast still more of this fine light on matter. Finally, Inside Story (see Inside story) looks at how light and particle physics came together in the life of one physicist.
On 12 January, after 23 months of hard work involving around 1000 people each day, the key to the LHC was symbolically handed back to the operations team. The team will now perform tests on the machine in preparation for the restart this spring.
Tests include training the LHC’s superconducting dipole magnets to the current level needed for 6.5 TeV beam energy. The main dipole circuit of a given sector is ramped up until a quench of a single dipole occurs. The quench-protection system then swings into action, energy is extracted from the circuit, and the current is ramped down. After careful analysis, the exercise is repeated. On the next ramp, the magnet that quenched should hold the current (i.e. is trained), while at a higher current another of the 154 dipoles in the circuit quenches. For 2015, the target current is 11,080 A for operation at 6.5 TeV (with some margin). Sector 6-7 was brought to this level successfully at the end of 2014, having taken 20 training quenches to get there. Getting all eight sectors to this level will be an important milestone.
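The training campaign is essentially an iterative loop, and can be caricatured in a few lines of code. The sketch below is purely illustrative: the initial quench thresholds and the per-quench gain are invented numbers, not LHC parameters.

import random

TARGET_A = 11_080      # 2015 target current for 6.5 TeV operation
N_DIPOLES = 154        # dipoles in one sector's main circuit

# Hypothetical starting thresholds (A) and an assumed, fixed improvement
# of a magnet's threshold each time it quenches and "learns".
thresholds = [random.uniform(10_900, 11_150) for _ in range(N_DIPOLES)]
TRAINING_GAIN_A = 50

quenches = 0
while min(thresholds) < TARGET_A:
    # Ramp the whole circuit: the weakest dipole quenches first, the
    # quench-protection system extracts the stored energy, the current
    # is ramped down, and after analysis the exercise is repeated.
    weakest = min(range(N_DIPOLES), key=thresholds.__getitem__)
    thresholds[weakest] += TRAINING_GAIN_A   # that magnet is now trained
    quenches += 1

print(f"Target of {TARGET_A} A reached after {quenches} training quenches")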
The next big step is the first sector test, in which beam would enter the LHC for the first time since February 2013. The aim is to send single bunches from the Super Proton Synchrotron into the LHC through the injection regions at points 2 and 8 for a single pass through the available downstream sectors. This will allow testing of synchronization, the injection system, beam instrumentation, magnet settings, machine aperture and the beam dump.
A full circuit of the machine with beam and the start of beam commissioning are foreseen for March. It should then take about two months to re-commission the operational cycle, commission the beam-based systems (transverse feedback, RF, injection, beam dump system, beam instrumentation, power converters, orbit and tune feedbacks, etc) and commission and test the machine-protection system to re-establish the high level of protection required. This will open the way for the first collisions of stable beams at 6.5 TeV – foreseen currently for May – initially with a low number of bunches.
On 26 January, the CMS collaboration installed its new Pixel Luminosity Telescope (PLT). Designed with LHC Run 2 in mind, the PLT uses radiation-hard CMS pixel sensors to provide near-instantaneous readings of the per-bunch luminosity – thereby helping LHC operators to deliver the maximum useful luminosity to CMS. The PLT comprises two arrays of eight small-angle telescopes situated on either side of the CMS interaction point. Each telescope hovers only 1 cm from the CMS beam pipe, where it uses three planes of pixel sensors to make its own independent measurement of the luminosity.
The discovery of high-energy astrophysical neutrinos initially announced by IceCube in 2013 provided an added boost to the planning for new, larger facilities that could study the signal in detail and identify its origins. Three large projects – KM3NeT in the Mediterranean Sea, IceCube-Gen2 at the South Pole and the Gigaton Volume Detector (GVD) in Lake Baikal – are already working together in the framework of the Global Neutrino Network (CERN Courier December 2014 p11).
In December, RWTH Aachen University hosted a workshop on these projects and their low-energy sub-detectors, ORCA and PINGU, which aim to determine the neutrino-mass hierarchy through precision measurements of atmospheric-neutrino oscillations. Some 80 participants from 11 countries came to discuss visionary strategies for detector optimization and technological aspects common to the high-energy neutrino telescopes.
Photodetection techniques, as well as trigger and readout strategies, formed one particular focus. All of the detectors are based on optical modules consisting of photomultiplier tubes (PMTs) housed in a pressure-resistant glass vessel together with their digitization and read-out electronics. Representatives of the experiments shared their experiences of the development, in situ performance and mass production of the different designs. While the baseline design for IceCube-Gen2 follows the proven IceCube modules closely, KM3NeT has successfully deployed and operated prototypes of a new design consisting of 31 three-inch PMTs housed in a single glass sphere, which offers superior timing and intrinsic directional information. Adaptation of this technology for IceCube is under investigation.
New and innovative designs for optical modules were also reviewed, for example a large-area sensor employing wavelength-shifting and light-guiding techniques to collect photons in the blue and UV range and guide them to a small-diameter low-noise PMT. Presentations from Hamamatsu Photonics and Nautilus Marine Service on the latest developments in photosensors and glass housings, respectively, complemented the other talks nicely.
In addition, discussions centred on auxiliary science projects that can be carried out at the planned infrastructures. These can serve as a test bed for completely new detection technologies, such as acoustic neutrino detection, which is possible in water and ice, or radio neutrino detection, which is limited to ice as the target medium. Furthermore, IceCube-Gen2 at the South Pole offers the unique possibility of installing detectors on the surface above the telescope deep in the ice, with the latter acting as a detector for high-energy muons from cosmic-ray-induced extensive air showers. Indeed, the interest in cosmic-ray detectors on top of an extended IceCube telescope reaches beyond the communities of the three big projects.
The second focus of the workshop addressed the physics potential of cosmic-ray detection on the multi-kilometre scale, and especially the use of a surface array as an air-shower veto for the detection of astrophysical neutrinos from the southern sky at the South Pole. The rationale for surface-veto techniques is that the main background to extraterrestrial neutrinos from the upper hemisphere consists of muons and neutrinos produced in the Earth’s atmosphere. These particles are correlated with extensive air showers, which can be tagged by a surface array. While upward-moving neutrinos have to traverse the entire Earth and are absorbed above energies of some 100 TeV, downward-moving neutrinos do not suffer from absorption. A surface veto is therefore especially powerful for catching larger numbers of cosmic neutrinos at the very highest energies.
The capabilities of these surface extensions together with deep-ice components will be evaluated in the near future. Presentations at the workshop on various detection techniques – such as charged-particle detectors, imaging air-Cherenkov telescopes and Cherenkov timing arrays – allowed detailed comparisons of their capabilities. Parameters of interest are the duty cycle, the energy threshold and the cost of construction and installation. The development of different detectors for applications in harsh environments is already under way, and the first prototypes are scheduled to be tested in 2015.
• The Detector Design and Technology for Next Generation Neutrino Observatories workshop was supported by the Helmholtz Alliance for Astroparticle Physics (HAP), RWTH Aachen University, and Hamamatsu Photonics. For more information, visit hap2014.physik.rwth-aachen.de.
Quarkonia – charm or beauty quark/antiquark bound states – are prototypes of elementary systems governed by the strong force. Owing to the large masses and small velocities of the quarks, their mutual interaction becomes simpler to describe, thereby opening unique insights into the mechanism of strong interactions. For decades, research in the area of quarkonium production in hadron collisions has been hampered by anomalies and puzzles in theoretical calculations and experimental results, so that, until recently, the studies were stuck at a validation phase. Now, new CMS data are enabling a breakthrough by accomplishing cross-section measurements for quarkonium production that reach unprecedentedly high values of transverse momentum (pT).
The latest and most persistent “quarkonium puzzle”, lasting for more than 10 years, was the apparent impossibility for theory to reproduce simultaneously the quarkonium yields and polarizations observed in hadronic interactions. Polarization is particularly sensitive to the mechanism of quark–antiquark (qq̄) bound-state formation, because it reveals the quantum properties of the pre-resonance qq̄ pair. For example, if a ³S₁ bound state (J/ψ or Υ) is measured to be unpolarized (isotropic decay distribution), the straightforward interpretation is that it evolved from an initial coloured ¹S₀ qq̄ configuration. To extract this information from differential cross-section measurements requires an additional layer of interpretation, based on perturbative calculations of the pre-resonance qq̄ kinematics in the laboratory reference frame. The fragility of this additional step would reveal itself, a posteriori, as the cause of the puzzle.
In recent years, CMS provided the first unambiguous evidence that the decays of ³S₁ bottomonia (Υ(1S, 2S, 3S)) and charmonia (J/ψ, ψ(2S)) are always approximately isotropic (CMS Collaboration 2013): the pre-resonance qq̄ is a ¹S₀ state neutralizing its colour into the final ³S₁ bound state. This contradicted the idea that quarkonium states are produced mainly from a transversely polarized gluon (coloured ³S₁ pre-resonance), as deduced traditionally from cross-section measurements. After having exposed the polarization problem with high-precision measurements, CMS is now providing the key to its clarification.
The new cross-section measurements allow a theory/data comparison at large values of the ratio pT/mass, where perturbative calculations are more reliable. First attempts to do so, not yet exploiting the exceptional high-pT reach of the newest data, were revealing. With theory calculations restricted to their region of validity, the cross-section measurements are actually found to agree with the polarization data, indicating that bound-state formation through a coloured ¹S₀ pre-resonance is dominant (G Bodwin et al. 2014, K-T Chao et al. 2012, P Faccioli et al. 2014).
With this decades-long puzzle heading towards a solution, what of the fundamental question: how do quarks and antiquarks interact to form bound states? Future analyses will disclose the complete hierarchy of transitions from pre-resonances with different quantum properties to the family of observed bound states, providing a set of “Kepler” laws for the long-distance interactions between quark and antiquark.
New results from the ALICE collaboration are providing additional data to test ideas about how particles are produced out of the quark–gluon plasma (QGP) created in heavy-ion collisions at the LHC.
Experiments at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) observed an enhancement in pT-dependent baryon/meson ratios – specifically the p/π and Λ/K⁰S ratios – for central nucleus–nucleus (AA) collisions in comparison with proton–proton (pp) collisions, where particle production is assumed to be dominated by parton fragmentation. In addition, constituent-quark scaling was observed in the elliptic-flow parameter, v2, measured in AA collisions. To interpret these observations, the coalescence of quarks was suggested as an additional particle-production mechanism. The coalescence (or recombination) model postulates that three quarks must come together to form a baryon, while a quark and an antiquark must coalesce to form a meson. The pT and the v2 of the particle created are then the sums of the respective values of its constituent quarks. Coalescence models therefore generally predict differences between the pT spectra of baryons and mesons, predominantly in the range 2 < pT < 5 GeV/c, where the enhancement in the baryon/meson ratio has been measured.
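In its simplest form, this additive picture is what produces the constituent-quark scaling mentioned above: a hadron built from nq quarks inherits the summed momentum and flow of its constituents,

\[
  p_T^{h} \simeq n_q\, p_T^{q},
  \qquad
  v_2^{h}(p_T) \simeq n_q\, v_2^{q}\!\left(p_T/n_q\right),
\]

with nq = 2 for mesons and nq = 3 for baryons.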
While a similar enhancement in the p/π and Λ/K⁰S ratios is observed at the LHC, the mass scaling of v2 is not, calling into question the importance of the coalescence mechanism. The observed particle pT spectra reflect the dynamics of the expanding QGP created in local thermal equilibrium, which confers on the final-state particles a common radial velocity independent of their mass, but a different momentum (hydrodynamic flow). The resulting blue shift in the pT spectrum therefore scales with particle mass, and is observed as a rise in the p/π and Λ/K⁰S ratios at low pT (see figure). In such a hydrodynamic description, particles with the same mass have pT spectra with similar shapes, independent of their quark content. The particular shape of the baryon/meson ratio observed in AA collisions therefore reflects the relative importance of hydrodynamic flow, parton fragmentation and quark coalescence. However, for the p/π and Λ/K⁰S ratios, the particles in the numerator and denominator differ in both mass and (anti)quark content, so coalescence and hydrodynamic effects cannot be disentangled. To test the role of coalescence further, it is instructive to conduct this study using a baryon and a meson that have similar masses.
Fortunately, nature provides two such particles: the proton, a baryon with mass 938 MeV/c2, and the φ meson, which has a mass of 1019 MeV/c2. If protons and φ mesons are produced predominantly through coalescence, their pT spectra will have different shapes. Hydrodynamic models alone would predict pT spectra with similar shapes owing to the small mass-difference (less than 9%), implying a p/φ ratio that is constant with pT.
For peripheral lead–lead collisions, where the small volume of the quark–gluon plasma reduces the influence of collective hydrodynamic motion on the pT spectra, the p/φ ratio has a strong dependence on pT, similar to that observed for pp collisions. In contrast, as the figure shows, in central lead–lead collisions – where the volume of the QGP produced is largest – the p/φ ratio has a very different pT dependence, and is constant within its uncertainties for pT < 4 GeV/c. The data therefore indicate that hydrodynamics is the leading contribution to particle pT spectra in central lead–lead collisions at LHC energies, and it does not seem necessary to invoke coalescence models.
In the coming year, the ALICE collaboration will measure a larger number of collisions at a higher energy. This will allow a more precise study of both the pT spectra and elliptic-flow parameters of the proton and φ meson, and will allow tighter constraints to be placed on theoretical models of particle production in heavy-ion collisions.
There is evidence for dark matter from many astronomical observations, yet so far, dark matter has not been seen in particle-physics experiments, and there is no evidence for non-gravitational interactions between dark matter and Standard Model particles. If such interactions exist, dark-matter particles could be produced in proton–proton collisions at the LHC. The dark matter would travel unseen through the ATLAS detector, but often one or more Standard Model particles would accompany it, either produced by the dark-matter interaction or radiated from the colliding partons. Observed particles with a large imbalance of momentum in the transverse plane of the detector could therefore signal the production of dark matter.
Because radiation from the colliding partons most often takes the form of a jet, the “monojet” search is a powerful search for dark matter. The ATLAS collaboration now has a new result in this channel and, while it shows no evidence for dark-matter production at the LHC, it sets significantly improved limits on the possible rates of a variety of interactions. The reach of this analysis depends strongly on a precise determination of the background from Z bosons decaying to neutrinos at large boson transverse momentum. By deriving this background from data samples of W and Z bosons decaying to charged leptons, the analysis achieves a total background uncertainty of 3–14%, depending on the transverse momentum.
To compare with non-collider searches for weakly interacting massive particle (WIMP) dark matter, the limits from this analysis have been translated via an effective field theory into upper limits on WIMP–nucleon scattering or on WIMP annihilation cross-sections. When the WIMP mass is much smaller than several hundred giga-electron-volts – the kinematic and trigger thresholds used in the analysis – the collider results are approximately independent of the WIMP mass. Therefore, the results play an important role in constraining light dark matter for several types of spin-independent scattering interactions (see figure). Moreover, collider results are insensitive to the Lorentz structure of the interaction. The results shown on spin-dependent interactions are comparable to the spin-independent results and significantly stronger than those of other types of experiments.
The effective theory is a useful and general way to relate collider results to other dark-matter experiments, but it cannot always be employed safely. One advantage of the searches at the LHC is that partons can collide with enough energy to resolve the mediating interaction directly, opening complementary ways to study it. In this situation, the effective theory breaks down, and simplified models specifying an explicit mediating particle are more appropriate.
The new ATLAS monojet result is sensitive to dark-matter production rates where both effective theory and simplified-model viewpoints are worthwhile. In general, for large couplings of the mediating particles to dark matter and quarks, the mediators are heavy enough to employ the effective theory, whereas for couplings of order unity the mediating particles are too light and the effective theory is an incomplete description of the interaction. The figures use two types of dashed lines to depict the separate ATLAS limits calculated for these two cases. In both, the calculation removes the portion of the signal cross-section that depends on the internal structure of the mediator, recovering a well-defined and general but conservative limit from the effective theory. In addition, the new result presents constraints on dark-matter production within one possible simplified model, where the mediator of the interaction is a Z’-like boson.
While the monojet analysis is generally the most powerful search when the accompanying Standard Model particle is radiated from the colliding partons, ATLAS has also employed other Standard Model particles in similar searches. These are especially important when the particles arise from the dark-matter interaction itself. Taken together, these analyses constitute a broad and robust ATLAS programme of dark-matter searches that will continue to grow with the upcoming data-taking.
On 10 January, the ground-breaking ceremony for the Jiangmen Underground Neutrino Observatory (JUNO) took place in Jiangmen City, Guangdong Province, China. More than 300 scientists and officials from China and other countries attended and witnessed this historic moment.
JUNO is the second China-based neutrino project, following the Daya Bay Reactor experiment, and is designed to determine the neutrino mass-hierarchy via precision measurements of the reactor-neutrino energy spectrum. The experiment is scheduled to start data-taking in 2020 and is expected to operate for at least 20 years. The neutrino detector, which is the experiment’s core component, will be the world’s largest and highest-precision liquid scintillator detector.
After the determination of the θ₁₃ mixing angle by Daya Bay and other experiments, the next challenge for the international neutrino community is to determine the neutrino-mass hierarchy. Sensitivity analysis shows that the preferred distance between the experiment and the reactors is 50–55 km. Jinji Town, the detector site chosen for the JUNO experiment, is 53 km from both the Yangjiang and Taishan Nuclear Power Plants, which will provide a total thermal power of 35.8 GW. By 2020, this will be the highest effective power in the world.
The JUNO international collaboration, established on 28 July 2014, already consists of more than 300 members from 45 institutions in nine countries and regions, and more than 10 institutions from five countries are planning to join.
The High Energy Stereoscopic System (HESS) has discovered three extremely luminous gamma-ray sources in the Large Magellanic Cloud (LMC) – a dwarf galaxy orbiting the Milky Way about 170,000 light-years away. The three objects are all exceptional. They comprise the most powerful supernova remnant and pulsar-wind nebula, as well as a superbubble – a new class of source in very high-energy (VHE) gamma rays.
The HESS array of telescopes, located in Namibia, observes flashes of Cherenkov light emitted by particle showers triggered by incident gamma rays in the upper atmosphere (CERN Courier January/February 2005 p30). This technique is sensitive to gamma rays at energies of tera-electron-volts – photons typically a thousand times more energetic than those observed by the Fermi Gamma-ray Space Telescope in the giga-electron-volt range (CERN Courier November 2008 p13). These high-energy photons are emitted by extremely energetic particles interacting with matter or radiation. They are therefore the best tracers of cosmic accelerators such as supernova remnants and pulsar wind nebulas – two different types of remains from the evolution of massive stars.
Resolving individual sources in a galaxy outside the Milky Way is a breakthrough for Cherenkov-telescope astronomy. HESS performed a deep observation of the largest star-forming region within the LMC, known as the Tarantula Nebula (Picture of the month, CERN Courier June 2012 p12). The 210 hours of observation yielded the discovery of the three extremely energetic objects.
One of the new sources is the superbubble 30 Dor C. It is the first time that a superbubble has been detected in the VHE regime, demonstrating that such objects are a source of highly energetic particles. With a diameter of 270 light-years, 30 Dor C is the largest known X-ray-emitting shell, and appears to have been blown by several supernovas and strong stellar winds from massive stars. The detection by HESS is important because it shows that superbubbles are viable sources of galactic cosmic rays, complementary to individual supernova remnants (CERN Courier April 2013 p12).
Another source detected by HESS is the pulsar-wind nebula N 157B. This kind of nebula is formed by the wind of ultra-relativistic particles blown by a pulsar – a highly magnetized, rapidly spinning neutron star. The most famous is the Crab Nebula, one of the brightest sources in the gamma-ray sky (CERN Courier November 2008 p11). N 157B is similar, but outshines the Crab Nebula by an order of magnitude in VHE gamma rays, owing to a lower magnetic field and a stronger radiation field from neighbouring star-forming regions.
The third object is the supernova remnant N 132D, which is already known as a bright object in the radio and infrared wavebands. Although it is between 2500 and 6000 years old, it still outshines the strongest supernova remnants in the Galaxy in the VHE regime. Surprisingly, the remnant of the bright supernova SN 1987A – which exploded in the LMC 28 years ago – was not detected by HESS, in contrast to theoretical predictions. The current study published in Science shows the LMC to be a prime target for even deeper observations with the new HESS II 28-m telescope and the future Cherenkov Telescope Array (CERN Courier July/August 2012 p28).
The Linac Coherent Light Source (LCLS) at SLAC produced its first laser-like X-ray pulses in April 2009. The unique and potentially transformative characteristics of the LCLS beam – in particular, the short femtosecond pulse lengths and the large numbers of photons per pulse (see The LCLS XFEL below) – have created whole new fields, especially in the study of biological materials. X-ray diffraction on nanocrystals, for example, reveals 3D structures at atomic resolution, and allows pump-probe analysis of functional changes in the crystallized molecules. New modalities of X-ray solution scattering include wide-angle scattering, which provides detailed pictures from pump-probe experiments, and fluctuational solution scattering, where the X-ray pulse freezes the rotation of the molecules in the beam, resulting in a rich, 2D scattering pattern. Even the determination of the structure of single particles is possible. This article focuses on examples from crystallography and time-resolved solution scattering.
An important example from crystallography concerns the structure of protein molecules. As a reminder, protein molecules, which are encoded in our genes, are linear polymers of the 20 naturally occurring amino-acid monomers. Proteins contain hundreds or thousands of amino acids and carry out most functions within cells or organs. They catalyse chemical reactions; act as motors in a variety of contexts; control the flow of substances into and out of cells; and mediate signalling processes. Knowledge of their atomic structures lies at the heart of mechanistic understanding in modern biology.
Serial femtosecond crystallography (SFX) provides a method for studying the structure of proteins. In SFX, still X-ray photographs are obtained from a stream of nanocrystals, each crystal being illuminated by a single pulse of a few femtoseconds’ duration. At the LCLS, the 10¹² photons per pulse can produce observable diffraction from a protein crystal of much less than 1 μm³. Critically, a 10 fs pulse will scatter from a specimen before radiation damage takes place, thereby eliminating such damage as an experimental issue. Figure 1 shows a typical SFX set-up for crystals of membrane proteins. The X-ray beam, in yellow, illuminates a stream of crystals, shown in the inset, carried in a thin stream of highly viscous lipidic cubic phase (LCP). The high-pressure system that creates the jet is on the left. The rate of LCP flow is well matched to the 120 Hz arrival rate of the X-ray pulses, so little material is wasted between shots. In the ideal case, each X-ray pulse scatters from a single crystal in the LCP flow. For soluble proteins, a jet of aqueous buffer replaces the LCP.
The angiotensin II receptor type 1 (AT1R) is found at the surface of vascular cells and serves as the principal regulator of blood pressure (figure 3). Although several AT1R blockers (ARBs) have been developed as anti-hypertensive drugs, structural knowledge of how they bind to AT1R has been lacking, owing mainly to the difficulty of growing high-quality crystals for structure determination. Using SFX at the LCLS, Vadim Cherezov and colleagues have successfully determined the room-temperature crystal structure of human AT1R in complex with its selective receptor-blocker ZD7155 at 2.9 Å resolution (Zhang et al. 2015). The structure of the AT1R–ZD7155 complex reveals key features of AT1R and the critical interactions for ZD7155 binding. Docking simulations, which predict the binding orientation of clinically used ARBs onto the AT1R structure, further elucidated both the common and distinct binding modes of these anti-hypertensive drugs. The results have provided fundamental insights into the AT1R structure–function relationship and into structure-based drug design.
In solution scattering, an X-ray beam illuminates a volume of solution containing a large number of the particles of interest, creating a diffraction pattern. Because the experiment averages across many rotating molecules, the observed pattern is circularly symmetric and can be encapsulated by a radial intensity curve, I(q), where q = 4πsinθ/λ and 2θ is the scattering angle. The data are therefore essentially one-dimensional (figure 4b). The I(q) curves are quite smooth and can be well described by a modest number of parameters. They have traditionally been analysed to yield a few important physical characteristics of the scattering particle, such as its molecular mass and radius of gyration. Synchrotrons have enabled new classes of solution-scattering experiments, and the advent of XFEL sources is already providing further expansion of the methodology.
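As an illustration of that traditional analysis, the radius of gyration Rg follows from a Guinier fit to the small-q region, where ln I(q) ≈ ln I(0) − q²Rg²/3. The following sketch is a minimal example using synthetic data in place of a measurement; all names and numbers are illustrative only.

import numpy as np

RG_TRUE = 2.0                              # nm, assumed particle size
q = np.linspace(0.05, 0.5, 50)             # nm^-1, small-angle region
I = 1e6 * np.exp(-(q * RG_TRUE)**2 / 3)    # ideal Guinier-regime curve

# Linear fit of ln I versus q^2, restricted to q*Rg < ~1.3,
# the region where the Guinier approximation is considered valid.
mask = q * RG_TRUE < 1.3
slope, intercept = np.polyfit(q[mask]**2, np.log(I[mask]), 1)

rg_fit = np.sqrt(-3 * slope)               # recover Rg from the slope
i0_fit = np.exp(intercept)                 # forward-scattering intensity I(0)
print(f"fitted Rg = {rg_fit:.2f} nm, I(0) = {i0_fit:.3g}")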
Chasing the protein quake
An elegant example of time-resolved wide-angle scattering (WAXS) at the LCLS comes from a group led by Richard Neutze at the University of Gothenburg (Arnlund et al. 2014), which used multi-photon absorption to trigger an extremely rapid structural perturbation in the photosynthetic reaction centre from Blastochloris viridis, a purple non-sulphur bacterium. The group measured the progress of this perturbation using time-resolved WAXS. Appearing with a time constant of a few picoseconds, the perturbation falls away with a 10 ps time constant and, importantly, precedes the propagation of heat through the protein.
The photosynthetic reaction centre faces unique problems of energy management. The energy of a single photon of green light is approximately equal to the activation energy for unfolding of the protein molecule. In the photosynthetic complex, photons are absorbed by light-harvesting antennae and then rapidly funnelled to the reaction centre through specialized channels. The hypothesis is that excess energy, which may also be deposited in the protein, is dissipated before damage can be done, by a process named a “protein quake” – a nanoscale analogue of waves spreading away from the epicentre of an earthquake.
The experiments performed at the coherent X-ray imaging (CXI) station at the LCLS used micro-jet injection of solubilized protein samples. An 800 nm laser pulse of 500 fs duration illuminating the sample was calibrated so that a heating signal could be observed in the difference between the WAXS spectra with and without the laser illumination (figure 5a). The XFEL was operated to produce 40 fs pulses at 120 Hz, and illuminated and dark samples were interleaved, each at 60 Hz. The team calibrated the delay time between the laser and XFEL pulses to within 5 ps, and collected scattering patterns across a series of 41 time delays to a maximum of 100 ps. Figure 5b shows the curves indicating the difference in scattering between activated and dark molecules that were generated at each time point.
The results from this study rely on knowing the equilibrium molecular structure of the complex. Molecular-dynamics (MD) simulations and modelling play a key role in interpreting the data and developing an understanding of the “quake”. A combination of MD simulations of heat deposition and flow in the molecule and spectral decomposition of the time-resolved difference scattering curves provides a strong basis for a detailed understanding of energy propagation in the system. Because the light pulse was tuned to the frequency of the photosystem’s antennae, cofactors (molecules within the photosynthetic complex) were heated almost instantaneously to a few thousand kelvin, before decaying with a half-life of about 7 ps through heat flow to the remainder of the protein. In addition, principal-component analysis revealed oscillations in the range q = 0.2–0.9 nm⁻¹, corresponding to a crystallographic resolution of 31–7 nm, which are signatures of structural changes in the protein. The higher-angle scattering – corresponding to the heat motion – extends to a resolution of a few ångströms, with a time resolution extending down to a picosecond. This study not only illustrates the rapid evolution of the technology and experimental prowess of the field, but also brings them to bear on a problem that makes clear the biological relevance of extremely rapid dynamics.
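The conversion between momentum transfer and the real-space resolution quoted above is the standard relation d = 2π/q, so that

\[
  d = \frac{2\pi}{q}:
  \qquad
  q = 0.2\ \mathrm{nm}^{-1} \;\rightarrow\; d \approx 31\ \mathrm{nm},
  \qquad
  q = 0.9\ \mathrm{nm}^{-1} \;\rightarrow\; d \approx 7\ \mathrm{nm}.
\]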
Effective single-particle imaging (SPI) would eliminate the need for crystallization, and would open new horizons in structure determination. It is an arena in which electron microscopy is making great strides, and where XFELs face great challenges. Simulations have demonstrated the real possibility of recovering structures from many thousands of weak X-ray snapshots of molecules in random orientations. However, it has become clear, as actual experiments are carried out, that there are profound difficulties in collecting high-resolution data – at present, the best resolution in 2D snapshot images is about 20 nm. A recent workshop on single-particle imaging at SLAC identified a number of sources of artefacts, including complex detector nonlinearities, scattering from apertures, scattering from solvent, and shot-to-shot variation in beam intensity and position. In addition, the current capability to reliably hit a single molecule with a pulse is quite limited. Serious technical progress at XFEL beamlines will be necessary before the promise of SPI at XFELs is realized fully.
Currently, the only operational XFEL facilities are the SPring-8 Angstrom Compact free-electron LAser (SACLA) at RIKEN in Japan (CERN Courier July/August 2011 p9) and the LCLS in the US, so competition for beam time is intense. Within the next few years, the worldwide capacity to carry out XFEL experiments will increase dramatically. In 2017, the European XFEL will come online in Hamburg, providing a pulse rate of 27 kHz, compared with the 120 Hz rate at the LCLS. At about the same time, facilities at the Paul Scherrer Institute in Switzerland and at the Pohang Accelerator Laboratory in South Korea will produce first light. In addition, the technologies for performing and analysing experiments are improving rapidly. It seems more than fair to anticipate rapid growth in crystallography, molecular movies and other exciting experimental methods.
The LCLS XFEL
Hard X-ray free-electron lasers (XFELs) are derived from the undulator platform commonly used in synchrotron X-ray sources around the world. In the figure, (a) shows the undulator lattice, which comprises a series of alternating pairs of magnetic north and south poles defining a gap through which electron bunches travel. The undulator at the LCLS is 60 m long, compared with about 3 m for a synchrotron device. The bunches experience an alternating force normal to the magnetic field in the gap, transforming their linear path into a low-amplitude cosine trajectory.
In the reference frame of the electron bunch, the radiation that each electron emits has a wavelength equal to the spacing of the undulator magnets (a few centimetres) divided by the square of the relativistic factor γ = E/(mₑc²) (see below). Each electron interacts both with the radiation emitted by the electrons preceding it in the bunch, and with the magnetic field within the undulator. Initially, the N electrons in the bunch have random phases (see figure, (b)), so that the radiated power is proportional to N.
As the bunch advances through the undulator, it breaks up into a series of microbunches of electrons separated by the wavelength of the emitted radiation. Without going into detail, this microbunching arises from a Lorentz force on each electron in the direction of propagation, generated by the interaction of the undulator field and the (small) component of the electron velocity perpendicular to the direction of propagation. This force tends to push the electrons towards positions at the peaks of the emitted radiation. All electrons within a single microbunch radiate coherently, and the radiation from one microbunch is also coherent with that from the next, being separated by a single wavelength. Therefore, the power in the radiated field is proportional to N².
The process of microbunching can be viewed as a resonance process, for which the following undulator equation describes the conditions for operation at wavelength λ.
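The equation itself did not survive into this version of the text; in its standard form (presumably what the original figure showed), it reads

\[
  \lambda \;=\; \frac{\lambda_u}{2\gamma^{2}}
    \left(1 + \frac{K^{2}}{2} + \gamma^{2}\theta^{2}\right),
  \qquad
  K = \frac{e B_0 \lambda_u}{2\pi m_e c},
\]

where λu is the undulator period, B0 the peak magnetic field in the gap, θ the observation angle relative to the undulator axis and K the dimensionless undulator parameter. On axis (θ = 0), an undulator period of a few centimetres and γ of order 10⁴ – that is, electron energies of several giga-electron-volts – give hard X-ray wavelengths of the order of 1 Å.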
The tables, above, show typical operating conditions for the CXI beamline at the LCLS. The values represent only a small subset of possible operating conditions. Note the small source size, the short pulse duration and the large number of photons per pulse.