Hitomi probes turbulence in galaxy cluster

With its very first observation, Japan’s Hitomi X-ray satellite has discovered that the gas in the Perseus cluster of galaxies is much less turbulent than expected. The unprecedented measurement opens the way towards a better determination of the mass of galaxy clusters, which has important cosmological implications.

Hitomi, which translates to “pupil of the eye”, is an X-ray observatory built and operated by the Japanese space agency (JAXA) in collaboration with more than 60 institutes and 200 scientists and engineers from Japan, the US, Canada and Europe. Launched on 17 February this year, Hitomi functioned for just over a month before operators lost contact on 26 March, when the spacecraft started to spin very rapidly, leading to its partial disintegration. It was a tragic end to a very promising mission that would have used a micro-calorimeter to achieve unprecedented spectral resolution in X-rays. Cooled down to 0.05 K, the soft X-ray spectrometer (SXS) was designed to record the precise energy of each incoming X-ray photon.

The cluster gas has very little turbulent motion

Hitomi targeted the Perseus cluster just a week after reaching orbit, aiming to measure the turbulence in the cluster to a precision of 10 km s⁻¹, compared with the upper limit of 500 km s⁻¹ set by XMM-Newton. The SXS micro-calorimeter met expectations and measured a line-of-sight velocity dispersion of only 164±10 km s⁻¹. This low value came as a surprise to the Hitomi collaboration, especially because the highly energetic active galaxy NGC 1275 lies at the core of the cluster. It indicates that the cluster gas has very little turbulent motion, the turbulent pressure amounting to only four per cent of the thermal pressure of the hot intra-cluster gas. This is extraordinary, considering that NGC 1275 is pumping jetted energy into its surroundings to create bubbles of extremely hot gas.

Previously, it was thought that these bubbles induce turbulence, which keeps the central gas hot, but researchers now have to think of other ways to heat the gas. One possibility is sound waves, which would allow energy to be spread into the medium without global movement of the gas. The precise determination of the turbulence in the Perseus cluster allows a better determination of its mass, which depends on the ratio of turbulent to quiescent gas. Generalising the result of an almost negligible contribution of turbulent pressure in the central core of galaxy clusters impacts not just cluster physics but also cosmological simulations.

The impressive results of Hitomi only reinforce astronomers’ sense of loss. As this and several missions have shown, equipping an X-ray satellite with a micro-calorimeter is a daunting challenge. NASA’s Chandra X-ray Observatory, which launched in 1999, dropped the idea due to budget constraints. JAXA took over the calorimeter challenge on its ASTRO-E spacecraft, but the probe was destroyed in 2000 shortly after rocket lift-off. This was followed by the Suzaku satellite, launched in 2005, in which a leak in the cooling system destroyed the calorimeter. This series of failures is especially dramatic for the scientists and engineers developing such high-precision instruments over two decades – especially in the case of Hitomi, for which the SXS instrument worked perfectly until the loss of the satellite due to problems with the attitude control. Researchers may now have to wait more than a decade to use a micro-calorimeter in space, until ESA’s Athena mission, which is tentatively scheduled for launch in the late 2020s.

CMS gears up for the LHC data deluge

ATLAS and CMS, the large general-purpose experiments at CERN’s Large Hadron Collider (LHC), produce enormous data sets. Bunches of protons circulating in opposite directions around the LHC pile into each other every 25 nanoseconds, flooding the detectors with particle debris. Recording every collision would produce data at an unmanageable rate of around 50 terabytes per second. To reduce this volume for offline storage and processing, the experiments use an online filtering system called a trigger. The trigger system must remove the data from 99.998% of all LHC bunch crossings but keep the tiny fraction of interesting data that drives the experiment’s scientific mission. The decisions made in the trigger, which ultimately dictate the physics reach of the experiment, must be made in real time and are irrevocable.

The trigger system of the CMS experiment has two levels. The first, Level-1, is built from custom electronics in the CMS underground cavern, and reduces the rate of selected bunch crossings from 40 MHz to less than 100 kHz. There is a period of only four microseconds during which a decision must be reached, because data cannot be held within the on-detector memory buffers for longer than this. The second level, called the High Level Trigger (HLT), is software-based. Approximately 20,000 commercial CPU cores, housed in a building on the surface above the CMS cavern, run software that further reduces the crossing rate to an average of about 1 kHz. This is low enough to transfer the remaining data to the CERN Data Centre for permanent storage.
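
As a rough cross-check of these numbers, the sketch below works through the arithmetic in Python; the per-crossing event size is inferred from the quoted 50 TB/s figure and is illustrative, not an official CMS specification.

```python
# Back-of-the-envelope check of the trigger reduction factors quoted above.
# The per-crossing event size is inferred from the 50 TB/s figure (illustrative only).

bunch_crossing_rate = 40e6   # Hz: LHC bunch crossings every 25 ns
raw_data_rate = 50e12        # bytes/s: quoted unfiltered detector output
event_size = raw_data_rate / bunch_crossing_rate   # ~1.25 MB per crossing

level1_output = 100e3        # Hz: Level-1 accept rate (custom electronics)
hlt_output = 1e3             # Hz: High Level Trigger output to permanent storage

print(f"approximate event size:     {event_size / 1e6:.2f} MB")
print(f"Level-1 rejection factor:   {bunch_crossing_rate / level1_output:.0f}x")
print(f"HLT rejection factor:       {level1_output / hlt_output:.0f}x")
print(f"fraction of crossings kept: {hlt_output / bunch_crossing_rate:.4%}")
print(f"rate to storage:            {hlt_output * event_size / 1e9:.2f} GB/s")
```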

The original trigger system served CMS well during Run 1 of the LHC, which provided high-energy collisions at up to 8 TeV from 2010–2013. Designed in the late 1990s and operational by 2008, the system allowed the CMS collaboration to co-discover the Higgs boson in multiple final-state topologies. Among hundreds of other CMS measurements, it also allowed us to observe the rare decay Bs → μ⁺μ⁻ with a significance of 4.3σ.

In Run 2 of the LHC, which got under way last year, CMS faces a much more challenging collision environment. The LHC now delivers both an increased centre-of-mass energy of 13 TeV and a luminosity beyond the original LHC design value of 10³⁴ cm⁻² s⁻¹. While these improve the detector’s capability to observe rare physics events, they also result in severe event “pile-up” due to multiple overlapping proton collisions within a single bunch crossing. This effect not only makes it much harder to select useful crossings but can also drive trigger rates beyond what can be tolerated. The problem could be partially mitigated by raising the energy thresholds for the selection of certain particles. However, it is essential that CMS maintains its sensitivity to physics at the electroweak scale, both to probe the couplings of the Higgs boson and to catch glimpses of any physics beyond the Standard Model. An improved trigger system is therefore required that makes use of the most up-to-date technology to maintain or improve on the selection criteria used in Run 1.

Thinking ahead

In anticipation of these challenges, CMS has successfully completed an ambitious “Phase-1” upgrade to its Level-1 trigger system that has been deployed for operation this year. Trigger rates are reduced via several criteria: tightening isolation requirements on leptons; improving the identification of hadronic tau-lepton decays; increasing muon momentum resolution; and using pile-up energy subtraction techniques for jets and energy sums. We also employ more sophisticated methods to make combinations of objects for event selection, which is accomplished by the global trigger system (see figure 1).

These new features have been enabled by the use of the most up-to-date Field Programmable Gate Array (FPGA) processors, which provide up to 20 times more processing capacity and 10 times more communication throughput than the technology used in the original trigger system. The use of reprogrammable FPGAs throughout the system offers huge flexibility, and the use of fully optical communications in a standardised telecommunication architecture (microTCA) makes the system more reliable and easier to maintain compared with the previous VME standard used in high-energy physics for decades (see Decisions down to the wire).

Decisions down to the wire

Overall, about 70 processors comprise the CMS Level-1 trigger upgrade. All processors make use of the large-capacity Virtex-7 FPGA from the Xilinx Corporation, and three board variants were produced. The first calorimeter trigger layer uses the CTP7 board, which features an on-board Xilinx Zynq system-on-chip for control and monitoring. The second calorimeter trigger layer, the barrel muon processors, and the global trigger and global muon trigger use the MP7, a generic symmetric processor with 72 optical links for both input and output. Finally, a third, modular variant called the MTF7 is used for the overlap and end-cap muon trigger regions, and features a 1 GB memory mezzanine used for the momentum calculation in the end-cap region. This memory stores the momentum calculated from multiple angular inputs in the challenging forward region of CMS, where the magnetic bending is small.

The Level-1 trigger requires very rapid access to detector information. This is currently provided by the CMS calorimeters and muon system, which have dedicated optical data links for this purpose. The calorimeter trigger system – which is used to identify electrons, photons, tau leptons, and jets, and also to measure energy sums – consists of two processing layers. The first layer is responsible for collecting the data from calorimeter regions, summing the energies from the electromagnetic and hadronic calorimeter compartments, and organising the data to allow efficient processing. These data are then streamed to a second layer of processors in an approach called time-multiplexing. The second layer applies clustering algorithms to identify calorimeter-based “trigger objects” corresponding to single particle candidates, jets or features in the overall transverse-energy flow of the collision. Time-multiplexing allows data from the entire calorimeter for one beam crossing to be streamed to a single processor at full granularity, avoiding the need to share data between processors. Improved energy and position resolutions for the trigger objects, along with the increased logic space available, allow more sophisticated trigger decisions.
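
The routing idea behind time-multiplexing can be sketched with a toy model; the node counts and the round-robin rule below are invented for illustration and are not taken from the CMS firmware.

```python
# Toy illustration of time-multiplexing. Each first-layer node sees only a slice
# of the calorimeter for every bunch crossing, but all nodes route a given
# crossing to the same second-layer processor, which therefore receives that
# crossing at full granularity with no need for inter-processor data sharing.

N_LAYER1 = 18   # illustrative number of first-layer (regional) processors
N_LAYER2 = 9    # illustrative number of second-layer (event-level) processors

def destination(bx_id: int) -> int:
    """Round-robin routing rule shared by every first-layer node."""
    return bx_id % N_LAYER2

bx_id = 123456
fragments = [(layer1_node, destination(bx_id)) for layer1_node in range(N_LAYER1)]

# All fragments of this crossing converge on one second-layer node.
assert len({dest for _, dest in fragments}) == 1
print(f"bunch crossing {bx_id}: all {N_LAYER1} fragments routed to layer-2 node {destination(bx_id)}")
```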

The muon trigger system also consists of two layers. For the original trigger system, a separate trigger was provided from each of the three muon-detector systems employed at CMS: drift tubes (DT) in the barrel region; cathode-strip chambers (CSC) in the endcap regions; and resistive plate chambers (RPC) throughout the barrel and endcaps. Each system provides unique information useful for making a trigger decision; for example, the superior timing of the RPCs can correct the time assignment of DT and CSC track segments, as well as provide redundancy in case a specific DT or CSC is malfunctioning.

In Run 2, we combine trigger segments from all of these units at an earlier stage than in the original system, and send them to the muon track-finding system in a first processing layer. This approach creates an improved, highly robust muon trigger that can take advantage of the specific benefits of each technology earlier in the processing chain. The second processing layer of the muon trigger takes as input the tracks from 36 track-finding processors to identify the best eight candidate muons. It cancels duplicate tracks that occur along the boundaries of processing layers, and will in the future also receive information from the calorimeter trigger to identify isolated muons. These are a signature of interesting rare particle decays such as those of vector bosons.

A feast of physics

Finally, the global trigger processor collects information from both the calorimeter and muon trigger systems to arrive at the final decision on whether to keep the data from a given beam crossing – again, all in a period of four microseconds or less. The trigger changes made for Run 2 allow an event selection procedure that is much closer to that traditionally performed in software in the HLT or in offline analysis. The global trigger applies the trigger “menu” of the experiment – a large set of selection criteria designed to identify the broad classes of events used in CMS physics analyses. For example, events with a W or Z boson in the final state can be identified by the requirement for one or two isolated leptons above a certain energy threshold; top-quark decays by demanding high-energy leptons and jets in the same bunch crossing; and dark-matter candidates via missing transverse energy. The new system can contain several hundred such items – which is quite a feast of physics – and the complete trigger menu for CMS evolves continually as our understanding improves.

The trigger upgrade was commissioned in parallel with the original trigger system during LHC operations in 2015. This allowed the new system to be fully tested and optimised without affecting CMS physics data collection. Signals from the detector were physically split to feed both the original and upgraded trigger systems, a project that was accomplished during the LHC’s first long shutdown in 2013–2014. For the electromagnetic calorimeter, for instance, new optical transmitters were produced to replace the existing copper cables and send data to the old and new calorimeter triggers simultaneously. A complete split was not realistic for the barrel muon system, but a large detector slice was prepared nevertheless. The encouraging results during commissioning allowed the final decision to proceed with the upgrade to be taken in early January 2016.

As with the electronics, an entirely new software system had to be developed for system control and monitoring. For example, low-level board communication changed from a PCI-VME bus adapter to a combination of Ethernet and PCI-express. This took two years of effort from a team of experts, but also offered the opportunity to thoroughly redesign the software from the bottom up, with an emphasis on commonality and standardisation for long-term maintenance. The result is a powerful new trigger system with more flexibility to adapt to the increasingly extreme conditions of the LHC while maintaining efficiency for future discoveries (figure 2, previous page).

Although the “visible” work of data analysis at the LHC takes place on a timescale of months or years at institutes across the world, the first and most crucial decisions in the analysis chain happen underground and within microseconds of each proton–proton collision. The improvements made to the CMS trigger for Run 2 mean that a richer and more precisely defined data set can be delivered to physicists working on a huge variety of different searches and measurements in the years to come. Moreover, the new system allows flexibility and routes for expansion, so that event selections can continue to be refined as we make new discoveries and as physics priorities evolve.

The CMS groups that delivered the new trigger system are now turning their attention to the ultimate Phase-2 upgrade that will be possible by around 2025. This will make use of additional information from the CMS silicon tracker in the Level-1 decision, which is a technique never used before in particle physics and will approach the limits of technology, even in a decade’s time. As long as the CMS physics programme continues to push new boundaries, the trigger team will not be taking time off.

Storage ring steps up search for electric dipole moments

The fact that we and the world around us are made of matter and only minimal amounts of antimatter is one of the fundamental puzzles in modern physics, motivating a variety of theoretical speculations and experimental investigations. The combined standard models of cosmology and particle physics suggest that at the end of the inflation epoch immediately following the Big Bang, the numbers of particles and antiparticles were almost precisely in balance. Yet the laws of physics contrived to act differently on matter and antimatter to generate the apparently large imbalance that we observe today.

One of the necessary mechanisms required for this to happen – namely CP violation – is very small in the Standard Model of particle physics and therefore only able to account for a tiny fraction of the actual imbalance. New sources of CP violation are needed, and one such potential signature would be the appearance of electric dipole moments (EDMs) in fundamental particles.

Electric dipole moments

An EDM originates from a permanent charge separation inside the particle. In its centre-of-mass frame, the ground state of a subatomic particle has no direction at its disposal except its spin, which is an axial vector, while the charge separation (EDM) corresponds to a polar vector (see panel). Therefore, if such a particle with nonzero mass and spin possesses an EDM, it must violate both parity (P) and time-reversal (T) invariance. If the combined CPT symmetry is to be valid, T violation also implies breaking of the combined CP symmetry. The Standard Model predicts the existence of EDMs, but their sizes (in the range of 10⁻³¹ to 10⁻³³ e·cm for nucleons) fall many orders of magnitude below the sensitivity of current measurements and still far below the expected levels of projected experiments. An EDM observation at a much higher value would therefore be a clear and convincing sign of new physics beyond the current Standard Model (BSM).

BSM theories such as supersymmetry (SUSY), technicolour, multi-Higgs models and left–right symmetric models generally predict nucleon EDMs in the range of 10⁻²⁴ to 10⁻²⁸ e·cm (part of the upper region of this range is already excluded by experiment). Although tiny, EDMs of this size would be large enough to be observed by a new generation of highly sensitive accelerator-based experiments with charged particles such as the proton and deuteron. In this respect, EDMs offer a complementary approach to searches for BSM physics at collider experiments, probing scales far beyond the reach of present high-energy machines such as the LHC. For example, in certain SUSY scenarios the present observed EDM limits provide information about physics at the TeV or even PeV scales, depending on the mass scale of the supersymmetric mechanisms and the strength of the CP-violating SUSY phase parameters (figure 1).

Researchers have been searching for EDMs in neutral particles, especially neutrons, for more than 50 years, by trapping and cooling particles in small volumes and using strong electric fields. Despite an enormous improvement in sensitivity, however, these experiments have only produced upper bounds. The current upper limit of approximately 10⁻²⁶ e·cm for the EDM of the neutron is an amazingly accurate result: if we had inflated the neutron so that it had the radius of the Earth, the EDM would correspond to a separation between positive and negative charges of about 1 μm. An upper limit of less than 10⁻²⁹ e·cm has also been reported for a special isotope of mercury, but the Coulomb screening by the atom’s electron cloud makes it difficult to directly relate this number to the permanent EDMs of the neutrons and protons in its nucleus. For the electron, meanwhile, the reported EDM limits on more complicated polar molecules can be used to deduce a bound of about 10⁻²⁸ e·cm – which is even further away from the Standard Model prediction (10⁻³⁸ e·cm) than is the case for the neutron.
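
The Earth-sized analogy can be checked with a quick scaling estimate; the neutron charge radius of roughly 0.8 fm used below is an assumed round value introduced only for this illustration.

```python
# Scale the neutron EDM limit up to an Earth-sized "neutron" to check the ~1 um claim.
# The 0.8 fm neutron radius is an assumed round value used only for illustration.

edm_limit_e_cm = 1e-26                        # e*cm, approximate neutron EDM upper limit
charge_separation_m = edm_limit_e_cm * 1e-2   # unit charges separated by 1e-28 m

r_neutron = 0.8e-15   # m, assumed neutron charge radius
r_earth = 6.4e6       # m, Earth radius

scaled_separation = charge_separation_m * (r_earth / r_neutron)
print(f"scaled charge separation: {scaled_separation * 1e6:.1f} micrometres")  # ~0.8 um
```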

Storage-ring solution

Although these experiments provide useful constraints on BSM theories, a new class of experiments based on storage rings is needed to measure the electric dipole moment of charged particles (such as the proton, deuteron or helium-3). These highly sensitive accelerator-based experiments will allow the EDM of charged particles to be inferred from their very slow spin precession in the presence of large electric fields, and promise to reach a sensitivity of 10⁻²⁹ e·cm. This is due mainly to the larger number of particles available in a stored beam compared with the ultra-cold neutrons usually found in trap experiments, and also the potentially longer observation time possible because such experiments are not limited by the particle decay time. Storage-ring experiments would span the range of EDM sizes where new CP violation is expected to lie. Furthermore, the ability to measure EDMs of more than one type of particle will help to constrain the origin of the CP-violating source because not all particles are equally sensitive to the various CP-violating mechanisms.

At the Cooler Synchrotron “COSY” located at the Forschungszentrum Jülich (FZJ), Germany, the JEDI (Jülich Electric Dipole moment Investigations) collaboration is working on a series of feasibility studies for such a measurement using an existing conventional hadron storage ring. COSY, which is able to store both polarised proton and deuteron beams with a momentum up to 3.7 GeV/c, is an ideal machine for the development and commissioning of the necessary technology. This R&D work has recently replaced COSY’s previous hadron-physics programme of particle production and rare decays, although some other service and user activities continue.

A first upper limit for an EDM directly measured in a storage ring was obtained for muons at the (g–2) experiment at Brookhaven National Laboratory (BNL) in the US, but the measurement was not optimised for sensitivity to the EDM. Subsequently, scientists at BNL began to explore what would be needed to fully exploit the potential of a storage-ring experiment. While much initial discussion for an EDM experiment also took place at Brookhaven, commitments to the Relativistic Heavy Ion Collider (RHIC) operation and planning for a potential Electron–Ion Collider have prevented further development of such a project there. Therefore the focus shifted to FZJ and the COSY storage ring in Germany, where the JEDI collaboration was formed in 2011 to address the EDM opportunity.

The measuring principle is straightforward: a radial electric field is applied to an ensemble of particles circulating in a storage ring with their polarisation vector (or spin) initially aligned with their momentum direction. Maintaining the polarisation in this direction requires a storage ring in which the bending elements are a carefully matched set of both vertical magnetic fields and radial electric fields. The field strengths must be chosen such that the precession rate of the polarisation matches the circulation rate of the beam (called the “frozen spin”). For particles such as the proton with a positive gyromagnetic anomaly, this can be achieved by using only electric fields and choosing just the right “magic” momentum value (around 0.7 GeV/c). For deuterons, which have a negative gyromagnetic anomaly, a combination of electric and magnetic fields is required, but in this case the frozen spin condition can be achieved for a wide range of momentum and electric/magnetic field combinations. Such combined fields may also be used for the proton and would allow the experiment to operate at momenta other than the magic value.
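
For the proton, the quoted “magic” momentum follows from the frozen-spin condition in an all-electric ring, which reduces to p = mc/√G with G the gyromagnetic anomaly; the short check below simply evaluates this standard relation.

```python
import math

# Frozen-spin ("magic") momentum of the proton in an all-electric storage ring:
# the condition G = 1/(gamma^2 - 1) is equivalent to p = m*c/sqrt(G).

G_PROTON = 1.792847   # proton gyromagnetic anomaly, (g-2)/2
M_PROTON = 0.938272   # GeV/c^2

p_magic = M_PROTON / math.sqrt(G_PROTON)
print(f"proton magic momentum: {p_magic:.3f} GeV/c")   # ~0.70 GeV/c
```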

The existence of an EDM would generate a torque that slowly rotates the spin out of the plane of the storage ring and into the vertical plane (see panel opposite). This slow change in the vertical polarisation is measured by sampling the beam with elastic scattering off a carbon target and looking for a slowly increasing left–right asymmetry in the scattered particle flux. For an EDM of 10⁻²⁹ e·cm and an electric field of 10 MV/m, this would happen at an angular velocity of 3 × 10⁻⁹ rad s⁻¹ (about 1/100th of a degree per day of continuous operations). This requires the measurement to be sensitive at a level never reached before in a storage ring. To obtain a statistically significant result, the polarisation in the ring plane must last for approximately 1000 s during a single fill of the ring, while the scattering asymmetry from the carbon target must reach levels above 10⁻⁶ to be measurable within a year of running.
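
These numbers are mutually consistent, as the short estimate below shows; it is a sketch using the standard relation Ω ≈ 2dE/ħ, with the relativistic factors of the full spin-dynamics treatment neglected.

```python
import math

# Order-of-magnitude check of the EDM-induced precession rate quoted above.
# Uses Omega ~ 2*d*E/hbar and ignores relativistic (beta, gamma) corrections.

d_edm = 1e-29 * 1e-2   # e*cm converted to e*m (1e-31 e*m)
E_field = 10e6         # V/m
hbar = 6.582e-16       # eV*s

omega = 2 * d_edm * E_field / hbar   # rad/s (the factors of e cancel)
deg_per_day = math.degrees(omega) * 86400

print(f"precession rate: {omega:.1e} rad/s")   # ~3e-9 rad/s
print(f"vertical tilt:   {deg_per_day:.3f} deg per day")
```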

Milestones passed

Following the commissioning of a measurement system that stores the clock time of each recorded event in the beam polarimeter with respect to the start of the accelerator cycle, the JEDI collaboration has passed a series of important milestones in recent years. Working with the deuteron beam at COSY, the collaboration used these time stamps to unfold for the first time the rapid rotation of the polarisation in the ring plane (which has a frequency of around 120 kHz) that arises from the gyromagnetic anomaly. In a one-second time interval, the number of polarisation revolutions may be counted and the final direction of the polarisation known to better than 0.1 rad (see figure 2). The magnitude of the polarisation may decline slowly due to decoherence effects in the storage ring, as can be seen in subsequent polarisation measurements within a single fill.
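
The quoted 120 kHz can be reproduced from the spin tune γG times the revolution frequency; in the sketch below, the deuteron momentum (0.97 GeV/c) and the COSY circumference (about 183.5 m) are assumed typical running values rather than figures taken from the text.

```python
import math

# Rough check of the ~120 kHz in-plane precession: the spin precesses relative
# to the momentum at gamma*|G| times the revolution frequency.  Beam momentum
# and ring circumference below are assumed typical values, not quoted above.

G_DEUTERON = -0.1430   # deuteron gyromagnetic anomaly
M_DEUTERON = 1.8756    # GeV/c^2
P_BEAM = 0.97          # GeV/c (assumed)
CIRCUMFERENCE = 183.5  # m (assumed)
C_LIGHT = 2.998e8      # m/s

gamma = math.hypot(1.0, P_BEAM / M_DEUTERON)
beta = (P_BEAM / M_DEUTERON) / gamma
f_rev = beta * C_LIGHT / CIRCUMFERENCE      # revolution frequency, ~750 kHz
f_spin = gamma * abs(G_DEUTERON) * f_rev    # in-plane precession, ~120 kHz

print(f"revolution frequency:     {f_rev / 1e3:.0f} kHz")
print(f"in-plane spin precession: {f_spin / 1e3:.0f} kHz")
```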

Maintaining the polarisation requires the cancellation of effects that may cause the particles in the beam to differ from one another. Bunching and electron cooling serve to remove much of this spurious motion, but particle path lengths around the ring may differ if particles in the beam have transverse oscillations with different amplitudes. Recently, we demonstrated that the effect of these differences on polarisation decoherence can be removed by applying correcting sextupole fields to the ring. As a result, we can now achieve polarisation lifetimes in the horizontal plane of more than 1000 s – as required for the EDM experiment (figure 3). In the past year, the JEDI group has also shown that by determining errors in the polarisation direction and feeding this back to make small changes in the ring’s radio-frequency, the direction of the polarisation may be maintained at the level of 0.1 rad during any chosen time period. This is a further requirement for managing the polarisation in the ring for the EDM measurement.

In early 2016, the European Research Council awarded an advanced research grant to the Jülich group to support further developmental efforts. The five-year grant, starting in October, will support a consortium that also includes RWTH Aachen University in Germany and the University of Ferrara in Italy. The goal of the project is to conduct the first measurement of the deuteron EDM. Since the COSY polarisation cannot be maintained parallel to its velocity (because no combined electric and magnetic bending elements exist), a novel device called a radiofrequency Wien filter will be installed in the ring to slowly accumulate the EDM signal (the filter influences the spin motion without acting on the particle’s orbit). The idea is to exploit the electric fields created in the particle rest system by the magnetic fields of the storage-ring dipoles, which would allow the first ever measurement of the deuteron EDM.

COSY is also an important test facility for many EDM-related technologies, among them new beam-position monitoring, control and feedback systems. High electric fields and combined electric/magnetic deflectors may also find applications in other fields, such as accelerator science. Many checks for systematic errors will be undertaken, and a technical design report for a future dedicated storage ring will be prepared. The most significant challenges will come from small imperfections in the placement and orientation of ring elements, which may cause stray field components that generate the accumulation of an EDM-like signal. The experiment is most sensitive to radial magnetic fields and vertical electric fields. Similar effects may arise through the non-commutativity of spurious rotations within the ring system, and efforts are under way to model these effects via spin tracking supported with beam testing. Eventually, many such effects may be reduced or eliminated by comparing the signal accumulation rates seen with beams travelling in opposite directions in the storage ring. During the next decade, this will allow researchers to approach the design goals of the EDM search using a storage ring, adding a new opportunity to unveil physics beyond the Standard Model.

Electromagnetic gymnastics

An electric dipole moment (d) and a magnetic dipole moment (μ) transform differently under P and T. In a fundamental particle, both quantities are proportional to the spin vector (s). Therefore, the interaction term d(s·E) is odd under P and T, whereas μ(s·B) is even under these transformations.
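
In symbols, the standard transformation properties behind this statement are:

```latex
% P and T transformation properties of the fields and the spin:
%   P:  E -> -E,   B ->  B,   s ->  s
%   T:  E ->  E,   B -> -B,   s -> -s
% Hence the EDM coupling is odd under both P and T, while the MDM coupling is even:
H_{\mathrm{EDM}} = -\,d\,\frac{\mathbf{s}}{s}\cdot\mathbf{E}\quad(\text{P-odd, T-odd}),
\qquad
H_{\mathrm{MDM}} = -\,\mu\,\frac{\mathbf{s}}{s}\cdot\mathbf{B}\quad(\text{P-even, T-even}).
```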

In the final experiment to measure the EDM of a charged particle, a radial electric field is applied to an ensemble of particles circulating in a storage ring with polarisation vector aligned to their momentum. The existence of an EDM would generate a torque that slowly rotates the spin out of the ring plane into the vertical direction.

After rotation into the horizontal plane at COSY, the polarisation vector starts to precess. At a measurement point along the ring, the rapidly rotating polarisation direction of the beam is determined by using the count-rate asymmetry of deuterons elastically scattered from a carbon target.

Belle II super-B factory experiment takes shape at KEK

Since CERN’s LHC switched on in the autumn of 2008, no new particle colliders have been built. SuperKEKB, under construction at the KEK laboratory in Tsukuba, Japan, is soon to change that. In contrast to the LHC, which is a proton–proton collider focused on producing the highest energies possible, SuperKEKB is an electron–positron collider that will operate at the intensity frontier to produce enormous quantities of B mesons.

At the intensity frontier, physicists search for signatures of new particles or processes by measuring rare or forbidden reactions, or finding deviations from Standard Model (SM) predictions. The “mass reach” for new-particle searches can be as high as 100 TeV/c², provided the couplings of the new particles are large – well beyond the reach of direct searches at current colliders. The flavour sector provides a particularly powerful way to address the many deficiencies of the SM: at the cosmological scale, the puzzle of the baryon–antibaryon asymmetry remains unexplained by known sources of CP violation; the SM does not explain why there should be only three generations of elementary fermions or why there is an observed hierarchy in the fermion masses; the theory falls short of accounting for the small neutrino masses; and it is also not clear whether there is only a single Higgs boson.

SuperKEKB follows in the footsteps of its predecessor KEKB, which recorded more than 1000 fb⁻¹ (one inverse attobarn, ab⁻¹) of data and achieved a world record for instantaneous luminosity of 2.1 × 10³⁴ cm⁻² s⁻¹. The goals for SuperKEKB are even more ambitious. Its design luminosity is 8 × 10³⁵ cm⁻² s⁻¹, 40 times that of previous B-factory experiments, and the machine will operate in “factory” mode with the aim of recording an unprecedented data sample of 50 ab⁻¹.

The trillions of electron–positron collisions provided by SuperKEKB will be recorded by an upgraded detector called Belle II, which must be able to cope with the much larger beam-related backgrounds resulting from the high-luminosity environment. Belle II, which is the first “super-B factory” experiment, is designed to provide better or comparable performance to that of the previous Belle experiment at KEKB or BaBar at SLAC in Stanford, California. With the SM of weak interactions now well established, Belle II will focus on the search for new physics beyond the SM.

SuperKEKB was formally approved in October 2010, began construction in November 2011 and achieved its “first turns” in February this year (CERN Courier April 2016 p11). By the completion of the initial accelerator commissioning carried out before the Belle-II roll-in (so-called “Phase 1”), the machine was storing a current of 1000 mA in its low-energy positron ring (LER) and 870 mA in the high-energy electron ring (HER). As currently scheduled, SuperKEKB will produce its first collisions in late 2017 (Phase 2), and the first physics run with the full detector in place will take place in late 2018 (Phase 3). The experiment will operate until the late 2020s.

B-physics background

The Belle experiment took data at the KEKB accelerator between 1999 and 2010. At roughly the same time, the BaBar experiment operated at SLAC’s PEP-II accelerator. In 2001, these two “B factories” established the first signals of CP violation in the B-meson sector, thereby revealing matter–antimatter asymmetries. They also provided the experimental foundation for the 2008 Nobel Prize in Physics, which was awarded to the theorists Makoto Kobayashi and Toshihide Maskawa for their explanation of CP violation through complex phases in the weak interaction.

In addition to the observation of large CP violation in the low-background “golden” B → J/ψ KS-type decay modes, these B-factory experiments allowed many important measurements of weak interactions involving bottom and charm quarks as well as τ leptons. The B factories also discovered an unexpected crop of new strongly interacting particles known as the X, Y and Z states. Since 2008, a third major B factory, LHCb, has been in the game. One of the four main LHC detectors, LHCb has made a large number of new measurements of B and Bs mesons and b baryons produced in proton–proton collisions. The experiment has tightly constrained new-physics phases in the mixing-induced weak decays of Bs mesons, confirmed Belle’s discovery of the four-quark state Z(4430), and discovered the first two clear pentaquark states. Together with LHCb, Belle II is expected to be equally prolific and may discover signals of new physics in the coming decade.

Asymmetric collisions

The accelerator technology underpinning B factories is quite different from that of high-energy hadron colliders. Measurements of time-dependent CP asymmetries in the coherently produced, quantum-mechanically entangled pairs of B and anti-B mesons require that we know the difference in the decay times of the two B mesons. With equal-energy beams, the B mesons travel only tens of microns from their production point and their decay vertices cannot be separated experimentally in silicon vertex detectors. To allow the B-factory experiments to observe the time difference, or spatial separation, of the B vertices, the beams have asymmetric energies, and the centre-of-mass system is therefore boosted along the axis of the detector. For example, at PEP-II, 9 GeV electron and 3.1 GeV positron beams were used, while at KEKB the beam energies were 8 GeV and 3.5 GeV.

Charged particles within a beam undergo thermal motion just like gas molecules: they scatter to generate off-momentum particles at a rate given by the density and the temperature of the beam. Such off-momentum particles reduce the beam lifetime, increase beam sizes and generate detector background. To maximise the beam lifetime and reduce intra-beam scattering, SuperKEKB will collide 7 and 4 GeV electron and positron beams, respectively.
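
The boost these beam energies give to the centre-of-mass system, and the resulting B-decay vertex separation, can be estimated in a few lines; the sketch below neglects the beam masses and crossing angle, and the B⁰ lifetime of about 1.52 ps is an external input introduced for illustration.

```python
import math

# Lorentz boost of the centre-of-mass system for asymmetric e+e- collisions
# (beam masses and crossing angle neglected) and the mean B-decay vertex
# separation it implies.  The B0 lifetime is an external input for illustration.

TAU_B0 = 1.52e-12   # s, approximate B0 lifetime
C = 2.998e8         # m/s

def boost_and_separation(e_minus: float, e_plus: float) -> tuple[float, float, float]:
    """Return (sqrt(s) in GeV, beta*gamma, mean vertex separation in microns)."""
    sqrt_s = 2.0 * math.sqrt(e_minus * e_plus)
    beta_gamma = (e_minus - e_plus) / sqrt_s
    dz_microns = beta_gamma * C * TAU_B0 * 1e6
    return sqrt_s, beta_gamma, dz_microns

for name, beams in (("KEKB (8 x 3.5 GeV)", (8.0, 3.5)),
                    ("SuperKEKB (7 x 4 GeV)", (7.0, 4.0))):
    s, bg, dz = boost_and_separation(*beams)
    print(f"{name}: sqrt(s) = {s:.2f} GeV, beta*gamma = {bg:.3f}, <dz> ~ {dz:.0f} um")
```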

Two strategies were employed at the B factories to separate the incoming and outgoing beams: PEP-II used magnetic separation in a strong dipole magnet near the interaction point, while KEKB used a crossing angle of 22 mrad. SuperKEKB will extend the approach of KEKB with a crossing angle of 83 mrad, with separate beamlines for the two rings and no shared magnets between them. While the beam currents will be somewhat higher at SuperKEKB than they were at KEKB, the most dramatic improvement in luminosity is the result of very flat low-emittance “cool beams” and much stronger focusing at the interaction point. Specifically, SuperKEKB uses the nano-beam scheme inspired by the design of Italian accelerator physicist Pantaleo Raimondi, which promises to reduce the vertical beam size at the interaction point to around 50 nm – 20 times smaller than at KEKB.

Although the former TRISTAN (and KEKB) tunnels were reused for the SuperKEKB facility, many of the other accelerator components are new or upgraded from KEKB. For example, the 3 km-circumference vacuum chamber of the LER is new and is equipped with an antechamber and titanium-nitride coating to fight against the problem of photoelectrons. This process, in which low-energy electrons generated as photoelectrons or by ionisation of the residual gas in the beam pipe are attracted by the positively charged beam to form a cloud around the beam, was a scourge for the B factories and is also a major problem for the LHC. Many of the LER magnets are new, while a significant number of the HER magnets were rearranged to achieve a lower emittance, powered by newly designed high-precision power supplies at the ppm level. The RF system has been rearranged to double the beam current with a new digital-control system, and many beam diagnostics and control systems were rebuilt from scratch.

During Phase 1 commissioning, after many iterations the LER optics were corrected to achieve design emittance. To achieve low-emittance positron beams, a new damping ring has been constructed that will be brought into operation in 2017. To meet the charge and emittance requirements of SuperKEKB, the linac injector complex has been upgraded and includes a new low-emittance electron gun. Key components of the accelerator – including the beam pipe, superconducting magnets, beam feedback and diagnostics – were developed in collaboration with international partners in Italy (INFN Frascati), the US (BNL), and Russia (BINP), and further joint work, which will also involve CERN, is expected.

During Phase 1, intensive efforts were made to tune the machine to minimise the vertical emittances in both rings. This was done via measurements and corrections using orbit-response matrices. The estimated vertical emittances were below 10 pm in both rings, close to the design values. There were discrepancies, however, with the beam sizes measured by X-ray size monitors, especially in the HER; these discrepancies are under investigation.

The early days of Belle and BaBar were plagued by beam-related backgrounds resulting from the then unprecedented beam currents and strong beam focusing. In the case of Belle, the first silicon vertex detector was destroyed by an unexpected synchrotron radiation “fan” produced by an electron beam passing through a steering magnet. Fortunately, the Belle team was able to build a replacement detector quickly and move on to compete in the race with BaBar to measure CP asymmetries in the B sector. As a result of these past experiences, we have adopted a rather conservative commissioning strategy for the SuperKEKB/Belle-II facility. This year, during the initial Phase 1 of operation, a special-purpose apparatus called BEAST II, consisting of seven types of background-measurement device, was installed at the interaction point to characterise the expected Belle-II backgrounds.

At the beginning of next year, the Belle-II outer detector will be “rolled in” to the beamline and all components except the vertex detectors will be installed. The complex superconducting final-focusing quadrupole magnets are among the most challenging parts of the accelerator. In autumn 2017, the final-focusing magnets will be integrated with Belle II and the first runs of Phase 2 will commence. A new suite of background detectors will be installed, including a cartridge containing samples of the Belle-II vertex detectors. The first goal of the Phase-2 run is to achieve a luminosity above 10³⁴ cm⁻² s⁻¹ and to verify that the backgrounds are low enough for the vertex detector to be installed.

Belle reborn

With Belle II expected to face beam-related backgrounds 20 times higher than at Belle, the detector has been reborn to achieve the experiment’s main physics goals – namely, to measure rare or forbidden decays of B and D mesons and the τ lepton with better accuracy and sensitivity than before. While Belle II reuses Belle’s spectrometer magnet, many state-of-the-art technologies have been included in the detector upgrade. A new vertex-detector system comprising a two-layer pixel detector (PXD) based on “DEPFET” technology and a four-layer double-sided silicon-strip detector (SVD) will be installed. With the beam-pipe radius of SuperKEKB having been reduced to 10 mm, the first PXD layer can be placed just 14 mm from the interaction point to improve the vertex resolution significantly. The outermost SVD layer is located at a larger radius than the equivalent system at Belle, resulting in higher reconstruction efficiency for Ks mesons, which is important for many CP-violation measurements.

A new central drift chamber (CDC) has been built with smaller cell sizes to be more robust against the higher level of beam-background hits. The new CDC has a larger outer radius (1111.4 mm, compared with 863 mm in Belle) and 56 measurement layers rather than 50, resulting in improved momentum resolution. Combined with the vertex detectors, Belle II has improved D* meson reconstruction and hence better full-reconstruction efficiency for B mesons, which often include D*s among their weak-interaction decay products.

Because good particle identification is vital for successfully identifying rare processes in the presence of very large backgrounds (for example, the measurement of B → Xdγ must contend with B → Xsγ background processes that are an order of magnitude larger), two newly developed ring-imaging Cherenkov detectors have been introduced at Belle II. The first, the time-of-propagation (TOP) counter, is installed in the barrel region and consists of a finely polished and optically flat quartz radiator and an array of pixelated micro-channel-plate photomultiplier tubes that can measure the propagation time of internally reflected Cherenkov photons with a resolution of around 50 ps. The second, the aerogel ring-imaging Cherenkov counter (A-RICH), is located in Belle II’s forward endcap region and will detect Cherenkov photons produced in an aerogel radiator with hybrid avalanche photodiode sensors.

The electromagnetic calorimeter (ECL) reuses Belle’s thallium-doped caesium-iodide crystals. New waveform-sampling read-out electronics have been implemented to resolve overlapping signals such that π⁰ and γ reconstruction is not degraded, even in the high-background environment. The flux return of the Belle-II solenoid magnet, which surrounds the ECL, is instrumented to detect KL mesons and muons (KLM). All of the endcap KLM layers and the innermost two layers of the barrel KLM were replaced with new scintillator-based detectors read out by solid-state photomultipliers. Signals from all of the Belle-II sub-detector components are read out through a common optical-data-transfer system and backend modules. Grid computing distributed over sites at KEK and in Asia, Australia, Europe and North America will be used to process the large data volumes produced by high-luminosity collisions at Belle II, with data rates expected to be in the region of 1.8 GB/s – similar to those at LHCb.

Construction of the Belle-II experiment is in full swing, with fabrication and installation of sub-detectors progressing from the outer to the inner regions. A recent milestone was the completion of the TOP installation in June, while installation of the CDC, A-RICH and endcap ECL will follow soon. The Belle-II detector will be rolled into the SuperKEKB beamline in early 2017 and beam collisions will start later in the year, marking Phase 2. After verifying the background conditions in beam collisions, Phase 3 will see the installation of the vertex-detector system, after which the first physics run can begin towards the end of 2018.

Unique data set

As a next-generation B factory, Belle II will serve as our most powerful probe yet of new physics in the flavour sector, and may discover new strongly interacting particles such as tetraquarks, molecules or perhaps even hybrid mesons. Collisions at SuperKEKB will be tuned to centre-of-mass energies corresponding to the masses of the ϒ resonances, with most data to be collected at the Υ(4S) resonance. This is just above the threshold for producing quantum-correlated B-meson pairs with no fragmentation particles, which are optimal for measuring weak-interaction decays of B mesons.

SuperKEKB is both a super-B factory and a τ-charm factory: it will produce a total of 50 billion bb̄, cc̄ and τ⁺τ⁻ pairs over a period of eight years, and a team of more than 650 collaborators from 23 countries is already preparing to analyse this unique data set. The key open questions to be addressed include the search for new CP-violating phases in the quark sector, lepton-flavour violation and left–right asymmetries (see panel opposite).

Rare charged-B decays to leptonic final states are the flagship measurements of the Belle-II research programme. The leptonic decay B → τν occurs in the SM via a W-annihilation diagram with an expected branching fraction of (0.82 +0.05/−0.03) × 10⁻⁴, which would be modified if a non-standard particle such as a charged Higgs interfered with the W. Since the final state contains multiple neutrinos, it is measurable only at an electron–positron collider, where the centre-of-mass energy is precisely known. Belle II should reach a precision of 3% on this measurement, and should also observe the channel B → μν for tests of lepton-flavour universality.
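
The quoted branching fraction can be roughly reproduced from the tree-level SM formula; in the sketch below, the decay constant f_B, |V_ub| and the charged-B lifetime are round external inputs rather than values taken from the text, so the result is only approximate.

```python
import math

# Tree-level SM estimate of BR(B -> tau nu) via W annihilation:
#   BR = (G_F^2 m_B m_tau^2 / 8 pi) (1 - m_tau^2/m_B^2)^2 f_B^2 |V_ub|^2 tau_B
# f_B, |V_ub| and tau_B are round illustrative inputs, so this only roughly
# reproduces the quoted central value of 0.82e-4.

G_F = 1.166e-5            # GeV^-2
M_B, M_TAU = 5.279, 1.777 # GeV
F_B = 0.190               # GeV, assumed B-meson decay constant
V_UB = 3.6e-3             # assumed |V_ub|
TAU_B = 1.64e-12          # s, charged-B lifetime
HBAR = 6.582e-25          # GeV*s

br = (G_F**2 * M_B * M_TAU**2 / (8 * math.pi)
      * (1 - M_TAU**2 / M_B**2)**2
      * F_B**2 * V_UB**2 * TAU_B / HBAR)

print(f"BR(B -> tau nu) ~ {br:.2e}")   # ~0.8e-4
```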

Perhaps the most interesting searches at Belle II will be the analogous semi-leptonic decays, B → D*τν and B → Dτν, which are similarly sensitive to charged Higgs bosons. Recently, the combined measurements of these processes from BaBar, Belle and LHCb have pointed to a curious 4σ deviation of the decay rates from the SM prediction (see figure X). Since no such deviation is seen in B → τν, it is difficult to resolve the nature of the potential underlying new physics, and the Belle-II data set will be required to settle the issue.

Another 4σ anomaly persists in the B → K*l⁺l⁻ flavour-changing neutral-current loop processes observed by LHCb, which may be explained by the action of new gauge bosons. By allowing the study of closely related processes, Belle II will be able to confirm whether this really is a sign of new physics rather than an artefact of the theoretical predictions. The more precisely calculable inclusive transitions b → sγ and b → sl⁺l⁻ will be compared with the exclusive ones measured by LHCb. The ultimate data set will also give access to B → K*νν̄ and B → Kνν̄, which are experimentally challenging channels but also the most precise theoretically.

Beyond the Standard Model

There are many reasons to choose Belle II to address these and other puzzles of the SM, and in general the experiment will complement the physics reach of LHCb. The lower-background environment at Belle II compared with LHCb allows researchers to reconstruct final states containing neutral particles, for instance, and to design efficient triggers for the analysis of τ leptons. With asymmetric beam energies, the Lorentz boost of the electron–positron system is ideal for measurements of lifetimes, mixing parameters and CP violation.

The B factories established the existence of matter–antimatter asymmetries in the b-quark sector, in addition to the CP violation discovered in the s-quark sector in 1964. They also established that a single irreducible complex phase in the weak interaction is sufficient to explain all CP-violating effects observed to date, completing the SM description of the weak-interaction couplings of quarks. To move beyond this picture, two super-B factories were initially proposed: one at Tor Vergata near Frascati in Italy, and one at KEK in Japan. Although the former facility was not funded, there was both synergy and competition between the two designs. The super-B factory at KEK follows the legacy of the B factories, with Belle II and LHCb both vying to establish the first solid evidence of new physics beyond the SM.

Key physics questions to be addressed by SuperKEKB and Belle II

• Are there new CP-violating phases in the quark sector?
The amount of CP violation (CPV) in the SM quark sector is orders of magnitude too small to explain the baryon–antibaryon asymmetry. New insights will come from examining the difference between B⁰ and B̄⁰ decay rates, namely via measurements of time-dependent CPV in b → s and b → d penguin transitions (second-order weak interactions). CPV in charm mixing, which is negligible in the SM, will also provide information on the up-type quark sector. Another key area will be to understand the mechanisms that produced large amounts of CPV in the time-integrated rates of hadronic B decays, such as B → Kπ and B → Kππ, observed by the B factories and LHCb.

• Does nature have multiple Higgs bosons?
Many extensions to the SM predict charged Higgs bosons in addition to the observed neutral SM-like Higgs. Extended Higgs sectors can also introduce extra sources of CP violation. The charged Higgs will be searched for in flavour transitions to τ leptons, including B → τν, as well as B → Dτν and B → D*τν, where 4σ anomalies have already been observed.

• Does nature have a left–right symmetry, and are there flavour-changing neutral currents beyond the SM?
The LHCb experiment finds 4σ evidence for new physics in the decay B → K*μ⁺μ⁻, a process that is sensitive to the effects of heavy new particles. Left–right symmetry models provide interesting candidates for this anomaly. Such extensions to the SM introduce new heavy bosons that couple predominantly to right-handed fermions, allowing a new pattern of flavour-changing currents, and can also be used to explain neutrino mass generation. To further characterise potential new physics, we need to examine processes with reduced theoretical uncertainty, such as inclusive b → sl⁺l⁻ and b → sνν̄ transitions and time-dependent CPV in radiative B-meson decays. Complementary constraints coming from electroweak precision observables and from direct searches at the LHC have pushed the mass limit for left–right models to several TeV.

• Are there sources of lepton-flavour violation (LFV) beyond the SM?
LFV is a key prediction in many neutrino mass-generation mechanisms, and may lead to τ → μγ enhancement at the level of 10⁻⁸. Belle II will analyse τ-lepton decays for a number of searches, which include LFV, CP violation and measurements of the electric dipole moment and (g−2) of the τ. The expected sensitivities to τ decays at Belle II will be unrivalled due to correlated production with minimal collision background. The detector will provide sensitivities seven times better than Belle for background-limited modes such as τ → μγ (to about 5 × 10⁻⁹) and up to 50 times better for the cleanest searches, such as τ → eee (at the level of 5 × 10⁻¹⁰).

• Is there a dark sector of particle physics at the same mass scale as ordinary matter?
Belle II has unique sensitivity to dark matter via missing energy decays. While most searches for new physics at Belle II are indirect, there are models that predict new particles at the MeV to GeV scale – including weakly and non-weakly interacting massive particles that couple to the SM via new gauge symmetries. These models often predict a rich sector of hidden particles that include dark-matter candidates and gauge bosons. Belle II is implementing a new trigger system to capture these elusive events.

• What is the nature of the strong force in binding hadrons?
With the B factories and hadron colliders having discovered a large number of states that do not fit the conventional meson interpretation, changing our understanding of QCD in the low-energy regime, quarkonium is high on the agenda at Belle II. A clean way of studying the new particles is to produce them near resonance, which can be achieved by adjusting the machine energy, and Belle II has good detection capabilities for all neutral and charged particles.

MAX IV paves the way for ultimate X-ray microscope

Since the discovery of X-rays by Wilhelm Röntgen more than a century ago, researchers have striven to produce smaller and more intense X-ray beams. With a wavelength similar to interatomic spacings, X-rays have proved to be an invaluable tool for probing the microstructure of materials. But a higher spectral power density (or brilliance) enables a deeper study of the structural, physical and chemical properties of materials, in addition to studies of their dynamics and atomic composition.

For the first few decades following Röntgen’s discovery, the brilliance of X-rays remained fairly constant due to technical limitations of X-ray tubes. Significant improvements came with rotating-anode sources, in which the heat generated by electrons striking an anode could be distributed over a larger area. But it was the advent of particle accelerators in the mid-1900s that gave birth to modern X-ray science. A relativistic electron beam traversing a circular storage ring emits X-rays in a tangential direction. First observed in 1947 by researchers at General Electric in the US, such synchrotron radiation has taken X-ray science into new territory by providing smaller and more intense beams.

Generation game

First-generation synchrotron X-ray sources were accelerators built for high-energy physics experiments, which were used “parasitically” by the nascent synchrotron X-ray community. As this community started to grow, stimulated by the increased flux and brilliance at storage rings, the need for dedicated X-ray sources with different electron-beam characteristics resulted in several second-generation X-ray sources. As with previous machines, however, the source of the X-rays was the bending magnets of the storage ring.

The advent of special “insertion devices” led to present-day third-generation storage rings – the first being the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, and the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory in Berkeley, California, which began operation in the early 1990s. Instead of using only the bending magnets as X-ray emitters, third-generation storage rings have straight sections that allow periodic magnet structures called undulators and wigglers to be introduced. These devices consist of rows of short magnets with alternating field directions so that the net beam deflection cancels out. Undulators can house 100 or so permanent short magnets, each emitting X-rays in the same direction, which boosts the intensity of the emitted X-rays by two orders of magnitude. Furthermore, interference effects between the emitting magnets can concentrate X-rays of a given energy by another two orders of magnitude.
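
The interference condition mentioned above is the on-axis undulator equation, λ_n = (λ_u/2nγ²)(1 + K²/2); the short calculation below uses an 18 mm period and a deflection parameter K = 1 as illustrative values for a 3 GeV ring, not as MAX IV specifications.

```python
import math

# On-axis undulator equation: lambda_n = lambda_u / (2 n gamma^2) * (1 + K^2/2).
# Period and K value are illustrative choices for a 3 GeV electron beam.

E_BEAM = 3.0e9     # eV, electron beam energy
M_E = 0.511e6      # eV, electron rest energy
LAMBDA_U = 18e-3   # m, undulator period (assumed)
K = 1.0            # deflection parameter (assumed)

gamma = E_BEAM / M_E
for n in (1, 3, 5):   # odd harmonics dominate on axis
    lam = LAMBDA_U / (2 * n * gamma**2) * (1 + K**2 / 2)
    energy_eV = 1239.84e-9 / lam   # photon energy from E[eV] = 1239.84 eV nm / lambda
    print(f"harmonic {n}: wavelength {lam * 1e9:.2f} nm, photon energy {energy_eV / 1e3:.1f} keV")
```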

Third-generation light sources have been a major success story, thanks in part to the development of excellent modelling tools that allow accelerator physicists to produce precise lattice designs. Today, there are around 50 third-generation light sources worldwide, with a total number of users in the region of 50,000. Each offers a number of X-ray beamlines (up to 40 at the largest facilities) that fan out from the storage ring: X-rays pass through a series of focusing and other elements before being focused on a sample positioned at the end station, with the longest beamlines (measuring 150 m or more) at the largest light sources able to generate X-ray spot sizes a few tens of nanometres in diameter. Facilities typically operate around the clock, during which teams of users spend anywhere between a few hours to a few days undertaking experimental shifts, before returning to their home institutes with the data.

Although the corresponding storage-ring technology for third-generation light sources has been regarded as mature, a revolutionary new lattice design has led to another step up in brightness. The MAX IV facility at Maxlab in Lund, Sweden, which was inaugurated in June, is the first such facility to demonstrate the new lattice. Six years in construction, the facility has demanded numerous cutting-edge technologies – including vacuum systems developed in conjunction with CERN – to become the most brilliant source of X-rays in the world.

The multi-bend achromat

Initial ideas for the MAX IV project started at the end of the 20th century. Although the flagship of the Maxlab laboratory, the low-budget MAX II storage ring, was one of the first third-generation synchrotron radiation sources, it was soon outcompeted by several larger and more powerful sources entering operation. Something had to be done to maintain Maxlab’s accelerator programme.

The dominant magnetic lattice at third-generation light sources consists of double-bend achromats (DBAs), which have been around since the 1970s. A typical storage ring contains 10–30 achromats, each consisting of two dipole magnets and a number of magnet lenses: quadrupoles for focusing and sextupoles for chromaticity correction (at MAX IV we also added octupoles to compensate for amplitude-dependent tune shifts). The achromats are flanked by straight sections housing the insertion devices, and the dimensions of the electron beam in these sections are minimised by adjusting the dispersion of the beam (which describes the dependence of an electron’s transverse position on its energy) to zero. Other storage-ring improvements, for example faster correction of the beam orbit, have also helped to boost the brightness of modern synchrotrons. The key quantity underpinning these advances is the electron-beam emittance, which is defined as the product of the electron-beam size and its divergence.

Despite such improvements, however, today’s third-generation storage rings have a typical electron-beam emittance of 2–5 nm rad, which is several hundred times larger than the diffraction limit of the X-ray beam itself. This is the point at which the size and spread of the electron beam approach the diffraction properties of the X-rays, analogous to the Abbe diffraction limit for visible light (see panel below). Models of machine lattices with even smaller electron-beam emittances predict instabilities and/or short beam lifetimes that make the goal of reaching the diffraction limit at hard X-ray energies appear very distant.

It had long been known that increasing the number of bends decreases the emittance (and therefore increases the brilliance) of a storage ring, and in the early 1990s one of the present authors (DE) and others recognised that this could be exploited by incorporating a larger number of bends into each achromat. Such a multi-bend achromat (MBA) guides electrons around corners more smoothly, thereby reducing the growth of the horizontal emittance. A few synchrotrons already employ triple-bend achromats, and the design has also been used in several particle-physics machines, including PETRA at DESY, PEP at SLAC and LEP at CERN, proving that a storage ring with an energy of a few GeV can produce a very low emittance. To avoid prohibitively large machines, however, the MBA demands much smaller magnets than are currently employed at third-generation synchrotrons.


In 1995, our calculations showed that a seven-bend achromat could yield an emittance of 0.4 nm rad for a 400 m-circumference machine – 10 times lower than the ESRF’s value at the time. The accelerator community also considered a six-bend achromat for the Swiss Light Source and a five-bend achromat for a Canadian light source, but the small number of achromats in these lattices made it difficult to progress significantly towards a diffraction-limited source. One of us (ME) took the seven-bend achromat idea and turned it into a real engineering proposal for the design of MAX IV, although the design then went through a number of iterations. In 2002, the first layout of a potential new source was presented: a 277 m-circumference, seven-bend lattice that would reach an emittance of 1 nm rad for a 3 GeV electron beam. By 2008, we had settled on an improved design: a 520 m-circumference, seven-bend lattice with an emittance of 0.31 nm rad, which will be reduced by a factor of two once the storage ring is fully equipped with undulators. This is more or less the design of the final MAX IV storage ring.

In total, the team at Maxlab spent almost a decade finding ways to keep the lattice circumference at a financially realistic value, and even constructed a 36 m-circumference storage ring called MAX III to develop the necessary compact magnet technology. There were dozens of problems to overcome. Because the electron density was so high, for example, we had to use a second radio-frequency (RF) cavity system to elongate the electron bunches by a factor of four.

Block concept

MAX IV stands out in that it contains two storage rings, operated at energies of 1.5 and 3 GeV. Because the rings have different energies and share an injector and other infrastructure, high-quality undulator radiation can be produced over a wide spectral range at marginal additional cost. The storage rings are fed electrons by a 3 GeV S-band linac made up of 18 accelerator units, each comprising one SLAC Energy Doubler RF station. To optimise economy over a potential three-decade operating lifetime, and also to provide redundancy, a low accelerating gradient is used.

The 1.5 GeV ring at MAX IV consists of 12 DBAs, each comprising one solid-steel block that houses all the DBA magnets (bends and lenses). This magnet-block concept, which is also used in the 3 GeV ring, has several advantages. First, it enables the magnets to be machined with high precision and aligned to a tolerance of less than 10 μm without having to invest in alignment laboratories. Second, blocks containing a handful of individual magnets arrive wired and plumbed directly from the supplier, and no special girders are needed because the magnet blocks are rigidly self-supporting. Last, the magnet-block concept is a low-cost solution.

We also needed a different kind of vacuum system, because the small vacuum-tube dimensions (2 cm in diameter) yield a very poor vacuum conductance. Rather than try to implement closely spaced pumps in such a compact geometry, our solution was to build 100% NEG-coated vacuum systems in the achromats. NEG (non-evaporable getter) technology, which was pioneered at CERN and other laboratories, uses metallic surface sorption to achieve extreme vacuum conditions. The construction of the MAX IV vacuum system raised some interesting challenges, but fortunately CERN had already developed the NEG coating technology to perfection. We therefore entered a collaboration that saw CERN coat the most intricate parts of the system, and licences were granted to companies that manufactured the bulk of the vacuum system. Later, vacuum specialists from the Budker Institute in Novosibirsk, Russia, mounted the linac and 3 GeV-ring vacuum systems.

Due to the small beam size and high beam current, intra-beam scattering and “Touschek” lifetime effects must also be addressed. Both arise from the high electron density of small-emittance, high-current rings, in which electrons within a bunch scatter off one another. Large energy changes bring some electrons outside the energy acceptance of the ring, while smaller energy deviations cause the beam size to grow too much. For these reasons, a low-frequency (100 MHz) RF system with bunch-elongating harmonic cavities was introduced to decrease the electron density and stabilise the beam. This RF system also allows powerful commercial solid-state FM transmitters to be used as RF sources.


When we first presented the plans for the radical MAX IV storage ring in around 2005, people working at other light sources thought we were crazy. The new lattice promised a factor of 10–100 increase in brightness over existing facilities at the time, offering users unprecedented spatial resolutions and taking storage rings within reach of the diffraction limit. Construction of MAX IV began in 2010 and commissioning began in August 2014, with regular user operation scheduled for early 2017.

On 25 August 2015, an amazed accelerator staff sat looking at the beam-position-monitor read-outs of MAX IV’s 3 GeV ring. With just the calculated magnetic settings plugged in, and with the precisely CNC-machined magnet blocks each containing a handful of integrated magnets, the beam circulated turn after turn with proper behaviour. A number of problems nevertheless remained to be solved for the 3 GeV ring. These included dynamic issues – such as betatron tunes, dispersion, chromaticity and emittance – in addition to more trivial technical problems such as sparking RF cavities and faulty power supplies.

As of MAX IV’s inauguration on 21 June, the injector linac and the 3 GeV ring are operational, with the linac also delivering X-rays to the Short Pulse Facility. A circulating current of 180 mA can be stored in the 3 GeV ring with a lifetime of around 10 h, and we have verified the design emittance with a value in the region of 300 pm rad. Beamline commissioning is also well under way, with some 14 beamlines under construction and a goal to increase that number to more than 20.

Sweden has a well-established synchrotron-radiation user community, although around half of MAX IV users will come from other countries. A variety of disciplines and techniques are represented nationally, which must be mirrored by MAX IV’s beamline portfolio. Detailed discussions between universities, industry and the MAX IV laboratory therefore take place prior to any major beamline decisions. The high brilliance of the MAX IV 3 GeV ring and the temporal characteristics of the Short Pulse Facility are a prerequisite for the most advanced beamlines, with imaging being one promising application.

Towards the diffraction limit

MAX IV could not have reached its goals without a dedicated staff and help from other institutes. CERN helped us with the intricate NEG-coated vacuum system, the Budker Institute installed the linac and ring vacuum systems, the brand-new Solaris light source in Krakow, Poland (an exact copy of the MAX IV 1.5 GeV ring) has helped with operations, and many other labs have offered advice. The MAX IV facility has also been noted for its environmental credentials: its energy consumption is reduced by high-efficiency RF amplifiers and small, low-power magnets. Even the water-cooling system of MAX IV transfers heat to the nearby city of Lund to warm houses.

The MAX IV ring is the first of the MBA kind, but several MBA rings are now under construction at other facilities, including the ESRF, Sirius in Brazil and the Advanced Photon Source (APS) at Argonne National Laboratory in the US. The ESRF is developing a hybrid MBA lattice that would enter operation in 2019 and achieve a horizontal emittance of 0.15 nm rad. The APS has decided to pursue a similar design that could enter operation by the end of the decade and, being larger than the ESRF, the APS can strive for an even lower emittance of around 0.07 nm rad. Meanwhile, the ALS in California is moving towards a conceptual design report, and SPring-8 in Japan is pursuing a hybrid MBA that will enter operation on a similar timescale.


Indeed, a total of some 10 rings are currently under construction or planned. We can therefore look forward to a new generation of synchrotron storage rings producing X-rays with very high transverse coherence. We will then have witnessed an increase of 13–14 orders of magnitude in the brightness of synchrotron X-ray sources over a period of seven decades, and have put the diffraction limit at high X-ray energies firmly within reach.

One proposal would see such a diffraction-limited X-ray source installed in the 6.3 km-circumference tunnel that once housed the Tevatron collider at Fermilab, near Chicago. Perhaps a more plausible scenario is PETRA IV at DESY in Hamburg, Germany. Currently the PETRA III ring is one of the brightest in the world, but this upgrade (if it is funded) could bring the ring’s performance to the diffraction limit at hard X-ray energies. This is the Holy Grail of X-ray science, providing the highest resolution and signal-to-noise ratio possible, in addition to the lowest radiation damage and the fastest data collection. Such an X-ray microscope would allow the study of ultrafast chemical reactions and other processes, taking us to the next chapter in synchrotron X-ray science.

Towards the X-ray diffraction limit

Electromagnetic radiation faces a fundamental limit in terms of how sharply it can be focused. For visible light, it is called the Abbe limit, as shown by Ernst Karl Abbe in 1873. The diffraction limit is defined as λ/(4π), where λ is the wavelength of the radiation. Reaching the diffraction limit for X-rays emitted from a storage ring (approximately 10 pm rad) is highly desirable from a scientific perspective: not only would it bring X-ray microscopy to its limit, but material structure could be determined with much less X-ray damage and fast chemical reactions could be studied in situ. Currently, the electron beam travelling in a storage ring dilutes the X-ray emittance by orders of magnitude. Because this quantity determines the brilliance of the X-ray beam, reaching the X-ray diffraction limit is a case of reducing the electron-beam emittance as far as possible.
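
As a simple worked number (the wavelength is chosen purely for illustration): for hard X-rays of wavelength λ ≈ 1 Å, corresponding to a photon energy of about 12.4 keV, the diffraction-limited emittance is

\[
  \varepsilon_{\min} \;=\; \frac{\lambda}{4\pi} \;\approx\; \frac{1\times10^{-10}\ \mathrm{m}}{4\pi} \;\approx\; 8\ \mathrm{pm\ rad},
\]

in line with the roughly 10 pm rad figure quoted above and well below the electron-beam emittances of existing storage rings.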

The emittance scales as Cq·E²/N³, where Cq is the ring magnet-lattice constant, E is the electron energy and N is the number of dipole magnets. It has two components: horizontal (given by the magnet lattice and electron energy) and vertical (which is mainly caused by coupling from the horizontal emittance). While the vertical emittance is, in principle, controllable and small compared with the horizontal emittance, the latter has to be minimised by choosing an optimised magnet lattice with a large number of magnet elements.

Because Cq can be brought to the theoretical minimum emittance limit and E is given by the desired spectral range of the X-rays, the only parameter remaining with which we can decrease the electron-beam emittance is N. Simply increasing the number of achromats to increase N turns out not to be a practical solution, however, because the rings are too big and expensive and/or the electrons tend to be unstable and leave the ring. However, a clever compromise called the multi-bend achromat (MBA), based on compact magnets and vacuum chambers, allows more elements to be incorporated around a storage ring without increasing its diameter, and in principle this design could allow a future storage ring to achieve the diffraction limit.
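
The cubic gain from N can be made concrete with a short sketch. This is a minimal illustration only: it assumes the simple Cq·E²/N³ scaling quoted in this panel, uses a hypothetical ring of 20 achromats at 3 GeV, and ignores lattice-specific factors, insertion-device damping and collective effects.

# Illustrative sketch of the emittance scaling quoted above: emittance ~ Cq * E^2 / N^3.
# Absolute values are in arbitrary units; only the ratio between the two cases matters.

def relative_emittance(e_gev, n_dipoles, c_q=1.0):
    """Horizontal emittance in arbitrary units, using the Cq * E^2 / N^3 scaling."""
    return c_q * e_gev**2 / n_dipoles**3

# Hypothetical ring of 20 achromats at 3 GeV, built either with
# double-bend achromats (2 dipoles each) or seven-bend achromats.
dba = relative_emittance(3.0, 20 * 2)   # 40 dipoles in total
mba = relative_emittance(3.0, 20 * 7)   # 140 dipoles in total

print(f"Emittance reduction, DBA -> seven-bend achromat: {dba / mba:.0f}x")
# Prints ~43x, i.e. (7/2)**3, the cube of the increase in dipole count.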

 

The end of computing’s steam age

Steam once powered the world. If you wanted to build a factory, or a scientific laboratory, you needed a steam engine and a supply of coal. Today, for most of us, power comes out of the wall in the form of electricity.

The modern-day analogue is computing: if you want to run a large laboratory such as CERN, you need a dedicated computer centre. The time, however, is ripe for change.

For LHC physicists, this change has already happened. We call it the Worldwide LHC Computing Grid (WLCG), which is maintained by the global particle-physics community. As physicists move towards the High Luminosity LHC (HL-LHC), however, we need a new solution for our increasingly demanding computing and data-storage needs. That solution could look very much like the Cloud, which is the general term for distributed computing and data storage in broader society.

There are clear differences between the Cloud and the Grid. When developing the WLCG, CERN was able to factor in technology that was years in the future by banking on Moore’s law, which states that processing capacity doubles roughly every 18 months. After more than 50 years, however, Moore’s law is coming up against a hard technology limit. Cloud technology, by contrast, shows no sign of slowing down: more bandwidth simply means more fibre or colour-multiplexing on the same fibre.

Cloud computing is already at an advanced stage. While CERN was building the WLCG, the Googles and Amazons of the world were building huge data warehouses to host commercial Clouds. Although we could turn to them to satisfy our computing needs, it is doubtful that such firms could guarantee the preservation of our data for the decades that it would be needed. We therefore need a dedicated “Science Cloud” instead.

CERN has already started to think about the parameters for such a facility. Zenodo, for example, is a future-proof and non-proprietary data repository that has been adopted by other big-data communities. The virtual nature of the technology allows various scientific disciplines to coexist on a given infrastructure, making it very attractive to providers. The next step requires co-operation with governments to develop computing and data warehouses for a Science Cloud.

CERN and the broader particle-physics community have much to bring to this effort. Just as CERN played a pioneering role in developing Grid computing to meet the needs of the LHC, we can contribute to the development of the Science Cloud to meet the demands of the HL-LHC. Not only will this machine produce a luminosity five times greater than that of the LHC, but data are increasingly coming straight from the sensors in the LHC detectors to our computer centre with minimal processing and reduction along the way. Add to that CERN’s open-access ethos, which began in open-access publishing and is now moving towards “open data”, and you have a powerful combination of know-how relevant to designing future computing and data facilities. Particle physics can therefore help develop Cloud computing for the benefit of science as a whole.

In the future, scientific computing will be accessed much as electrical power is today: we will tap into resources simply by plugging in, without worrying about where our computing cycles and data storage are physically located. Rather than relying on our own large computer centre, there will be a Science Cloud composed of computing and data centres serving the scientific endeavour as a whole, guaranteeing data preservation for as long as it is needed. Its location should be determined primarily by its efficiency of operation.

CERN has been in the vanguard of scientific computing for decades, from the computerised control system of the Super Proton Synchrotron in the 1970s, to CERNET, TCP/IP, the World Wide Web and the WLCG. It is in that vanguard that we need to remain, to deliver the best science possible. Working with governments and other data-intensive fields of science, it’s time for particle physics to play its part in developing a world in which the computing socket sits right next to the power socket. It’s time to move beyond computing’s golden age of steam.

Tunnel visions

By M Riordan, L Hoddeson and A W Kolb
University of Chicago Press
Also available at the CERN bookshop


The Superconducting Super Collider (SSC), a huge accelerator to be built in Texas in the US, was expected by the physicists who supported it to be the place where the Higgs boson would be discovered. Instead, the remnants of the SSC facilities at Waxahachie are now the property of the chemical company Magnablend, Inc. What happened in between? What went wrong? What are the lessons to be learnt?

Tunnel Visions responds to these historical questions in a very precise and exhaustive way. Contrary to my expectations, it is not a doom-and-gloom narrative but a down-to-earth story of the national pride, good physics and bad economics of one of the biggest collider projects in history.

The book depicts the political panorama during the 10-year life (roughly 1983–1993) of the SSC project. It started in the era of Reaganomics, hand in hand with the International Space Station (ISS), and concluded during the first Clinton presidency, after the recession of the early 1990s and the end of the Cold War. The ISS survived, possibly because political justifications for space adventures are easier to find, but most probably because it was an international project from the beginning. The book explains the management intricacies of such a large project, and the partisan support and disregard, up to the SSC’s final demise in the US Congress. For the particle-physics community this is a well-known tale, but the historical details are welcome.

However, the book is more than that, because it also sheds light on the lessons learnt. The final woes of the SSC marked the definitive opening of the US particle-physics community to full international collaboration. For 50 years, without doubt, the US had been the place to go for any particle physicist. Fermilab, SLAC and Brookhaven were, and still are, great stars in the physics firmament. Even if the SSC project had not been cut, those three laboratories had to keep working to maintain progress in the field, and that was too much for what was essentially a zero-sum budget game. The show had to go on, so Fermilab got the Main Injector, SLAC its B factory, and Brookhaven the RHIC collider. Thanks to these upgrades, the three laboratories made important progress in particle physics: the discovery of the top quark; precision measurements of the W and Z bosons; the narrowing of the Higgs-boson mass window to between 113 and 170 GeV; hints of possible discrepancies with the Standard Model in b-meson decays; and the discovery of the liquid-like quark–gluon plasma.

Why did the SSC project collapse? The authors explain that the real reasons were not technical problems but poor management in the early years and a clash of cultures between the US particle-physics community and the US military-industrial system. There were also reasons of timing: the SSC was several steps beyond its time. To put it into context, during the years of the SSC project CERN carried out the conversion of the SPS into a collider, the whole LEP programme and the beginning of the LHC project. That effort prevented any possible European contribution to the SSC. The last-ditch attempt to internationalise the SSC into a trans-Pacific partnership with Japan was also unsuccessful. The lesson from history, the authors conclude, is that by the beginning of the 1990s the costs of frontier experimental particle physics had grown too large, even for a country like the US. Multilateral international collaboration was the only way out, as the ISS showed.

The Higgs boson was eventually discovered at CERN. The book avoids any “hare and tortoise” comparison here, however, since at the dawn of the new century the US became a CERN observer state with a very important in-kind contribution. In my opinion, this is where the book grows in interest, because it explains how the US particle-physics community took part in the LHC programme and became decisive to it. In particular, the US technological effort in developing superconducting magnets was not wasted. The book also conveys the suspense around the Higgs search when the Tevatron was the only machine still in the game, during the LHC shutdown that followed the infamous incident of September 2008.

Useful appendices providing notes, a bibliography and even a short explanation of the Standard Model complete the text.

Entropy Demystified: The Second Law Reduced to Plain Common Sense (2nd edition)

By Arieh Ben-Naim
World Scientific


In this book, the author explains entropy and the second law of thermodynamics in a clear and easy way, with the help of many examples. In particular, he sets out to show that these laws are not intrinsically incomprehensible, as they may appear at first. The fact that entropy, which is defined in terms of heat and temperature, can also be expressed in terms of order and disorder – intangible concepts – together with the observation that entropy (in other words, disorder) increases perpetually, can puzzle students. Some mystery seems inevitably to be associated with these concepts. The author asserts that everything clears up once the second law is looked at from the molecular point of view: what a student needs to know is the atomistic formulation of entropy, which comes from statistical mechanics.

The aim of the book is to clarify these concepts to readers who haven’t studied statistical mechanics. Many dice games and examples from everyday life are used to make readers familiar with the subject. They are guided along a path that allows them to discover by themselves what entropy is, how it changes, and why it always changes in one direction in a spontaneous process.

In this second edition, seven simulated games are also included, so that the reader can experiment with and appreciate the joy of understanding the second law of thermodynamics.

Modern Physics Letters A: Special Issue on Hadrontherapy

By Saverio Braccini (ed.)
World Scientific

The applications of nuclear and particle physics to medicine have seen extraordinary development since the discovery of X-rays by Röntgen at the end of the 19th century. Medical imaging and oncologic therapy with photons and charged particles (specifically hadrons) are currently hot research topics.

This special issue of Modern Physics Letters is dedicated to hadron therapy, the frontier of cancer radiation therapy, and aims to fill a gap in the current literature on medical physics. Through 10 invited review papers, the volume presents the basics of hadron therapy, along with the most recent scientific and technological developments in the field. The first part covers topics such as the history of hadron therapy, radiation biophysics, particle accelerators, dose-delivery systems and treatment planning. In the second part, more specific topics are treated, including dose and beam monitoring, proton computed tomography, ionoacoustics and microdosimetry.

This volume will be very useful to students, researchers approaching medical physics, and scientists interested in this interdisciplinary and fast-moving field.

Beyond the Galaxy: How Humanity Looked Beyond our Milky Way and Discovered the Entire Universe

By Ethan Siegel
World Scientific


This book provides an introduction to astrophysics and cosmology for absolute beginners, as well as for any reader looking for a general overview of the subject and an account of its latest developments.

Besides presenting what we know about the history of the universe and the marvellous objects that populate it, the author is interested in explaining how we came to such knowledge. He traces a trajectory through the various theories and the discoveries that defined what we know about our universe, as well as the boundary of what is still to be understood.

The first six chapters deal with the state of the art of our knowledge about the structure of the universe, its origin and evolution, general relativity and the life of stars. The following five address the most important open problems, such as why there is more matter than antimatter, what dark matter and dark energy are, what came before the Big Bang, and what the fate of the universe will be.

Written in plain English, without formulas or equations, and characterised by clear and fluid prose, this book is suitable for a wide range of readers.
