
CERN soups up its antiproton source

The Antiproton Decelerator (AD) facility at CERN, which has been operational since 2000, is a unique source of antimatter. It delivers antiprotons with very low kinetic energies, enabling physicists to study the fundamental properties of baryonic antimatter – namely antiprotons, antiprotonic helium and antihydrogen – with great precision. Comparing the properties of these simple systems to those of their respective matter conjugates therefore provides highly sensitive tests of CPT invariance, which is the most fundamental symmetry underpinning the relativistic quantum-field theories of the Standard Model (SM). Any observed difference between baryonic matter and antimatter would hint at new physics, for instance due to the existence of quantum fields beyond the SM.

In the case of matter particles, physicists have developed advanced experimental techniques to characterise simple baryonic systems with extraordinary precision. The mass of the proton, for example, has been determined with a fractional precision of 89 parts in a trillion (ppt) and its magnetic moment is known to a fractional precision of three parts in a billion. Electromagnetic spectroscopy on hydrogen atoms, meanwhile, has allowed the ground-state hyperfine splitting of the hydrogen atom to be determined with a relative accuracy of 0.7 ppt and the 1S–2S electron transition in hydrogen to be measured with a fractional precision of four parts in a quadrillion – a relative uncertainty of about 4 × 10⁻¹⁵.

ELENA will lead to an increase by one to two orders of magnitude in the number of antiprotons captured by experiments

In the antimatter sector, on the other hand, only the mass of the antiproton has been determined at a level comparable to that in the baryon world (see table). In the late 1990s, the TRAP collaboration at CERN’s Low Energy Antiproton Ring (LEAR) used advanced trapping and cooling methods to compare the charge-to-mass ratios of the antiproton and the proton with a fractional uncertainty of 90 ppt. This was one of the crucial steps that inspired CERN to start the AD programme. Over the past 20 years, CERN has made huge strides in our understanding of antimatter (see panel). This includes the first ever production of anti-atoms – antihydrogen, which comprises an antiproton orbited by a positron – in 1995, and the production of antiprotonic helium, in which an antiproton and an electron orbit a normal helium nucleus.

CERN has decided to boost its AD programme by building a brand new synchrotron that will improve the performance of its antiproton source. Called the Extra Low ENergy Antiproton ring (ELENA), this new facility is now in the commissioning phase. Once it enters operation, ELENA will lead to an increase by one to two orders of magnitude in the number of antiprotons captured by experiments using traps and also make new types of experiments possible (see figure). This will provide an even more powerful probe of new physics beyond the SM.

Combined technologies

The production and investigation of antimatter relies on combining two key technologies: high-energy particle-physics sources and classical low-energy atomic-physics techniques such as traps and lasers. One of the workhorses of experiments in the AD facility is the Penning trap. This static electromagnetic cage for antiprotons serves for both high-precision measurements of the fundamental properties of single trapped antiprotons and for trapping large amounts of antiprotons and positrons for antihydrogen production.

The AD routinely provides low-energy antiprotons to a dynamic and growing user community. It comprises a ring with a circumference of 182.4 m, which currently supplies five operational experiments devoted to studying the properties of antihydrogen, antiprotonic helium and bare antiprotons with high precision: ALPHA, ASACUSA, ATRAP, AEgIS and BASE (see panel). All of these experiments are located in the existing experimental zone, covering approximately one half of the space inside the AD ring. With this present scheme, one bunch containing about 3 × 107 antiprotons is extracted roughly every 120 seconds at a kinetic energy of 5.3 MeV and sent to a particular experiment.

Although there is no hard limit for the lowest energy that can be achieved in a synchrotron, operating a large machine at low energies requires magnets with low field strengths and is therefore subject to perturbations due to remanence, hysteresis and external stray-field effects. The AD extraction energy of 5.3 MeV is a compromise: it allows beam to be delivered under good conditions given the machine’s circumference, while enabling the experiments to capture a reasonable quantity of antiprotons. Most experiments further decelerate the antiprotons by sending them through foils or using a radiofrequency quadrupole to take them down to a few keV so that they can be captured. This present scheme is inefficient, however: fewer than one in 100 antiprotons decelerated with a foil can be trapped and used by the experiments.
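As a back-of-envelope illustration of this inefficiency, the numbers quoted above (about 3 × 10⁷ antiprotons per extracted bunch, fewer than 1% trapped after a foil) can be combined in a short Python sketch:

```python
# Rough estimate of usable antiprotons per AD shot with foil deceleration.
# Both numbers come from the text: ~3e7 antiprotons per extracted bunch,
# and fewer than 1 in 100 foil-decelerated antiprotons can be trapped.
antiprotons_per_bunch = 3e7
trapping_efficiency = 0.01          # upper bound quoted in the text

trapped = antiprotons_per_bunch * trapping_efficiency
print(f"Trapped per shot: fewer than {trapped:.0e}")   # fewer than 3e+05
```

So out of tens of millions of antiprotons delivered every two minutes, only a few hundred thousand at most end up in a trap – the loss that ELENA is designed to recover.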

The ELENA project aims to further decelerate the antiprotons from 5.3 MeV down to 100 keV in a controlled way. This is achieved via a synchrotron equipped with an electron cooler to avoid losses during deceleration and to generate dense bunches of antiprotons for users. To achieve this goal, the machine has to be smaller than the AD; a circumference of 30.4 metres has been chosen, one sixth that of the AD. The experiments still have to decelerate the beam further, using thinner foils or other means, but the lower energy from the synchrotron makes this process more efficient and therefore increases the number of captured antiprotons dramatically.

With ELENA, the available intensity will be distributed to several bunches (the current baseline is four), which are sent to several experiments simultaneously. Despite the reduction in intensity per experiment, the higher beam availability means that each experiment will receive beam almost continuously, 24 hours per day, as opposed to during an eight-hour shift a few times per week, as is the case with the present AD.

The ELENA project started in 2012 with the detailed design of the machine and components. Installations inside the AD hall and inside the AD ring itself began in spring 2015, in parallel with AD operation for the existing experiments. Installing ELENA inside the AD ring is a simple, cost-effective solution: no large additional building to house a synchrotron and a new experimental area had to be constructed, and the existing experiments have been able to remain at their present locations. Significant external contributions from the user community include an H⁻ ion and proton source for commissioning, and very sensitive profile monitors for the transfer lines.

Low-energy challenges

Most of the challenges and possible issues of the ELENA project are a consequence of its low energy, small size and low intensity. The low beam energy makes the beam very sensitive to perturbations: even the Earth’s magnetic field has a significant impact, for instance deforming the “closed orbit” so that the beam is no longer located at the centre of the vacuum chamber. The circumference of the machine has therefore been chosen to be as small as possible (demanding higher-field magnets) to mitigate these effects. On the other hand, the ring has to be long enough to install all the necessary components.

For similar reasons, the magnets have to be designed very carefully to ensure sufficiently good field quality at very low field levels, where hysteresis and remanence become important. This challenge triggered thorough investigations by CERN’s magnet experts and involved several prototypes using different types of yokes, resulting in unexpected conclusions relevant for any project that relies on low-field magnets. The bending magnets initially foreseen, based on “diluted” yokes with laminations of electrical steel alternated with thicker non-magnetic stainless-steel laminations, were found to have larger remanent fields and to be less suitable. Based on this unexpected empirical observation, which was later explained by theoretical considerations, it was decided that most ELENA magnets will be built with conventional yokes. The corrector magnets have been built without a magnetic yoke to completely suppress hysteresis effects.

Electron cooling is an essential ingredient for ELENA: cooling on an intermediate plateau is applied to reduce emittances and losses during deceleration to the final energy. Once the final energy is reached, electron cooling is applied again to generate dense bunches with low emittances and energy spread, which are then transported to the experiments. At the final energy, so-called intra-beam scattering (IBS), caused by Coulomb interactions between different particles in the beam, increases the beam emittances and the energy spread, which, in turn, increases the beam size. This phenomenon will be the dominant source of beam degradation in ELENA, and the equilibrium between IBS and electron cooling will determine the characteristics of the bunches sent to the experiments.

Another possible limitation for a low-energy machine such as ELENA is the large cross-section for scattering between antiprotons and the nuclei of residual-gas molecules, which leads to beam loss and degradation. This phenomenon is mitigated by a carefully designed vacuum system that can reach pressures as low as a few 10⁻¹² mbar. Furthermore, ELENA’s low intensities and energy mean that the beam generates only very small signals, making beam diagnostics challenging. For example, the current of the circulating beam is less than 1 μA, which is well below what can be measured with standard beam-current transformers and therefore demands alternative techniques to estimate the intensity.
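To see why the circulating current is so small, one can estimate it from the stored charge and the revolution frequency at 100 keV. The sketch below is illustrative, not from the article: it assumes the full AD bunch of about 3 × 10⁷ antiprotons survives deceleration and uses a non-relativistic approximation for the velocity.

```python
import math

# Illustrative estimate of ELENA's circulating-beam current at the
# 100 keV plateau. Assumed: N = 3e7 stored antiprotons (the AD bunch
# intensity quoted earlier); the 30.4 m circumference is from the text.
T = 100e3            # kinetic energy, eV
mc2 = 938.272e6      # (anti)proton rest energy, eV
e = 1.602e-19        # elementary charge, C
c = 2.998e8          # speed of light, m/s
circumference = 30.4 # ELENA circumference, m

beta = math.sqrt(2 * T / mc2)      # non-relativistic approximation
f_rev = beta * c / circumference   # revolution frequency, Hz
N = 3e7                            # assumed stored antiprotons
I = N * e * f_rev                  # average circulating current, A

print(f"beta = {beta:.4f}, f_rev = {f_rev:.3g} Hz, I = {I * 1e6:.2f} uA")
```

The result comes out at a fraction of a microampere, consistent with the sub-μA figure in the text and far below the sensitivity of standard beam-current transformers.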

An external source capable of providing 100 keV H⁻ and proton beams will be used for a large part of the commissioning. Although this allows commissioning to be carried out in parallel with AD operation for the experiments, it means that commissioning starts at the most delicate, low-energy part of the ELENA cycle, where perturbations have the most impact. Another advantage of ELENA’s low energy is that the transfer lines to the experiments are electrostatic – a low-cost solution that allows for the installation of many focusing quadrupoles and makes the lines less sensitive to perturbations.

CERN's AD facility opens new era of precision antimatter studies

CERN’s Antiproton Decelerator (AD) was approved in 1997, just two years after the production of the first antihydrogen atoms at the Low Energy Antiproton Ring (LEAR), and entered operation in 2000. Its debut discovery was the production of cold antihydrogen in 2002 by the ATHENA and ATRAP collaborations. These experiments were joined by the ASACUSA collaboration, which aims at precision spectroscopy of antiprotonic helium and Rabi-like spectroscopy of the antihydrogen ground-state hyperfine splitting. Since then, techniques have been developed that allow trapping of antihydrogen atoms and the production of a beam of cold antihydrogen atoms. This culminated in 2010 in the first report on trapped antihydrogen by the ALPHA collaboration (the successor of ATHENA). In the same year, ASACUSA produced antihydrogen using a cusp trap, and in 2012 the ATRAP collaboration also reported on trapped antihydrogen.

TRAP, which was based at LEAR and was the predecessor of ATRAP, is one of two CERN experiments that have allowed the first direct investigations of the fundamental properties of antiprotons. In 1999, the collaboration published a proton-to-antiproton charge-to-mass ratio with a fractional precision of 90 ppt, based on single-charged-particle spectroscopy in a Penning trap using data taken up to 1996. In 2013, ATRAP measured the magnetic moment of the antiproton with a fractional precision of 4.4 ppm. The BASE collaboration, which was approved in the same year, is now preparing to improve the ATRAP value to the ppb level. In addition, in 2015 BASE reported a comparison of the proton-to-antiproton charge-to-mass ratios with a fractional precision of 69 ppt. So far, all measured results are consistent with CPT invariance.

The ALPHA, ASACUSA and ATRAP experiments, which aim at precise antihydrogen spectroscopy, are challenging because antihydrogen first needs to be produced and then trapped. This requires the accumulation of both antiprotons and positrons, followed by antihydrogen production via three-body reactions in a nested Penning trap. In 2012, ALPHA reported a first spectroscopy-type experiment, publishing the observation of resonant quantum transitions in antihydrogen (see figure), and in 2014 ASACUSA reported the first production of a beam of cold antihydrogen atoms. The reliable production and trapping scheme of ALPHA, meanwhile, has enabled several high-resolution studies, including the investigation of the charge neutrality of antihydrogen with a precision at the 0.7 ppb level.

The ASACUSA, ALPHA and ATRAP collaborations are now preparing their experiments to produce the first electromagnetic-spectroscopy results on antihydrogen. This is difficult because ALPHA typically reports about one trapped antihydrogen atom per mixing cycle, while ASACUSA detects approximately six antihydrogen atoms per shot. Both numbers demand higher antihydrogen production rates, and to further boost AD physics, CERN built the new low-energy antiproton synchrotron ELENA. In parallel to these efforts, proposals to study gravity with antihydrogen were approved, leading to the formation of the AEgIS collaboration in 2008, whose experiment is currently being commissioned, and of the GBAR project in 2012.

Towards first beam

As of the end of October 2016, all sectors of the ELENA ring – except for the electron cooler, which has temporarily been replaced by a simple vacuum chamber, and a few transfer lines required for the commissioning of the ring – have been installed and baked to reach the very low rest-gas density required. Following hardware tests, commissioning with beam is under way and will resume in early 2017, interrupted only for the installation of the electron cooler some time in spring.

ELENA will be ready from 2017 to provide beam to the GBAR experiment, which will be installed in the new experimental area (see panel). The existing AD experiments, however, will be connected only during CERN’s Long Shutdown 2 in 2019–2020, to minimise the period without antiprotons and to optimise the exploitation of the experiments. GBAR, along with another AD experiment called AEgIS, will target direct tests of the weak-equivalence principle by measuring the gravitational acceleration of antihydrogen. This is another powerful way to test for any difference in the way the fundamental forces act on matter and antimatter. Although the first antimatter free-fall measurements were reported by the ALPHA collaboration in 2013, these results could be improved by several orders of magnitude by the dedicated gravity experiments that ELENA makes possible.

ELENA is expected to operate for at least 10 years and be exploited by a user community consisting of six approved experiments. This will take physicists towards the ultimate goal of performing spectroscopy on antihydrogen atoms at rest, and of investigating the effect of gravity on matter and antimatter. A discovery of CPT violation would constitute a dramatic challenge to the relativistic quantum-field theories of the SM and could contribute to an understanding of the striking imbalance of matter and antimatter observed on cosmological scales.

Crystal Clear celebrates 25 years of success

3D PET/CT image

The Crystal Clear (CC) collaboration was approved by CERN’s Detector Research and Development Committee in April 1991 as experiment RD18. Its objective was to develop new inorganic scintillators that would be suitable for electromagnetic calorimeters in future LHC detectors. The main goal was to find dense and radiation-hard scintillating material with a fast light emission that can be produced in large quantities. This challenge required a large multidisciplinary effort involving world experts in different aspects of material sciences – including crystallography, solid-state physics, luminescence and defects in solids.

From 1991 to 1994, the CC collaboration carried out intensive studies to identify the most adequate scintillator material for the LHC experiments. Three candidates were identified and extensively studied: cerium fluoride (CeF3), lead tungstate (PbWO4) and heavy scintillating glass. In 1994, lead tungstate was chosen by the CMS and ALICE experiments as the most cost-effective crystal compliant with the operational conditions at the LHC. Today, 75,848 lead-tungstate crystals are installed in the CMS electromagnetic calorimeters and 17,920 in ALICE. The former contributed to the discovery of the Higgs boson, which was identified in 2012 by CMS and the ATLAS experiment via, among other channels, its decay into two photons. The CC collaboration’s generic R&D on scintillating materials has brought a deep understanding of cerium ions as scintillation activators and seen the development of lutetium and yttrium aluminium perovskite crystals for both physics and medical applications.

From physics to medicine

In 1997, the CC collaboration made its expertise in scintillators available to industry and society at large. Among the most promising sectors were medical functional imaging and, in particular, positron emission tomography (PET), due to its growing importance in cancer diagnostics and similarities with the functionality of electromagnetic calorimeters (the principle of detecting gamma rays in a PET scanner is identical to that in high-energy physics detectors).

Following this, CC collaboration members developed and constructed several dedicated PET prototypes. The first, which was later commercialised by Raytest GmbH in Germany under the trademark ClearPET, was a small-animal PET machine used for radiopharmaceutical research. At the turn of the millennium, five ClearPET prototypes with a spatial resolution of 1.5 mm were built by the CC collaboration – a major breakthrough in functional imaging at that time. The same crystal modules were also used by the CC team at Forschungszentrum Jülich, Germany, to image plants in order to study carbon transport. A modified ClearPET geometry was also combined with X-ray single-photon detectors by CC researchers at CPPM Marseille, offering simultaneous PET and computed-tomography (CT) acquisition and providing the first simultaneous PET/CT images of a mouse in 2015 (see image above). The simultaneous use of CT and PET allows the excellent position resolution of anatomic imaging (providing detailed images of the structure of tissues) to be combined with functional imaging, which is sensitive to the tissue’s metabolic activity.

After the success of ClearPET, in 2002 CC developed a dedicated PET camera for breast imaging called ClearPEM. This system had a spatial resolution of 1.3 mm and was the first PET imager based on avalanche photodiodes, which were initially developed for the CMS electromagnetic calorimeter. The machine was installed in Coimbra, Portugal, where clinical trials were performed. In 2005, a second ClearPEM machine combined with 3D ultrasound and elastography was developed with the aim of providing anatomical and metabolic information to allow better identification of tumours. This machine was installed in Hôpital Nord in Marseille, France, in December 2010 for clinical evaluations of 10 patients, and three years later it was moved to the San Gerardo hospital in Monza, Italy, to undertake larger clinical trials, which are ongoing.

In 2011, a European FP7 project called EndoTOFPET-US – a consortium of three hospitals, three companies and six institutes – began the development of a prototype for a novel bi-modal time-of-flight PET and ultrasound endoscope with a spatial resolution better than 1 mm and a time resolution of 200 ps. This was aimed at the detection of early-stage pancreatic or prostatic tumours and the development of new biomarkers for pancreatic and prostatic cancers. Two prototypes have been produced (one for pancreatic and one for prostate cancers) and the first tests on a phantom-prostate prototype were performed in spring 2015 at the CERIMED centre in Marseille. Work is now ongoing to improve the two prototypes in view of preclinical and clinical operation.

In addition to the development of ClearPET detectors, members of the collaboration initiated the development of GATE, a GEANT4-based Monte Carlo simulation package that allows full PET detector systems to be simulated.

Clear impact

In 1992, the CC collaboration organised the first international conference on inorganic scintillators and their applications, which led to a global scientific community of around 300 people. Today, this community comes together every two years at the SCINT conferences, the next instalment of which will take place in Chamonix, France, from 18 to 22 September 2017.

To this day, the CC collaboration continues its investigations into new scintillators and understanding their underlying scintillation mechanisms and radiation-hardness characteristics – in addition to the development of detectors. Among its most recent activities is the investigation of key parameters in scintillating detectors that enable very precise timing information for various applications. These include mitigating the effect of “pile-up” caused by the high event rate at particle accelerators operating at high peak luminosities, and also medical applications in time-of-flight PET imaging. This research requires the study of new materials and processes to identify ultrafast scintillation mechanisms such as “hot intraband luminescence” or quantum-confined excitonic emission with sub-picosecond rise time and sub-nanosecond decay time. It also involves investigating the enhancement of the scintillator light collection by using various surface treatments, such as nano-patterning with photonic crystals. CC recently initiated a European COST Action called Fast Advanced Scintillator Timing (FAST) to bring together European experts from academia and industry to ultimately achieve scintillator-based detectors with a time precision better than 100 ps, which provides an excellent training opportunity for researchers interested in this domain.

Among other recent activities of the CC collaboration are new crystal-production methods. Micro-pulling-down techniques, which allow inorganic scintillating crystals to be grown in the shape of fibres with diameters ranging from 0.3 to 3 mm, open the way to attractive detector designs for future high-energy physics experiments by replacing a block of crystals with a bundle of fibres. A Horizon 2020 European RISE Marie Skłodowska-Curie project called Intelum has been set up by the CC collaboration to explore the cost-effective production of large quantities of fibres. More recently, the development of new PET crystal modules has been launched by CC collaborators. These make use of silicon-photomultiplier photodetectors and offer high spatial resolution (1.5 mm), depth-of-interaction capability (better than 3 mm) and fast timing resolution (better than 200 ps).

Future directions

For the past 25 years, the CC collaboration has actively carried out R&D on scintillating materials, and investigated their use in novel ionising radiation-detecting devices (including read-out electronics and data acquisition) for use in particle-physics and medical-imaging applications. In addition to significant progress made in the understanding of scintillation mechanisms and radiation hardness of different materials, the choice of lead tungstate for the CMS electromagnetic calorimeter and the realisation of various prototypes for medical imaging are among the CC collaboration’s highlights so far. It is now making important contributions to understanding the key parameters for fast-timing detectors.

The various activities of the CC collaboration, which today has 29 institutional members, have resulted in more than 650 publications and 72 PhD theses. The motivation of CC collaboration members and the momentum generated throughout its many projects open up promising perspectives for the future of inorganic scintillators and their use in HEP and other applications.

• An event to celebrate the 25th anniversary of the CC collaboration will take place at CERN on 24 November.

Energetic protons boost BNL isotope production

The mission of the US Department of Energy (DOE) isotope programme is to produce and distribute radioisotopes that are in short supply and in high demand for medical, industrial and environmental uses. The programme also maintains unique infrastructure at national laboratories across the country, including Brookhaven National Laboratory’s medical radioisotope programme, MIRP. Although there are many small accelerators in the US that produce radioisotopes, the availability of proton energies up to 200 MeV from the Brookhaven Linac Isotope Producer (BLIP) is unique.

There is significant promise for treating a variety of diseases including metastatic cancer, viral and fungal infections and even HIV

Radioisotopes are of interest to nuclear medicine for both diagnostic imaging and therapy. The most important aspect of Brookhaven’s isotope programme is the large-scale production and supply of clinical-grade strontium-82 (82Sr). Although 82Sr is not directly used in humans, its short-lived daughter product 82Rb is a potassium mimic that upon injection is rapidly taken up by viable cardiac tissue. It is therefore supplied to hospitals as a generator for positron emission tomography (PET) scans of the heart, where its short half-life (76 seconds) allows multiple scans to be performed while delivering minimal doses to the patient. At present, up to 350,000 patients per year in the US receive such PET scans, but demand is growing beyond capacity.

There is also significant promise in using alpha emitters to treat a variety of diseases including metastatic cancer, viral and fungal infections and even HIV; the leading candidate is the alpha emitter 225Ac. Thanks to a series of upgrades completed this year, Brookhaven is now in a position to boost production of both of these vital medical isotopes.

Protons on target

The BLIP was built in 1972 and was the world’s first facility to utilise high-energy, high-current protons for radioisotope production. It works by diverting the excess beam of Brookhaven’s 200 MeV proton linac to water-cooled target assemblies that contain specially engineered targets and degraders to allow optimal energy to be delivered to the targets. The use of higher-energy particles allows relatively thick targets to be irradiated, in which the large number of target nuclei compensates for the generally smaller reaction cross-sections compared to low-energy nuclear reactions.
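The thick-target argument can be made quantitative with the standard production-rate estimate R ≈ (I/e)·σ·n, where n is the areal density of target nuclei. All numbers in this sketch are illustrative assumptions, not BLIP parameters:

```python
# Sketch of why thick targets compensate for smaller high-energy
# cross-sections: the production rate scales as R = (I/e) * sigma * n.
# Every number below is an illustrative assumption, not a BLIP figure.
e = 1.602e-19               # charge per proton, C
I = 100e-6                  # beam current, A (assumed)
protons_per_s = I / e       # protons on target per second

sigma = 100e-27             # 100 mb cross-section in cm^2 (assumed)
n_areal = 1e23              # target nuclei per cm^2, a thick target (assumed)

R = protons_per_s * sigma * n_areal   # product nuclei made per second
print(f"~{R:.1e} product nuclei per second")
```

Even with a cross-section of only ~100 mb, a thick target intercepted by a ~100 μA beam yields trillions of product nuclei per second – the compensation effect described above.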


Although the maximum proton energy is 200 MeV, lower energies can be delivered by sequentially turning off the accelerating sections to achieve 66, 92, 117, 139, 160, 181 and 200 MeV beams. This is the only linac with such a capability, and its energy and intensity can be controlled on a pulse-by-pulse basis. As a result, the linac can simultaneously supply high-intensity pulses to the BLIP and a low-intensity polarised proton beam to the booster synchrotron for injection into the Alternating Gradient Synchrotron (AGS) and the Relativistic Heavy Ion Collider (RHIC) for Brookhaven’s nuclear-physics programme. This shared use allows for cost-effective operation. The BLIP design also enables bombardment of up to eight targets, offering the unique ability to produce multiple radioisotopes at the same time (see table). Target irradiations for radiation-damage studies are also performed, including for materials relevant to collimators used at the LHC and Fermilab.

The Gaussian beam profile of the linac results in very high power density in the target centre. Until recently, the intensity of the beam was limited to 115 μA to ensure the survival of the target. This year, however, a raster system was installed that allows the current on the target to be increased by depositing the beam more uniformly across the target. The system requires rapid-cycling magnets and power supplies to continuously move the beam spot, and has been fully operational since January 2016.

Production of 82Sr is accomplished by irradiating a target comprising rubidium-chloride salt with 117 MeV protons, with the raster parameters driven by the thermal properties of the target. This demanded diagnostic devices in the BLIP beamline that enable the profile of the beam spot to be measured, both for initial device tuning and commissioning and for routine monitoring. These included a laser-profile monitor, beam-position monitor and plunging multi-wire devices. It was also necessary to build an interlock system to detect raster failure, because the target could be destroyed rapidly if the smaller-diameter beam spot stopped moving. The beam is moved in a circular pattern at a rate of 5 kHz with two different radii to create one large and one smaller circle. The radius values and the number of beam pulses for each radius can be programmed to optimise the beam distribution, allowing a five-fold reduction in peak power density.
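The two-radius circular sweep described above can be sketched in a few lines of Python. The 5 kHz rate comes from the text; the radii and the pulse split between the two circles are illustrative assumptions, since the article does not give them:

```python
import math

# Sketch of the programmable two-radius circular raster: the beam spot is
# swept at 5 kHz, with a programmable number of pulses on a large circle
# and a smaller one. Radii and pulse split are illustrative assumptions.
f_raster = 5e3                  # sweep frequency, Hz (from the text)
radii = [10.0, 5.0]             # mm; assumed values for the two circles
pulses_per_radius = [3, 1]      # assumed programmable split between circles

def spot_positions(n_points=8):
    """Beam-spot centres sampled around each circle, with pulse weights."""
    points = []
    for r, weight in zip(radii, pulses_per_radius):
        for k in range(n_points):
            phi = 2 * math.pi * k / n_points
            points.append((r * math.cos(phi), r * math.sin(phi), weight))
    return points

for x, y, w in spot_positions(4):
    print(f"x = {x:6.2f} mm  y = {y:6.2f} mm  pulses = {w}")
```

Programming the radii and the pulse counts is what lets the operators flatten the time-averaged power-density distribution on the target, giving the five-fold reduction in peak power density quoted above.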

Given the resulting increase in current from these upgrades, a parallel effort was required to increase the linac-beam intensity. This was accomplished by extending the present pulse length by approximately five per cent and optimising low-energy beam-transport parameters. These adjustments have now raised the maximum beam current to 173 μA, boosting radioisotope production by more than a third. After irradiation, all targets need to be chemically processed to purify the radioisotope of interest from target material and all other coproduced radioisotopes, which is carried out at Brookhaven’s dedicated target-processing laboratory.
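A quick check of the quoted figures, using only numbers from the text, confirms that the current increase is consistent with the stated production gain:

```python
# Radioisotope yield is proportional to integrated beam current, so the
# upgrade from 115 uA to 173 uA implies roughly a 50% production increase,
# consistent with the article's "more than a third".
old_current = 115   # uA, pre-upgrade limit
new_current = 173   # uA, post-upgrade maximum

gain = new_current / old_current - 1
print(f"Production increase: {gain:.0%}")   # 50%
```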

Tri-lab effort

Among the highest-priority research efforts of the MIRP is to assess the feasibility of using an accelerator to produce the alpha emitter 225Ac. Alpha particles impart a high dose in a very short path length, which means that high doses can be delivered to abnormal diseased tissues while limiting the dose to normal tissues. There have already been several promising preclinical and clinical trials of alpha emitters in the US and Europe, and the 10-day half-life of 225Ac would enable targeted alpha radiotherapy using large proteins such as monoclonal antibodies and peptides for selective treatment of metastatic disease. 225Ac decays through multiple alpha emissions to 213Bi, which is an alpha emitter with a half-life of 46 minutes and can therefore be used with peptides and small molecules for rapid targeted alpha therapy.
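The practical consequence of the 10-day half-life, for instance when shipping targets between laboratories, follows from simple decay bookkeeping; the sample times below are illustrative:

```python
# Fraction of 225Ac activity surviving a delay t, given its 10-day
# half-life (from the text): fraction = 2 ** (-t / T_half).
# The sample delays are illustrative, not from the article.
T_half = 10.0                      # days, 225Ac half-life

def fraction_remaining(t_days):
    """Surviving activity fraction after t_days of decay."""
    return 2 ** (-t_days / T_half)

for t in (1, 5, 10):
    print(f"after {t:2d} d: {fraction_remaining(t):.2f} of activity left")
```

A day in transit costs only a few per cent of the activity, which is what makes 225Ac practical to distribute – unlike its 46-minute daughter 213Bi, which must be generated on site.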


Although 225Ac is the leading-candidate alpha emitter, vital research has been hindered by its very limited availability. To accelerate this development, a formal “Tri-Lab” collaboration has been established between BNL and two other DOE laboratories: Los Alamos National Laboratory (LANL) and Oak Ridge National Laboratory (ORNL). The aim is to evaluate the feasibility of irradiating thorium targets with high-energy proton beams to produce much larger quantities of 225Ac for medical applications. Because radioisotope yield scales directly with beam intensity, the higher the intensity, the higher the yield of these and other useful isotopes. So far, BNL and LANL have measured cross-sections, and developed and irradiated relevant alpha-emitter targets for shipment to ORNL and other laboratories. These include several targets containing up to 5.9 GBq of 225Ac, and others for chemical and biological evaluation of both direct 225Ac use and the use of a generator to provide the shorter-lived 213Bi. Similar irradiation methods are available at LANL and also at TRIUMF in Canada.

Irradiation of thorium metal at high energy also creates copious fission products. This complicates the chemical purification but also creates an opportunity because some coproduced radiometals are of interest for other medical applications. The BNL group therefore plans to develop and evaluate methods to extract these from the irradiated-thorium target in a form suitable for use. In addition to 225Ac, the BNL programme is evaluating the future production of other radioisotopes that can be used as “theranostics”. This term refers to isotope pairs or even the same radioisotope that can be used for both imaging and therapeutic applications. Among the potentially attractive isotopes for this purpose that can be produced at BLIP are the beta- and gamma-emitters 186Re and 47Sc.

BNL has served as a birthplace of nuclear medicine since the 1950s, and saw the first use of high-intensity, high-power beams for radioisotope production. Under the guidance of the DOE isotope programme, the laboratory is using its unique accelerator facilities to develop and supply radioisotopes for imaging and therapy. Completed and future upgrades will allow the large-scale production of alpha emitters and theranostics to meet presently unmet clinical needs, enabling personalised treatments and overall improvements in patient health and quality of life.

CMS gears up for the LHC data deluge

ATLAS and CMS, the large general-purpose experiments at CERN’s Large Hadron Collider (LHC), produce enormous data sets. Bunches of protons circulating in opposite directions around the LHC pile into each other every 25 nanoseconds, flooding the detectors with particle debris. Recording every collision would produce data at an unmanageable rate of around 50 terabytes per second. To reduce this volume for offline storage and processing, the experiments use an online filtering system called a trigger. The trigger system must remove the data from 99.998% of all LHC bunch crossings but keep the tiny fraction of interesting data that drives the experiment’s scientific mission. The decisions made in the trigger, which ultimately dictate the physics reach of the experiment, must be made in real time and are irrevocable.

The trigger system of the CMS experiment has two levels. The first, Level-1, is built from custom electronics in the CMS underground cavern, and reduces the rate of selected bunch crossings from 40 MHz to less than 100 kHz. There is a period of only four microseconds during which a decision must be reached, because data cannot be held within the on-detector memory buffers for longer than this. The second level, called the High Level Trigger (HLT), is software-based. Approximately 20,000 commercial CPU cores, housed in a building on the surface above the CMS cavern, run software that further reduces the crossing rate to an average of about 1 kHz. This is low enough to transfer the remaining data to the CERN Data Centre for permanent storage.
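The reduction factors quoted above can be checked with a few lines of arithmetic (a back-of-the-envelope sketch, not CMS software):

```python
# Sanity check of the two-level trigger reduction described in the text.
bunch_crossing_rate = 40e6   # Hz, LHC bunch-crossing rate seen by CMS
level1_output = 100e3        # Hz, Level-1 accept-rate upper limit
hlt_output = 1e3             # Hz, average High Level Trigger output

level1_rejection = 1 - level1_output / bunch_crossing_rate   # 0.9975
kept_fraction = hlt_output / bunch_crossing_rate             # 2.5e-5

# The full chain discards ~99.998% of all bunch crossings, matching the
# figure quoted in the article.
print(f"Level-1 keeps 1 in {bunch_crossing_rate / level1_output:.0f} crossings")
print(f"overall fraction kept: {kept_fraction:.1e}")
print(f"overall fraction discarded: {100 * (1 - kept_fraction):.4f}%")
```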

The original trigger system served CMS well during Run 1 of the LHC, which provided high-energy collisions at up to 8 TeV from 2010 to 2013. Designed in the late 1990s and operational by 2008, the system allowed the CMS collaboration to co-discover the Higgs boson in multiple final-state topologies. Among hundreds of other CMS measurements, it also allowed us to observe the rare decay Bs → μμ with a significance of 4.3σ.

In Run 2 of the LHC, which got under way last year, CMS faces a much more challenging collision environment. The LHC now delivers both an increased centre-of-mass energy of 13 TeV and increased luminosity beyond the original LHC design of 10³⁴ cm⁻² s⁻¹. While these improve the detector’s capability to observe rare physics events, they also result in severe event “pile-up” due to multiple overlapping proton collisions within a single bunch crossing. This effect not only makes it much harder to select useful crossings, it can drive trigger rates beyond what can be tolerated. This could be partially mitigated by raising the energy thresholds for the selection of certain particles. However, it is essential that CMS maintains its sensitivity to physics at the electroweak scale, both to probe the couplings of the Higgs boson and to catch glimpses of any physics beyond the Standard Model. An improved trigger system is therefore required that makes use of the most up-to-date technology to maintain or improve on the selection criteria used in Run 1.

Thinking ahead

In anticipation of these challenges, CMS has successfully completed an ambitious “Phase-1” upgrade to its Level-1 trigger system that has been deployed for operation this year. Trigger rates are reduced via several criteria: tightening isolation requirements on leptons; improving the identification of hadronic tau-lepton decays; increasing muon momentum resolution; and using pile-up energy subtraction techniques for jets and energy sums. We also employ more sophisticated methods to make combinations of objects for event selection, which is accomplished by the global trigger system (see figure 1).

These new features have been enabled by the use of the most up-to-date Field Programmable Gate Array (FPGA) processors, which provide up to 20 times more processing capacity and 10 times more communication throughput than the technology used in the original trigger system. The use of reprogrammable FPGAs throughout the system offers huge flexibility, and the use of fully optical communications in a standardised telecommunication architecture (microTCA) makes the system more reliable and easier to maintain compared with the previous VME standard used in high-energy physics for decades (see Decisions down to the wire).

Decisions down to the wire

Overall, about 70 processors comprise the CMS Level-1 trigger upgrade. All processors make use of the large-capacity Virtex-7 FPGA from the Xilinx Corporation, and three board variants were produced. The first calorimeter trigger layer uses the CTP7 board, which features an on-board Zynq system-on-chip from Xilinx for control and monitoring. The second calorimeter trigger layer, the barrel muon processors, and the global trigger and global muon trigger use the MP7, which is a generic symmetric processor with 72 optical links for both input and output. Finally, a third, modular variant called the MTF7 is used for the overlap and end-cap muon trigger regions, and features a 1 GB memory mezzanine used for the momentum calculation in the end-cap region. This memory can store the calculation of the momentum from multiple angular inputs in the challenging forward region of CMS, where the magnetic bending is small.
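The memory-based momentum assignment is a classic look-up-table technique: everything expensive is precomputed, and at run time the hardware only forms an address and reads the answer. A toy Python sketch of the idea, using a hypothetical bending relation (pT inversely proportional to the azimuthal deflection Δφ; the real CMS parametrisation is far more elaborate):

```python
# Toy sketch of look-up-table momentum assignment, the technique that the
# MTF7's memory mezzanine implements in hardware. The bending relation and
# all parameters below are illustrative stand-ins.

def build_pt_lut(n_bins=4096, dphi_max=0.25, k=0.03):
    """Precompute pT (GeV) for each quantised Δφ (rad) between two stations."""
    lut = []
    for i in range(n_bins):
        dphi = (i + 0.5) * dphi_max / n_bins   # bin-centre deflection
        lut.append(k / dphi)                   # stiffer tracks bend less
    return lut

LUT = build_pt_lut()

def assign_pt(dphi, dphi_max=0.25, n_bins=4096):
    """At run time the processor only quantises Δφ and reads the table."""
    i = min(int(dphi / dphi_max * n_bins), n_bins - 1)
    return LUT[i]

print(f"Δφ = 0.010 rad → pT ≈ {assign_pt(0.010):.1f} GeV")
print(f"Δφ = 0.100 rad → pT ≈ {assign_pt(0.100):.1f} GeV")
```

The point of the table is latency: however complicated the stored function, the per-muon cost in the trigger is one memory access.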

The Level-1 trigger requires very rapid access to detector information. This is currently provided by the CMS calorimeters and muon system, which have dedicated optical data links for this purpose. The calorimeter trigger system – which is used to identify electrons, photons, tau leptons, and jets, and also to measure energy sums – consists of two processing layers. The first layer is responsible for collecting the data from calorimeter regions, summing the energies from the electromagnetic and hadronic calorimeter compartments, and organising the data to allow efficient processing. These data are then streamed to a second layer of processors in an approach called time-multiplexing. The second layer applies clustering algorithms to identify calorimeter-based “trigger objects” corresponding to single particle candidates, jets or features in the overall transverse-energy flow of the collision. Time-multiplexing allows data from the entire calorimeter for one beam crossing to be streamed to a single processor at full granularity, avoiding the need to share data between processors. Improved energy and position resolutions for the trigger objects, along with the increased logic space available, allows more sophisticated trigger decisions.
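The time-multiplexing scheme described above can be pictured as a round-robin router (an illustration of the principle, not the firmware; the processor count is a placeholder):

```python
# Minimal sketch of time-multiplexed event routing: each bunch crossing's
# complete calorimeter record goes to a single second-layer processor,
# chosen round-robin, so no processor needs data held by its neighbours.
N_PROCESSORS = 9   # illustrative; the real node count is a hardware choice

def route(crossing_number):
    """Pick the layer-2 processor that receives this crossing in full."""
    return crossing_number % N_PROCESSORS

# Consecutive crossings land on different processors, and each processor
# receives a new full-granularity event only every N_PROCESSORS crossings,
# giving it N times longer to run its clustering algorithms.
stream = [route(bx) for bx in range(12)]
print(stream)   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 0, 1, 2]
```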

The muon trigger system also consists of two layers. For the original trigger system, a separate trigger was provided from each of the three muon-detector systems employed at CMS: drift tubes (DT) in the barrel region; cathode-strip chambers (CSC) in the endcap regions; and resistive plate chambers (RPC) throughout the barrel and endcaps. Each system provides unique information useful for making a trigger decision; for example, the superior timing of the RPCs can correct the time assignment of DTs and CSC track segments, as well as provide redundancy in case a specific DT or CSC is malfunctioning.

In Run 2, we combine trigger segments from all of these units at an earlier stage than in the original system, and send them to the muon track-finding system in a first processing layer. This approach creates an improved, highly robust muon trigger that can take advantage of the specific benefits of each technology earlier in the processing chain. The second processing layer of the muon trigger takes as input the tracks from 36 track-finding processors to identify the best eight candidate muons. It cancels duplicate tracks that occur along the boundaries of processing layers, and will in the future also receive information from the calorimeter trigger to identify isolated muons. These are a signature of interesting rare particle decays such as those of vector bosons.

A feast of physics

Finally, the global trigger processor collects information from both the calorimeter and muon trigger systems to arrive at the final decision on whether to keep the data from a given beam crossing – again, all in a period of four microseconds or less. The trigger changes made for Run 2 allow an event selection procedure that is much closer to that traditionally performed in software in the HLT or in offline analysis. The global trigger applies the trigger “menu” of the experiment – a large set of selection criteria designed to identify the broad classes of events used in CMS physics analyses. For example, events with a W or Z boson in the final state can be identified by the requirement for one or two isolated leptons above a certain energy threshold; top-quark decays by demanding high-energy leptons and jets in the same bunch crossing; and dark-matter candidates via missing transverse energy. The new system can contain several hundred such items – which is quite a feast of physics – and the complete trigger menu for CMS evolves continually as our understanding improves.
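A trigger menu of this kind can be pictured as a named list of selection predicates OR-ed together. A minimal sketch, with placeholder item names and thresholds rather than the real CMS menu:

```python
# Hedged sketch of a trigger "menu": each item is a named selection
# criterion, and the crossing is kept if any item fires. The items and
# thresholds below are invented for illustration.
menu = {
    "SingleIsoLepton25": lambda ev: any(l["pt"] > 25 and l["iso"] for l in ev["leptons"]),
    "DoubleLepton15":    lambda ev: sum(l["pt"] > 15 for l in ev["leptons"]) >= 2,
    "MET100":            lambda ev: ev["met"] > 100,   # dark-matter-style item
}

def global_trigger(event):
    """Return the list of menu items that fire; keep the event if non-empty."""
    return [name for name, passes in menu.items() if passes(event)]

event = {"leptons": [{"pt": 30, "iso": True}], "met": 20}
print(global_trigger(event))   # ['SingleIsoLepton25']
```

In hardware the OR is evaluated for several hundred items in parallel within the fixed latency budget; recording which items fired, as here, is also what allows trigger rates to be monitored item by item.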

The trigger upgrade was commissioned in parallel with the original trigger system during LHC operations in 2015. This allowed the new system to be fully tested and optimised without affecting CMS physics data collection. Signals from the detector were physically split to feed both the initial and upgraded trigger systems, a project that was accomplished during the LHC’s first long shutdown in 2013–2014. For the electromagnetic calorimeter, for instance, new optical transmitters were produced to replace the existing copper cables and send data to the old and new calorimeter triggers simultaneously. A complete split was not realistic for the barrel muon system, but a large detector slice was prepared nevertheless. The encouraging results obtained during commissioning allowed the final decision to proceed with the upgrade to be taken in early January 2016.

As with the electronics, an entirely new software system had to be developed for system control and monitoring. For example, low-level board communication changed from a PCI-VME bus adapter to a combination of Ethernet and PCI-express. This took two years of effort from a team of experts, but also offered the opportunity to thoroughly redesign the software from the bottom up, with an emphasis on commonality and standardisation for long-term maintenance. The result is a powerful new trigger system with more flexibility to adapt to the increasingly extreme conditions of the LHC while maintaining efficiency for future discoveries (figure 2, previous page).

Although the “visible” work of data analysis at the LHC takes place on a timescale of months or years at institutes across the world, the first and most crucial decisions in the analysis chain happen underground and within microseconds of each proton–proton collision. The improvements made to the CMS trigger for Run 2 mean that a richer and more precisely defined data set can be delivered to physicists working on a huge variety of different searches and measurements in the years to come. Moreover, the new system allows flexibility and routes for expansion, so that event selections can continue to be refined as we make new discoveries and as physics priorities evolve.

The CMS groups that delivered the new trigger system are now turning their attention to the ultimate Phase-2 upgrade that will be possible by around 2025. This will make use of additional information from the CMS silicon tracker in the Level-1 decision, which is a technique never used before in particle physics and will approach the limits of technology, even in a decade’s time. As long as the CMS physics programme continues to push new boundaries, the trigger team will not be taking time off.

MAX IV paves the way for ultimate X-ray microscope

Since the discovery of X-rays by Wilhelm Röntgen more than a century ago, researchers have striven to produce smaller and more intense X-ray beams. With a wavelength similar to interatomic spacings, X-rays have proved to be an invaluable tool for probing the microstructure of materials. But a higher spectral power density (or brilliance) enables a deeper study of the structural, physical and chemical properties of materials, in addition to studies of their dynamics and atomic composition.

For the first few decades following Röntgen’s discovery, the brilliance of X-rays remained fairly constant due to technical limitations of X-ray tubes. Significant improvements came with rotating-anode sources, in which the heat generated by electrons striking an anode could be distributed over a larger area. But it was the advent of particle accelerators in the mid-1900s that gave birth to modern X-ray science. A relativistic electron beam traversing a circular storage ring emits X-rays in a tangential direction. First observed in 1947 by researchers at General Electric in the US, such synchrotron radiation has taken X-ray science into new territory by providing smaller and more intense beams.

Generation game

First-generation synchrotron X-ray sources were accelerators built for high-energy physics experiments, which were used “parasitically” by the nascent synchrotron X-ray community. As this community started to grow, stimulated by the increased flux and brilliance at storage rings, the need for dedicated X-ray sources with different electron-beam characteristics resulted in several second-generation X-ray sources. As with previous machines, however, the source of the X-rays was the bending magnets of the storage ring.

The advent of special “insertion devices” led to present-day third-generation storage rings – the first being the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, and the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory in Berkeley, California, which began operation in the early 1990s. Instead of using only the bending magnets as X-ray emitters, third-generation storage rings have straight sections that allow periodic magnet structures called undulators and wigglers to be introduced. These devices consist of rows of short magnets with alternating field directions so that the net beam deflection cancels out. Undulators can house 100 or so permanent short magnets, each emitting X-rays in the same direction, which boosts the intensity of the emitted X-rays by two orders of magnitude. Furthermore, interference effects between the emitting magnets can concentrate X-rays of a given energy by another two orders of magnitude.
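The interference condition responsible for this concentration is the standard on-axis undulator resonance, λ = λu(1 + K²/2)/(2γ²), where λu is the magnet period, γ the electron Lorentz factor and K the deflection parameter. A quick numerical illustration with assumed parameters (an 18 mm period, a 3 GeV beam and K = 1 are placeholders, not a specific device):

```python
# On-axis undulator resonance: λ = λu / (2 γ²) · (1 + K²/2).
# The device parameters below are illustrative, not those of a real beamline.
def resonant_wavelength(period_m, electron_energy_gev, K):
    gamma = electron_energy_gev * 1e3 / 0.511   # Lorentz factor (m_e c² ≈ 0.511 MeV)
    return period_m / (2 * gamma**2) * (1 + K**2 / 2)

# An 18 mm-period device on a 3 GeV ring, K = 1:
lam = resonant_wavelength(18e-3, 3.0, 1.0)
print(f"fundamental ≈ {lam * 1e10:.2f} Å")   # Å-scale X-rays
```

The 1/γ² factor is what turns a centimetre-scale magnet period into an ångström-scale photon wavelength, and tuning K (via the magnet gap) shifts the resonance.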

Third-generation light sources have been a major success story, thanks in part to the development of excellent modelling tools that allow accelerator physicists to produce precise lattice designs. Today, there are around 50 third-generation light sources worldwide, with a total number of users in the region of 50,000. Each offers a number of X-ray beamlines (up to 40 at the largest facilities) that fan out from the storage ring: X-rays pass through a series of focusing and other elements before being focused on a sample positioned at the end station, with the longest beamlines (measuring 150 m or more) at the largest light sources able to generate X-ray spot sizes a few tens of nanometres in diameter. Facilities typically operate around the clock, with teams of users spending anywhere from a few hours to a few days on experimental shifts before returning to their home institutes with the data.

Although the corresponding storage-ring technology for third-generation light sources has been regarded as mature, a revolutionary new lattice design has led to another step up in brightness. The MAX IV facility at Maxlab in Lund, Sweden, which was inaugurated in June, is the first such facility to demonstrate the new lattice. Six years in construction, the facility has demanded numerous cutting-edge technologies – including vacuum systems developed in conjunction with CERN – to become the most brilliant source of X-rays in the world.

The multi-bend achromat

Initial ideas for the MAX IV project started at the end of the 20th century. Although the flagship of the Maxlab laboratory, the low-budget MAX II storage ring, was one of the first third-generation synchrotron radiation sources, it was soon outcompeted by several larger and more powerful sources entering operation. Something had to be done to maintain Maxlab’s accelerator programme.

The dominant magnetic lattice at third-generation light sources consists of double-bend achromats (DBAs), which have been around since the 1970s. A typical storage ring contains 10–30 achromats, each consisting of two dipole magnets and a number of magnet lenses: quadrupoles for focusing and sextupoles for chromaticity correction (at MAX IV we also added octupoles to compensate for amplitude-dependent tune shifts). The achromats are flanked by straight sections housing the insertion devices, and the dimensions of the electron beam in these sections are minimised by adjusting the dispersion of the beam (which describes the dependence of an electron’s transverse position on its energy) to zero. Other storage-ring improvements, for example faster correction of the beam orbit, have also helped to boost the brightness of modern synchrotrons. The key quantity underpinning these advances is the electron-beam emittance, which is defined as the product of the electron-beam size and its divergence.

Despite such improvements, however, today’s third-generation storage rings have a typical electron-beam emittance of 2–5 nm rad, which is several hundred times larger than the diffraction limit of the X-ray beam itself. This is the point at which the size and spread of the electron beam approach the diffraction properties of X-rays, similar to the Abbe diffraction limit for visible light (see panel below). Models of machine lattices with even smaller electron-beam emittances predict instabilities and/or short beam lifetimes that make the goal of reaching the diffraction limit at hard X-ray energies very distant.

It had long been known that a larger number of bends decreases the emittance (and therefore increases the brilliance) of a storage ring, and in the early 1990s one of the present authors (DE) and others recognised that this could be achieved by incorporating a higher number of bends into the achromats. Such a multi-bend achromat (MBA) guides electrons around corners more smoothly, reducing the growth of the horizontal emittance. A few synchrotrons already employ triple-bend achromats, and the design has also been used in several particle-physics machines, including PETRA at DESY, PEP at SLAC and LEP at CERN, showing that a storage ring with an energy of a few GeV can achieve a very low emittance. To avoid prohibitively large machines, however, the MBA demands much smaller magnets than those currently employed at third-generation synchrotrons.

In 1995, our calculations showed that a seven-bend achromat could yield an emittance of 0.4 nm rad for a 400 m-circumference machine – 10 times lower than the ESRF’s value at the time. The accelerator community also considered a six-bend achromat for the Swiss Light Source and a five-bend achromat for a Canadian light source, but the small number of achromats in these lattices meant that it was difficult to make significant progress towards a diffraction-limited source. One of us (ME) took the seven-bend achromat idea and turned it into a real engineering proposal for the design of MAX IV. But the design then went through a number of evolutions. In 2002, the first layout of a potential new source was presented: a 277 m-circumference, seven-bend lattice that would reach an emittance of 1 nm rad for a 3 GeV electron beam. By 2008, we had settled on an improved design: a 520 m-circumference, seven-bend lattice with an emittance of 0.31 nm rad, which will be reduced by a factor of two once the storage ring is fully equipped with undulators. This is more or less the design of the final MAX IV storage ring.

In total, the team at Maxlab spent almost a decade finding ways to keep the lattice circumference at a value that was financially realistic, and even constructed a 36 m-circumference storage ring called MAX III to develop the necessary compact magnet technology. There were dozens of problems to overcome. Because the electron density was so high, we also had to elongate the electron bunches by a factor of four using a second radio-frequency (RF) cavity system.

Block concept

MAX IV stands out in that it contains two storage rings, operated at energies of 1.5 and 3 GeV. Because of the rings’ different energies, and because they share an injector and other infrastructure, high-quality undulator radiation can be produced over a wide spectral range at marginal additional cost. The storage rings are fed electrons by a 3 GeV S-band linac made up of 18 accelerator units, each comprising one SLAC Energy Doubler RF station. To optimise the economy over a potential three-decade operating lifetime, and also to favour redundancy, a low accelerating gradient is used.

The 1.5 GeV ring at MAX IV consists of 12 DBAs, each comprising one solid-steel block that houses all of the DBA magnets (bends and lenses). The magnet-block concept, which is also used in the 3 GeV ring, has several advantages. First, it enables the magnets to be machined with high precision and aligned to a tolerance of less than 10 μm without having to invest in alignment laboratories. Second, blocks containing a handful of individual magnets arrive wired and plumbed directly from the supplier, and no special girders are needed because the magnet blocks are rigidly self-supporting. Last, the magnet-block concept is a low-cost solution.

We also needed to build a different vacuum system, because the small vacuum tube dimensions (2 cm in diameter) yield a very poor vacuum conductance. Rather than try to implement closely spaced pumps in such a compact geometry, our solution was to build 100% NEG-coated vacuum systems in the achromats. NEG (non-evaporable getter) technology, which was pioneered at CERN and other laboratories, uses metallic surface sorption to achieve extreme vacuum conditions. The construction of the MAX IV vacuum system raised some interesting challenges, but fortunately CERN had already developed the NEG coating technology to perfection. We therefore entered a collaboration that saw CERN coat the most intricate parts of the system, and licences were granted to companies who manufactured the bulk of the vacuum system. Later, vacuum specialists from the Budker Institute in Novosibirsk, Russia, mounted the linac and 3 GeV-ring vacuum systems.

Due to the small beam size and high beam current, intra-beam scattering and “Touschek” lifetime effects must also be addressed. Both arise from the high electron density in small-emittance, high-current rings, in which electrons collide with one another. Large energy changes bring some electrons outside the energy acceptance of the ring, while smaller energy deviations cause the beam size to grow too much. For these reasons, a low-frequency (100 MHz) RF system with bunch-elongating harmonic cavities was introduced to decrease the electron density and stabilise the beam. This RF system also allows powerful commercial solid-state FM transmitters to be used as RF sources.

When we first presented the plans for the radical MAX IV storage ring in around 2005, people working at other light sources thought we were crazy. The new lattice promised a factor of 10–100 increase in brightness over existing facilities at the time, offering users unprecedented spatial resolutions and taking storage rings within reach of the diffraction limit. Construction of MAX IV began in 2010 and commissioning began in August 2014, with regular user operation scheduled for early 2017.

On 25 August 2015, an amazed accelerator staff sat watching the beam-position-monitor read-outs of MAX IV’s 3 GeV ring. With just the calculated magnet settings plugged in, and thanks to the precisely CNC-machined magnet blocks, each containing a handful of integrated magnets, the beam circulated turn after turn with the proper behaviour. A number of problems nevertheless remained to be solved for the 3 GeV ring. These included dynamic issues – such as betatron tunes, dispersion, chromaticity and emittance – in addition to more trivial technical problems such as sparking RF cavities and faulty power supplies.

As of MAX IV’s inauguration on 21 June, the injector linac and the 3 GeV ring are operational, with the linac also delivering X-rays to the Short Pulse Facility. A circulating current of 180 mA can be stored in the 3 GeV ring with a lifetime of around 10 h, and we have verified the design emittance with a value in the region of 300 pm rad. Beamline commissioning is also well under way, with some 14 beamlines under construction and a goal to increase that number to more than 20.

Sweden has a well-established synchrotron-radiation user community, although around half of MAX IV users will come from other countries. A variety of disciplines and techniques are represented nationally, which must be mirrored by MAX IV’s beamline portfolio. Detailed discussions between universities, industry and the MAX IV laboratory therefore take place prior to any major beamline decisions. The high brilliance of the MAX IV 3 GeV ring and the temporal characteristics of the Short Pulse Facility are a prerequisite for the most advanced beamlines, with imaging being one promising application.

Towards the diffraction limit

MAX IV could not have reached its goals without a dedicated staff and help from other institutes. CERN helped us with the intricate NEG-coated vacuum system, the Budker Institute with the installation of the linac and ring vacuum systems, and the brand-new Solaris light source in Krakow, Poland (an exact copy of the MAX IV 1.5 GeV ring) has helped with operations; many other labs have offered advice. The MAX IV facility has also been marked out for its environmental credentials: its energy consumption is reduced by the use of high-efficiency RF amplifiers and small magnets with low power consumption. Even the water-cooling system of MAX IV transfers heat to the nearby city of Lund to warm houses.

The MAX IV ring is the first of the MBA kind, but several MBA rings are now under construction at other facilities, including the ESRF, Sirius in Brazil and the Advanced Photon Source (APS) at Argonne National Laboratory in the US. The ESRF is developing a hybrid MBA lattice that would enter operation in 2019 and achieve a horizontal emittance of 0.15 nm rad. The APS has decided to pursue a similar design that could enter operation by the end of the decade; being larger than the ESRF, the APS can strive for an even lower emittance of around 0.07 nm rad. Meanwhile, the ALS in California is moving towards a conceptual design report, and SPring-8 in Japan is pursuing a hybrid MBA that will enter operation on a similar timescale.

Indeed, some 10 rings in total are currently under construction or planned. We can therefore look forward to a new generation of synchrotron storage rings delivering X-rays with very high transverse coherence. We will then have witnessed an increase of 13–14 orders of magnitude in the brightness of synchrotron X-ray sources over a period of seven decades, and put the diffraction limit at high X-ray energies firmly within reach.

One proposal would see such a diffraction-limited X-ray source installed in the 6.3 km-circumference tunnel that once housed the Tevatron collider at Fermilab, Chicago. Perhaps a more plausible scenario is PETRA IV at DESY in Hamburg, Germany. Currently the PETRA III ring is one of the brightest in the world, but this upgrade (if it is funded) could bring the ring performance to the diffraction limit at hard X-ray energies. This is the Holy Grail of X-ray science, providing the highest resolution and signal-to-noise ratio possible, in addition to the lowest radiation damage and the fastest data collection. Such an X-ray microscope will allow the study of ultrafast chemical reactions and other processes, taking us to the next chapter in synchrotron X-ray science.

Towards the X-ray diffraction limit

Electromagnetic radiation faces a fundamental limit in terms of how sharply it can be focused. For visible light this is the Abbe limit, derived by Ernst Karl Abbe in 1873. For a light source, the corresponding diffraction-limited emittance is λ/(4π), where λ is the wavelength of the radiation. Reaching this limit for X-rays emitted from a storage ring (approximately 10 pm rad) is highly desirable from a scientific perspective: not only would it bring X-ray microscopy to its limit, but material structure could be determined with much less X-ray damage and fast chemical reactions could be studied in situ. Currently, the electron beam travelling in a storage ring dilutes the X-ray emittance by orders of magnitude. Because this quantity determines the brilliance of the X-ray beam, reaching the X-ray diffraction limit is a matter of reducing the electron-beam emittance as far as possible.
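The λ/(4π) expression fixes the scale immediately; for 1 Å (hard X-ray) radiation it gives about 8 pm rad, consistent with the roughly 10 pm rad figure above. A quick check:

```python
import math

# Diffraction-limited emittance λ/(4π), evaluated for 1 Å radiation.
def diffraction_limited_emittance(wavelength_m):
    return wavelength_m / (4 * math.pi)   # emittance in m·rad

eps = diffraction_limited_emittance(1e-10)
print(f"{eps * 1e12:.1f} pm rad")   # ~8 pm rad
```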

The emittance scales as Cq·E²/N³, where Cq is the ring magnet-lattice constant, E is the electron energy and N is the number of dipole magnets. It has two components: horizontal (given by the magnet lattice and electron energy) and vertical (mainly caused by coupling from the horizontal emittance). While the vertical emittance is, in principle, controllable and small compared with the horizontal emittance, the latter has to be minimised by choosing an optimised magnet lattice with a large number of magnet elements.

Because Cq can be brought to the theoretical minimum emittance limit and E is given by the desired spectral range of the X-rays, the only remaining parameter with which we can decrease the electron-beam emittance is N. Simply increasing the number of achromats to increase N turns out not to be practical, however, because the rings become too big and expensive and/or the electrons tend to become unstable and leave the ring. Instead, a clever compromise called the multi-bend achromat (MBA), based on compact magnets and vacuum chambers, allows more magnet elements to be incorporated around a storage ring without increasing its diameter, and in principle this design could allow a future storage ring to achieve the diffraction limit.
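The cubic dependence on N in the Cq·E²/N³ scaling is what makes the MBA so effective. A quick illustration with assumed dipole counts (20 achromats with two bends each versus seven bends each; these are illustrative numbers, not a specific lattice):

```python
# Emittance scales as 1/N³ at fixed energy and lattice constant, so the
# dipole count N is a powerful lever. Dipole counts below are illustrative.
def emittance_ratio(n_old, n_new):
    """Emittance reduction factor from increasing the dipole count."""
    return (n_new / n_old) ** 3

# 20 achromats: double-bend (N = 40) versus seven-bend (N = 140).
factor = emittance_ratio(40, 140)
print(f"7-bend lattice lowers the emittance by a factor of ~{factor:.0f}")   # ~43
```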

 

Particle flow in CMS

In hadron-collider experiments, jets are traditionally reconstructed by clustering photon and hadron energy deposits in the calorimeters. As the information from the inner tracking system is completely ignored in the reconstruction of jet momentum, the performance of such calorimeter-based reconstruction algorithms is seriously limited. In particular, the energy deposits of all jet particles are clustered together, and the jet energy resolution is driven by the calorimeter resolution for hadrons – typically 100%/√E in CMS – and by the non-linear calorimeter response. Also, because the trajectories of low-energy charged hadrons are bent away from the jet axis in the 3.8 T field of the CMS magnet, their energy deposits in the calorimeters are often not clustered into the jets. Finally, low-energy hadrons may even be invisible if their energies lie below the calorimeter detection thresholds.

In contrast, in lepton-collider experiments, particles are identified individually through their characteristic interaction pattern in all detector layers, which allows the reconstruction of their properties (energy, direction, origin) in an optimal manner, even in highly boosted jets at the TeV scale. This approach was first introduced at LEP with great success, before being adopted as the baseline for the design of future detectors for the ILC, CLIC and the FCC-ee. The same ambitious approach has been adopted by the CMS experiment, for the first time at a hadron collider. For example, the presence of a charged hadron is signalled by a track connected to calorimeter energy deposits. The direction of the particle is indicated by the track before any deviation in the field, and its energy is calculated as a weighted average of the track momentum and the associated calorimeter energy. These particles, which typically carry about 65% of the energy of a jet, are therefore reconstructed with the best possible energy resolution. Calorimeter energy deposits not connected to a track are identified either as photons or as neutral hadrons. Photons, which typically represent 25% of the jet energy, are reconstructed with the excellent energy resolution of the CMS electromagnetic calorimeter. Consequently, only 10% of the jet energy – the average fraction carried by neutral hadrons – needs to be reconstructed solely using the hadron calorimeter, with its 100%/√E resolution. In addition to these types of particles, the algorithm identifies and reconstructs leptons with improved efficiency and purity, especially in the busy jet environment.
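The weighted average mentioned above can be sketched as an inverse-variance combination. This is not the actual CMS algorithm, only a minimal illustration; the flat 1% tracker resolution is an assumption for the example, while the 100%/√E stochastic term is the hadron-calorimeter resolution quoted in the text.

```python
import math

def combine_energy(p_track, e_calo, sigma_track_rel=0.01):
    """Inverse-variance weighted combination of a charged hadron's
    track momentum and its linked calorimeter energy (sketch only).

    Assumes a flat 1% relative tracker resolution and a 100%/sqrt(E)
    calorimeter resolution, with energies in GeV."""
    sigma_track = sigma_track_rel * p_track
    sigma_calo = math.sqrt(e_calo)  # 100%/sqrt(E) -> sigma = sqrt(E)
    w_track = 1.0 / sigma_track**2
    w_calo = 1.0 / sigma_calo**2
    return (w_track * p_track + w_calo * e_calo) / (w_track + w_calo)

# For a 10 GeV hadron, the track (sigma ~ 0.1 GeV) vastly outweighs the
# calorimeter deposit (sigma ~ 3 GeV), so the combined estimate stays
# very close to the track momentum.
print(combine_energy(10.0, 9.0))
```

The design point is clear from the weights: wherever a good track exists, the combination inherits the tracker's resolution, which is why charged hadrons carry the best-measured 65% of the jet energy.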

Key ingredients for the success of particle flow are excellent tracking efficiency and purity, the ability to resolve the calorimeter energy deposits of neighbouring particles, and unambiguous matching of charged-particle tracks to calorimeter deposits. The CMS detector, while not designed for this purpose, turned out to be well suited for particle flow. Charged-particle tracks are reconstructed with an efficiency greater than 90% and a rate of false track reconstruction at the per cent level, down to a transverse momentum of 500 MeV. Excellent separation of charged-hadron and photon energy deposits is provided by the granular electromagnetic calorimeter and the large magnetic-field strength. Finally, the two calorimeters are placed inside the magnet coil, which minimises the probability for a charged particle to shower before reaching the calorimeters, and therefore facilitates the matching between tracks and calorimeter deposits.

After particle flow, the list of reconstructed particles resembles that provided by an event generator. It can be used directly to reconstruct jets and the missing transverse momentum, to identify hadronic tau decays, and to quantify lepton isolation. Figure 1 illustrates, in a given event, the accuracy of the particle reconstruction by comparing the jets of reconstructed particles to the jets of generated particles. Figure 2 further demonstrates the dramatic improvement in jet-energy resolution with respect to the calorimeter-based measurement. In addition, particle flow improves the jet angular resolution by a factor of three and reduces the systematic uncertainty in the jet-energy scale by a factor of two. The influence of particle flow is, however, far from restricted to jets: there are, for example, similar improvements in the missing transverse-momentum reconstruction, and the tau-identification background rate is reduced by a factor of three. This new approach to reconstruction also paved the way for particle-level pile-up mitigation methods, such as the identification and masking of charged hadrons from pile-up before clustering jets or estimating lepton isolation, and the use of machine learning to estimate the contribution of pile-up to the missing transverse momentum.

The algorithm, optimised before the start of LHC Run I in 2009, remains essentially unchanged for Run II, because the reduced bunch spacing of 25 ns could be accommodated by a simple reduction of the time windows for the detector hits. The future CMS upgrades have been planned with optimal particle-flow (and therefore physics) performance in mind. In the first phase of the upgrade programme, a new pixel layer will reduce the rate of false charged-particle tracks, while the read-out of multiple layers with low-noise photodetectors in the hadron calorimeter will improve the neutral-hadron measurement that limits the jet-energy resolution. The second phase includes extended tracking, allowing full particle-flow reconstruction in the forward region, and a new high-granularity endcap calorimeter with extended particle-flow capabilities. The future is therefore bright for the CMS particle-flow reconstruction concept.

• CMS Collaboration, “Particle flow and global event description in CMS”, in preparation.

The LSC welcomes new experiments

The Canfranc Underground Laboratory (LSC) in Spain is one of four European deep-underground laboratories, together with Gran Sasso (Italy), Modane (France) and Boulby (UK). The laboratory is located at Canfranc Estación, a small town in the Spanish Pyrenees situated about 1100 m above sea level. Canfranc is known for the railway tunnel that was inaugurated in 1928 to connect Spain and France. The huge station – 240 m long – was built on the Spanish side, and still stands as a testament to the site's history, although railway operation stopped in 1970.

In 1985, Angel Morales and his collaborators from the University of Zaragoza started to use the abandoned underground space to carry out astroparticle-physics experiments. In the beginning, the group used two service cavities, currently called LAB780. In 1994, during the excavation of the 8 km-long road tunnel (Somport tunnel), an experimental hall of 118 m² was built 2520 m away from the Spanish entrance. This hall, called LAB2500, was used to install a number of experiments carried out by several international collaborations. In 2006, two additional larger halls – hall A and hall B, collectively called LAB2400 – were completed and ready for use. The LSC was born.

Today, some 8400 m³ are available for experimental installations at Canfranc in the main underground site (LAB2400), and a total volume of about 10,000 m³ on a surface area of 1600 m² is available among the different underground structures. LAB2400 has about 850 m of rock overburden, with a residual cosmic-muon flux of about 4 × 10⁻³ m⁻² s⁻¹. The radiogenic neutron background (< 10 MeV) and the gamma-ray flux from natural radioactivity in the rock environment at the LSC are of the order of 3.5 × 10⁻⁶ n/(cm² s) and 2 γ/(cm² s), respectively. The neutron flux is about 30 times less intense than at the surface. The radon level underground is kept at around 50–80 Bq/m³ by a ventilation system with a fresh-air input of about 19,600 m³/h and 6300 m³/h for halls A and B, respectively. To reduce the natural levels of radioactivity, a new radon-filtering system and a radon detector with a sensitivity at the mBq/m³ level will be installed in hall A in 2016, to be used by the experiments.

The underground infrastructure also includes a clean room to support detector assembly and to maintain the high level of cleanliness required for the most important components. A low-background screening facility, equipped with seven high-purity germanium γ-spectrometers, is available to experiments that need to select components with low radioactivity for their detectors. The screening facility has recently been used by the SuperKGd collaboration to measure the radiopurity of gadolinium salts for the Super-Kamiokande gadolinium project.

A network of 18 optical fibres, each equipped with humidity and temperature sensors, is installed in the main halls to monitor the rock stability. The sensitivity of the measurement is at the micrometre level; so far, over four years, changes of 0.02% have been measured over 10 m scale lengths.

The underground infrastructure is complemented by a modern 1800 m² building on the surface, which houses offices, chemistry and electronics laboratories, a workshop and a warehouse. Currently, some 280 scientists from around the world use the laboratory's facilities to carry out their research.

The scientific programme at the LSC focuses on searches for dark matter and neutrinoless double beta decay, but it also includes experiments on geodynamics and on life in extreme environments.

Neutrinoless double beta decay

Unlike the two-neutrino mode observed in a number of nuclear decays (ββ2ν, e.g. ¹³⁶Xe → ¹³⁶Ba + 2e⁻ + 2ν̄e), the neutrinoless mode of double beta decay (ββ0ν, e.g. ¹³⁶Xe → ¹³⁶Ba + 2e⁻) is as yet unobserved. The experimental signature of neutrinoless double beta decay would be two electrons with total energy equal to the energy released in the nuclear transition. Observing this phenomenon would demonstrate that the neutrino is its own antiparticle, and is one of the main challenges of the physics research carried out in underground laboratories. The NEXT experiment at the LSC aims to search for this signature in a high-pressure time projection chamber (TPC), using xenon enriched in ¹³⁶Xe. The NEXT TPC is designed with a plane of photomultipliers at the cathode and a plane of silicon photomultipliers behind the anode, which determine the energy and the topology of the event, respectively. In this way, background from natural radioactivity and from the environment can be efficiently rejected. In its final configuration, NEXT will use 100 kg of ¹³⁶Xe at a pressure of 15 bar. A demonstrator of the TPC with 10 kg of xenon, named NEW, is currently being commissioned at the LSC.

The Canfranc Laboratory also hosts R&D programmes in support of projects that will be carried out in other laboratories. An example is BiPo, a high-sensitivity facility that measures the radioactivity of thin foils for planar detectors. Currently, BiPo is performing measurements for the SuperNEMO project proposed at the Modane laboratory. SuperNEMO aims to use 100 kg of ⁸²Se in thin foils to search for ββ0ν signatures. These foils must have very low contamination from other radioactive elements: in particular, less than 10 μBq/kg of ²¹⁴Bi from the ²³⁸U decay chain, and less than 2 μBq/kg of ²⁰⁸Tl from the ²³²Th decay chain. These levels of radioactivity are too small to be measured with standard instruments. The BiPo experiment provides a technical solution for this very delicate measurement: a thin ⁸²Se foil (40 mg/cm²) is inserted between two detection modules equipped with scintillators and photomultipliers to tag ²¹⁴Bi and ²⁰⁸Tl.

Dark matter

The direct detection of dark matter is another typical research activity of underground laboratories. At the LSC, two projects are in operation for this purpose: ANAIS and ArDM. In its final configuration, ANAIS will be an array of 20 ultrapure NaI(Tl) crystals that aims to investigate the annual-modulation signature of dark-matter particles from the galactic halo. Each 12.5 kg crystal is housed in a high-purity electroformed copper shield made at the LSC chemistry laboratory. Roman lead of 10 cm thickness, plus other lead structures totalling 20 cm, is installed around the crystals, together with an active muon veto and passive neutron shielding. In 2016, the ANAIS detector will be in operation with a total of 112 kg of high-purity NaI(Tl) crystals.

A different experimental approach is adopted by the ArDM detector. ArDM makes use of two tonnes of liquid argon to search for WIMP interactions in a two-phase TPC. The TPC is viewed by two arrays of 12 PMTs and can operate in single phase (liquid only) or double phase (liquid and gas). The single-phase operation mode was successfully tested up to summer 2015, and the collaboration will be starting the two-phase mode by the end of 2015.

Nuclear astrophysics

In recent decades, the scientific community has shown growing interest in measuring the cross-sections of nuclear interactions taking place in stars. At the energies of interest – that is, the average energies of particles at the centres of stars – the expected interaction rates are extremely low, and the resulting signals are so small that the measurements can only be performed in underground laboratories, where background levels are reduced. For this reason, a project has been proposed at the LSC: the Canfranc Underground Nuclear Astrophysics (CUNA) facility. CUNA would require a new experimental hall to host a linear accelerator and the detectors. A feasibility study has been carried out, and further developments are expected in the coming years.

Geodynamics

The geodynamic facility at the LSC aims to study local and global geodynamic events. The installation consists of a broadband seismometer, an accelerometer and two laser strainmeters underground, and two GPS stations on the surface in the surroundings of the underground laboratory. This facility allows seismic events to be studied over a wide spectrum, from seismic waves to tectonic deformations. The laser interferometer consists of two orthogonal 70 m-long strainmeters. Non-linear shallow water tides have been observed with this set-up and compared with predictions. This was possible because of the excellent signal-to-noise ratio for strain data at the LSC.

Life in extreme environments

In the 1990s, it became evident that life on Earth extends into the deep subsurface and extreme environments. Underground facilities can be an ideal laboratory for scientists specialising in astrobiology, environmental microbiology or other similar disciplines. The GOLLUM project proposed at the LSC aims to study micro-organisms inhabiting rocks underground. The project plans to sample the rock throughout the length of the railway tunnel and characterize microbial communities living at different depths (metagenomics) by DNA extraction.

Currently operating mainly in the field of dark matter and the search for rare decays, the LSC has the potential to grow as a multidisciplinary underground research infrastructure. Its large infrastructure, equipped with specialised facilities, allows the laboratory to host a variety of experimental projects. For example, the space previously used by the ROSEBUD experiment is now available to collaborations active in direct dark-matter searches or searches for exotic phenomena using scintillating bolometers or low-temperature detectors. A hut with exceptionally low acoustic and vibrational background, equipped with a 3 × 3 × 4.8 m³ Faraday cage, is available in hall B. This is a unique piece of equipment in an underground facility that, among other things, could be used to characterise new detectors for low-mass dark-matter particles. Moreover, some 100 m² are currently unused in hall A. New ideas and proposals are welcome, and will be evaluated by the LSC International Scientific Committee.

• For further details about the LSC, visit www.lsc-canfranc.es.

SESAME: a bright hope for the Middle East

Synchrotron-light sources have become an essential tool in many branches of medicine, biology, physics, chemistry, materials science, environmental studies and even archaeology. There are some 50 storage-ring-based synchrotron-light sources in the world, including a few in developing countries, but none in the Middle East. SESAME is a 2.5-GeV, third-generation light source under construction near Amman. When it is commissioned in 2016, it will not only be the first light source in the Middle East, but arguably also the region’s first true international centre of excellence.

The members of SESAME are currently Bahrain, Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, the Palestinian Authority and Turkey (others are being sought). Brazil, China, the European Union, France, Germany, Greece, Italy, Japan, Kuwait, Portugal, the Russian Federation, Spain, Sweden, Switzerland, the UK and the US are observers.

SESAME will foster scientific and technological capacity and excellence in the Middle East and neighbouring regions, helping to prevent or reverse the brain drain. It will also build scientific links, and foster better understanding and a culture of peace, through collaboration between peoples with different creeds and political systems.

The origins of SESAME

The need for an international synchrotron light source in the Middle East was recognized more than 30 years ago by the Pakistani Nobel laureate Abdus Salam, one of the fathers of the Standard Model of particle physics. It was also felt by the Middle East Scientific Co-operation group (MESC), based at CERN and in the Middle East and headed by Sergio Fubini. MESC's efforts to promote regional co-operation in science – and also solidarity and peace – began in 1995 with the organization of a meeting in Dahab, Egypt, at which the Egyptian minister of higher education, Venice Gouda, and Eliezer Rabinovici of MESC and the Hebrew University in Israel – now a delegate to the CERN and SESAME councils – took an official stand in support of Arab–Israeli co-operation.

In 1997, Herman Winick of SLAC and the late Gustav-Adolf Voss of DESY suggested building a light source in the Middle East using components of the soon-to-be decommissioned BESSY I facility in Berlin. This brilliant proposal fell on fertile ground when it was presented and pursued during workshops organized in Italy (1997) and Sweden (1998) by MESC and Tord Ekelof, of MESC and Uppsala University. At the request of Fubini and Herwig Schopper, a former director-general of CERN, the German government agreed to donate the components of BESSY I to SESAME, provided that the dismantling and transport – eventually funded by UNESCO – were taken care of by SESAME.

The plan was brought to the attention of Federico Mayor, then director-general of UNESCO, who called a meeting of delegates from the Middle East and neighbouring regions at the organization’s headquarters in Paris in June 1999. The meeting launched the project by setting up an International Interim Council with Schopper as chair. Jordan was selected to host SESAME, in a competition with five other countries from the region. It has provided the land and funded the construction of the building.

In May 2002, the Executive Board of UNESCO unanimously approved the establishment of the new centre under UNESCO’s auspices. SESAME formally came into existence in April 2004, when the permanent council was established, and ratified the appointments of Schopper as president and of the first vice-presidents, Dincer Ülkü of Turkey and Khaled Toukan of Jordan. A year later, Toukan stepped down as vice-president and became director of SESAME.

Meanwhile, the ground-breaking ceremony was held in January 2003, and construction work began the following August. Since February 2008, SESAME has been working from its own premises, which were formally opened in November 2008 in a ceremony held under the auspices of King Abdullah II of Jordan, and with the participation of Prince Ghazi Ben Mohammed of Jordan and Koïchiro Matsuura, then director-general of UNESCO. In November 2008, Schopper stepped down as president of the Council and was replaced by Chris Llewellyn Smith, who is also a former director-general of CERN. In 2014, Rabinovici and Kamal Araj of Jordan became vice-presidents, replacing Tarek Hussein of Egypt and Seyed Aghamiri of Iran.

SESAME users

As at CERN, the users of SESAME will be based in universities and research institutes in the region. They will visit the laboratory periodically to carry out experiments, generally in collaboration. The potential user-community, which is growing rapidly, already numbers some 300, and is expected eventually to grow to between 1000 and 1500. It is being fostered by a series of Users’ Meetings – the 12th, in late 2014, attracted more than 240 applications, of which only 100 could be accepted. The training programme, which is supported by the International Atomic Energy Agency, various governments and many of the world’s synchrotron laboratories, and which includes working visits to operational light sources, is already bringing significant benefits to the region.

Technical developments

In 2002, the decision was taken to build a completely new main storage ring with an energy of 2.5 GeV – compared with the 1 GeV that would have been provided by upgrading the main BESSY I ring – while retaining refurbished elements of the BESSY I microtron (which provides the first stage of acceleration) and of the booster synchrotron. As a result, SESAME will not only be able to probe shorter distances, but will also be a third-generation light source, i.e. one that can accommodate insertion devices – wigglers and undulators – to produce enhanced synchrotron radiation. There are light sources with higher energy and greater brightness, but SESAME's performance (see table) will be good enough to allow users – with the right ideas – to win Nobel prizes.

Progress has not been as rapid as had been hoped, owing mainly to lack of funding, as discussed below. The collapse of the roof under an unprecedented snowfall in December 2013, when it even snowed in Cairo, has not helped. Nevertheless, despite working under the open sky throughout 2014, the SESAME team successfully commissioned the booster synchrotron in September 2014. The beam was brought to the full energy of 800 MeV, essentially without loss, and the booster is now the highest-energy accelerator in the Middle East (CERN Courier November 2014 p5).

The final design of the magnets for the main ring and for the powering scheme was carried out by CERN in collaboration with SESAME. Construction of the magnets is being managed by CERN using funds provided by the European Commission. The first of 16 cells was assembled and successfully tested at CERN at the end of March, and installation will begin later this year (CERN Courier May 2015 p6). If all goes well, commissioning of the whole facility – initially with only two of the four accelerating cavities – should begin in June next year.

The scientific programme

SESAME will nominally have four “day-one” beamlines in Phase 1a, although to speed things up and save money, it will actually start with just two. Three more beamlines will be added in Phase 1b.

One of the beamlines that will be available next year will produce photons with energies of 0.01–1 eV for infrared spectromicroscopy, which is a powerful tool for non-invasive studies of chemical components in cells, tissues and inorganic materials. A Fourier transform infrared microscope, which will be adapted to this beamline, has already been purchased. Meanwhile, 11 proposals from the region to use it with a conventional thermal infrared source have been approved. The microscope has been in use since last year, and the first results include a study of breast cancer by Fatemeh Elmi of the University of Mazandaran, Iran, with Randa Mansour and Nisreen Dahshan, who are PhD students in the Faculty of Pharmacy, University of Jordan. When SESAME is in operation, the infrared beamline will be used in biological applications, environmental studies, materials and archaeological sciences.

An X-ray absorption fine-structure and X-ray fluorescence beamline, with photon energies of 3–30 keV, will also be in operation next year. It will have potential applications in materials and environmental sciences, providing information on chemical states and local atomic structure that can be used for designing new materials and improving catalysts (e.g. for the petrochemical industries). Other applications include the non-invasive identification of the chemical composition of fossils and of valuable paintings.

It is hoped that macro-molecular crystallography and material-science beamlines, with photon energies of 4–14 keV and 3–25 keV, respectively, will be added in the next two years, once the necessary funding is available. The former will be used for structural molecular biology, aimed at elucidating the structures of proteins and other types of biological macromolecules at the atomic level, to gain insight into mechanisms of diseases to guide drug design (as used by pharmaceutical and biotech companies). The latter will use powder diffraction for studies of disordered/amorphous material on the atomic scale. The use of powder diffraction to study the evolution of nanoscale structures and materials in extreme conditions of pressure and temperature has become a core technique for developing and characterizing new smart materials.

In Phase 1b, soft X-ray (0.05–2 keV), small and wide-angle X-ray scattering (8–12 keV) and extreme-ultraviolet (10–200 eV) beamlines will be added. They will be used, respectively, for atomic, molecular and condensed-matter physics; structural molecular biology and materials sciences; and atomic and molecular physics, in a spectral range that provides a window on the behaviour of atmospheric gases, and enables characterization of the electrical and mechanical properties of materials, surfaces and interfaces.

The main challenges

The main challenge has been – and continues to be – obtaining funding. Most of the SESAME members have tiny science budgets, many are in financial difficulties, and some have faced additional problems, such as floods in Pakistan and the huge influx of refugees in Jordan. Not surprisingly, they do not find it easy to pay their contributions to the operational costs, which are rising rapidly as more staff are recruited, and will increase even faster when SESAME comes into operation and is faced with paying large electricity bills at $0.36/kWh and rising. Nevertheless, increasing budgets have been approved by the SESAME Council. As soon as the funding can be found, a solar-power plant, which would soon pay for itself and ease the burden of paying the electricity bill, will be constructed. And SESAME has always been open to new members, who are being sought primarily to share the benefits but also to share the costs.

So far, $65 million has been invested, including the value to SESAME of in-kind contributions of equipment (from Jordan, Germany, the UK, France, Italy, the US and Switzerland), cash contributions to the capital budget (from the EU, Jordan, Israel, Turkey and Italy), and manpower and other operational costs that are paid by the members (but not including important in-kind contributions of manpower, especially from CERN and the French light source, SOLEIL).

Thanks to the contributions already made and additional funding to come from Iran, Israel, Jordan and Turkey, which have each pledged voluntary contributions totalling $5 million, most of the funds that are required simply to bring SESAME into operation next year are now available. At the SESAME Council meeting in May, Egypt announced that it will also make a voluntary contribution, which will narrow the immediate funding gap. More will, however, be needed, to provide additional beamlines and a properly equipped laboratory, and additional funds are being sought from a variety of governments and philanthropic organizations.

The ongoing turbulence in the Middle East has had only two direct effects on SESAME. First, sanctions have made it impossible for Iran to pay its much-needed capital and operational contributions. Second, discussions of Egypt joining other members in making voluntary contributions were interrupted several times by changes in the government.

Outlook

SESAME is a working example of Arab–Israeli–Iranian–Turkish–Cypriot–Pakistani collaboration. Senior scientists and administrators from the region are working together to govern SESAME through the Council, with input from scientists from around the world through its advisory committees. Young and senior scientists from the region are collaborating in preparing the scientific programme at Users’ Meetings and workshops. And the extensive training programme of fellowships, visits and schools is already building scientific and technical capacity in the region.

According to the Italian political theorist Antonio Gramsci, there is a perpetual battle between the optimism of the will and the pessimism of the brain. Several times during its history, SESAME has faced seemingly impossible odds, and pessimists might have given up. Luckily, however, the will prevailed, and SESAME is now close to coming into operation. There are still huge challenges, but we are confident that thanks to the enthusiasm of all those involved they will be met and SESAME will fulfil its founders’ ambitious aims.

The LHC prepares for high-energy collisions

A proton–proton collision at 900 GeV

Following the restart of CERN’s flagship accelerator in early April, commissioning the LHC with beam is progressing well. In the early hours of 10 April, the operations team successfully circulated a beam at 6.5 TeV for the first time – a new world record – but this was only one of many steps to be taken before the accelerator delivers collisions at this beam energy.

The operators reached another important milestone on 21 April, when they succeeded in circulating a nominal-intensity bunch. The first commissioning steps take place with low-intensity "probe" beams – single bunches of 5 × 10⁹ protons. The nominal intensity, in contrast, is a little over 1 × 10¹¹ protons per bunch, and when the LHC is in full operation later this year, some 2800 bunches will circulate in each beam.

To handle the higher number of protons per bunch and the higher number of bunches safely, a number of key systems have to be fully operational and set up with beam. These include the beam-dump system, the beam-interlock system and the collimation system. The latter involves around 100 individual pairs of jaws, each of which has to be positioned with respect to the beam during all of the phases of the machine cycle. Confirmation that everything is as it should be is made by deliberately provoking beam losses and checking that the collimators catch the losses as they are supposed to.

On 2 May, this set-up procedure allowed a nominal-intensity bunch in each beam to be taken to 6.5 TeV. Four days later, collisions were produced at the injection energy of 450 GeV, enabling the experiment teams to record events and check alignment and synchronization of the detectors. One of the important steps in reaching this stage is to commission the “squeeze” – the final phase in the LHC cycle of injection, ramp and squeeze. During this phase, the strengths of the magnetic fields either side of a given experiment are adjusted to reduce the beam size at the corresponding interaction point.

• To find out more, see the LHC reports in CERN Bulletin: bulletin.cern.ch.

Laser set-up generates electron–positron plasma in the lab

More than 99% of the visible universe exists as plasma, the so-called fourth state of matter. Produced by the ionization of gases dominated by hydrogen and helium, these electron–ion plasmas are ubiquitous in the local universe. An exotic fifth state of matter, the electron–positron plasma, exists in the intense environments surrounding compact astrophysical objects such as pulsars and black holes, and until recently such plasmas were exclusively the realm of high-energy astrophysics. However, an international team led by Gianluca Sarri of Queen's University Belfast, together with collaborators in the UK, US, Germany, Portugal and Italy, has now succeeded in producing a neutral electron–positron plasma in a terrestrial laboratory experiment.

Electron–positron plasmas display peculiar features compared with the other states of matter, on account of the symmetry between their constituents, which have equal mass but opposite charge. These plasmas play a fundamental role in the evolution of extreme astrophysical objects, including black holes and pulsars, and are associated with the emission of ultra-bright gamma-ray bursts. Moreover, it is likely that the early universe in the leptonic era – from roughly one second to a few minutes after the Big Bang – consisted almost exclusively of a dense electron–positron plasma in a hot photon bath.

While production of positrons has long been achievable, the formation of a plasma of charge-neutral electron–positron pairs has remained elusive, owing to the practical difficulties in combining equal numbers of these extremely mobile charges. However, the recent success was made possible by looking at the problem from a different perspective. Instead of generating two separate electron and positron populations, and recombining them, it aimed to generate an electron–positron plasma directly, in situ.

In an experiment at the Central Laser Facility at the Rutherford Appleton Laboratory in the UK, Sarri and colleagues made use of a laser-induced plasma wakefield to accelerate an ultra-relativistic electron beam. They focused an ultra-intense and short laser pulse (around 40 fs) onto a mixture of nitrogen and helium gas to produce, in only a few millimetres, electrons with average energies of around 500–600 MeV. This beam was then directed onto a thick slab of a material of high atomic number – lead, in this case – to initiate an electromagnetic cascade, which proceeds mainly as a two-step process. First, high-energy bremsstrahlung photons are generated as electrons or newly created positrons propagate through the electric fields of the nuclei. Then, electron–positron pairs are generated during the interactions of the high-energy photons with the same fields. Under optimum experimental conditions, the team obtained, at the exit of the lead slab, a beam of electrons and positrons in equal numbers and of sufficient density to allow plasma-like behaviour.
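A common rule of thumb for "sufficient density to allow plasma-like behaviour" is that the beam's transverse size must exceed the collisionless skin depth c/ω_p, below which collective plasma effects cannot develop. A rough back-of-the-envelope check can be sketched as follows; the density, Lorentz factor and beam size below are illustrative assumptions, not the experiment's measured values:

```python
import math

# Physical constants (SI units)
e = 1.602e-19     # elementary charge, C
m_e = 9.109e-31   # electron mass, kg
eps0 = 8.854e-12  # vacuum permittivity, F/m
c = 2.998e8       # speed of light, m/s

def skin_depth(n, gamma=1.0):
    """Collisionless skin depth c/omega_p for a lepton density n (m^-3).
    gamma crudely accounts for the relativistic mass increase of the
    pairs (a common approximation for relativistic pair beams)."""
    omega_p = math.sqrt(n * e**2 / (eps0 * gamma * m_e))
    return c / omega_p

# Illustrative numbers (assumptions for this sketch):
n = 1e22      # leptons per m^3 at the exit of the converter
gamma = 30    # typical Lorentz factor of the escaping pairs
D = 1e-3      # transverse beam size, metres

ls = skin_depth(n, gamma)
print(f"skin depth = {ls*1e6:.0f} um; plasma-like if D > skin depth: {D > ls}")
```

With these sample values the skin depth comes out at a few hundred micrometres, smaller than the millimetre-scale beam, so the criterion is satisfied; at much lower pair densities the skin depth would exceed the beam size and the pairs would behave as independent particles rather than a plasma.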

These results represent a real novelty for experimental physics, and pave the way for a new experimental field of research: the study of symmetric matter–antimatter plasmas in the laboratory. Not only will it allow a better understanding of plasma physics from a fundamental point of view, but it should also shed light on some of the most fascinating, yet mysterious, objects in the known universe.

• The Central Laser Facility is supported by the UK's Science and Technology Facilities Council. This experiment is supported by the UK's Engineering and Physical Sciences Research Council.
