CLOUD experiment sharpens climate predictions


Future global climate projections have been put on more solid empirical ground, thanks to new measurements of the production rates of atmospheric aerosol particles by CERN’s Cosmics Leaving OUtdoor Droplets (CLOUD) experiment.

According to the Intergovernmental Panel on Climate Change, the Earth's mean temperature is predicted to rise by 1.5–4.5 °C for a doubling of carbon dioxide in the atmosphere, which is expected by around 2050. One of the main reasons for this large uncertainty, which makes it difficult for society to know how best to act against climate change, is a poor understanding of aerosol particles in the atmosphere and their effects on clouds.

To date, all global climate models use relatively simple parameterisations for aerosol production that are not based on experimental data, in contrast to the highly detailed modelling of atmospheric chemistry and greenhouse gases. Although the models agree with current observations, predictions start to diverge when the models are wound forward to project the future climate.

Now, data collected by CLOUD have been used to build a model of aerosol production based solely on laboratory measurements. The new CLOUD study establishes the main processes responsible for new particle formation throughout the troposphere, which is the source of around half of all cloud seed particles. It could therefore reduce the variation in projected global temperatures as calculated by complex global-circulation models.

“This marks a big step forward in the reliability and realism of how models describe aerosols and clouds,” says CLOUD spokesperson Jasper Kirkby. “It’s addressing the largest source of uncertainty in current climate models and building it on a firm experimental foundation of the fundamental processes.”

Aerosol particles form when certain trace vapours in the atmosphere cluster together, and grow via condensation to a sufficient size that they can seed cloud droplets. Higher concentrations of aerosol particles make clouds more reflective and long-lived, thereby cooling the climate, and it is thought that the increased concentration of aerosols caused by air pollution since the start of the industrial period has offset a large part of the warming caused by greenhouse-gas emissions. Until now, however, the poor understanding of how aerosols form has hampered efforts to estimate the total forcing of climate from human activities.

Thanks to CLOUD’s unique controlled environment, scientists can now understand precisely how new particles form in the atmosphere and grow to seed cloud droplets. In the latest work, published in Science, researchers built a global model of aerosol formation using extensive laboratory-measured nucleation rates involving sulphuric acid, ammonia, ions and organic compounds. Although sulphuric acid has long been known to be important for nucleation, the results show for the first time that observed concentrations of particles throughout the atmosphere can be explained only if additional molecules – organic compounds or ammonia – participate in nucleation. The results also show that ionisation of the atmosphere by cosmic rays accounts for nearly one-third of all particles formed, although small changes in cosmic rays over the solar cycle do not affect aerosols enough to influence today’s polluted climate significantly.

Early this year, CLOUD reported in Nature the discovery that aerosol particles can form in the atmosphere purely from organic vapours produced naturally by the biosphere (CERN Courier July/August 2016 p11). In a separate modelling paper published recently in PNAS, CLOUD shows that such pure biogenic nucleation was the dominant source of particles in the pristine pre-industrial atmosphere. By raising the baseline aerosol state, this process significantly reduces the estimated aerosol radiative forcing from anthropogenic activities and, in turn, reduces modelled climate sensitivities.

“This is a huge step for atmospheric science,” says lead-author Ken Carslaw of the University of Leeds, UK. “It’s vital that we build climate models on experimental measurements and sound understanding, otherwise we cannot rely on them to predict the future. Eventually, when these processes get implemented in climate models, we will have much more confidence in aerosol effects on climate. Already, results from CLOUD suggest that estimates of high climate sensitivity may have to be revised downwards.”

n_TOF deepens search for missing cosmic lithium

An experiment at CERN's neutron time-of-flight (n_TOF) facility has filled in a missing piece of the cosmological-lithium puzzle, according to a report published in Physical Review Letters. Along with a few other light elements such as hydrogen and helium, much of the lithium in the universe is thought to have been produced in the very early universe during a process called Big-Bang nucleosynthesis (BBN). For hydrogen and helium, BBN theory is in excellent agreement with observations. But the amount of lithium (⁷Li) observed is about three times smaller than predicted – a discrepancy known as the cosmological-lithium problem.

The n_TOF collaboration has now made a precise measurement of one of the key processes involved – ⁷Be(n,α)⁴He – in an attempt to solve the mystery. The production and destruction of the unstable ⁷Be isotope regulates the abundance of cosmological lithium, but estimates of the probability of ⁷Be destruction via this channel have relied on a single measurement, made at thermal energies in 1963 at the Ispra reactor in Italy. A possible explanation for the higher theoretical value could therefore be an underestimation of the destruction of primordial ⁷Be, in particular in reactions with neutrons.

Now, n_TOF has measured the cross-section of the ⁷Be(n,α)⁴He reaction over a wide range of neutron energies with a high level of accuracy. This was possible thanks to the extremely high luminosity of the neutron beam in the recently constructed experimental area (EAR2) at the n_TOF facility.

The results indicate that, at energies relevant for BBN, the probability for this reaction is 10 times smaller than that used in theoretical calculations. The destruction rate of ⁷Be is therefore even smaller than previously supposed, ruling out this channel as the source of the missing lithium and deepening the mystery of the cosmological-lithium problem.

ATLAS spots light-by-light scattering

The γγ → γγ process proceeds at lowest order via virtual one-loop box diagrams involving fermions, leading to a severe suppression in the cross-section and thus making it very challenging to observe experimentally. To date, light-by-light scattering via an electron–positron loop has been tested precisely, but indirectly, in measurements of the anomalous magnetic moments of the electron and muon. Closely related observations are Delbrück scattering and photon splitting, both of which involve the scattering of a photon from the nuclear Coulomb field, and the fusion of photons into pseudoscalar mesons observed at electron–positron colliders. The direct observation of light-by-light scattering has, however, remained elusive.

It has recently been proposed that light-by-light scattering can be studied using photons produced in relativistic heavy-ion collisions at large impact parameters. Since the electric-field strength of relativistic ions scales with the square of their charge, collisions lead to huge electromagnetic field strengths relative to proton–proton collisions. The phenomenon manifests itself as beams of nearly real photons, allowing the process γγ → γγ to occur directly, while the nuclei themselves generally stay intact. Light-by-light scattering is thus distinguished by the observation of two low-energy photons, back-to-back in azimuth, with no additional activity measured in the detector. Possible backgrounds can arise from misidentified electrons from the QED process γγ → e⁺e⁻, as well as from the central exclusive production of two photons from the fusion of two gluons (gg → γγ).

The ATLAS experiment has conducted a search for light-by-light scattering in 480 μb⁻¹ of lead–lead data recorded at a nucleon–nucleon centre-of-mass energy of 5.02 TeV during the 2015 heavy-ion run. While almost four billion strongly interacting events were provided by the LHC, only 13 diphoton candidates were observed. From the expectation of 7.3 signal events and 2.6 background events, a significance of 4.4σ was obtained for observing one of the most fundamental predictions of QED. With the additional integrated luminosity expected in upcoming runs, further study of the γγ → γγ process will allow tests of extensions of the Standard Model, in which new particles can participate via the loop diagrams, providing an additional window into new physics at the LHC.
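As a rough cross-check of the quoted significance, one can treat the search as a simple one-sided Poisson counting experiment. The sketch below (Python, with SciPy assumed available) ignores systematic uncertainties and the full likelihood used by ATLAS, so the answer is illustrative only – it lands close to, but not exactly at, the published 4.4σ.

```python
# Naive counting-experiment cross-check of the quoted significance.
# The published ATLAS result uses a full likelihood with systematic
# uncertainties; the numbers below are illustrative only.
from scipy.stats import norm, poisson

n_obs = 13   # observed diphoton candidates
b = 2.6      # expected background events
s = 7.3      # expected signal events (not needed for the p-value)

# p-value of the background-only hypothesis: P(N >= 13 | mu = 2.6)
p = poisson.sf(n_obs - 1, b)

# convert the one-sided p-value to a Gaussian significance
z = norm.isf(p)
print(f"p = {p:.2e}, Z = {z:.1f} sigma")  # ~4.5 sigma, close to the quoted 4.4
```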

Studies of electroweak-boson production by CMS

When events containing more than one electroweak boson do arise, the non-Abelian SU(2) nature of the electroweak interaction allows the bosons – generically denoted V – to interact directly with each other. Of particular interest are the direct interactions of three electroweak gauge bosons, whose rate depends on the corresponding triple-gauge-boson-coupling (TGC) strength. Measurements of the rates of single-V and double-VV (diboson) production, and of the strength of TGC interactions, represent fundamental tests of the electroweak sector of the Standard Model (SM).

The inclusive production rates of single W or Z bosons at the LHC have been calculated in the SM to an accuracy of about 3%, while the ratio of the W-to-Z production rates is predicted to even greater precision because certain uncertainties cancel. The CMS collaboration has recently measured the W and Z inclusive production rates and finds their ratio to be 10.46±0.17, in agreement with the SM prediction at the per-cent level. CMS has also measured the ZZ, WZ and WW diboson production rates, finding agreement with the SM predictions within a precision of about 14, 12 and 9%, respectively. These results are based on leptonic-decay modes, specifically decays of a W boson to an electron or muon and the associated neutrino, and of a Z boson to an electron–positron pair or to a muon–antimuon pair.

Leptonic decays provide an unambiguous experimental signature for a W or Z boson but suffer in statistical precision because of relatively small branching fractions. A complementary strategy is to use hadronic decay modes, namely decays of a W or Z boson to a quark–antiquark pair, which benefit from much larger branching fractions but are experimentally more challenging. Each quark or antiquark appears as a collimated stream of particles, or jet, in the detector. Thus the experimental signature for hadronic decays is the presence of two jets. Discriminating between the hadronic decay of a W boson with a mass of 80.4 GeV and that of a Z boson (91.2 GeV) is difficult on an event-by-event basis due to the finite jet-energy resolution. Nonetheless, the separation can be performed on a statistical basis for highly energetic jets (see figure).
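To illustrate what "separation on a statistical basis" means in practice, the toy sketch below (our construction, not the CMS analysis) generates a W/Z mixture of jet masses smeared by an assumed resolution of several GeV; individual events are ambiguous, but a maximum-likelihood fit still recovers the W fraction.

```python
# Toy demonstration of statistical separation of two overlapping
# jet-mass peaks. Masses in GeV; the 8 GeV resolution and the 60%
# W fraction are invented for illustration.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
M_W, M_Z, RES = 80.4, 91.2, 8.0
true_frac_w = 0.6
n_events = 20000

# generate a mixture of W-like and Z-like jet masses
is_w = rng.random(n_events) < true_frac_w
mass = np.where(is_w,
                rng.normal(M_W, RES, n_events),
                rng.normal(M_Z, RES, n_events))

def nll(f):
    """Negative log-likelihood for a W fraction f."""
    pdf = f * norm.pdf(mass, M_W, RES) + (1 - f) * norm.pdf(mass, M_Z, RES)
    return -np.log(pdf).sum()

fit = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
print(f"fitted W fraction: {fit.x:.3f} (true {true_frac_w})")
```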

CMS has selected WV diboson events in which a W boson decays leptonically and a highly energetic V boson decays hadronically. Because of the high V boson energy, the two jets from the V boson decay are partially merged and the WV system can have a very large mass. As a result, the analysis probes a regime where physics beyond the SM might be present. Searches are performed as a function of the mass of the WV system and are used to set limits on anomalous TGC interactions. Results obtained so far have established the viability of the techniques, but much greater sensitivity to the presence of anomalous TGC interactions is expected with the larger data samples that will be analysed in the future.

LHCb searches for strong CP violation

CP violation, which relates to an asymmetry between matter and antimatter, is a well-established feature of the weak interaction that mediates decays of strange, charm and beauty particles. It arises in the Standard Model from a single complex phase in the Cabibbo–Kobayashi–Maskawa matrix that relates the mass and flavour eigenstates of the quarks. However, the strength of the effect is well below what is needed to explain the dominance of matter over antimatter in the present universe. The LHCb collaboration has now looked for evidence of CP violation in the strong interaction, which binds quarks and gluons within hadrons.

In principle, the theory of the strong interactions, quantum chromodynamics (QCD), allows for a CP-violating component, but measurements of the electric dipole moment of the neutron have shown that any effect in QCD must be very small indeed. This apparent absence of CP violation in QCD is known as “the strong CP problem”.

One way to look for evidence of CP violation in strong interactions is to search for η and η′(958) meson decays to pairs of charged pions, η(′) → π⁺π⁻, both of which would violate CP symmetry. The LHCb collaboration has recently used its copious production of charm mesons to perform such a search, establishing a new method to isolate potential samples of η and η′ decays into two pions. The D⁺ and Ds⁺ mesons (and their charge conjugates) have well-measured decay modes to ηπ⁺ and η′π⁺, as well as to π⁺π⁺π⁻. Therefore, any η or η′ decays to π⁺π⁻ would potentially show up as narrow peaks in the π⁺π⁻ mass spectra from D⁺ and Ds⁺ decays to π⁺π⁺π⁻.

The LHCb team used a sample of about 25 million each of D⁺ and Ds⁺ meson decays to π⁺π⁺π⁻, collected during Run 1 and the first year of Run 2 of the LHC (figure 1). The analysis used a boosted decision tree to suppress backgrounds, with fits to the π⁺π⁻ mass spectra from the D⁺ and Ds⁺ decays used to set limits on the amount of η and η′ that could be present. No evidence for the CP-violating decays was found and upper limits were set on the branching fractions, at 90% confidence level, of less than 1.6 × 10⁻⁵ for η → π⁺π⁻ and 1.8 × 10⁻⁵ for η′(958) → π⁺π⁻. The result for the η meson is comparable with the current world best, while that for the η′ is a factor of three below the previous best, further constraining the possibility for a new CP-violating mechanism in strong interactions.
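For readers unfamiliar with how a fitted yield becomes a limit, the fragment below shows the final step in its simplest Gaussian form. This is our simplification only – LHCb's actual procedure involves the boosted decision tree and full mass fits – and every number in it is invented for illustration.

```python
# One-sided Gaussian upper limit on a fitted signal yield that is
# consistent with zero. A simplified illustration of the last step
# of a limit-setting analysis, not the LHCb procedure.
from scipy.stats import norm

def upper_limit(n_fit, sigma_fit, cl=0.90):
    """90% CL upper limit on a yield, Gaussian approximation."""
    return max(n_fit, 0.0) + norm.ppf(cl) * sigma_fit

# e.g. a hypothetical fit returning 12 +- 25 candidate decays:
n_ul = upper_limit(12.0, 25.0)
print(f"N_UL = {n_ul:.0f} events at 90% CL")  # ~44 events
# dividing by the efficiency-corrected yield of a well-measured
# normalisation mode would then give the branching-fraction limit
```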

ALICE prepares for high-luminosity LHC

The LHC is preparing for a major high-luminosity upgrade (HL-LHC) with the objective of increasing the instantaneous luminosity to around 2 × 10³⁵ cm⁻² s⁻¹ for proton–proton (pp) collisions and 6 × 10²⁷ cm⁻² s⁻¹ for lead–lead (Pb–Pb) collisions. To fully exploit this new and unique accelerator performance, the ALICE experiment has embarked on an ambitious upgrade programme that will allow the inspection of Pb–Pb collisions at an expected rate of 50 kHz while preserving and even enhancing its unique capabilities in particle identification and low transverse-momentum measurements. This will open a new era in the high-precision characterisation of the quark–gluon plasma (QGP), the state of matter at extreme temperatures.

Measurements of pp collisions serve as a vital reference to calibrate the Pb–Pb measurements. However, to limit the event "pile-up" during pp collisions (i.e. the number of pp collisions per bunch crossing) and to ensure a high-quality data set, the instantaneous luminosity in ALICE must be limited to a value of 10³⁰ cm⁻² s⁻¹. This is achieved by applying a beam–beam separation in the horizontal plane of up to several σ (beam-size units): first, once the beams are ready for physics, a controlled and automatic luminosity ramp-up sets in to reach the target luminosity defined by ALICE. Next, fine-tuning is carried out during the fill – a procedure known as luminosity levelling, which requires algorithms running synchronously on the ALICE and LHC sides.

Following detailed simulations and several tests at the LHC, a new luminosity levelling algorithm has been in operation since June this year. The algorithm calculates the beam separation for both the target luminosity and the measured instantaneous luminosity, and uses the difference of the two separations to calculate step sizes. These are then transmitted to the LHC, which steers the beams until the target luminosity is reached within ±5%. When the beams approach the final separation in the horizontal plane, much smaller step sizes are applied to ensure a smooth and precise convergence of the luminosity to the target (see figure). This automatic procedure speeds up the collider operation and also prevents luminosity overshooting, which can occur during manual operations. Thanks to this new procedure, ALICE has increased its data-taking efficiency and can safely change the target luminosity even during fills with thousands of colliding bunches, a necessary step in anticipation of the high luminosities to be delivered by the LHC in the near future.
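The levelling logic described above lends itself to a compact illustration. The Python sketch below assumes head-on Gaussian beams, for which the luminosity falls with horizontal separation d as L(d) = L₀ exp(−d²/4σ²); the function names, the ±5% tolerance handling and the step-scaling policy near the target are our illustrative choices, not the actual ALICE/LHC implementation.

```python
# Minimal sketch of separation-based luminosity levelling,
# assuming equal round Gaussian beams: L(d) = L0 * exp(-d^2 / (4 sigma^2)).
import math

def separation_for(lumi, lumi_head_on, sigma):
    """Beam separation (metres) that yields the given luminosity."""
    return 2.0 * sigma * math.sqrt(math.log(lumi_head_on / lumi))

def levelling_step(l_meas, l_target, lumi_head_on, sigma,
                   tolerance=0.05, fine_threshold=0.2):
    """Next separation step to send to the machine, or None if done."""
    if abs(l_meas - l_target) / l_target <= tolerance:  # within +-5%
        return None
    d_target = separation_for(l_target, lumi_head_on, sigma)
    d_now = separation_for(l_meas, lumi_head_on, sigma)
    step = d_target - d_now
    # near the final separation, apply much smaller steps so the
    # luminosity converges smoothly without overshooting the target
    if abs(step) < fine_threshold * sigma:
        step *= 0.5
    return step
```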

KATRIN celebrates first beam

On 14 October, the KArlsruhe TRItium Neutrino (KATRIN) experiment, which is presently being assembled at the Tritium Laboratory Karlsruhe on the KIT Campus North site, Germany, celebrated "first light". For the first time, electrons were guided through the 70 m-long beamline towards a giant spectrometer, which allows the kinetic energy of beta electrons from tritium decays to be determined very precisely. Although actual measurements will not get under way until next year, the milestone marks the beginning of KATRIN operation.

The goal of the technologically challenging KATRIN experiment, which has been a CERN-recognised experiment since 2007, is to determine the absolute mass scale of neutrinos in a model-independent way. Previous experiments using the same technique set an upper limit on the electron antineutrino mass of 2.3 eV/c², but KATRIN will either improve on this by one order of magnitude or, if neutrinos weigh more than 0.35 eV/c², discover the actual mass.

KATRIN involves more than 150 scientists, engineers and technicians from 12 institutions in Germany, the UK, Russia, the Czech Republic and the US.

European XFEL enters commissioning phase

On 6 October, commissioning began at the world’s largest X-ray laser: the European XFEL in Hamburg, Germany. The 3.4 km-long European XFEL will generate ultrashort X-ray flashes with a brilliance one billion times greater than the best conventional X-ray radiation sources based on synchrotrons. The beams will be directed towards samples at a rate of 27,000 flashes per second, allowing scientists from a broad range of disciplines to study the atomic structure of materials and to investigate ultrafast processes in situ. Commissioning will take place over the next few months, with external scientists able to perform first experiments in summer 2017.

The linear accelerator that drives the European XFEL is based on superconducting “TESLA” technology, which has been developed by DESY and its international partners. Since 2005, DESY has been operating a free-electron laser called FLASH, which is a 260 m-long prototype of the European XFEL that relies on the same technology.

The European XFEL is managed by 11 member countries: Denmark, France, Germany, Hungary, Italy, Poland, Russia, Slovakia, Spain, Sweden and Switzerland. On 1 January 2017, surface physicist Robert Feidenhans'l, currently head of the Niels Bohr Institute at the University of Copenhagen, was appointed as the new chairman of the European XFEL management board, taking over from Massimo Altarelli, who had been in the role since 2009.

Hubble misses 90% of distant galaxies

A team of astronomers has estimated that the number of galaxies in the observable universe is around two trillion (2 × 10¹²), which is 10 times more than could be observed by the Hubble Space Telescope in a hypothetical all-sky survey. Although the finding does not affect the matter content of the universe, it shows that small galaxies unobservable by Hubble were much more numerous in the distant, early universe.

Asking how many stars and galaxies there are in the universe might seem a simple enough question, but it has no simple answer. For instance, it is only possible to probe the observable universe, the region from which light has had time to reach us within the age of the universe. The Hubble Deep Field images captured in the mid-1990s gave us the first real insight into this fundamental question: myriad faint galaxies were revealed, and extrapolating from the tiny area surveyed on the sky suggested that the observable universe contains about 100 billion galaxies.

Now, an international team led by Christopher Conselice of the University of Nottingham in the UK has shown that this number is at least 10 times too low. The conclusion is based on a compilation of many published deep-space observations from Hubble and other telescopes. Conselice and co-workers derived the distances and masses of the galaxies to deduce how the number of galaxies in a given mass interval evolves over the history of the universe. The team extrapolated its results to infer the existence of faint galaxies that the current generation of telescopes cannot observe, and found that galaxies were smaller and more numerous in the distant universe compared with local regions. Since less-massive galaxies are also the dimmest and therefore the most difficult to observe at great distances, the researchers conclude that the Hubble ultra-deep-field observations are missing about 90% of all galaxies in any observed area of the sky. The total number of galaxies in the observable universe, they suggest, is more like two trillion.

This intriguing result must, however, be put in context. Critically, the galaxy count depends heavily on the lower limit that one chooses for the galaxy mass: since there are more low-mass than high-mass galaxies, any change in this value has huge effects. Conselice and his team took a stellar-mass limit of one million solar masses, which is a very small value corresponding to a galaxy 1000 times smaller than the Large Magellanic Cloud (which is itself about 20–30 times less massive than the Milky Way). The authors explain that were they to take into account even smaller galaxies of 100,000 solar masses, the estimated total number of galaxies would be seven times greater.
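The sensitivity to the mass cut can be made concrete with a simple power-law toy model. Assuming the low-mass end of the galaxy mass function behaves as dN/dM ∝ M^α – the slope α = −1.8 below is a typical literature value, not one taken from the paper – the count above a cut M_min scales as M_min^(α+1), and lowering the cut by a factor of 10 multiplies the count by roughly the quoted factor of seven:

```python
# Back-of-the-envelope scaling of the galaxy count with the
# stellar-mass cut, assuming a power-law mass function dN/dM ~ M^alpha.
# alpha = -1.8 is a hypothetical but typical low-mass slope.
alpha = -1.8

def count_ratio(m_min_new, m_min_old, alpha=alpha):
    """Factor by which the count grows when the mass cut is lowered."""
    return (m_min_new / m_min_old) ** (alpha + 1)

# lowering the cut from 1e6 to 1e5 solar masses:
print(count_ratio(1e5, 1e6))  # ~6.3, in line with the quoted factor of ~7
```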

The result also does not mean that the universe contains more visible matter than previously thought. Rather, it shows that the bigger galaxies we see in the local universe have been assembled via multiple mergers of smaller galaxies, which were much more numerous in the early, distant universe. While the vast majority of these small, faint and remote galaxies are not yet visible with current technology, they offer great opportunities for future observatories, in particular the James Webb Space Telescope (Hubble’s successor), which is planned for launch in 2018.

CERN soups up its antiproton source

The Antiproton Decelerator (AD) facility at CERN, which has been operational since 2000, is a unique source of antimatter. It delivers antiprotons with very low kinetic energies, enabling physicists to study the fundamental properties of baryonic antimatter – namely antiprotons, antiprotonic helium and antihydrogen – with great precision. Comparing the properties of these simple systems to those of their respective matter conjugates therefore provides highly sensitive tests of CPT invariance, which is the most fundamental symmetry underpinning the relativistic quantum-field theories of the Standard Model (SM). Any observed difference between baryonic matter and antimatter would hint at new physics, for instance due to the existence of quantum fields beyond the SM.

In the case of matter particles, physicists have developed advanced experimental techniques to characterise simple baryonic systems with extraordinary precision. The mass of the proton, for example, has been determined with a fractional precision of 89 parts in a trillion (ppt) and its magnetic moment is known to a fractional precision of three parts in a billion. Electromagnetic spectroscopy of hydrogen atoms, meanwhile, has allowed the ground-state hyperfine splitting of the hydrogen atom to be determined with a relative accuracy of 0.7 ppt and the 1S–2S electron transition to be determined with a fractional precision of four parts in 10¹⁵.

In the antimatter sector, on the other hand, only the mass of the antiproton has been determined at a level comparable to that in the baryon world (see table). In the late 1990s, the TRAP collaboration at CERN's Low Energy Antiproton Ring (LEAR) used advanced trapping and cooling methods to compare the charge-to-mass ratios of the antiproton and the proton with a fractional uncertainty of 90 ppt. This was one of the crucial steps that inspired CERN to start the AD programme. Over the past 20 years, CERN has made huge strides in our understanding of antimatter (see panel). This includes the first ever production of anti-atoms – antihydrogen, which comprises an antiproton orbited by a positron – in 1995, and the production of antiprotonic helium (in which an antiproton and an electron orbit a normal helium nucleus).

CERN has decided to boost its AD programme by building a brand new synchrotron that will improve the performance of its antiproton source. Called the Extra Low ENergy Antiproton ring (ELENA), this new facility is now in the commissioning phase. Once it enters operation, ELENA will lead to an increase by one to two orders of magnitude in the number of antiprotons captured by experiments using traps and also make new types of experiments possible (see figure). This will provide an even more powerful probe of new physics beyond the SM.

Combined technologies

The production and investigation of antimatter relies on combining two key technologies: high-energy particle-physics sources and classical low-energy atomic-physics techniques such as traps and lasers. One of the workhorses of experiments in the AD facility is the Penning trap. This static electromagnetic cage for antiprotons serves both for high-precision measurements of the fundamental properties of single trapped antiprotons and for trapping large numbers of antiprotons and positrons for antihydrogen production.

The AD routinely provides low-energy antiprotons to a dynamic and growing user community. It comprises a ring with a circumference of 182.4 m, which currently supplies five operational experiments devoted to studying the properties of antihydrogen, antiprotonic helium and bare antiprotons with high precision: ALPHA, ASACUSA, ATRAP, AEgIS and BASE (see panel). All of these experiments are located in the existing experimental zone, covering approximately one half of the space inside the AD ring. With this present scheme, one bunch containing about 3 × 10⁷ antiprotons is extracted roughly every 120 seconds at a kinetic energy of 5.3 MeV and sent to a particular experiment.

Although there is no hard limit on the lowest energy that can be achieved in a synchrotron, operating a large machine at low energies requires magnets with low field strengths and is therefore subject to perturbations due to remanence, hysteresis and external stray-field effects. The AD extraction energy of 5.3 MeV is a compromise: it allows beam to be delivered under good conditions given the machine's circumference, while enabling the experiments to capture a reasonable quantity of antiprotons. Most experiments further decelerate the antiprotons by sending them through foils or using a radiofrequency quadrupole to take them down to a few keV so that they can be captured. This present scheme is inefficient, however: fewer than one in 100 antiprotons decelerated with a foil can be trapped and used by the experiments.

The ELENA project aims to further decelerate the antiprotons from 5.3 MeV down to 100 keV in a controlled way. This is achieved via a synchrotron equipped with an electron cooler to avoid losses during deceleration and to generate dense bunches of antiprotons for users. To achieve this goal, the machine has to be smaller than the AD; a circumference of 30.4 m has been chosen, one sixth that of the AD. The experiments still have to further decelerate the beam using thinner foils or other means, but the lower energy from the synchrotron makes this process more efficient and therefore increases the number of captured antiprotons dramatically.

With ELENA, the available intensity will be distributed over several bunches (the current baseline is four), which are sent to several experiments simultaneously. Despite the reduction in intensity per bunch, beam availability is much higher: each experiment will receive beam almost continuously, 24 hours per day, rather than during an eight-hour shift a few times per week, as is the case with the present AD.

The ELENA project started in 2012 with the detailed design of the machine and components. Installations inside the AD hall and inside the AD ring itself began in spring 2015, in parallel with AD operation for the existing experiments. Installing ELENA inside the AD ring is a simple, cost-effective solution: no large additional building to house a synchrotron and a new experimental area had to be constructed, and the existing experiments have been able to remain at their present locations. Significant external contributions from the user community include an H⁻ ion and proton source for commissioning, and very sensitive profile monitors for the transfer lines.

Low-energy challenges

Most of the challenges and possible issues of the ELENA project are a consequence of its low energy, small size and low intensity. The low beam energy makes the beam very sensitive to perturbations: even the Earth's magnetic field has a significant impact, for instance deforming the closed orbit so that the beam is no longer centred in the vacuum chamber. The circumference of the machine has therefore been chosen to be as small as possible, at the cost of demanding higher-field magnets, to mitigate these effects. On the other hand, the ring has to be long enough to accommodate all the necessary components.
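A back-of-the-envelope estimate (ours, not from the project documentation) shows why the Earth's field matters at these energies. The magnetic rigidity of a beam is Bρ = p/q, and for a non-relativistic 100 keV antiproton this is only about 0.05 T m, so a typical 0.5 G ambient field bends the orbit on a radius of order 1 km – enough to displace the beam by a substantial fraction of a millimetre over a metre of drift:

```python
# Rough estimate of the Earth's-field deflection of a 100 keV antiproton.
# Non-relativistic momentum: p = sqrt(2 m E); rigidity: B*rho = p/q.
import math

M_P = 938.272e6      # proton rest energy in eV
C = 299792458.0      # speed of light in m/s
E_KIN = 100e3        # ELENA ejection energy in eV
B_EARTH = 5e-5       # typical Earth field in tesla

pc = math.sqrt(2.0 * M_P * E_KIN)   # momentum times c, in eV
rigidity = pc / C                   # B*rho in T*m (~0.046)
rho = rigidity / B_EARTH            # bending radius in the Earth's field (~900 m)

# transverse deflection accumulated over a drift of length L: x ~ L^2 / (2 rho)
L = 1.0
print(f"deflection over {L} m: {L**2 / (2 * rho) * 1e3:.2f} mm")  # ~0.55 mm
```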

For similar reasons, the magnets have to be designed very carefully to ensure a sufficiently good field quality at very low field levels, where hysteresis effects and remanence become important. This challenge triggered thorough investigations by CERN magnet experts and involved several prototypes using different types of yokes, resulting in unexpected conclusions relevant for any project that relies on low-field magnets. The initially foreseen bending magnets based on "diluted" yokes, with laminations of electrical steel alternated with thicker non-magnetic stainless-steel laminations, were found to have larger remanent fields and to be less suitable. Based on this unexpected empirical observation, which was later explained by theoretical considerations, it has been decided that most ELENA magnets will be built with conventional yokes. The corrector magnets have been built without a magnetic yoke to completely suppress hysteresis effects.

Electron cooling is an essential ingredient for ELENA: cooling on an intermediate plateau is applied to reduce emittances and losses during deceleration to the final energy. Once the final energy is reached, electron cooling is applied again to generate dense bunches with low emittances and energy spread, which are then transported to the experiments. At the final energy, so-called intra-beam scattering (IBS), caused by Coulomb interactions between different particles in the beam, increases the beam emittances and the energy spread, which, in turn, increases the beam size. This phenomenon will be the dominant source of beam degradation in ELENA, and the equilibrium between IBS and electron cooling will determine the characteristics of the bunches sent to the experiments.

Another possible limitation for a low-energy machine such as ELENA is the large cross-section for scattering between antiprotons and the nuclei of rest-gas molecules, which leads to beam loss and degradation. This phenomenon is mitigated by a carefully designed vacuum system that can reach pressures as low as a few 10⁻¹² mbar. Furthermore, ELENA's low intensity and energy mean that the beam generates only very small signals, making beam diagnostics challenging. For example, the current of the circulating beam is less than 1 μA, which is well below what can be measured with standard beam-current transformers and therefore demands alternative techniques to estimate the intensity.
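The quoted sub-microampere current can be checked from the numbers already given (100 keV energy, 30.4 m circumference and the roughly 3 × 10⁷ antiprotons delivered per AD cycle). The short Python estimate below is an order-of-magnitude illustration only:

```python
# Order-of-magnitude check of the circulating beam current in ELENA.
# At 100 keV the antiprotons are non-relativistic: beta = sqrt(2E/mc^2).
import math

M_P = 938.272e6   # proton rest energy in eV
E_KIN = 100e3     # kinetic energy in eV
C = 299792458.0   # speed of light in m/s
CIRC = 30.4       # ELENA circumference in m
Q_E = 1.602e-19   # elementary charge in C
N_PBAR = 3e7      # antiprotons per cycle, as delivered by the AD

beta = math.sqrt(2.0 * E_KIN / M_P)  # ~0.0146
f_rev = beta * C / CIRC              # revolution frequency, ~144 kHz
current = N_PBAR * Q_E * f_rev       # circulating current in amperes
print(f"I = {current * 1e6:.2f} uA") # ~0.7 uA, below the 1 uA quoted above
```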

An external source capable of providing 100 keV H⁻ ion and proton beams will be used for a large part of the commissioning. Although this allows commissioning to be carried out in parallel with AD operation for the experiments, it means that commissioning starts at the most delicate low-energy part of the ELENA cycle, where perturbations have the most impact. Another advantage of ELENA's low energy is that the transfer lines to the experiments can be electrostatic – a low-cost solution that allows for the installation of many focusing quadrupoles and makes the lines less sensitive to perturbations.

CERN's AD facility opens new era of precision antimatter studies

CERN’s Antiproton Decelerator (AD) was approved in 1997, just two years after the production of the first antihydrogen atoms at the Low Energy Antiproton Ring (LEAR), and entered operation in 2000. Its debut discovery was the production of cold antihydrogen in 2002 by the ATHENA and ATRAP collaborations. These experiments were joined by the ASACUSA collaboration, which aims at precision spectroscopy of antiprotonic helium and Rabi-like spectroscopy of the antihydrogen ground-state hyperfine splitting. Since then, techniques have been developed that allow trapping of antihydrogen atoms and the production of a beam of cold antihydrogen atoms. This culminated in 2010 in the first report on trapped antihydrogen by the ALPHA collaboration (the successor of ATHENA). In the same year, ASACUSA produced antihydrogen using a cusp trap, and in 2012 the ATRAP collaboration also reported on trapped antihydrogen.

TRAP, which was based at LEAR and was the predecessor of ATRAP, is one of two CERN experiments that have allowed the first direct investigations of the fundamental properties of antiprotons. In 1999, the collaboration published a proton-to-antiproton charge-to-mass ratio with a fractional precision of 90 ppt, based on single-charged-particle spectroscopy in a Penning trap using data taken up to 1996. In 2013, the ATRAP collaboration published a measurement of the magnetic moment of the antiproton with a fractional precision of 4.4 ppm. The BASE collaboration, which was approved in the same year, is now preparing to improve the ATRAP value to the ppb level. In addition, in 2015 BASE reported a comparison of the proton-to-antiproton charge-to-mass ratio with a fractional precision of 69 ppt. So far, all measured results are consistent with CPT invariance.

The ALPHA, ASACUSA and ATRAP experiments, which aim to perform precise antihydrogen spectroscopy, are challenging because antihydrogen must first be produced and then trapped. This requires the accumulation of both antiprotons and positrons, followed by antihydrogen production via three-body reactions in a nested Penning trap. In 2012, ALPHA reported a first spectroscopy-type experiment and published the observation of resonant quantum transitions in antihydrogen (see figure); later, in 2014, ASACUSA reported the first production of a beam of cold antihydrogen atoms. The reliable production and trapping scheme of ALPHA, meanwhile, has enabled several high-resolution studies, including the investigation of the charge neutrality of antihydrogen with a precision at the 0.7 ppb level.

The ASACUSA, ALPHA and ATRAP collaborations are now preparing their experiments to produce the first electromagnetic-spectroscopy results on antihydrogen. This is difficult because ALPHA typically traps about one antihydrogen atom per mixing cycle, while ASACUSA detects approximately six antihydrogen atoms per shot. Both numbers call for higher antihydrogen production rates and, to further boost AD physics, CERN built the new low-energy antiproton synchrotron ELENA. In parallel to these efforts, proposals to study gravity with antihydrogen were approved. This led to the formation of the AEgIS collaboration in 2008, whose experiment is currently being commissioned, and of the GBAR project in 2012.

Towards first beam

As of the end of October 2016, all sectors of the ELENA ring – except for the electron cooler, which has temporarily been replaced by a simple vacuum chamber, and a few transfer lines required for the commissioning of the ring – have been installed and baked to reach the very low rest-gas density required. Following hardware tests, commissioning with beam is under way and will resume in early 2017, interrupted only by the installation of the electron cooler some time in spring.

ELENA will be ready from 2017 to provide beam to the GBAR experiment, which will be installed in the new experimental area (see panel). The existing AD experiments, however, will be connected only during CERN's Long Shutdown 2 in 2019–2020, to minimise the period without antiprotons and to optimise the exploitation of the experiments. GBAR, along with another AD experiment called AEgIS, will target direct tests of the weak-equivalence principle by measuring the gravitational acceleration of antihydrogen. This is another powerful way to test for any difference in the way the fundamental forces affect matter and antimatter. Although the first antimatter free-fall experiments were reported by the ALPHA collaboration in 2013, these results could be improved by several orders of magnitude using the dedicated gravity experiments enabled by ELENA.

ELENA is expected to operate for at least 10 years and be exploited by a user community consisting of six approved experiments. This will take physicists towards the ultimate goal of performing spectroscopy on antihydrogen atoms at rest, and also to investigate the effect of gravity on matter and antimatter. A potential discovery of CPT violation will constitute a dramatic challenge to the relativistic quantum-field theories of the SM and will potentially contribute to an understanding of the striking imbalance of matter and antimatter observed on cosmological scales.
