The LHC is preparing for a major high-luminosity upgrade (HL-LHC) with the objective to increase the instantaneous luminosity to around 2 × 10³⁵ cm⁻² s⁻¹ for proton–proton (pp) collisions and 6 × 10²⁷ cm⁻² s⁻¹ for lead–lead (Pb–Pb) collisions. To fully exploit this new and unique accelerator performance, the ALICE experiment has embarked on an ambitious upgrade programme that will allow the inspection of Pb–Pb collisions at an expected rate of 50 kHz while preserving and even enhancing its unique capabilities in particle identification and low transverse-momentum measurements. This will open a new era in the high-precision characterisation of the quark–gluon plasma (QGP), the state of matter at extreme temperatures.
Measurements of pp collisions serve as vital reference measurements to calibrate the Pb–Pb measurements. However, to limit the event “pile-up” during pp collisions (i.e. the number of pp collisions per bunch crossing) and to ensure a high-quality data set, the instantaneous luminosity in ALICE must be limited to a value of 10³⁰ cm⁻² s⁻¹. This is achieved by applying a beam–beam separation in the horizontal plane of up to several σ (beam-size units): first, once the beams are ready for physics, a controlled and automatic luminosity ramp-up sets in to reach the target luminosity defined by ALICE. Next, fine-tuning is carried out during the fill – a procedure known as luminosity levelling, which requires algorithms running synchronously on the ALICE and LHC sides.
Following detailed simulations and several tests at the LHC, a new luminosity levelling algorithm has been in operation since June this year. The algorithm calculates the beam separation for both the target luminosity and the measured instantaneous luminosity, and uses the difference of the two separations to calculate step sizes. These are then transmitted to the LHC, which steers the beams until the target luminosity is reached within ±5%. When the beams approach the final separation in the horizontal plane, much smaller step sizes are applied to ensure a smooth and precise convergence of the luminosity to the target (see figure). This automatic procedure speeds up the collider operation and also prevents luminosity overshooting, which can occur during manual operations. Thanks to this new procedure, ALICE has increased its data-taking efficiency and can safely change the target luminosity even during fills with thousands of colliding bunches, a necessary step in anticipation of the high luminosities to be delivered by the LHC in the near future.
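The step-size calculation can be sketched from the description above. For two round Gaussian beams of transverse size σ separated by a distance d in one plane, the luminosity is reduced relative to head-on collisions by a factor exp(−d²/4σ²), so the measured and target luminosities each map onto a separation and their difference gives the step to request from the LHC. The Python sketch below illustrates this logic only; the function names, the fine-step threshold and all numbers are illustrative assumptions, not the actual ALICE–LHC implementation.

```python
import math

def separation(lumi, lumi_head_on, sigma):
    """Separation d (m) that reduces a head-on luminosity to `lumi`,
    for round Gaussian beams of size sigma:
    lumi = lumi_head_on * exp(-d**2 / (4 * sigma**2))."""
    return 2.0 * sigma * math.sqrt(math.log(lumi_head_on / lumi))

def levelling_step(l_measured, l_target, l_head_on, sigma, fine_threshold=1e-5):
    """Separation change (m) to request from the LHC. Close to the final
    separation, a smaller step is taken to mimic the smooth convergence
    described in the text (purely illustrative logic)."""
    step = separation(l_target, l_head_on, sigma) - separation(l_measured, l_head_on, sigma)
    if abs(step) < fine_threshold:   # near convergence: apply finer steps
        step *= 0.5
    return step

# Illustrative numbers: head-on 5e30, measured 1.2e30, target 1e30 cm^-2 s^-1,
# beam size 100 micrometres.
print(levelling_step(1.2e30, 1.0e30, 5.0e30, 100e-6))
```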
On 14 October, the KArlsruhe TRItium Neutrino (KATRIN) experiment, which is presently being assembled at Tritium Laboratory Karlsruhe on the KIT Campus North site, Germany, celebrated “first light”. For the first time, electrons were guided through the 70 m-long beamline towards a giant spectrometer, which allows the kinetic energy of electrons from tritium beta decays to be determined very precisely. Although actual measurements will not get under way until next year, this marks the beginning of KATRIN operation.
The goal of the technologically challenging KATRIN experiment, which has been a CERN-recognised experiment since 2007, is to determine the absolute mass scale of neutrinos in a model-independent way. Previous experiments using the same technique set an upper limit to the electron antineutrino mass of 2.3 eV/c², but KATRIN will either improve on this by one order of magnitude or, if neutrinos weigh more than 0.35 eV/c², discover the actual mass.
KATRIN involves more than 150 scientists, engineers and technicians from 12 institutions in Germany, the UK, Russia, the Czech Republic and the US.
On 6 October, commissioning began at the world’s largest X-ray laser: the European XFEL in Hamburg, Germany. The 3.4 km-long European XFEL will generate ultrashort X-ray flashes with a brilliance one billion times greater than the best conventional X-ray radiation sources based on synchrotrons. The beams will be directed towards samples at a rate of 27,000 flashes per second, allowing scientists from a broad range of disciplines to study the atomic structure of materials and to investigate ultrafast processes in situ. Commissioning will take place over the next few months, with external scientists able to perform first experiments in summer 2017.
The linear accelerator that drives the European XFEL is based on superconducting “TESLA” technology, which has been developed by DESY and its international partners. Since 2005, DESY has been operating a free-electron laser called FLASH, which is a 260 m-long prototype of the European XFEL that relies on the same technology.
The European XFEL is managed by 11 member countries: Denmark, France, Germany, Hungary, Italy, Poland, Russia, Slovakia, Spain, Sweden and Switzerland. On 1 January 2017, surface-physicist Robert Feidenhans’l, currently head of the Niels Bohr Institute at the University of Copenhagen, was appointed as the new chairman of the European XFEL management board, taking over from Massimo Altarelli, who had been in the role since 2009.
A team of astronomers has estimated that the number of galaxies in the observable universe is around two trillion (2 × 10¹²), which is 10 times more than could be observed by the Hubble Space Telescope in a hypothetical all-sky survey. Although the finding does not affect the matter content of the universe, it shows that small galaxies unobservable by Hubble were much more numerous in the distant, early universe.
Asking how many stars and galaxies there are in the universe might seem a simple enough question, but it has no simple answer. For instance, it is only possible to probe the observable universe, which is limited to the region from where light could reach us in less time than the age of the universe. The Hubble Deep Field images captured in the mid-1990s gave us the first real insight into this fundamental question: myriad faint galaxies were revealed, and extrapolating from the tiny area on the sky suggested that the observable universe contains about 100 billion galaxies.
Now, an international team led by Christopher Conselice of the University of Nottingham in the UK has shown that this number is at least 10 times too low. The conclusion is based on a compilation of many published deep-space observations from Hubble and other telescopes. Conselice and co-workers derived the distance and the mass of the galaxies to deduce how the number of galaxies in a given mass interval evolves over the history of the universe. The team extrapolated its results to infer the existence of faint galaxies, which the current generation of telescopes cannot observe, and found that galaxies are smaller and more numerous in the distant universe compared with local regions. Since less-massive galaxies are also the dimmest and therefore the most difficult to observe at great distances, the researchers conclude that the Hubble ultra-deep-field observations are missing about 90% of all galaxies in any observed area of the sky. The total number of galaxies in the observable universe, they suggest, is more like two trillion.
This intriguing result must, however, be put in context. Critically, the galaxy count depends heavily on the lower limit that one chooses for the galaxy mass: since there are more low-mass than high-mass galaxies, any change in this value has huge effects. Conselice and his team took a stellar-mass limit of one million solar masses, which is a very small value corresponding to a galaxy 1000 times smaller than the Large Magellanic Cloud (which is itself about 20–30 times less massive than the Milky Way). The authors explain that were they to take into account even smaller galaxies of 100,000 solar masses, the estimated total number of galaxies would be seven times greater.
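How strongly the count depends on this lower mass limit can be illustrated with a toy calculation that integrates a Schechter-type stellar-mass function (the standard parametrisation for galaxy mass functions) down to different cut-offs. The Python sketch below uses illustrative, non-evolving parameter values rather than the redshift-dependent mass functions measured by Conselice and co-workers, so it does not reproduce their factor of seven, but it shows the same qualitative sensitivity to the cut-off and to the faint-end slope α.

```python
import numpy as np

def schechter(m, phi_star=3e-3, m_star=1e11, alpha=-1.4):
    """Schechter stellar-mass function dn/dM (galaxies per Mpc^3 per solar mass).
    Parameter values are illustrative only."""
    x = m / m_star
    return (phi_star / m_star) * x**alpha * np.exp(-x)

def number_density_above(m_min, m_max=1e13, n=200_000):
    """Comoving number density of galaxies above m_min (trapezoidal integral)."""
    m = np.logspace(np.log10(m_min), np.log10(m_max), n)
    f = schechter(m)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(m))

n_cut_1e6 = number_density_above(1e6)   # cut at 10^6 solar masses (as in the study)
n_cut_1e5 = number_density_above(1e5)   # cut at 10^5 solar masses
print(f"lowering the cut from 1e6 to 1e5 Msun multiplies the count by {n_cut_1e5 / n_cut_1e6:.1f}")
```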
The result also does not mean that the universe contains more visible matter than previously thought. Rather, it shows that the bigger galaxies we see in the local universe have been assembled via multiple mergers of smaller galaxies, which were much more numerous in the early, distant universe. While the vast majority of these small, faint and remote galaxies are not yet visible with current technology, they offer great opportunities for future observatories, in particular the James Webb Space Telescope (Hubble’s successor), which is planned for launch in 2018.
The Antiproton Decelerator (AD) facility at CERN, which has been operational since 2000, is a unique source of antimatter. It delivers antiprotons with very low kinetic energies, enabling physicists to study the fundamental properties of baryonic antimatter – namely antiprotons, antiprotonic helium and antihydrogen – with great precision. Comparing the properties of these simple systems to those of their respective matter conjugates therefore provides highly sensitive tests of CPT invariance, which is the most fundamental symmetry underpinning the relativistic quantum-field theories of the Standard Model (SM). Any observed difference between baryonic matter and antimatter would hint at new physics, for instance due to the existence of quantum fields beyond the SM.
In the case of matter particles, physicists have developed advanced experimental techniques to characterise simple baryonic systems with extraordinary precision. The mass of the proton, for example, has been determined with a fractional precision of 89 parts in a trillion (ppt) and its magnetic moment is known to a fractional precision of three parts in a billion. Electromagnetic spectroscopy on hydrogen atoms, meanwhile, has allowed the ground-state hyperfine splitting of the hydrogen atom to be determined with a relative accuracy of 0.7 ppt and the 1S–2S transition in hydrogen to be determined with a fractional precision of four parts in 10¹⁵.
ELENA will lead to an increase by one to two orders of magnitude in the number of antiprotons captured by experiments
In the antimatter sector, on the other hand, only the mass of the antiproton has been determined at a level comparable to that in the baryon world (see table). In the late 1990s, the TRAP collaboration at CERN’s LEAR facility used advanced trapping and cooling methods to compare the charge-to-mass ratios of the antiproton and the proton with a fractional uncertainty of 90 ppt. This was one of the crucial steps that inspired CERN to start the AD programme. Over the past 20 years, CERN has made huge strides in our understanding of antimatter (see panel). This includes the first ever production of anti-atoms – antihydrogen, which comprises an antiproton orbited by a positron – in 1995 and the production of antiprotonic helium (in which an antiproton and an electron orbit a normal helium nucleus).
CERN has decided to boost its AD programme by building a brand new synchrotron that will improve the performance of its antiproton source. Called the Extra Low ENergy Antiproton ring (ELENA), this new facility is now in the commissioning phase. Once it enters operation, ELENA will lead to an increase by one to two orders of magnitude in the number of antiprotons captured by experiments using traps and also make new types of experiments possible (see figure). This will provide an even more powerful probe of new physics beyond the SM.
Combined technologies
The production and investigation of antimatter relies on combining two key technologies: high-energy particle-physics sources and classical low-energy atomic-physics techniques such as traps and lasers. One of the workhorses of experiments in the AD facility is the Penning trap. This static electromagnetic cage for antiprotons serves for both high-precision measurements of the fundamental properties of single trapped antiprotons and for trapping large amounts of antiprotons and positrons for antihydrogen production.
The AD routinely provides low-energy antiprotons to a dynamic and growing user community. It comprises a ring with a circumference of 182.4 m, which currently supplies five operational experiments devoted to studying the properties of antihydrogen, antiprotonic helium and bare antiprotons with high precision: ALPHA, ASACUSA, ATRAP, AEgIS and BASE (see panel). All of these experiments are located in the existing experimental zone, covering approximately one half of the space inside the AD ring. With this present scheme, one bunch containing about 3 × 10⁷ antiprotons is extracted roughly every 120 seconds at a kinetic energy of 5.3 MeV and sent to a particular experiment.
Although there is no hard limit for the lowest energy that can be achieved in a synchrotron, operating a large machine at low energies requires magnets with low field strengths and is therefore subject to perturbations due to remanence, hysteresis and external stray-field effects. The AD extraction energy of 5.3 MeV is a compromise: it allows beam to be delivered under good conditions given the machine’s circumference, while enabling the experiments to capture a reasonable quantity of antiprotons. Most experiments further decelerate the antiprotons by sending them through foils or using a radiofrequency quadrupole to take them down to a few keV so that they can be captured. This present scheme is inefficient, however: fewer than one in 100 antiprotons decelerated with a foil can be trapped and used by the experiments.
The ELENA project aims to further decelerate the antiprotons from 5.3 MeV down to 100 keV in a controlled way. This is achieved via a synchrotron equipped with an electron cooler to avoid losses during deceleration and to generate dense bunches of antiprotons for users. To achieve this goal, the machine has to be smaller than the AD; a circumference of 30.4 metres has been chosen, which is one sixth of the AD. The experiments still have to further decelerate the beam either using thinner foils or other means, but the lower energy from the synchrotron makes this process more efficient and therefore increases the number of captured antiprotons dramatically.
With ELENA, the available intensity will be distributed over several bunches (the current baseline is four), which are sent to several experiments simultaneously. Despite the reduction in intensity per bunch, the higher beam availability means that each experiment will receive beam almost continuously, 24 hours per day, as opposed to during an eight-hour-long shift a few times per week, as is the case with the present AD.
The ELENA project started in 2012 with the detailed design of the machine and components. Installations inside the AD hall and inside the AD ring itself began in spring 2015, in parallel with AD operation for the existing experiments. Installing ELENA inside the AD ring is a simple, cost-effective solution because no large additional building to house a synchrotron and a new experimental area had to be constructed, and the existing experiments have been able to remain at their present locations. Significant external contributions from the user community include an H⁻ ion and proton source for commissioning, and very sensitive profile monitors for the transfer lines.
Low-energy challenges
Most of the challenges and possible issues of the ELENA project are a consequence of its low energy, small size and low intensity. The low beam energy makes the beam very sensitive to perturbations, such that even the Earth’s magnetic field has a significant impact, for instance deforming the “closed orbit” so that the beam is no longer located at the centre of the vacuum chamber. To mitigate these effects, the circumference of the machine has been chosen to be as small as possible, which in turn demands higher-field magnets. On the other hand, the ring has to be long enough to install all necessary components.
For similar reasons, magnets have to be designed very carefully to ensure a sufficiently good field quality at very low field levels, where hysteresis effects and remanence become important. This challenge triggered thorough investigations by the CERN magnet experts and involved several prototypes using different types of yokes, resulting in unexpected conclusions relevant for any project that relies on low-field magnets. The initially foreseen bending magnets based on “diluted” yokes, with laminations made of electrical steel alternated with thicker non-magnetic stainless-steel laminations, were found to have larger remanent fields and to be less suitable. Based on this unexpected empirical observation, which was later explained by theoretical considerations, it was decided that most ELENA magnets will be built with conventional yokes. The corrector magnets have been built without a magnetic yoke to completely suppress hysteresis effects.
Electron cooling is an essential ingredient for ELENA: cooling on an intermediate plateau is applied to reduce emittances and losses during deceleration to the final energy. Once the final energy is reached, electron cooling is applied again to generate dense bunches with low emittances and energy spread, which are then transported to the experiments. At the final energy, so-called intra-beam scattering (IBS), caused by Coulomb interactions between different particles in the beam, increases the beam emittances and the energy spread, which, in turn, increases the beam size. This phenomenon will be the dominant source of beam degradation in ELENA, and the equilibrium between IBS and electron cooling will determine the characteristics of the bunches sent to the experiments.
Another possible limitation for a low-energy machine such as ELENA is the large cross-section for scattering between antiprotons and the nuclei of residual gas molecules, which leads to beam loss and degradation. This phenomenon is mitigated by a carefully designed vacuum system that can reach pressures as low as a few 10⁻¹² mbar. Furthermore, ELENA’s low intensities and energy mean that the beam can generate only very small signals, which makes beam diagnostics challenging. For example, the current of the circulating beam is less than 1 μA, which is well below what can be measured with standard beam-current transformers and therefore demands alternative techniques to estimate the intensity.
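As an order-of-magnitude check on that number, N circulating antiprotons with revolution frequency f_rev carry a current I = N e f_rev. The short Python estimate below assumes the AD bunch intensity of 3 × 10⁷ antiprotons quoted earlier, ELENA’s 100 keV energy and 30.4 m circumference, and non-relativistic kinematics; actual ELENA intensities will differ, but the result lands comfortably below 1 μA.

```python
import math

E_KIN_EV      = 100e3       # ELENA extraction kinetic energy (eV)
MASS_EV       = 938.272e6   # (anti)proton rest-mass energy (eV)
CIRCUMFERENCE = 30.4        # ELENA circumference (m)
N_PBAR        = 3e7         # antiprotons, AD bunch intensity quoted above
E_CHARGE      = 1.602e-19   # elementary charge (C)
C_LIGHT       = 2.998e8     # speed of light (m/s)

beta    = math.sqrt(2 * E_KIN_EV / MASS_EV)   # v/c, non-relativistic approximation
f_rev   = beta * C_LIGHT / CIRCUMFERENCE      # revolution frequency (Hz)
current = N_PBAR * E_CHARGE * f_rev           # circulating current (A)

print(f"beta ~ {beta:.4f}, f_rev ~ {f_rev / 1e3:.0f} kHz, I ~ {current * 1e6:.2f} uA")
```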
An external source capable of providing 100 keV H– and proton beams will be used for a large part of the commissioning. Although this allows commissioning to be carried out in parallel with AD operation for the experiments, it means that commissioning starts at the most delicate low-energy part of the ELENA cycle where perturbations have the most impact. Another advantage of ELENA’s low energy is that the transfer lines to the experiments are electrostatic – a low-cost solution that allows for the installation of many focusing quadrupoles and makes the lines less sensitive to perturbations.
CERN’s AD facility opens new era of precision antimatter studies
CERN’s Antiproton Decelerator (AD) was approved in 1997, just two years after the production of the first antihydrogen atoms at the Low Energy Antiproton Ring (LEAR), and entered operation in 2000. Its debut discovery was the production of cold antihydrogen in 2002 by the ATHENA and ATRAP collaborations. These experiments were joined by the ASACUSA collaboration, which aims at precision spectroscopy of antiprotonic helium and Rabi-like spectroscopy of the antihydrogen ground-state hyperfine splitting. Since then, techniques have been developed that allow trapping of antihydrogen atoms and the production of a beam of cold antihydrogen atoms. This culminated in 2010 in the first report on trapped antihydrogen by the ALPHA collaboration (the successor of ATHENA). In the same year, ASACUSA produced antihydrogen using a cusp trap, and in 2012 the ATRAP collaboration also reported on trapped antihydrogen.
TRAP, which was based at LEAR and was the predecessor of ATRAP, is one of two CERN experiments that have allowed the first direct investigations of the fundamental properties of antiprotons. In 1999, the collaboration published a proton-to-antiproton charge-to-mass ratio with a fractional precision of 90 ppt based on single-charged-particle spectroscopy in a Penning trap using data taken up to 1996. Then, in 2013, ATRAP published a measurement of the magnetic moment of the antiproton with a fractional precision of 4.4 ppm. The BASE collaboration, which was approved in the same year, is now preparing to improve the ATRAP value to the ppb level. In addition, in 2015 BASE reported on a comparison of the proton-to-antiproton charge-to-mass ratio with a fractional precision of 69 ppt. So far, all measured results are consistent with CPT invariance.
The ALPHA, ASACUSA and ATRAP experiments aim to perform precise antihydrogen spectroscopy, which is challenging because antihydrogen must first be produced and then trapped. This requires the accumulation of both antiprotons and positrons, in addition to antihydrogen production via three-body reactions in a nested Penning trap. In 2012, ALPHA reported on a first spectroscopy-type experiment and published the observation of resonant quantum transitions in antihydrogen (see figure) and, in 2014, ASACUSA reported the first production of a beam of cold antihydrogen atoms. The reliable production and trapping scheme of ALPHA, meanwhile, has enabled several high-resolution studies, including the precise investigation of the charge neutrality of antihydrogen at the 0.7 ppb level.
The ASACUSA, ALPHA and ATRAP collaborations are now preparing their experiments to produce the first electromagnetic-spectroscopy results on antihydrogen. This is difficult because ALPHA typically reports about one trapped antihydrogen atom per mixing cycle, while ASACUSA detects approximately six antihydrogen atoms per shot. Both numbers call for higher antihydrogen production rates, and to further boost AD physics, CERN built the new low-energy antiproton synchrotron ELENA. In parallel to these efforts, proposals to study gravity with antihydrogen were approved. This led to the formation of the AEgIS collaboration in 2008, whose experiment is currently being commissioned, and the GBAR project in 2012.
Towards first beam
As of the end of October 2016, all sectors of the ELENA ring – except for the electron cooler, which has temporarily been replaced by a simple vacuum chamber, and a few transfer lines required for the commissioning of the ring – have been installed and baked to reach the very low rest-gas density required. Following hardware tests, commissioning with beam has started and will resume in early 2017, interrupted only for the installation of the electron cooler some time in spring.
ELENA will be ready from 2017 to provide beam to the GBAR experiment, which will be installed in the new experimental area (see panel). The existing AD experiments, however, will be connected only during CERN’s Long Shutdown 2 in 2019–2020 to minimise the period without antiprotons and to optimise the exploitation of the experiments. GBAR, along with another AD experiment called AEgIS, will target direct tests of the weak-equivalence principle by measuring the gravitational acceleration of antihydrogen. This is another powerful way to test for any difference in the way gravity acts on matter and antimatter. Although the first antimatter free-fall measurements were reported by the ALPHA collaboration in 2013, these results could be improved by several orders of magnitude using the dedicated gravity experiments enabled by ELENA.
ELENA is expected to operate for at least 10 years and be exploited by a user community consisting of six approved experiments. This will take physicists towards the ultimate goal of performing spectroscopy on antihydrogen atoms at rest, and also of investigating the effect of gravity on matter and antimatter. A discovery of CPT violation would constitute a dramatic challenge to the relativistic quantum-field theories of the SM and could contribute to an understanding of the striking imbalance of matter and antimatter observed on cosmological scales.
Throughout history, our notion of space and time has undergone a number of dramatic transformations, thanks to figures ranging from Aristotle, Leibniz and Newton to Gauss, Poincaré and Einstein. In our present understanding of nature, space and time form a single 4D entity called space–time. This entity plays a key role for the entire field of physics: either as a passive spectator by providing the arena in which physical processes take place or, in the case of gravity as understood by Einstein’s general relativity, as an active participant.
Since the birth of special relativity in 1905 and the CPT theorem of Bell, Lüders and Pauli in the 1950s, we have come to appreciate both Lorentz and CPT symmetry as cornerstones of the underlying structure of space–time. The former states that physical laws are unchanged when transforming between two inertial frames, while the latter is the symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity inversion (P) and time reversal (T). These closely entwined symmetries guarantee that space–time provides a level playing field for all physical systems independent of their spatial orientation and velocity, or whether they are composed of matter or antimatter. Both have stood the tests of time, but in the last quarter century these cornerstones have come under renewed scrutiny as to whether they are indeed exact symmetries of nature. Were physicists to find violations, it would lead to profound revisions in our understanding of space and time and force us to correct both general relativity and the Standard Model of particle physics.
Accessing the Planck scale
Several considerations have spurred significant enthusiasm for testing Lorentz and CPT invariance in recent years. One is the observed bias of nature towards matter – an imbalance that is difficult, although perhaps possible, to explain using standard physics. Another stems from the synthesis of two of the most successful physics concepts in history: unification and symmetry breaking. Many theoretical attempts to combine quantum theory with gravity into a theory of quantum gravity allow for tiny departures from Lorentz and CPT invariance. Surprisingly, even deviations that are suppressed by 20 orders of magnitude or more are experimentally accessible with present technology. Few, if any, other experimental approaches to finding new physics can provide such direct access to the Planck scale.
Unfortunately, current models of quantum gravity cannot accurately pinpoint experimental signatures for Lorentz and CPT violation. An essential milestone has therefore been the development of a general theoretical framework that incorporates Lorentz and CPT violation into both the Standard Model and general relativity: the Standard Model Extension (SME), as formulated by Alan Kostelecký of Indiana University in the US and coworkers beginning in the early 1990s. Due to its generality and independence of the underlying models, the SME achieves the ambitious goal of allowing the identification, analysis and interpretation of all feasible Lorentz and CPT tests (see panel below). Any putative quantum-gravity remnants associated with Lorentz breakdown enter the SME as a multitude of preferred directions criss-crossing space–time. As a result, the playing field for physical systems is no longer level: effects may depend slightly on spatial orientation, uniform velocity, or whether matter or antimatter is involved. These preferred directions are the coefficients of the SME framework; they parametrise the type and extent of Lorentz and CPT violation, offering specific experiments the opportunity to try to glimpse them.
The Standard Model Extension
At the core of attempts to detect violations in space–time symmetry is the Standard Model Extension (SME) – an effective field theory that contains not just the SM but also general relativity and all possible operators that break Lorentz symmetry. It can be expressed as a Lagrangian in which each Lorentz-violating term has a coefficient that leads to a testable prediction of the theory.
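As a concrete, deliberately minimal illustration, two of the best-known terms of the minimal SME for a single Dirac fermion ψ of mass m extend the usual Dirac Lagrangian with constant background four-vectors a_μ and b_μ (this is only a representative fragment, not the full SME Lagrangian):

$$\mathcal{L} \;\supset\; \bar{\psi}\,(i\gamma^{\mu}\partial_{\mu} - m)\,\psi \;-\; a_{\mu}\,\bar{\psi}\gamma^{\mu}\psi \;-\; b_{\mu}\,\bar{\psi}\gamma_{5}\gamma^{\mu}\psi .$$

The fixed four-vectors a_μ and b_μ are examples of the preferred directions mentioned above: both terms break Lorentz invariance for the particle and are CPT-odd, and their components are among the coefficients that the experiments described below seek to bound.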
Lorentz and CPT research is unique in the exceptionally wide range of experiments it offers. The SME makes predictions for symmetry-violating effects in systems involving neutrinos, gravity, meson oscillations, cosmic rays, atomic spectra, antimatter, Penning traps and collider physics, among others. In the case of free particles, Lorentz and CPT violation lead to a dependence of observables on the direction and magnitude of the particles’ momenta, on their spins, and on whether particles or antiparticles are studied. For a bound system such as atomic and nuclear states, the energy spectrum depends on its orientation and velocity and may differ from that of the corresponding antimatter system.
The vast spectrum of experiments and latest results in this field were the subject of the triennial CPT conference held at Indiana University in June this year (see panel below), highlights from which form the basis of this article.
The seventh triennial CPT conference
A host of experimental efforts to probe space–time symmetries were the focus of the week-long Seventh Meeting on CPT and Lorentz Symmetry (CPT’16) held at Indiana University, Bloomington, US, on 20–24 June, which are summarised in the main text of this article. With around 120 experts from five continents discussing the most recent developments in the subject, it has been the largest of all meetings in this one-of-a-kind triennial conference series. Many of the sessions included presentations involving experiments at CERN, and the discussions covered a number of key results from experiments at the Antiproton Decelerator and future improvements expected from the commissioning of ELENA. The common thread weaving through all of these talks heralds an exciting emergent era of low-energy Planck-reach fundamental physics with antimatter.
CERN matters
As host to the world’s only cold-antiproton source for precision antimatter physics (the Antiproton Decelerator, AD) and the highest-energy particle accelerator (the Large Hadron Collider, LHC), CERN is in a unique position to investigate the microscopic structure of space–time. The corresponding breadth of measurements at these extreme ends of the energy regime guarantees complementary experimental approaches to Lorentz and CPT symmetry at a single laboratory. Furthermore, the commissioning of the new ELENA facility at CERN is opening brand new tests of Lorentz and CPT symmetry in the antimatter sector (see panel below).
Cold antiprotons offer powerful tests of CPT symmetry
CPT – the combination of charge conjugation (C), parity inversion (P) and time reversal (T) – represents a discrete symmetry between matter and antimatter. As the standard CPT test framework, the Standard Model Extension (SME) possesses a feature that might perhaps seem curious at first: CPT violation always comes with a breakdown of Lorentz invariance. However, an extraordinary insight gleaned from the celebrated CPT theorem of the 1950s is that Lorentz symmetry already contains CPT invariance under “mild smoothness” assumptions: since CPT is essentially a special Lorentz transformation with a complex-valued velocity, the symmetry holds whenever the equations of physics are smooth enough to allow continuation into the complex plane. Unsurprisingly, then, the loss of CPT invariance requires Lorentz breakdown, an argument made rigorous in 2002. Lorentz violation, on the other hand, does not imply CPT breaking.
That CPT breaking comes with Lorentz violation has the profound experimental implication that CPT tests do not necessarily have to involve both matter and antimatter: hypothetical CPT violation might also be detectable via the concomitant Lorentz breaking in matter alone. But this feature comes at a cost: the corresponding Lorentz tests typically cannot disentangle CPT-even and CPT-odd signals and, worse, they may even be blind to the effect altogether. Antimatter experiments decisively brush aside these concerns, and the availability at CERN of cold antiprotons has thus opened an unparalleled avenue for CPT tests. In fact, all six fundamental-physics experiments that use CERN’s antiprotons have the potential to place independent limits on distinct regions of the SME’s coefficient space. The upcoming Extra Low ENergy Antiproton (ELENA) ring at CERN (see “CERN soups up its antiproton source”) will provide substantially upgraded access to antiprotons for these experiments.
One exciting type of CPT test that will be conducted independently by the ALPHA, ATRAP and ASACUSA experiments is to produce antihydrogen, an atom made up of an antiproton and a positron, and compare its spectrum to that of ordinary hydrogen. While the production of cold antihydrogen has already been achieved by these experiments, present efforts are directed at precision spectroscopy promising clean and competitive constraints on various CPT-breaking SME coefficients for the proton and electron.
At present, the gravitational interaction of antimatter remains virtually untested. The AEgIS and GBAR experiments will tackle this issue by dropping antihydrogen atoms in the Earth’s gravity field. These experiments differ in their detailed set-up, but both are projected to permit initial measurements of the gravitational acceleration, g, for antihydrogen at the per cent level. The results will provide limits on SME coefficients for the couplings between antimatter and gravity that are inaccessible with other experiments.
A third fascinating type of CPT test is based on the equality of the physical properties of a particle and its antiparticle, as guaranteed by CPT invariance. The ATRAP and BASE experiments have been advocating such a comparison between protons and antiprotons confined in a cryogenic Penning trap. Impressive results for the charge-to-mass ratios and g factors have already been obtained at CERN and are poised for substantial future improvements. These measurements permit clean bounds on SME coefficients of the proton with record sensitivities.
Regarding the LHC, the latest Lorentz- and CPT-violation physics comes from the LHCb collaboration, which studies particles made up of b quarks. The experiment’s first measurements of SME coefficients in the Bd and Bs systems, published in June this year, have improved existing results by up to two orders of magnitude. LHCb also has competition from other major neutral-meson experiments. These involve studies of the Bs system at the Tevatron’s DØ experiment, recent searches for Lorentz and CPT violation with entangled kaons at KLOE and the upcoming KLOE-2 at DAΦNE in Italy, as well as results on CPT-symmetry tests in Bd mixing and decays from the BaBar experiment at SLAC. The LHC’s general-purpose ATLAS and CMS experiments, meanwhile, hold promise for heavy-quark studies. Data on single-top production at these experiments would allow the world’s first CPT test for the top quark, while the measurement of top–antitop production can sharpen by a factor of 10 the earlier measurements of CPT-even Lorentz violation at DØ.
Other possibilities for accelerator tests of Lorentz and CPT invariance include deep inelastic scattering and polarised electron–electron scattering. The first ever analysis of the former offers a way to access previously unconstrained SME coefficients in QCD employing data from, for example, the HERA collider at DESY. Polarised electron–electron scattering, on the other hand, allows constraints to be placed on currently unmeasured Lorentz violations in the Z boson, which are also parameterised by the SME and have relevance for SLAC’s E158 data and the proposed MOLLER experiment at JLab. Lorentz-symmetry breaking would also cause the muon spin precession in a storage ring to be thrown out of sync by just a tiny bit, which is an effect accessible to muon g-2 measurements at J-PARC and Fermilab.
Historically, electromagnetism is perhaps most closely associated with Lorentz tests, and this idea continues to exert a sustained influence on the field. Modern versions of the classical Michelson–Morley experiment have been realised with tabletop resonant cavities as well as with the multi-kilometre LIGO interferometer, with upcoming improvements promising unparalleled measurements of the SME’s photon sector. Another approach for testing Lorentz and CPT symmetry is to study the energy- and direction-dependent dispersion of photons as predicted by the SME. Recent observations by the space-based Fermi Large Area Telescope severely constrain this effect, placing tight limits on 25 individual non-minimal SME coefficients for the photon.
AMO techniques
Experiments in atomic, molecular and optical (AMO) physics are also providing powerful probes of Lorentz and CPT invariance and these are complementary to accelerator-based tests. AMO techniques excel at testing Lorentz-violating effects that do not grow with energy, but they are typically confined to normal-matter particles and cannot directly access the SME coefficients of the Higgs or the top quark. Recently, advances in this field have allowed researchers to carry out interferometry using systems other than light, and an intriguing idea is to use entangled wave functions to create a Michelson–Morley interferometer within a single Yb⁺ ion. The strongly enhanced SME effects in this system, which arise due to the ion’s particular energy-level structure, could improve existing limits by five orders of magnitude.
Other AMO systems, such as atomic clocks, have long been recognised as a backbone of Lorentz tests. The bright SME prospects arising from the latest trend toward optical clocks, which are several orders of magnitude more precise than traditional varieties based on microwave transitions, are being examined by researchers at NIST and elsewhere. Also, measurements on the more exotic muonium atom at J-PARC and at PSI can place limits on the SME’s muon coefficients, which is a topic of significant interest in light of several current puzzles involving the muon.
From neutrinos to gravity
Unknown neutrino properties, such as their mass, and tension between various neutrino measurements have stimulated a wealth of recent research including a number of SME analyses. The breakdown of Lorentz and CPT symmetry would cause the ordinary neutrino–neutrino and antineutrino–antineutrino oscillations to exhibit unusual direction, energy and flavour dependence, and would also induce unconventional neutrino–antineutrino mixing and kinematic effects – the latter leading to modified velocities and dispersion, as measured in time-of-flight experiments. Existing and planned neutrino experiments offer a wealth of opportunities to examine such effects. For example: upcoming results from the Daya Bay experiment should yield improved limits on Lorentz violation from antineutrino–antineutrino mixing; EXO has obtained the first direct experimental bound on a difficult-to-access “counter-shaded” coefficient extracted from the electron spectrum of double beta decay; T2K has announced new constraints on the a and c coefficients, tightened by a factor of two using muon neutrinos; and IceCube promises extreme sensitivities to “non-minimal” effects with kinematical studies of astrophysical neutrinos, such as Cherenkov effects of various kinds.
The modern approach to Lorentz and CPT tests remains as active as ever.
The feebleness of gravity makes the corresponding Lorentz and CPT tests in this SME sector particularly challenging. This has led researchers from HUST in China and from Indiana University to use an ingenious tabletop experiment to seek Lorentz breaking in the short-range behaviour of the gravitational force. The idea is to bring gravitationally interacting test masses to within submillimetre ranges of one another and observe their mechanical resonance behaviour, which is sensitive to deviations from Lorentz symmetry in the gravitational field. Other groups are carrying out related cutting-edge measurements of SME gravity coefficients with laser ranging of the Moon and other solar-system objects, while analysis of the gravitational-wave data recently obtained by LIGO has already yielded many first constraints on SME coefficients in the gravity sector, with the promise of more to come.
After a quarter century of experimental and theoretical work, the modern approach to Lorentz and CPT tests remains as active as ever. As the theoretical understanding of Lorentz and CPT violation continues to evolve at a rapid pace, it is remarkable that experimental studies continue to follow closely behind and now stretch across most subfields of physics. The range of physical systems involved is truly stunning, and the growing number of different efforts displays the liveliness and exciting prospects for a research field that could help to unlock the deepest mysteries of the universe.
The International Space Station (ISS) is the largest and most complex engineering project ever built in space. It has also provided a unique platform from which to conduct the physics mission of the Alpha Magnetic Spectrometer (AMS). Over the past five years on board the ISS, AMS has orbited the Earth every 93 minutes at an altitude of 400 km and recorded 85 billion cosmic-ray events with energies reaching the multi-TeV range. AMS has been collecting its unprecedented data set and beaming it down to CERN since 2011, and is expected to continue to do so for the lifetime of the ISS.
AMS is a unique experiment in particle physics. The idea for a space-based detector developed after the cancellation of the Superconducting Super Collider in the US in 1993. The following year, an international group of physicists who had worked together for many years at CERN’s LEP collider had a discussion with Roald Sagdeev, former director of the Soviet Institute of Space Research, about the possibility of performing a precision particle-physics experiment in space. Sagdeev arranged for the team to meet with Daniel Goldin, the administrator of NASA, and in May 1994 the AMS collaboration presented the science case for AMS at NASA’s headquarters. Goldin advised the group that use of the ISS as a platform required strong scientific endorsement from the US Department of Energy (DOE) and, after the completion of a detailed technical review of AMS science, the DOE and NASA formalised responsibilities for AMS deployment on the ISS on 20 September 1995.
A 10-day precursor flight of AMS (AMS-01) was carried out in June 1998, demonstrating for the first time the viability of using a precision, large-acceptance magnetic spectrometer in space for a multi-year mission. The construction of AMS-02 for the ISS started immediately afterwards in collaborating institutes around the world. With the loss of the shuttle Columbia in 2003 and the resulting redirection of space policy, AMS was removed from the space-shuttle programme in October 2005. However, the importance of performing fundamental science on the ISS was widely recognised and supported by the NASA Space Station management under the leadership of William Gerstenmaier. In 2008, the US Congress unanimously agreed that AMS be reinstated, mandating an additional flight for the shuttle Endeavour with AMS as its prime payload. Shortly after installation on the ISS in May 2011, AMS was powered on and began collecting and transmitting data (CERN Courier July/August 2011 p18).
The first five years
Much has been learnt in the first five years of AMS about operating a particle-physics detector in space, especially the challenges presented by the ever-changing thermal environment and the need to monitor the detector elements and electronics 24 hours per day, 365 days per year. Communications with NASA’s ISS Mission Control Centers are also essential to ensure that ISS operations and events – such as sudden, unscheduled power cuts and attitude changes – do not disrupt the operation of AMS or imperil the detector.
Of course, it is the data recorded by AMS from events in the distant universe that are the richest scientifically. AMS is able to detect elementary particles – namely electrons, positrons, protons and antiprotons – as well as nuclei of helium, lithium and heavier elements up to indium. The large acceptance and multiple redundant measurements allow AMS to analyse the data to an accuracy of approximately 1%. Combined with its atmosphere-free window on the cosmos, its long-duration exposure time and its extensive calibration at the CERN test beam, this allows AMS to greatly improve the accuracy of previous charged cosmic-ray observations. This is opening up new avenues through which to investigate the nature of dark matter, the existence of heavy antimatter and the true properties of primordial cosmic rays.
The importance of precision studies of positrons and antiprotons as a means to search for the origin of dark matter was first pointed out by theorists John Ellis and, independently, by Michael Turner and Frank Wilczek. They noted that annihilations of the leading dark-matter candidate, the neutralino, would release energy that transforms into ordinary particles such as positrons and antiprotons. Crucially, the resulting excess of positrons and antiprotons in cosmic rays can be measured. The characteristic signature of dark-matter annihilations is a sharp drop-off of these positron and antiproton excesses at high energies, due to the finite mass of the annihilating neutralinos. In addition, since dark matter is ubiquitous, the excesses of the fluxes should be isotropic.
Early low-energy measurements by balloons and satellites indicated that both the positron fraction (that is, the ratio of the positron flux to the flux of electrons and positrons) and the antiproton-to-proton fluxes are larger than predicted by models based on the collisions of cosmic rays. The superior precision of AMS over previous experiments is now allowing researchers to investigate such features, in particular the drop-off in the positron and antiproton excesses, in unprecedented detail.
The first major result from AMS came in 2013 and concerned the positron fraction (CERN Courier October 2013 p22). This highly accurate result showed that, up to a positron energy of 350 GeV, the positron fraction fits well to dark-matter models. This result generated widespread interest in the community and motivated many new interpretations of the positron-fraction excess, for instance whether it is due to astrophysical sources or propagation effects. In 2014, AMS published the positron and electron fluxes, which showed that their behaviours are quite different from each other and that neither can be fitted with the single-power-law assumption underpinning the traditional understanding of cosmic rays.
A deepening mystery
The latest AMS results are based on 17.6 million electrons and positrons and 350,000 antiprotons. In line with previous AMS measurements, the positron flux exhibits a distinct difference from the electron flux, both in its magnitude and energy dependence (figure 1). The positrons show a unique feature: they tend to drop off sharply at energies above 300 GeV, as expected from dark-matter collisions or new astrophysical phenomena. The positron fraction decreases with energy and reaches a minimum at 8 GeV. It then increases with energy and rapidly exceeds the predictions from cosmic-ray collisions, reaching a maximum at 265 GeV and then beginning to fall off. Whereas neither the electron flux nor the positron flux can be described by a single power law, surprisingly the sum of the electron and positron fluxes can be described very accurately by a single power law above an energy of 30 GeV.
Since astrophysical sources of cosmic-ray positrons and electrons may induce some degree of anisotropy in their arrival directions, it is also important to measure the anisotropy of cosmic-ray events recorded by AMS. Using the latest data set, a systematic search for anisotropies has been carried out on the electron and positron samples in the energy range 16–350 GeV. The dipole-anisotropy amplitudes measured on 82,000 positrons and 1.1 million electrons are 0.014 for positrons and 0.003 for electrons, which are consistent with the expectations from isotropy.
The latest AMS results on the fluxes and flux ratio of electrons and positrons exhibit unique and previously unobserved features. These include the energy dependence of the positron fraction, the existence of a maximum at 265 GeV in the positron fraction, the exact behaviour of the electron and positron fluxes and, in particular, the sharp drop-off of the positron flux. These features require accurate theoretical interpretation as to their origin, be it from dark-matter collisions or new astrophysical sources.
Concerning the measured antiproton-to-proton flux ratio (figure 2), the new data show that this ratio is independent of rigidity (defined as the momentum per unit charge) in the rigidity range 60–450 GV. This is contrary to traditional cosmic-ray models, which assume that antiprotons are produced only in the collisions of cosmic rays and therefore that the ratio decreases with rigidity. In addition, due to the large mass of antiprotons, the observed excess of the antiproton-to-proton flux ratio cannot come from pulsars. Indeed, the excess is consistent with some of the latest model predictions based on dark-matter collisions as well as those based on new astrophysical sources. Unexpectedly, the antiproton-to-positron flux ratio is also independent of rigidity in the range 60–450 GV (CERN Courier October 2016 p8). This is considered a major result from the five-year summary of AMS data.
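For reference, the rigidity used throughout these measurements is the standard quantity

$$R = \frac{pc}{Ze},$$

i.e. the momentum p per unit charge Ze expressed in volts, so that for singly charged particles such as protons and antiprotons a rigidity range of 60–450 GV corresponds to momenta of 60–450 GeV/c.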
The upshot of these new findings in elementary-particle cosmic rays is that the rigidity dependences of the fluxes of positrons, protons and antiprotons are nearly identical, whereas the electron flux has a distinctly different rigidity dependence. This is unexpected because electrons and positrons lose much more energy in the galactic magnetic fields than do protons and antiprotons.
Nuclei in cosmic rays
Most of the cosmic rays flying through the cosmos comprise protons and nuclei, and AMS collects nuclei simultaneously with elementary particles to enable an accurate understanding of both astrophysical phenomena and cosmic-ray propagation. The latest AMS results shed light on the properties of protons, helium, lithium and heavier nuclei in the periodic table. Protons, helium, carbon and oxygen are traditionally assumed to be primary cosmic rays, which means they are produced directly from a source such as supernova remnants.
Protons and helium are the two most abundant charged cosmic rays. They have been measured repeatedly by many experiments over many decades, and their energy dependence has traditionally been assumed to follow a single power law. In the case of lithium, which is assumed to be produced from the collision of primary cosmic rays with the interstellar medium and therefore yields a single power law but with a different spectral index, experimental data have been very limited.
No one has a clue what could be causing these spectacular effects
Sam Ting
The latest AMS data reveal, with approximately 1% accuracy, that the proton, helium and lithium fluxes as a function of rigidity all deviate from the traditional single-power-law dependence at a rigidity of about 300 GV (figure 3). It is completely unexpected that all three deviate from a single power law, that all three deviations occur at about the same rigidity and increase at higher rigidities, and that the three spectra can be fitted with double power laws above a rigidity of 45 GV. In addition, it has long been assumed that since both protons and helium are primary cosmic rays with the same energy dependence at high energies, their flux ratio would be independent of rigidity. The AMS data show that above rigidities of 45 GV, the flux ratio decreases with rigidity and follows a single-power-law behaviour. Despite being a secondary cosmic ray, lithium also exhibits the same rigidity behaviour as protons and helium. It is fair to say that, so far, no one has a clue what could be causing these spectacular effects.
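The double power laws referred to here can be written as a smoothly broken power law of the form used in the AMS flux analyses (the notation below is illustrative and the reference rigidity may differ):

$$\Phi(R) \;=\; C\left(\frac{R}{45\ \mathrm{GV}}\right)^{\gamma}\left[\,1 + \left(\frac{R}{R_{0}}\right)^{\Delta\gamma/s}\right]^{s},$$

where γ is the spectral index below the break, Δγ the change of index above the break rigidity R₀ (around 300 GV here) and s controls the smoothness of the transition; Δγ = 0 recovers a single power law.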
The latest AMS measurement of the boron-to-carbon flux ratio (B/C) also contains surprises (figure 4). Boron is assumed to be produced through the interactions of primary cosmic rays such as carbon and oxygen with the interstellar medium, which means that B/C provides information both on cosmic-ray propagation and on the properties of the interstellar medium. The B/C ratio does not show any significant structures, in contrast to many cosmic-ray propagation models that assume such behaviour at high rigidities (including a class of propagation models that explain the observed AMS positron fraction). Cosmic-ray propagation is commonly modelled as relativistic gas diffusion through a magnetised plasma, and models of the magnetised plasma predict different behaviours of B/C as a function of rigidity. At rigidities above 65 GV, the latest AMS data can be well fitted by a single power law with spectral index Δ in agreement with the Kolmogorov model of turbulence, which predicts Δ = –1/3 asymptotically.
Building a spectrometer in space
AMS is a precision, multipurpose TeV spectrometer measuring 5 × 4 × 3 m and weighing 7.5 tonnes. It consists of a transition radiation detector (TRD) to identify electrons and positrons; a permanent magnet together with nine layers of silicon tracker (labelled 1 to 9) to measure momentum up to the multi-TeV range and to identify different species of particles and nuclei via their energy loss; two banks of time-of-flight (TOF) counters to measure the direction and velocity of cosmic rays and identify species by energy loss; veto counters (ACC) surrounding the inner bore of the magnet to reject cosmic rays from the side; a ring-image Cherenkov counter (RICH) to measure the cosmic-ray energy and identify particle species; and an electromagnetic calorimeter (ECAL) to provide 3D measurements of the energy and direction of electrons and positrons, and distinguish them from antiprotons, protons and other nuclei.
Future directions
Much has been learnt from the unexpected physics results from the first five years of AMS. Measuring many different species of charged cosmic rays at the same time with high accuracy provides unique input for the development of a comprehensive theory of cosmic rays, which have puzzled researchers for a century. AMS data are also providing new information that is essential to our understanding of the origin of dark matter, the existence of heavy antimatter, and the properties of charged cosmic rays in the cosmos.
The physics potential of AMS is the reason why the experiment receives continuous support. AMS is a US DOE and NASA-sponsored international collaboration and was built with European participation from Finland, France, Germany, Italy, Portugal, Spain and Switzerland, together with China, Korea, Mexico, Russia, Taiwan and the US. CERN has provided critical support to AMS, with CERN engineers engaged in all phases of the construction. Of particular importance was the extensive calibration of the AMS detector with different particle test beams at various energies, which provided key reference points for verifying the detector’s operation in space.
AMS will continue to collect data at higher energies and with high precision during the lifetime of the ISS, at least until 2024. To date, AMS is the only long-duration precision magnetic spectrometer in space and, given the challenges involved in such a mission, it is likely that it will remain so for the foreseeable future.
In the first half of the 20th century, many of the most important discoveries of new particles were made by cosmic-ray experiments. Examples include antimatter, the muon, pion, kaon and other hadrons, which opened up the field of high-energy physics and set in motion our modern understanding of elementary particles. This came about because cosmic-ray interactions with nuclei in the upper atmosphere are among the highest-energy events known, surpassing anything that could be produced in laboratories at the time – and even in collisions at the LHC today.
However, around the middle of the century the balance of power in particle physics shifted to accelerator experiments. By generating high-energy interactions in the laboratory under controlled conditions, accelerators offered new possibilities for precise measurements and thus for the study of rare particles and phenomena. These experiments helped to flesh out the quark model and also the fundamental force-carrying bosons, leading to the establishment of the Standard Model (SM) – whose success was crowned by the discovery of the Higgs boson at the LHC in 2012.
Today, thanks to its unique position on the International Space Station, the AMS experiment combines the best of both worlds as a highly sensitive particle detector that is free from the complicated environment of the atmosphere (see “Cosmic rays continue to confound”). Collecting data since 2011, AMS has initiated a new epoch of precision cosmic-ray experiments that help to address basic puzzles in particle physics, such as the nature of dark matter. The experiment’s latest round of data continues to throw up surprises. However, arriving at the correct interpretation of events caused by particles produced far away in the universe still presents challenges for physicists trying to understand dark matter and the cosmological asymmetry between matter and antimatter.
Best of both worlds
The emphasis in particle physics now is on the search for physics beyond the SM, for which many motivations come from astrophysics and cosmology. Examples include dark matter, which contributes many times more to the overall density of matter in the universe than does the conventional matter described by the SM, and the origin of matter itself. Many physicists think that dark matter may be composed of particles that could be detected at the LHC, or might reveal themselves in astrophysical experiments such as AMS. As for the origin of matter, the big question has been whether it is due to an intrinsic difference between the properties of matter and antimatter particles, or whether the dominance of matter over antimatter in the universe around us is merely a local phenomenon. Although it is unlikely that there exist other regions of the observable universe where antimatter dominates, there is limited direct experimental evidence against it.
The AMS approach to cosmic-ray physics is based on decades of experience in high-statistics, high-precision accelerator experiments. It has a strong focus on measurements of antiparticle spectra, which allow it to search indirectly for possible dark-matter particles – these would produce antiparticles if they annihilated with each other – as well as for possible harbingers of astrophysical concentrations of antimatter. In parallel, AMS is able to measure the energy spectra of many different nuclear species, posing challenges for models of the origin of cosmic rays – a mystery that has persisted ever since their discovery in 1912.
Unconventional physics?
The latest AMS results on the cosmic-ray electron and positron fluxes provide highly accurate measurements of the markedly different spectra of these particles. Numerous previous experiments had reported an increase in the positron-to-electron ratio with increasing energy, although with considerable scatter. AMS has now confirmed this trend with greater precision, but its data also indicate that the positron-to-electron ratio may decrease again at energies above about 300 GeV. The differences between the electron and positron fluxes mean that different mechanisms must dominate their production. The natural question is whether some exotic mechanism is contributing to positron production.
One possibility is the annihilation of dark-matter particles; a more conventional one is production by electromagnetic processes around one or more nearby pulsars. In both cases, one might expect the positron spectrum to turn down at higher energies, constrained either by the mass of the dark-matter particle or by the strength of the acceleration mechanism around the pulsar(s). In the latter case, one would also expect the positron flux to be non-isotropic, but no significant effect has been seen so far. It will be interesting to see whether the high-energy decrease in the positron-to-electron ratio is confirmed by future AMS data, and whether it can be used to discriminate between exotic and conventional models of positron production.
A more sensitive probe of unconventional physics could be provided by the AMS measurement of the spectrum of antiprotons. These cannot be produced in the electromagnetic processes around pulsars, but would be produced as “secondaries” in collisions between primary-matter cosmic rays and ordinary-matter particles. It is striking, for instance, that the antiproton-to-proton ratio measured by AMS is almost constant at energies above about 10 GeV. The measured ratio is significantly higher than some earlier calculations of secondary antiproton production, although more recent calculations, which account more completely for the theoretical uncertainties, predict a somewhat higher ratio – possibly even consistent with the AMS measurements. As with positron production, extending the measurements to higher energies will be crucial for distinguishing between exotic and conventional mechanisms of antiproton production.
AMS has also released interesting data on the fluxes of protons, helium and lithium nuclei. Intriguingly, all three spectra show strong indications of a break at rigidities of around 200 GV: the higher-energy portions of the spectra lie significantly above simple power-law extrapolations of the lower-energy data (as sketched below). It seems that some additional acceleration mechanism might be playing a role at higher energies, providing food for thought for astrophysical models of cosmic-ray acceleration. In particular, the unexpected shape of the primary proton spectrum may also need to be taken into account when calculating the secondary antiproton spectrum.
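To make the idea of a spectral break concrete, the following minimal sketch evaluates a smoothly broken power law in rigidity – a generic parametrisation often used to describe such a hardening – and compares it with a single power-law extrapolation of the low-rigidity behaviour. All parameter values (the index, the break rigidity of 200 GV, the amount of hardening and the smoothness) are hypothetical, chosen only to mimic the qualitative trend described above; this is not the AMS fit.

```python
# Illustrative only: a smoothly broken power law in rigidity, compared with a
# single power-law extrapolation. All parameter values are hypothetical.
import numpy as np

def smoothly_broken_power_law(R, C=1.0, gamma=-2.8, R0=200.0, dgamma=0.13, s=0.1):
    """Flux ~ C * R^gamma * [1 + (R/R0)^(dgamma/s)]^s, with R and R0 in GV."""
    return C * R**gamma * (1.0 + (R / R0)**(dgamma / s))**s

R = np.array([50.0, 200.0, 800.0, 2000.0])   # rigidities in GV
broken = smoothly_broken_power_law(R)        # spectrum with a break at ~200 GV
single = 1.0 * R**-2.8                       # single power-law extrapolation
print(np.round(broken / single, 2))          # ratio grows above the break: the spectrum hardens
```

The ratio printed in the last line rises above unity beyond the break, which is the sense in which the measured high-rigidity fluxes lie above a simple power-law extrapolation.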
The AMS data on the boron-to-carbon ratio also provide interesting information for models of the propagation of cosmic rays. In the most general picture, cosmic rays can be considered as a relativistic gas diffusing through a magnetised plasma. This leads to a boron-to-carbon ratio that decreases as a power, Δ, of the rigidity, with different models yielding values of Δ between –1/2 and –1/3. The latest AMS data constrain this power law with very high precision: Δ = –0.333 ± 0.015, in excellent agreement with the simplest Kolmogorov model of diffusion.
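As a rough illustration of how such an index is extracted, the sketch below fits a straight line in log–log space to a handful of mock boron-to-carbon points that follow a Kolmogorov-like R^(–1/3) behaviour. The rigidity values, normalisation and scatter are invented for the example and carry no physical significance.

```python
# Minimal sketch with mock data: estimate the slope Delta of a power law
# B/C ~ R^Delta by a straight-line fit in log-log space.
import numpy as np

rigidity = np.array([70.0, 150.0, 300.0, 600.0, 1200.0])     # GV (illustrative values)
b_over_c = 0.9 * rigidity**(-1.0 / 3.0)                      # mock Kolmogorov-like behaviour
rng = np.random.default_rng(0)
b_over_c *= 1.0 + 0.02 * rng.standard_normal(rigidity.size)  # add a little scatter

slope, intercept = np.polyfit(np.log(rigidity), np.log(b_over_c), 1)
print(f"fitted Delta = {slope:.3f}")                         # comes out close to -1/3
```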
The AMS collaboration has already collected data on the production of many heavier nuclei, and it would be interesting if the team could extract information about unstable nuclear isotopes that might have been produced by a recent nearby supernova explosion. Such events might already have had an effect on Earth: analyses of deep-ocean sediments have recently confirmed previous reports of a layer of iron-60 that was presumably deposited by a supernova explosion within about 100 parsecs of Earth some 2.5 million years ago, and there is evidence of iron-60 also in lunar rock samples and cosmic rays. Other unstable isotopes of potential interest include beryllium-10, aluminium-26, chlorine-36, manganese-53 and nickel-59.
Promising prospects
What else may we expect from AMS in the future? The prospective gains from measuring the spectra of positrons and antiprotons to higher energies have already been mentioned. Since these antiparticles can also be produced by other processes, such as pulsars and primary-matter cosmic rays, they may not provide smoking guns for antimatter production via dark-matter annihilation, or for concentrations of antimatter in the universe. However, searches for antinuclei in cosmic rays present interesting prospects in either or both of these directions. The production of antideuterons in dark-matter annihilations may be visible above the background of secondary production by primary-matter cosmic rays, for example. On the other hand, the production of heavier antinuclei in both dark-matter annihilations and cosmic-ray collisions is expected to be very small. The search for such antinuclei has always been one of the main scientific objectives of AMS, and the community looks forward to hearing whatever data they may acquire on their possible (non-)appearance.
As this brief survey has indicated, AMS has already provided much information of great interest for particle physicists studying scenarios for dark matter, for astrophysicists and for the cosmic-ray community. Moreover, there are good prospects for further qualitative advances in future years of data-taking. The success of AMS is another example of the fruitful marriage of particle physics and astrophysics, in this case via the deployment in space of a state-of-the-art particle spectrometer. We look forward to seeing the future progeny of this happy marriage.
LHC proton running for 2016 reached a conclusion on 26 October, after seven months of colliding protons at an energy of 13 TeV. The tally for the year is truly impressive. I could mention the fact that the machine’s design luminosity of 1034 cm–2 s–1 was regularly achieved and exceeded by 30 to 40%. Or I could say that, with an integrated luminosity of 40 fb–1 delivered in 2016, we comfortably exceeded our target for the year of 25 fb–1 – allowing the LHC experiments to accumulate sizeable data samples in time for the biennial ICHEP conference in August.
But what impresses me the most, and what really sets a marker for the future, is the availability of the machine. For 60% of its 2016 operational time, the LHC was running with stable beams delivering high-quality data to the experiments. This is unprecedented. Typical availability figures for big energy-frontier machines are around 50%, and that is the target we set ourselves for the LHC this year. Given the scale and complexity of the LHC, even that seemed ambitious. To put it in perspective, CERN’s previous and much simpler flagship facility, the Large Electron Positron (LEP) collider, achieved a figure of 30% over its operational lifetime from 1989 to 2000.
After hitting its design luminosity on 26 June, the LHC’s peak luminosity was further increased by using smaller beams from the injectors and reducing the angle at which the beams cross inside the ATLAS and CMS experiments. The resulting luminosity topped out at around 1.4 × 1034 cm–2 s–1, 40% above design. This year’s proton operation also included successful forward-physics runs for the TOTEM/CT-PPS, ALFA and AFP experiments.
The LHC is no ordinary machine. The world’s largest, most complex and highest-energy collider is also the world’s largest cryogenic facility. The difficulties we had when commissioning the machine in 2008 are well documented, and there is more to do: we are still not running at the design energy of 14 TeV, for example. But this does not detract from the fact that the 2016 run has shown what a fundamentally good design the LHC is, what a fantastic team it has running it, and that clearly it is possible to run a large-scale cryogenic facility with metronomic reliability.
This augurs well for the future of the LHC and its high-luminosity upgrade, the HL-LHC, which will take us well into the 2030s. But it is not only a good sign for particle physics. Other upcoming cryogenic facilities such as the ITER fusion experiment under construction in France can also take heart from the LHC’s performance, and who knows where else this technology might take us? If it is possible to run a 27 km-circumference superconducting particle accelerator with high reliability, then a superconducting electrical-power-distribution network, for example, does not seem so unrealistic. With developments in high-temperature superconductors proceeding apace, that possibility looks tantalisingly close.
With the way that the LHC has performed this year, it would be easy to be complacent, but the 2016 run has not been without difficulties. From the unfortunate beech marten that developed a short-lived taste for the connections of an outdoor high-voltage transformer in May, to rather more challenging technical issues, the LHC team has had numerous problems to solve, and the upcoming end-of-year technical stop will be a busy one. With a machine as complex as the LHC, its entire operational lifetime is a learning curve for accelerator physicists.
Which brings me back to the question of the LHC’s design energy. With proton running finished for another year, the LHC has now moved into a period of heavy-ion physics. When that is over, we will conclude the year with two weeks dedicated to re-training the magnets in two of the machine’s eight sectors, with a view to 14 TeV running. News from this work will provide valuable input to the LHC performance workshop in January, which will set the scene for the coming years at the energy frontier.
In this book, the author discusses the scientific nature of light and colours, how we see them and how we use them in a variety of applications. Colours are the way in which our visual system and, ultimately, our brain interpret the different wavelengths of the visible part of the light spectrum. Other living creatures are sensitive to light in different ways, and not all of them can see colours.
After presenting the science behind colours and our vision, the book discusses the use that mankind has made of colours. Ever since humans lived in caves, we have used pigments to draw on walls, a practice that evolved into painting and, more recently, graphic art. Here, as when designing decorations and dyes for clothing, the colours are not natural but man-made.
In the chapters that follow, the author reviews three technologies integrated into our everyday life – photography, cinematography and television – which emerged in black-and-white and later evolved into colour. The final part of the book describes various forms of light display, mostly used for entertainment, and the application of colours as a code in many contexts, including road safety, hospital emergencies and industry.
Readers attracted by this mixture of science, art and culture will find the book easy to read.