A new national facility at La Silla Observatory in Chile, operated by the European Southern Observatory (ESO), made its first observations at the beginning of the year. ExTrA (Exoplanets in Transits and their Atmospheres) will search for Earth-sized planets orbiting nearby red dwarf stars, its three 0.6 m-diameter near-infrared telescopes (pictured) offering greater sensitivity than previous searches. ExTrA is a French project, also funded by the European Research Council, and the telescopes will be operated remotely from Grenoble.
Measuring the production of the top quark with vector bosons can provide fresh insight into the Standard Model (SM), in particular by testing the top quark and heavy vector boson vertices, which may be modified by extensions to the SM. In two new results, ATLAS presents strong evidence for the production of a single top quark in association with a Z boson (tZ) and has for the first time extracted differential cross-sections for the production of a top quark in association with a W boson (tW). While tW production was already measured during LHC Run 1, the tZ process is much harder to observe because its production rate is about one hundred times smaller.
For both the tZ and tW processes, separating signal from background events is critical. ATLAS searched for events containing leptons (electrons or muons), jets and transverse momentum imbalance. All the information from the measured particles is condensed into one multivariate discriminator (MVA) trained to separate the signal from the background.
The new ATLAS results use data collected in 2015 and 2016, corresponding to an integrated luminosity of 36.1 fb–1. For the tZ analysis, 25 signal events are found after selection, together with 120 background events. Applying the MVA allows the signal and background to be better separated (see figure, left), leading to a signal significance of 4.2 standard deviations. This constitutes strong evidence that the associated production of a single top quark and a Z boson has been seen, and the observed production rate agrees with that predicted by the SM.
The extraction of differential cross-sections for tW is particularly challenging, as top quarks almost always decay into a b quark and a W boson, leaving two W bosons in the final state. The dominant background, the production of a top quark with a top antiquark, has an 11 times larger inclusive production rate. Applying the MVA makes it possible to select events with a signal-to-background ratio of about 1:2, which allows the signal cross-section to be extracted as a function of kinematic observables. Differential cross-sections have been measured as a function of several variables and compared to predictions implemented in different Monte Carlo programs (see figure). The uncertainty on the measurements is at the 20–50% level, dominated by statistical effects. While the analysis was not able to exclude particular models, the data tend to have more events with high-momentum particles than predicted.
With the additional data to be collected over the next few years, ATLAS will study both tW and tZ production in more detail, and improve its searches for the even rarer and more elusive production of a (single) top quark in association with a Higgs boson.
The quest for new physics inspires searches in CMS for very rare processes, which, if discovered, could open the door to a new understanding of particle physics.
One such process is the production and decay of heavy sterile Majorana neutrinos, a type of heavy neutral lepton (HNL) introduced to describe the very small neutrino masses via the so-called seesaw mechanism. Two further fundamental puzzles of particle physics can be solved by adding three HNLs to the Standard Model (SM) particle spectrum: the lightest (with a mass of a few keV) can serve as a dark-matter candidate; the two heavier ones (heavier than about a GeV) could, when mass-degenerate, be responsible for a sizable amount of CP violation and thus help explain the cosmological matter–antimatter asymmetry.
Through their mixing with the SM neutrinos (see figure, left), the heavier HNLs could be produced at the LHC in leptonic W-boson decays. Subsequently, the HNL can decay to another W boson and a lepton, leading to a signal containing three isolated leptons. Depending on how weakly the new particles couple to the SM neutrinos, characterised by the parameters |VeN|2, |VμN|2 and |VτN|2, they can either decay shortly after production, or after flying some distance in the detector.
A new search performed with data collected in 2016 by CMS focuses on prompt trilepton (electron or muon) signatures of HNL production. It explores a mass range from 1 GeV to 1.2 TeV, more than doubling the scope of LHC results so far. It also probes a mass regime that had been unexplored since the days of the Large Electron–Positron collider (LEP), indicating that, with more data, the LHC will eventually supersede the LEP results.
The trilepton final state does not lead to a sharp peak in an invariant mass spectrum, and therefore the search has to employ various kinematic properties of the events to detect the possible presence of HNLs. To be sensitive to very low HNL masses, the search uses soft muons (with pT > 5 GeV) and electrons (pT > 10 GeV). While no signs of HNLs have been found so far (see figure, right), the constraints on |VμN|2 (those on |VeN|2 are similar) in the high-mass region are the strongest to date. In the low-mass region, the analysis has sensitivity comparable to previous searches.
Using dedicated analysis techniques, the search will be extended to explore the parameter space where HNLs have longer lifetimes and so travel large distances in the detector before they decay. Together with more data, this will enable CMS to significantly improve its sensitivity at low masses and eventually probe unexplored territory in this important region of HNL parameter space.
In two publications submitted to the Journal of High Energy Physics and Physics Letters B in December, the ALICE collaboration reports new production cross-section measurements of the charmed baryons Λc+ and Ξc0 in proton–proton collisions at an energy of 7 TeV and in proton–lead collisions at a collision energy of 5.02 TeV per nucleon–nucleon pair. The Λc+ were reconstructed in the hadronic decay modes Λc+ → pK−π+ and Λc+ → pK0S, and in the semileptonic channel Λc+ → e+νeΛ (and charge conjugates). For the Ξc0 analysis, the semileptonic channel Ξc0 → e+νeΞ− was used.
The comparison of charm baryon and meson cross-sections provides information on c-quark hadronisation. Surprisingly, the measured values of the Λc+/D0 baryon-to-meson ratio are significantly larger than those previously measured by other experiments in collisions involving electron beams, at different centre-of-mass energies and in different rapidity and pT intervals.
The results (see figure) are compared with the expectations obtained from perturbative QCD calculations and Monte Carlo event generators. None of the models reproduces the data, indicating that the fragmentation of charm quarks is not well understood. A similar pattern is seen when comparing the Ξc0/D0 baryon-to-meson ratio with predicted values (see figure, right), where the latter have a sizable uncertainty due to the unknown branching ratio of the decay.
These two results suggest that charmed baryon formation might not be universal, and that the baryon/meson ratio depends on the collision system. Hints of non-universality of the fragmentation functions are also seen when comparing beauty-baryon production measurements at the Tevatron and LHC with those at LEP. The ratios measured in pPb collisions are similar to the result in pp collisions.
The statistical precision of the Λc+ and Ξc0 measurements is expected to improve with data collected during LHC Run 2, and with data from Run 3 and Run 4 following a major upgrade of the ALICE apparatus. This set of measurements also provides a reference for future investigation of Λc+ and Ξc0 production in lead–lead collisions, where the formation and kinematic properties of charm baryons are expected to be affected by the presence of the quark–gluon plasma.
Many questions remain about what happened in the first billion years of the universe. At around 100 million years old, the universe was a dark place consisting mostly of neutral hydrogen, without many objects emitting detectable radiation. This situation changed as stars and galaxies formed, leading to a phase transition known as reionisation, in which the neutral hydrogen was ionised. Exactly when reionisation started and how long it took is still not fully clear, but the recent discovery of the oldest massive black hole ever found can help answer this important question.
Up to about 300,000 years after the Big Bang, the universe was hot and dense, and electrons and protons were fully separated. As the universe expanded, it cooled and underwent a first phase transition in which electrons and protons formed neutral gases such as hydrogen. The following period is known as the cosmic dark ages. During this period, protons and electrons were mostly combined into neutral hydrogen, but the universe had to cool much further before matter could condense to the point where light-producing objects such as stars could form. These new objects started to emit both the radiation we can now detect to study the early universe and also the radiation responsible for the last phase transition – the reionisation of the universe. Some of the brightest and therefore easiest-to-detect objects are quasars: massive black holes surrounded by discs of hot accreting matter that emit radiation over a wide but distinctive spectrum.
Using data from a range of large-area surveys by different telescopes, a group led by Eduardo Bañados from the Carnegie Institution for Science has discovered a distant quasar called J1342+0928, with the black hole at its centre found to be 800 million solar masses. After the radiation was emitted by J1342+0928, it travelled through the expanding universe, its wavelength increasing, or “redshifting”, in proportion to its travel time. Using known spectral features of quasars, the redshift (and therefore the moment at which the radiation was emitted) can be calculated.
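To make the last step concrete (an illustration; the redshift figure is the published value for J1342+0928, not stated above): a spectral feature emitted at wavelength λ is observed, after its journey through the expanding universe, at λ × (1 + z). With the measured redshift z = 7.54, the hydrogen Lyman-α line emitted at 121.6 nm arrives at 121.6 nm × 8.54 ≈ 1040 nm, shifted from the ultraviolet into the near-infrared.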
The spectrum of J1342+0928, shown in the figure, demonstrates that the universe was only 690 million years old – just 5% of its current age – at the time we see J1342+0928. The spectrum also shows a second interesting feature: the absorption of a part of the spectrum by neutral hydrogen, which implies that at the time we are observing the black hole, the universe was not yet fully ionised. By modelling the emission and absorption, Bañados and co-workers found that the spectrum from J1342+0928 is compatible with emission in a universe where half the hydrogen was ionised, putting the time of emission right in the middle of the epoch of reionisation.
The next mystery is to explain how a black hole weighing 800 million solar masses could form so early in the universe. Black holes grow as they accrete surrounding mass, but the accreting mass radiates, and this radiation pushes other infalling matter away from the black hole. As a result, there is a theoretical limit on the rate at which a black hole can accrete. Given this limit, forming a black hole as massive as that in J1342+0928 would require seed black holes in the very early universe with masses that challenge current theoretical models. One possible explanation, however, is that this particular black hole is a peculiar case and was formed by a merger of several smaller black holes.
Thanks to continuous data taking from a range of existing telescopes and upcoming new instrumentation, we can expect more objects like J1342+0928, or even more distant ones, to be discovered, offering probes of the universe at even earlier stages. The discovery of further objects would allow the period of reionisation to be dated more precisely, which can then be compared with indirect measurements from the cosmic microwave background. At the same time, more measurements will show whether black holes of this size in the early universe are an anomaly or commonplace. In either case, such observations would provide important input for research on early black hole formation.
When deciding on the shape of a particle accelerator, physicists face a simple choice: a ring of some sort, or a straight line? This is about more than aesthetics, of course. It depends on the application the accelerator is to be used for: high-energy physics, advanced light sources, medicine or numerous others.
Linear accelerators (linacs) can have denser bunches than their circular counterparts, and are widely used for research. However, for both high-energy-physics collider experiments and light sources, linacs can be exceedingly power-hungry because the beam is essentially discarded after each use. This forces linacs to operate at an extremely low current compared to ring accelerators, which in turn limits the data rate (or luminosity) delivered to an experiment. On the other hand, in a collider ring there is a limit to the focusing of the bunches at an interaction point, as each bunch has to survive the potentially disruptive collision process on each of millions of turns. Bunches from a linac have to collide only once and can therefore be focused aggressively to collide at a higher luminosity.
Linacs could outperform circular machines for light-source and collider applications, but only if they can be operated with higher currents by not discarding the energy of the spent beam. Energy-recovery linacs (ERLs) fill this need for a new accelerator type with both linac-quality bunches and the large currents more typical of circular accelerators. By recovering the energy of the spent beam through deceleration in superconducting radio-frequency (SRF) cavities, ERLs can recycle that energy to accelerate new bunches, combining the dense beam of a linear accelerator with the high current of a storage ring to achieve significant RF power savings.
A new facility called CBETA (Cornell-Brookhaven ERL Test Accelerator) that combines some of the best traits of linear and circular accelerators has recently entered construction at Cornell University in the US. Set to become the world’s first multi-turn SRF ERL, with a footprint of about 25 × 15 m, CBETA is designed to accelerate an electron beam to an energy of 150 MeV. As an additional innovation, this four-turn ERL relies on only one return loop for its four beam energies, using a single so-called fixed-field alternating-gradient return loop that can accommodate a large range of different electron energies. To further save energy, this single return loop is constructed from permanent Halbach magnets (an arrangement of permanent magnets that augments the magnetic field on the beam side while cancelling the field on the outside).
Initially, CBETA is being built to test the SRF ERL and the single-return-loop concept of permanent magnets for a proposed future electron-ion collider (EIC). Thereafter, CBETA will provide beam for applications such as Compton-backscattered hard X-rays and dark-photon searches. This future ERL technology could be an immensely important tool for researchers who rely on the luminosity of colliders as well as for those that use synchrotron radiation at light sources. ERLs are envisioned for nuclear and elementary particle-physics colliders, as in the proposed eRHIC and LHeC projects, but are also proposed for basic-research coherent X-ray sources, medical applications and industry, for example in lithography sources for the production of yet-smaller computer chips.
The first multi-turn SRF ERL
The theoretical concept of ERLs was introduced long before a functional device could be realised. With the introduction of the CBETA accelerator, scientists are following up on a concept first introduced by physicist Maury Tigner at Cornell in 1965. Similarly, non-scaling fixed-field alternating-gradient optics for beams of largely varying energies were introduced decades ago and will be implemented in an operational accelerator for only the second time with CBETA, after a proof-of-principle test at the EMMA facility at Daresbury Laboratory in the UK, which was commissioned in 2010.
The key behind the CBETA design is to recirculate the beam four times through the SRF cavities, allowing electrons to be accelerated to four very different energies. The beam with the highest energy (150 MeV) will be used for experiments, before being decelerated in the same cavities four times. During deceleration, energy is taken out of the electron beam and is transferred to electromagnetic fields in the cavities, where the recovered energy is then used to accelerate new particles. Reusing the same cavities multiple times significantly reduces the construction and operational costs, and also the overall size of the accelerator.
The energy-saving potential of the CBETA technology cannot be overstated, and is a major consideration for the project’s funding agency, the New York State Energy Research and Development Authority. By incrementally increasing the energy of the beam through multiple passes through the accelerator section, CBETA can achieve a high-energy beam without a high initial energy at injection – characteristics more commonly found in storage rings. CBETA’s use of permanent magnets provides further energy savings. The precise energy savings from CBETA are difficult to estimate at this stage, but the machine is expected to require about a factor of 20 less RF power than a traditional linac. This saving factor would be even larger for future ERLs with higher beam energy.
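A rough figure illustrates the stakes (our arithmetic, combining the 150 MeV top energy with the 40 mA maximum current quoted later in this article). The beam power is the beam energy times the beam current:

P = 150 MeV × 40 mA = 6 MW,

all of which a conventional linac’s RF system must supply continuously and then dump in a beam stop, and most of which an ERL instead returns to the cavities for reuse.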
SRF linacs have been operated in ERL mode before, for example at Jefferson Lab’s infrared free-electron laser, where single-pass energy recovery reclaimed nearly all of the electrons’ energy. CBETA will be the first SRF ERL with more than one turn and is unique in its use of a single return loop for all beams. Simultaneously transporting beams at four very different energies (from 42 to 150 MeV) requires a different bending field strength for each energy. While traditional beamlines are simply unable to keep beams with very different energies on the same “track”, the CBETA design relies on fixed-field alternating-gradient optics. To save energy, permanent Halbach magnets containing all four beam energies in a single 70 mm-wide beam pipe were designed and prototyped at Brookhaven National Laboratory (BNL). Such optics for a large energy range had already been proposed in the 1960s, but a modern rediscovery began in 1999 with the PoP (proof-of-principle) accelerator at KEK in Japan. This concept has various applications, including medicine, nuclear energy, and nuclear and particle physics, culminating so far in the construction of CBETA. Important aspects of these optics will be investigated at CBETA, including: time-of-flight control, maintenance of performance in the presence of errors, adiabatic transition between curved and straight regions, the creation of insertions that maintain the large energy acceptance, the operation and control of multiple beams in one beam pipe, and harmonic correction of the fields in the permanent magnets.
Harmonic field correction is achieved by an elegant invention first used in CBETA: in order to overcome the magnetisation errors present in the NdFeB blocks and to produce magnets with 10–3 field accuracy, 32 to 64 iron wires of various lengths are inserted around the magnet bore, with lengths chosen to minimise the lowest 18 multipole harmonics.
A multi-turn test ERL was proposed by Cornell researchers following studies that started in 2005. Cornell was the natural site, given that many of the components needed for such an accelerator had been prototyped by the group there. A collaboration with BNL was formed in the summer of 2014; the test ERL was called CBETA and construction started in November 2016.
CBETA has some quite elaborate accelerator elements. The most complex components already existed before the CBETA collaboration, constructed by Cornell’s ERL group at Wilson Lab: the DC electron source, the SRF injector cryomodule, the main ERL cryomodule, the high-power beam stop, and a diagnostic section to map out six-dimensional phase-space densities. They were designed, constructed and commissioned over a 10-year period and hold several world records in the accelerator community. These components have produced the world’s largest electron current from a photo-emitting source, the largest continuous current in an SRF linac and the largest normalised brightness of an electron bunch.
Setting records
Meanwhile, the DC photoemission electron gun has set a world record for the average current from a photoinjector, demonstrating operation at 350 kV with a continuous current of 75 mA and a 1.3 GHz pulse structure. It operates with a KCsSb cathode, which has a typical quantum efficiency of 8% at a wavelength of 527 nm, and requires a large ceramic insulator and a separate high-voltage, high-current power supply. The present version of the Cornell gun has a segmented insulator design with metal guard rings to protect the ceramic insulator from punch-through by field emission, which was the primary limiting factor in previous designs. This gun has been processed up to 425 kV under vacuum, and typically operates at 400 kV.
The SRF injector linac, or injector cryomodule (ICM), set new records in current and normalised brightness. It contains a series of five two-cell 1.3 GHz SRF cavities, each with twin 50 kW input couplers that receive microwaves from high-power klystrons; the input power couplers are adjustable to allow impedance matching for a variety of different beam currents. The ICM is capable of a total energy gain of around 15 MeV, although CBETA injects beam at a more modest energy of 6 MeV. The high-current CW main linac cryomodule, meanwhile, has a maximum energy gain of 70 MeV and a beam current of up to 40 mA, and for CBETA will accelerate the beam by 36 MeV on each of the four beam passes.
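These numbers tie together with the machine’s quoted top energy: 6 MeV at injection plus four linac passes of 36 MeV each gives 6 + 4 × 36 = 150 MeV.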
Several other essential components have also been commissioned, including a high-power beam stop and diagnostic tools for high-current and high-brightness beams, such as a beamline for measuring 6D phase-space densities, a fast wire scanner for beam profiles and beam-loss diagnostics. All these components are now being incorporated in CBETA. While the National Science Foundation provided the bulk of the funding for the development of these components, the LCLS-II project contributed funding to investigate the utility of Cornell’s ERL technology, and the company ASML contributed funds to test the use of ERL components for an industrial EUV light source.
Complementary development work has been ongoing at BNL, and last summer the BNL team successfully tested a fixed-field alternating-gradient beam transport line at the Accelerator Test Facility. It uses lightweight, 3D-printed frames to hold blocks of permanent magnets and employs the above-mentioned method of iron wires for fine-tuning the magnetic field, steering multiple beams at different energies through a single beam pipe. With this design, physicists can accelerate particles through multiple stages to higher and higher energies within a single ring of magnets, instead of requiring more than one ring to achieve these energies. The beams reached a top momentum more than 3.8 times that of the lowest transported momentum, to be compared with the previous result at EMMA, where the highest momentum was less than twice the lowest. The properties of the permanent Halbach magnets match or even surpass those of electromagnets, which require much more precise engineering and machining of each individual piece of metal. The success of this proof-of-principle experiment reinforces the CBETA design choices.
The initial mission for CBETA is to prototype components for BNL’s proposed version of an EIC called eRHIC, which would be built using the existing Relativistic Heavy Ion Collider infrastructure at BNL. JLab also has a design for an EIC, which requires an ERL for its electron cooler and would therefore also benefit from research at CBETA. Currently, the National Academy of Sciences is studying the scientific potential of an EIC. More than 25 scientists, engineers and technicians are collaborating on CBETA and are currently running preliminary beam tests, with the expectation of completing the CBETA installation by the summer of 2019. Commissioning should be complete by the spring of 2020, when we will begin to explore the scientific applications of this new acceleration and energy-saving technique.
The enigma of why the universe contains more matter than antimatter has been with us for more than half a century. While charge–parity (CP) violation can, in principle, account for the existence of such an imbalance, the observed matter excess is about nine orders of magnitude larger than what is expected from known CP-violating sources within the Standard Model (SM). This striking discrepancy inspires searches for additional mechanisms for the universe’s baryon asymmetry, among which are experiments that test fundamental charge–parity–time (CPT) invariance by comparing matter and antimatter with great precision. Any measured difference between the two would constitute a dramatic sign of new physics. Moreover, experiments with antimatter systems provide unique tests of hypothetical processes beyond the SM that cannot be uncovered with ordinary matter systems.
The Baryon Antibaryon Symmetry Experiment (BASE) at CERN, alongside several other collaborations at the Antiproton Decelerator (AD), probes the universe through exclusive antimatter “microscopes” of ever higher resolution. In 2017, following many years of effort at CERN and the University of Mainz in Germany, the BASE team measured the magnetic moment of the antiproton with a precision 350 times better than any experiment before, reaching a relative precision of 1.5 parts per billion (figure 1). The result followed the development of a multi-Penning-trap system and a novel two-particle measurement method and, for a short period, represented the first time that a property of antimatter had been measured more precisely than the corresponding property of matter.
Non-destructive physics
The BASE result relies on a quantum measurement scheme to observe spin transitions of a single antiproton in a non-destructive manner. In experimental physics, non-destructive observations of quantum effects are usually accompanied by a tremendous increase in measurement precision. For example, the non-destructive observation of electronic transitions in atoms or ions led to the development of optical frequency standards that achieve fractional precisions on the 10–18 level. Another example, allowing one of the most precise tests of CPT invariance to date, is the comparison of the electron and positron g-factors. Based on quantum non-demolition detection of the spin state, such studies during the 1980s reached a fractional accuracy on the parts-per-trillion level.
The latest BASE measurement follows the same scheme but targets the magnetic moment of protons and antiprotons instead of electrons and positrons. This opens tests of CPT in a totally different particle system, which could behave entirely differently. In practice, however, the transfer of quantum measurement methods from the electron/positron to the proton/antiproton system constitutes a considerable challenge owing to the smaller magnetic moments and higher masses involved.
The idea is to store single particles in ultra-stable, high-precision Penning traps, where they oscillate at characteristic frequencies. By measuring those frequencies, we can access the cyclotron frequency, νc, which defines the particle’s revolutions per second in the trap’s magnetic field. Together with a measurement of the spin precession frequency, νL, the g-factor can be extracted from the relation:

g/2 = νL/νc.
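To give a sense of scale (an illustration using the textbook Penning-trap relation νc = qB/(2πm); the field value here is ours, not from the text): an antiproton in a magnetic field of about 2 T circles at νc ≈ 30 MHz, and the measured frequency ratio directly yields the magnitude of the magnetic moment in units of the nuclear magneton, since |μp̅|/μN = g/2 = νL/νc.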
To determine νc we use a technique called image-current detection. The oscillation of the antiproton in the trap induces tiny image currents in the trap electrodes, which are picked up by highly sensitive superconducting tuned circuits.
The measurement of νL, on the other hand, relies on single-particle spin-transition spectroscopy – comparable to performing NMR with a single antiproton. The idea is to switch the spin of the individual antiproton from one state to the other and then detect the flip. To this end a smart trick is used: the continuous Stern–Gerlach effect, which imprints the collapsed spin state of the single antiproton on its axial oscillation frequency (a parameter that can be measured non-destructively). We use a special Penning trap configuration in which an inhomogeneous magnetic bottle is superimposed on the homogeneous magnetic field of the ideal Penning trap (figure 2, top). The inhomogeneous field adds a spin-dependent quadratic magnetic potential to the axial electrostatic trapping potential and, consequently, the continuously measured axial oscillation frequency of the trapped antiproton becomes a function of the spin eigenstate.
In practice, to detect spin quantum transitions we first measure the axial frequency, then inject a radio-frequency magnetic field to drive spin transitions, and finally measure the axial frequency again. The observation of an axial frequency jump is the clear signature that a spin transition was driven, and by repeating such measurements many times and for different drive frequencies, we obtain the spin-flip probability as a function of the drive frequency. The corresponding resonance curve gives νL (figure 2, bottom).
Doubling up
This challenge has been the passion of the members of the BASE collaboration for the past decade. A trap was developed at Mainz with a superimposed magnetic inhomogeneity of 300,000 T/m2, which corresponds to a magnetic field change of about 1 T over a distance of about 1.5 mm! Even in this extreme magnetic environment, a proton/antiproton spin transition shifts the axial oscillation frequency of around 650 kHz by only 170 mHz.
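Put differently, resolving the spin state means detecting a fractional frequency change of 0.17 Hz / 650 kHz ≈ 3 × 10–7.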
Using this unique device, in 2011 we reported the first observation of spin flips with a single trapped proton. This was followed by the unambiguous quantum-non-demolition detection of proton spin-transitions, which was later also demonstrated with antiprotons (figure 3). The high-fidelity detection of the spin state, however, requires the particle to be cooled to temperatures of the order of 100 mK. This was achieved by sub-thermal cooling of the particle’s cyclotron mode by means of cryogenic resistors, but is an inconceivably time-consuming procedure.
The high-fidelity resolution of single-spin quantum transitions is the key to measuring the antiproton magnetic moment at the parts-per-billion level. The elegant double-trap technique that makes this possible was invented at Mainz and applied with great success in tests of bound-state quantum electrodynamics, in collaboration with GSI Darmstadt and the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, both institutes also being part of the BASE collaboration. This double Penning-trap technology separates the sensitive frequency measurements of νL and νc, and the spin analysis measurements into two traps: a homogeneous “precision trap” (PT) and the spin state “analysis trap” (AT) with the superimposed strong magnetic bottle. The magnetic field in the PT is about 100,000 times more homogeneous than that of the AT and allows sampling of the spin-flip resonance at much higher resolution, compared to measurements solely carried out in the inhomogeneous AT.
The single-particle “double-trap method”, however, comes with the drawback that each frequency measurement in the PT heats the particle’s radial mode to about room temperature and requires repeated preparation of the particle at sub-thermal radial energy, a condition that is ultimately required for the high-fidelity detection of spin transitions. Each of these sub-thermal-energy preparation cycles takes several hours, while a well-resolved g-factor resonance contains at least 400 individual data points. We applied this method at BASE to measure the proton magnetic moment with parts-per-billion precision in a measurement campaign that took, including systematic studies and maintenance of the instrument, about half a year.
To reduce the total measurement time, we invented a novel two-particle method in which the precision frequency measurements and the high-fidelity spin-state analysis are carried out using two particles – a hot “cyclotron particle” and a cold “Larmor particle” – together with a third trap called the “park trap” (figure 4). We first identify the spin state of the cold antiproton in the AT. Then we measure the cyclotron frequency with the hot particle in the PT, move this particle to the park trap and transport the cold antiproton to the PT, where spin-flip drives are irradiated. Afterwards, the cold particle is shuttled back to the AT and the hot particle to the PT. There, the cyclotron frequency is measured again, and in a last step the spin state of the cold particle in the AT is identified. By repeating this scheme many times and for different drive frequencies, the spin-flip probability as a function of the spin-flip drive frequency, normalised to the measured cyclotron frequency, is obtained – a g-factor resonance – with all the required frequency information sampled in the homogeneous PT. This novel two-particle scheme drastically reduces the measurement time, since it avoids the time-consuming preparation of sub-thermal radial energy states.
Successfully implementing this new method, we were able to sample about 1000 data points over a period of just two months. From this campaign we extracted the antiproton magnetic moment as µp̅ = –2.792 847 344 1 (42) μN, the value having a fractional precision of 1.5 parts per billion and thereby improving the previous best value by BASE by a factor of 350. The result is consistent with our most precise measurement of the proton magnetic moment, μp = 2.792 847 350 (9) µN, and thus supports CPT invariance.
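Comparing the magnitudes of the two numbers: 2.792 847 350 (9) − 2.792 847 344 1 (42) = 0.000 000 006 (10), so the proton and antiproton moments agree at the level of about two parts per billion, well within the combined uncertainty.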
Trappings of success
Underpinning this rapid achievement of the initially defined major experimental goal of the BASE collaboration was another BASE invention, the reservoir trap (RT) method. The RT, one of four traps in the BASE trap stack, is loaded with a shot of antiprotons and provides single particles to the precision measurement traps on request. The method allows BASE to operate antiproton experiments even during the winter shutdown of CERN’s accelerators and practically doubles the available experiment time. Indeed, we have demonstrated antiproton trapping and experiment optimisation for a period of more than 400 days and operated the entire 2016 run with antiprotons captured in 2015. This long storage time also allows us to set limits on the directly measured antiproton lifetime.
Together with the proton-to-antiproton charge-to-mass ratio comparison with a fractional precision of 69 parts per trillion (CERN Courier September 2015 p7), which was carried out during the 2014 antiproton run, BASE has set tighter constraints on all the fundamental antiproton parameters that are directly accessible by this type of experiment. So far, all the BASE results are consistent with CPT invariance.
The latest triple-trap measurement of the antiproton magnetic moment sets new constraints on CPT-violating coefficients in the Standard Model Extension (SME) – an effective theory that allows the sensitivities of different experiments at different locations to be compared with respect to CPT violation. The recent BASE magnetic-moment measurement addresses a total of six combinations of SME coefficients and improves the limits on all of them by more than two orders of magnitude. Finding a non-zero coefficient would, for example, indicate the discovery of a new type of exchange boson that couples exclusively to antimatter, and would immediately raise the question of its role in the baryon asymmetry of the universe.
Although up to now all results are CPT-consistent, this not-yet-understood asymmetry is one of the motivations to further improve the experimental resolution of the AD experiments. The recent successes reported by the ALPHA collaboration herald the first ultra-high-precision measurements on the optical spectrum of antihydrogen. Improved methods in measurements on antiprotonic helium by the ASACUSA collaboration will lead to even higher resolution results in comparisons of the antiproton-to-electron mass ratio, while the ATRAP collaboration continues to contribute independent measurements of antiprotons and antihydrogen.
Gravitational sensitivity
A new branch of experiments at CERN’s AD – AEgIS, GBAR and ALPHA-g – will soon investigate the gravitational acceleration of antimatter in Earth’s gravitational field, which has never been directly observed before. Indirect measurements were carried out with antiprotons by the TRAP collaboration at the AD’s predecessor, LEAR, and by BASE, which set constraints on antigravity effects.
The AD community aims to verify the laws of physics with antimatter in various ways, thereby testing fundamental CPT invariance. The experiments are striving to access as-yet-unmeasured quantities, or to improve their sensitivities to new physics. In this respect, the BASE–Mainz experiment recently succeeded in measuring the proton magnetic moment with an 11-fold improvement in precision, reaching a fractional uncertainty of 0.3 parts per billion. By applying these even more advanced methods to the antiproton, BASE will improve the sensitivity of its CPT invariance test by at least another factor of five.
The physics programme at CERN’s Antiproton Decelerator (AD) is concerned with fundamental studies of the properties and behaviour of antimatter. Diverse experiments endeavour to study the basic characteristics of the antiproton (BASE, ATRAP), the spectra of antiprotonic helium (ASACUSA) and antihydrogen (ALPHA, ASACUSA, ATRAP), and gravitational effects on antimatter (GBAR, AEgIS, ALPHA-g). These innovative experiments at the AD – itself a unique facility in the world – can test fundamental symmetries such as charge–parity–time (CPT) and search for indications of physics beyond the Standard Model in systems that have never before been studied.
Lurking in the background to all this is the baryon asymmetry problem: the mystery of what happened to all the antimatter that should have been created after the Big Bang. This mystery forces us to question whether antimatter and terrestrial matter really obey the same laws of physics. There is no guarantee that AD experiments will find any new physics, but if you can get your hands on some antimatter, it seems prudent to take a good, hard look at it.
We live in interesting times for antimatter. In addition to experiments at the AD, physicists study potential matter–antimatter asymmetries at the energy frontier at the LHCb experiment, and search for evidence of primordial antimatter streaming through space using the AMS-02 spectrometer onboard the International Space Station. Antihelium-4 nuclei were observed for the first time at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) in 2011, while the LHC’s ALICE collaboration observed and studied anti-deuterons and antihelium-3 nuclei in 2015. By contrast, the experiments at the AD are low-energy affairs: we are essentially dealing with antimatter at rest.
One of the unique advantages of AD physics, therefore, is that we can address antimatter using precision techniques from modern atomic and ion-trap physics. Following three decades of development in advanced experimental techniques by the low-energy antimatter community, the ALPHA collaboration has recently achieved the major goal of examining the spectrum of antihydrogen atoms for the first time. These results herald the start of a new field of inquiry that should enable some of the most precise comparisons between matter and antimatter ever attempted.
If you want to measure something precisely, you should probably ask an atomic physicist. For example, the measured frequency of the electronic transition between the ground state and the first excited state in hydrogen (the so-called 1S–2S transition) is 2 466 061 413 187 035 (10) Hz, corresponding to an uncertainty of 4.2 × 10–15, and the measurement is referenced directly to a caesium time standard. Sounds impressive, but, to quote a recent article in Nature Photonics, “Atomic clocks based on optical transitions approach uncertainties of 10–18, where full frequency descriptions are far beyond the reach of the SI second”. In other words, the current time standard just isn’t good enough anymore, at least not for matter. For comparison, the current best value for the mass of the Higgs boson is 125.09 ± 0.24 GeV/c2, representing an uncertainty of about 2 × 10–3.
To be fair, scientists had already been observing hydrogen’s spectrum for about 200 years by the time the Higgs was discovered. Fraunhofer is credited with mapping out absorption lines, some of which are due to hydrogen, in sunlight in 1814. From there we can trace a direct path through Kirchhoff and Bunsen (1859/1860), who associated Fraunhofer lines with emission lines from distinct elements, to Rydberg, Balmer, Lyman and ultimately to Niels Bohr, who revolutionised atomic physics with his quantum theory in 1913. It is no exaggeration to say that physicists learned modern atomic physics by studying hydrogen, and we are therefore morally obligated to subject antihydrogen to all of the analytical tools at our disposal.
Anti-atomic spectra are not the only hot topic in precision physics at the AD. In 2015 the BASE collaboration determined that the charge-to-mass ratios of the proton and antiproton agree to 69 parts per trillion. The following year, the ASACUSA experiment – which has been making precision measurements on antiprotonic helium for more than a decade – reported that the antiproton-to-electron mass ratio agrees with its proton counterpart to a level of 8 × 10–10 (CERN Courier December 2016 p19). One of the long-term and most compelling goals of the AD programme has always been to compare the properties of hydrogen and antihydrogen to precisions like these.
A word of caution is in order here. In searching for deviations from existing theories, it is tempting to use dimensionless uncertainties such as Δm/m, Δf/f or Δq/q (corresponding to mass, frequency or charge) to compare the merits of different types of measurements. Yet, it is of course not obvious that a hitherto unknown mechanism that breaks CPT or Lorentz invariance, or reveals some other new physics, should create an observable effect that is proportional to the mass, frequency or charge of the state being studied. An alternative approach is to consider the absolute energy scale to which a measurement is sensitive. There is good historical precedent for this in the quantum mechanics of atoms. Roughly speaking, atomic structure, fine structure, hyperfine structure and the Lamb shift reflect different energy scales describing the physical effects that became apparent as experimental techniques became more precise in the 20th century.
At the time of the construction of the AD in the late 1990s, the gold standard for tests of CPT violation was the neutral kaon system. The oft-quoted limit for the fractional difference between the masses of the neutral kaon and anti-kaon was of the order 10–18. Although there are many other tests of CPT using particle/antiparticle properties, this one in particular stands out for its precision. In the most recent review of the Particle Data Group, the kaon limit is presented as an absolute mass difference of less than 4 × 10–19 GeV. Although purists of metrology will argue that nothing has actually been measured with a precision of 10–18 here, the AD physics programme needed a potential goal that could compete, at least in principle, with this level of precision.
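For scale (our arithmetic, using the well-known neutral-kaon mass of about 0.498 GeV): 4 × 10–19 GeV divided by 0.498 GeV gives roughly 8 × 10–19, consistent with the oft-quoted fractional limit of order 10–18.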
The holy grail
Thus the hydrogenic 1S–2S transition became a kind of “holy grail” for antihydrogen physics. The idea was that if the transition in antihydrogen could be measured to the same precision (10–15) as in hydrogen, any difference between the two transition frequencies could be determined with a precision approaching that of the kaon system. On an absolute scale, the 1S–2S transition energy is about 10.2 eV, so a precision of 10–15 in this value corresponds to an energy sensitivity of 10–14 eV (10–23 GeV). Other features in hydrogen such as the ground-state hyperfine splitting or the Lamb shift have even smaller energies, on the order of µeV. They are also of fundamental interest in antihydrogen and test different types of physical phenomena than the 1S–2S transition. The BASE antiproton experiment probes CPT invariance in the baryon sector at the atto-electron volt scale – 10–27 GeV – and recently measured the magnetic moment of the antiproton to a precision of 1.5 parts-per-billion. Amazingly, the result was better than the most precise measurement of the proton at the time.
It is sobering to reflect on the state of antihydrogen physics when the AD started operations in 2000. The experiments at CERN’s Low Energy Antiproton Ring (LEAR) in 1996 and at the Antiproton Accumulator at Fermilab in 1998 had detected nine and 66 relativistic atoms of antihydrogen, respectively, produced by interactions between a stored antiproton beam and a gas-jet target. These experiments proved the existence of antihydrogen, but they held no potential for precision measurements.
The pioneering TRAP experiment had already developed the techniques needed for stopping and trapping antiprotons from LEAR, and demonstrated the first capture of antiprotons way back in 1986. The PS200 collaboration succeeded in trapping up to a million antiprotons from LEAR, and TRAP compared the charge-to-mass ratio of protons and antiprotons to a relative precision of about 10–9. However, no serious attempt had yet been made to synthesise “cold” antihydrogen by the time LEAR stopped operating in 1996.
In 2002 the ATHENA experiment won the race to produce low-energy antihydrogen and the global number of antihydrogen atoms jumped dramatically to 50,000, observed over a few weeks of data taking. This accomplishment had a dramatic effect on world awareness of the AD via the rapidly growing Internet, and it even featured on the front page of the New York Times. Today in ALPHA, which succeeded ATHENA in 2005, we can routinely produce about 50,000 antihydrogen atoms every four minutes.
The antihydrogen atoms produced by ATHENA, and subsequently by ATRAP and ASACUSA, were not confined; they would quickly encounter normal matter in the walls of the production apparatus and annihilate. It would take until 2010 for ALPHA to show that it was possible to trap antihydrogen atoms. Although antihydrogen atoms are electrically neutral, they can be confined through the interaction of their magnetic moments with an inhomogeneous magnetic field. Using superconducting magnets, we can trap antihydrogen atoms that are created with a kinetic energy of less than 43 μeV, or about 0.5 K in temperature units.
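The conversion between the two units quoted here is just E = kBT: 43 × 10–6 eV divided by kB ≈ 86.2 × 10–6 eV/K gives T ≈ 0.5 K.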
In ALPHA’s milestone 2010 experiment, we could trap on average one atom of antihydrogen every eight times we tried, with a single attempt requiring about 20 minutes. Today, in the second-generation ALPHA-2 apparatus, we trap up to 30 atoms in a procedure that takes four minutes. We have also learned how to “stack” antihydrogen atoms. In December 2017 we accumulated more than 1000 anti-atoms at once – limited only by the time available to mess about like this without measuring anything useful! It is no exaggeration to say that no one would have found this number credible in 2000 when the AD began running.
Since the first demonstration of trapped antihydrogen, we have induced quantum transitions in anti-atoms using microwaves, probed the neutrality of antihydrogen, and carried out a proof-of-principle experiment on how to study gravitation by releasing trapped antihydrogen atoms. These experiments were all performed with a trapping rate of about one atom per attempt. In 2016 we made several changes to our antihydrogen synthesis procedure that led to an increase in trapping rate of more than a factor of 10, and we also learned how to accumulate multiple shots of anti-atoms. At the same time, the laser system and internal optics necessary for exciting the 1S–2S transition were fully commissioned in the ALPHA-2 apparatus, and we were finally able to systematically search for this most sought-after spectral line in antimatter.
Antihydrogen’s colours
The ALPHA-2 apparatus for producing and trapping antihydrogen is shown in figure 1. It involves various Penning traps that utilise solenoidal magnetic fields and axial electrostatic wells to confine the charged antiprotons and positrons from which antihydrogen is synthesised. Omitting 30 years of detail, we produce cold antihydrogen by gently merging trapped clouds of antiprotons and positrons that have carefully controlled size, density and temperature. The upshot is that we can combine about 100,000 antiprotons with about two million positrons to produce 50,000 antihydrogen atoms. We trap only a small fraction of these in the superconducting atom trap, which comprises an octupole for transverse confinement and two “mirror coils” for longitudinal confinement.
Anti-atoms that are trapped can be stored for at least 1000 s, but we have yet to carefully characterise the upper limit of the storage lifetime, which depends on the quality of the vacuum. The internal components of ALPHA are cooled to 4 K by liquid helium, and antihydrogen annihilations are detected using a three-layer silicon vertex detector (SVD) surrounding the production region. The SVD senses the charged pions that result from the antiproton annihilation, and event topology is used to differentiate annihilations from cosmic rays, which constitute the dominant background (figure 2).
Trapping antihydrogen is extremely challenging because the trapped, charged particles that are needed to synthesise it start out with energies measured in eV (in the case of positrons) or keV (antiprotons), whereas the atom can only be confined if it has sub-meV energy. The antihydrogen is trapped due to the interaction of its magnetic moment, which is dominated by the positron spin, with an inhomogeneous magnetic field. Even with very careful preparation of the trapped positron and antiproton clouds in a cryogenic trap, only a small fraction of the produced antiatoms are “cold” enough to be trapped. The good news is that once you have trapped them, the antiatoms stick around for long enough to perform experiments.
Compared to atomic physics with normal matter, one has to somehow make up for the dramatic reduction – at least 20 orders of magnitude – in particle number at the source. The key to this is twofold: the long interaction times available with trapped particles, and the single-atom detection sensitivity afforded by antimatter annihilation. The annihilation of an antihydrogen atom is a microscopically violent event, releasing almost 2 GeV of mass-energy that can be easily detected. This is perhaps the only good thing about working with antihydrogen: if you lose it, even just one atom of it, you know it. Conversely, the loss of a single atom of hydrogen in an equivalent experiment would go unnoticed and un-mourned if there are, say, 1012 remaining (a typical number for trapped hydrogen). Thus, the two experiments recently reported by ALPHA are conceptually simple: trap some antihydrogen atoms; illuminate them with electromagnetic radiation that causes the anti-atoms to be lost from the trap when the radiation is on-resonance; sit back and watch what falls out.
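Incidentally, the “almost 2 GeV” quoted above is essentially the rest mass of the annihilating pairs: 2 × 938.3 MeV for the antiproton and a nucleon, plus 2 × 0.511 MeV for the positron and an electron, giving about 1.88 GeV.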
Let’s consider first the “holy grail” (1S–2S) transition, which is excited by two counter-propagating ultraviolet photons with a wavelength of 243 nm. The power from our Toptica 243 nm laser is enhanced in a Fabry–Pérot cavity formed by two mirrors inside the cryogenic, ultra-high-vacuum system. (This cavity owes its existence to the paucity of atoms available; without the optical power build-up achieved, the experiment would not currently be possible.) The 1S–2S transition has a very narrow linewidth – this is what makes it interesting – so the laser frequency needs to be just right to excite it. The other side of the same coin is that the 2S state lives for a relatively long time, about one eighth of a second, so there can be time for an excited antihydrogen atom to absorb a third photon, which will ionise it. Stripped of its positron, the antiproton is no longer confined in the magnetic trap and is free to escape to the wall and annihilate. There is also a chance that an un-ionised 2S atom will suffer a positron spin-flip in the decay to the ground state, in which case the atom is also lost.
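The photon energies work out as follows (standard conversion, hc ≈ 1240 eV nm): each 243 nm photon carries about 5.1 eV, so the two absorbed photons together supply 2 × 1240/243 ≈ 10.2 eV – the 1S–2S interval quoted earlier.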
In the actual experiment, we illuminate trapped antihydrogen atoms with a laser for about 10 minutes, then turn off the trap (in a period of 1.5 s) and use the SVD to count any remaining atoms as they escape. Also, using the SVD we can observe any antihydrogen atoms that are lost during the laser illumination. In this way, we obtain a self-consistent picture of the fate of the atoms that were initially trapped. The evidence for the laser interaction comes from comparing what happens when the laser has the “right” frequency, compared to what happens when we intentionally de-tune the laser to a frequency where no interaction is expected (for hydrogen). As a control, and to monitor the varying trapping rate, we perform the same sequence with no laser present. The whole thing can be summarised in a simple table (figure 3), which shows the results of 11 trials of each type.
A quick glance reveals that the off-resonance and no-laser numbers are consistent with each other and with “nothing going on”. In contrast, the on-resonance numbers show excess events due to atoms knocked out while the laser is on, and a dearth of events left over after the exposure. If we consider the overall inventory of antihydrogen atoms and compare the on- and off-resonance data only, we see that about 138 atoms ((79 – 27)/0.376) have been knocked out, and 134 atoms ((159 – 67)/0.688) are missing from the left-over sample, so our interpretation is self-consistent within the uncertainties.
This initial “go/no-go” experiment demonstrates that the transition is where we expect it to be for hydrogen and localises it to a frequency of about 400 kHz (the laser detuning for the off-resonance trials) out of 2.5 × 1015 Hz. That’s a relative precision of about 2 × 10–10, or 2 × 10–18 GeV in absolute energy units, just for showing up, and this was achieved by employing a total of just 650 or so trapped atoms. The next step is obviously to measure more frequencies around the resonance to study the shape of the spectral line, which will allow more precise determination of the resonance frequency. Note that CPT invariance requires that the shape must be identical to that expected for hydrogen in the same environment. Determination of this lineshape was the main priority for ALPHA’s 2017 experimental campaign, so stay tuned.
To hyperfine splitting and beyond
A similar strategy can be used to study other transitions in antihydrogen, in particular its hyperfine splitting. With ALPHA we can drive transitions between different spin states of antihydrogen in the magnetic trap. In a magnetic field, the 1S ground state splits into four states that correspond, at high fields, to the possible alignments of the positron and antiproton spins with the field (figure 4). The upper two states can be trapped in ALPHA’s magnetic trap and, using microwaves at a frequency of about 30 GHz, it is possible to resonantly drive transitions from these two states to the lower energy states, which are not trappable and are thus expelled from the trap.
We concentrate on the two transitions |d〉→|a〉 and |c〉→|b〉, which in the ALPHA trapping field (minimum 1 T) correspond to positron spin flips. We had previously demonstrated that these transitions are observable, but in 2016 we took the next step and actually characterised the spectral shapes of the two discrete transitions in our trap. We are now able to accumulate antihydrogen atoms, scan the microwave frequency over the range corresponding to the two transitions, and watch what happens using the SVD. The result, which may be considered to be the first true antihydrogen spectrum, is shown in figure 5.
The difference between the onset frequencies of the two spectral lines gives us the famous ground-state hyperfine splitting (in hydrogen, the ground-state hyperfine transition is the well-known “21 cm line”, so beloved of radioastronomers and those searching for signs of extraterrestrial life). From figure 5 we extract a value for this splitting of 1420.4 ± 0.5 MHz, for a relative precision of 3.5 × 10⁻⁴; the energy sensitivity is 2 × 10⁻¹⁸ GeV. In normal hydrogen this number has been measured to be 1420.405751768(2) MHz – that’s 1.2 × 10⁻¹² relative precision, or a shockingly small 10⁻²⁶ GeV. ALPHA is busily improving the precision of the antihydrogen hyperfine measurement, and the ASACUSA collaboration at the AD hopes to measure the same quantity to the ppm level using a challenging antihydrogen-beam technique; an analogous experiment on hydrogen was recently reported (CERN Courier December 2017 p23).
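Why does the difference of the two onset frequencies isolate the zero-field splitting? The textbook Breit–Rabi formula for the four 1S levels makes this explicit. The sketch below is a standard hydrogen calculation with approximate constants – not ALPHA’s analysis code – and it neglects small corrections such as the exact g-factors and the distribution of fields sampled in the trap:

```python
import math

# Energies are expressed throughout as frequencies (E/h), in Hz.
h    = 6.62607e-34        # Planck constant [J s]
mu_e = 9.28476e-24        # positron/electron moment magnitude [J/T]
mu_p = 1.41061e-26        # antiproton/proton moment magnitude [J/T]
a0   = 1.420405751768e9   # zero-field hyperfine splitting [Hz]

def breit_rabi(B):
    """Four 1S hyperfine levels at field B, ascending in energy at
    trap-relevant fields: |a>, |b> (untrappable), |c>, |d> (trappable)."""
    x   = 2.0 * (mu_e + mu_p) * B / (h * a0)  # dimensionless field parameter
    mix = 0.5 * a0 * math.sqrt(1.0 + x * x)   # mixed (m_F = 0) states
    st  = (mu_e - mu_p) * B / h               # stretched-state Zeeman shift
    return (-0.25 * a0 - mix,  0.25 * a0 - st,
            -0.25 * a0 + mix,  0.25 * a0 + st)

Ea, Eb, Ec, Ed = breit_rabi(B=1.0)   # ALPHA's trap-bottom field [T]
f_da, f_cb = Ed - Ea, Ec - Eb        # the two positron-spin-flip lines
print(f"f(d->a) = {f_da/1e9:.2f} GHz, f(c->b) = {f_cb/1e9:.2f} GHz")
print(f"difference = {(f_da - f_cb)/1e6:.1f} MHz")  # equals a0 at any field
```

The field-dependent terms cancel in the difference f(d→a) − f(c→b), which is why the zero-field splitting can be read off even though the atoms sit in a trap whose field is at least 1 T.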
The antihydrogen atom still holds many structural secrets to be explored. Near-term prospects in ALPHA include the Lyman-alpha (1S–2P) transition, with its notoriously difficult-to-produce 121.5 nm wavelength in the vacuum ultraviolet. We are currently attempting to address this with a pulsed laser, with the ultimate goal of laser-cooling antihydrogen for studies of gravitation and for improved resolution in spectroscopy. To give a flavour of the pace of activities, a recent daily run meeting saw ALPHA collaborators actually debate which of the three antihydrogen transitions we should study that day, which was somewhat surreal. In the longer term, even the ground-state Lamb shift should be accessible using ALPHA’s trapped antiatoms.
It is clearly “game on” for precision comparisons of matter and antimatter at the AD. It is fair to say that the facility has already exceeded expectations, and the physics programme is in full bloom. We have some way to go before we reach hydrogen-like precision in ALPHA, but the road ahead is clear. With the commissioning of the very challenging gravity experiments GBAR, AEGIS and ALPHA-g over the next few years, and the advent of the new low-energy ELENA ring at the AD (CERN Courier December 2016 p16), low-energy antimatter physics at CERN promises a steady stream of groundbreaking results, and perhaps a few surprises.
If we tightly grasp a stone in our hands, we expect it neither to vanish nor to leak through our flesh and bones. Our experience is that stone and, more generally, solid matter is stable and impenetrable. Last year marked the 50th anniversary of the demonstration by Freeman Dyson and Andrew Lenard that the stability of matter derives from the Pauli exclusion principle. This principle, for which Wolfgang Pauli received the 1945 Nobel Prize in Physics, is based on ideas so prevalent in fundamental physics that their underpinnings are rarely questioned. Here, we celebrate and reflect on the Pauli principle, and survey the latest experimental efforts to test it.
The exclusion principle (EP), which states that no two fermions can occupy the same quantum state, has been with us for almost a century. In his Nobel lecture, Pauli provided a deep and broad-ranging account of its discovery and its connections to unsolved problems of the newly born quantum theory. In the early 1920s, before Schrödinger’s equation and Heisenberg’s matrix algebra had come along, a young Pauli performed an extraordinary feat when he postulated both the EP and what he called “classically non-describable two-valuedness” – an early hint of the existence of electron spin – to explain the structure of atomic spectra.
At that time the EP met with some resistance and Pauli himself was dubious about the concepts that he had somewhat recklessly introduced. The situation changed significantly after the introduction in 1925 of the electron-spin concept and its identification with Pauli’s two-valuedness, which derived from the empirical ideas of Landé, an initial suggestion by Kronig, and an independent paper by Goudsmit and Uhlenbeck. By introducing the picture of the electron as a small classical sphere with a spin that could point in just two directions, both Kronig, and Goudsmit and Uhlenbeck, were able to compute the fine-structure splitting of atomic hydrogen, although they still missed a critical factor of two. These first steps were followed by the relativistic calculations of Thomas, by the spin calculus of Pauli, and finally, in 1928, by the elegant wave equation of Dirac, which put an end to all resistance against the concept of spin.
However, a theoretical explanation of the EP had to wait for some time. Just before the Second World War, Pauli and Markus Fierz made significant progress toward this goal, followed by the publication in 1940 by Pauli of his seminal paper “The connection between spin and statistics”. This paper showed that (assuming a relativistically invariant form of causality) the spin of a particle determines the commutation relations, i.e. whether fields commute or anticommute, and therefore the statistics that particles obey. The EP for spin-1/2 fermions follows as a corollary of the spin-statistics connection, and the division of particles into fermions and bosons based on their spins is one of the cornerstones of modern physics.
Beguilingly simple
The EP is beguilingly simple to state, and many physicists have tried to skip relativity and find direct proofs that use ordinary quantum mechanics alone – albeit assuming spin, which is a genuinely relativistic concept. Pauli himself was puzzled by the principle, and in his Nobel lecture he noted: “Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had always the feeling and I still have it today, that this is a deficiency. …The impression that the shadow of some incompleteness fell here on the bright light of success of the new quantum mechanics seems to me unavoidable.” Even Feynman – who usually outshone others with his uncanny intuition – felt frustrated by his inability to come up with a simple, straightforward justification of the EP: “It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation… This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world.”
Of special interest
After further theoretical studies, which included new proofs of the spin-statistics connection and the introduction of so-called para-statistics by Green, a possible small violation of the EP was first considered by Reines and Sobel in 1974, when they reanalysed a 1948 experiment by Goldhaber and Scharff. The possibility of small violations was refuted theoretically by Amado and Primakoff in 1980, but the topic was revived in 1987. That year, Russian theorist Lev Okun presented a model of violations of the EP in which he considered modified fermionic states which, in addition to the usual vacuum and one-particle state, also include a two-particle state. Okun wrote that “The special place enjoyed by the Pauli principle in modern theoretical physics does not mean that this principle does not require further and exhaustive experimental tests. On the contrary, it is specifically the fundamental nature of the Pauli principle that would make such tests, over the entire periodic table, of special interest.”
Okun’s model, however, ran into difficulties when attempting to construct a reasonable Hamiltonian: first, because the Hamiltonian included nonlocal terms; and second, because Okun did not succeed in constructing a relativistic generalisation of the model. Despite this, his paper strongly encouraged experimental tests in atoms. In the same year (1987), Ignatiev and Kuzmin presented an extension of Okun’s model in a strictly non-relativistic context, characterised by a “beta parameter” with |β| << 1. Not to be confused with the relativistic factor v/c, β is a parameter describing the action of the creation operator on the one-particle state. Using a toy model to illustrate transitions that violate the EP, Ignatiev and Kuzmin deduced that the transition probability for an anomalous two-electron symmetric state is proportional to β²/2, which is still widely used to represent the probability of EP violation.
This non-relativistic approach was criticised by A B Govorkov, who argued that the naive model of Ignatiev and Kuzmin could not be extended to become a fully fledged quantum field theory. Since causality is an important ingredient in Pauli’s proof of the spin-statistics connection, however, Govorkov’s objections could in principle be bypassed by giving up causality: later in 1987, Oscar Greenberg and Rabindra Mohapatra at the University of Maryland introduced a quantum field theory with continuously deformed commutation relations that led to a violation of causality. The deformation parameter was denoted by the letter q, and the theory was supposed to describe new hypothetical particles called “quons”. However, Govorkov was able to show that even this sleight of hand could not trick quantum field theory into small violations of the EP, demonstrating that the mere existence of antiparticles – again a true relativistic hallmark of quantum field theory – was enough to rule out small violations. The take-home message was that violating locality is not enough to break the EP, even “just a little”.
The connection between the intrinsic spin of particles and the statistics that they obey is at the heart of quantum field theory and therefore should be tested. A violation of the EP would be revolutionary: it could be related, for example, to violations of CPT, of locality or of Lorentz invariance. However, we have seen how robust the EP is and how difficult it is to frame a violation within current quantum field theory. Experiments face no lesser difficulties, as noted as early as 1980 by Amado and Primakoff, and there are very few experimental options with which to truly test this tenet of modern physics.
One of the difficulties faced by experiments is that the identicalness of elementary particles implies that Hamiltonians must be invariant with respect to particle exchange, and, as a consequence, they cannot change the symmetry of any given state of multiple identical particles. Even in the case of a mixed symmetry of a many-particle system, there is no physical way to induce a transition to a state of different symmetry. This is the essence of the Messiah–Greenberg superselection rule, which can only be broken if a physical system is open.
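The rule follows from a one-line argument of ordinary quantum mechanics, stated here in symbols for concreteness (a standard derivation, not specific to any one paper). If $P$ exchanges two identical particles and the Hamiltonian is exchange-invariant, $PHP^{-1} = H$, then $[H, P] = 0$ and

$$ P\,e^{-iHt/\hbar}\,|\psi\rangle = e^{-iHt/\hbar}\,P\,|\psi\rangle , $$

so a state that begins as an eigenstate of $P$ (symmetric, eigenvalue +1, or antisymmetric, eigenvalue −1) stays in that symmetry sector forever. Only by opening the system – introducing particles that are “new”, as below – can a different symmetry be probed.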
Breaking the rules
The first dedicated experiment in line with this breaking of the Messiah–Greenberg superselection rule was performed in 1990 by Ramberg and Snow, who searched for Pauli-forbidden X-ray transitions in copper after introducing electrons into the system. The idea is that a power supply injecting an electric current into a copper conductor acts as a source of electrons, which are new to the atoms in the conductor. If these electrons have the “wrong” symmetry they can be radiatively captured into the already occupied 1S level of the copper atoms and emit electromagnetic radiation. The resulting X-rays are influenced by the unusual electron configuration and are slightly shifted towards lower energies with respect to the characteristic X-rays of copper.
Ramberg and Snow did not detect any violation but were able to put an upper bound on the violation probability of β²/2 < 1.7 × 10⁻²⁶. Following their concept, a much improved version of the experiment, called VIP (violation of the Pauli principle), was set up in the LNGS underground laboratory in Gran Sasso, Italy, in 2006. VIP improved significantly on the Ramberg and Snow experiment by using charge-coupled devices (CCDs) as high-resolution X-ray detectors with a large area and high intrinsic efficiency. In the original VIP setup, CCDs were positioned around a pure-copper cylinder; X-rays emitted from the cylinder were measured without and with current up to 40 A. The cosmic background in the LNGS laboratory is strongly suppressed – by a factor of 10⁶ thanks to the overlying rock – and the apparatus was also surrounded by massive lead shielding.
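The structure of such a bound can be sketched in a few lines: the number of “new” electrons follows from the integrated current, each electron gets some number of chances to be captured while crossing the conductor, and the absence of anomalous X-rays above background caps the violation probability. Every number in the sketch below is an assumed placeholder chosen for illustration, not a published experimental parameter:

```python
# Illustrative scaling of a Ramberg-Snow-type bound; all inputs are
# made-up placeholders, not the published experimental values.
e_charge = 1.602e-19   # elementary charge [C]

current   = 40.0       # current through the copper target [A]
t_live    = 1.0e7      # effective livetime [s] (assumed)
n_new     = current * t_live / e_charge   # "new" electrons injected

n_scatter = 1.0e7      # assumed interactions per electron in the copper
eff       = 1.0e-5     # assumed capture-and-detection efficiency
n_x_limit = 10.0       # assumed limit on excess X-rays above background

beta2_over_2 = n_x_limit / (n_new * n_scatter * eff)
print(f"beta^2/2 < {beta2_over_2:.1e}")   # ~4e-29 with these placeholders
```

With these invented inputs the bound happens to land near the published VIP limit; the point is the scaling – more integrated charge, more interactions per electron and higher detection efficiency all tighten the bound proportionally.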
Setting limits
After four years of data taking, VIP set a new limit on the EP violation for electrons at β²/2 < 4.7 × 10⁻²⁹. To further enhance the sensitivity, the experiment was upgraded to VIP2, where silicon drift detectors (SDDs) replace CCDs as X-ray detectors. The VIP2 construction started in 2011 and in 2016 the setup was installed in the underground LNGS laboratory, where, after debugging and testing, data-taking started. The SDDs provide a wider solid angle for X-ray detection and this improvement, together with higher current and active shielding with plastic scintillators to limit background, leads to a much better sensitivity. The timing capability of SDDs also helps to suppress background events.
The experimental programme testing for a possible violation of the EP for electrons made great progress in 2017: within its first two months of running, VIP2 had already improved on the upper limit set by VIP. With a planned duration of three years and alternating measurements with and without current, a two-orders-of-magnitude improvement over the previous VIP upper bound is expected. In the absence of a signal, this will set the limit on violations of the EP at β²/2 < 10⁻³¹.
Experiments like VIP and VIP2 test the spin-statistics connection for one particular kind of fermion: electrons. The case of EP violations for neutrinos has also been discussed theoretically, by Dolgov and Smirnov. As for bosons, constraints on possible statistics violations come from high-energy-physics searches for decays of vector (i.e. spin-one) particles into two photons. Such decays are forbidden by the Landau–Yang theorem, whose proof incorporates the assumption that the two photons must be produced in a permutation-symmetric state. A complementary approach is to apply spectroscopic tests, as carried out at LENS in Florence during the 1990s, which probe the permutation properties of ¹⁶O nuclei in polyatomic molecules by searching for transitions between states that are antisymmetric under the exchange of two nuclei. If the nuclei are bosons, as in this case, such transitions, if found, would violate the spin-statistics relation.

High-sensitivity tests for photons have also been performed with spectroscopic methods. As an example, using a Bose–Einstein-statistics-forbidden two-photon excitation in barium, the probability for two photons to be in a “wrong” permutation-symmetry state was shown by English and co-workers at Berkeley in 2010 to be less than 4 × 10⁻¹¹ – an improvement of more than three orders of magnitude compared to earlier results.
To conclude, we note that the EP has many associated philosophical issues, as Pauli himself was well aware, and these are being studied within a dedicated project involving VIP collaborators, supported by the John Templeton Foundation. One such issue is the notion of “identicalness”, which does not seem to have an analogue outside quantum mechanics, because there are no two fundamentally identical classical objects.
This ultimate equality of quantum particles leads to all-important consequences that govern the structure and dynamics of atoms, molecules, neutron stars and black-body radiation, and that determine our life in all its intricacy. For instance, molecular oxygen in air is extremely reactive, so why do our lungs not simply burn? The reason lies in the pairing of electron spins: ordinary oxygen molecules are paramagnetic, with unpaired electrons whose spins are parallel, so in respiration electrons have to be transferred one after the other. This sequential character of electron transfer is due to the EP, and it moderates the rate at which oxygen attaches to haemoglobin. Think of that the next time you breathe!
In the autumn of 2016, at a meeting in Dubrovnik, Croatia, trustees of the World Academy of Art and Science discussed a proposal to create a large international research institute for South-East Europe. The facility would promote the development of science and technology and help mitigate tensions between countries in the region, following the CERN model of “science for peace”. A platform for internationally competitive research in South-East Europe would stimulate the education of young scientists, transfer and reverse the brain drain, and foster greater cooperation and mobility in the region.
The South-East Europe initiative received its first official support from the government of Montenegro – independent of where the final location would be – thanks to the engagement of Montenegro’s science minister Sanja Damjanovic, herself a physicist with a long history of working at CERN.
On 25 October last year at a meeting at CERN, ministers of science or their representatives from countries in the region signed a Declaration of Intent (DOI) to establish a South-East Europe International Institute for Sustainable Technologies (SEEIIST) with the above objectives. The initial signatories were Albania, Bosnia and Herzegovina, Bulgaria, Kosovo*, The Former Yugoslav Republic of Macedonia, Montenegro, Serbia and Slovenia. Croatia agreed in principle, while Greece participated as an observer. CERN’s role was to provide a neutral and inspirational venue for the meeting.
The signature of the DOI was followed by a scientific forum on 25–26 January at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, held under the auspices of UNESCO, the International Atomic Energy Agency (IAEA) and the European Physical Society. The forum attracted more than 100 participants ranging from scientists and engineers at universities to representatives of industry, government agencies and international organisations including ESFRI and the European Commission. Its aim was to present two scientific options for SEEIIST: a fourth-generation synchrotron light source that would offer users intense beams from infrared to X-ray wavelengths; and a state-of-the-art patient treatment facility for cancer using protons and heavy ions, also with a strong biomedical research programme. The concepts behind each proposal were worked out by two groups of international experts.
With SEEIIST’s overarching goal to be a world-class research infrastructure, the training of scientists, engineers and technicians is essential. Whichever project is selected, it will require several years of effort, during which people will be trained for the operation of the machines and user communities will also be formed. Capacity-building and technology-transfer activities will further trigger developments for the whole region, such as the development of powerful digital networks and big-data handling.
Reports and discussions from the ICTP forum have provided an important basis for the next steps. Representatives of IAEA declared an interest in helping with the training programme, while European Union (EU) representatives are also looking favourably at the project – potentially providing resources to support the preparation of a detailed conceptual design and eventual concrete proposal.
The initiative is gathering momentum. On 30 January the first meeting of the SEEIIST steering committee, chaired initially by the Montenegro science minister, took place in Sofia, Bulgaria. Sofia was chosen at the invitation of Bulgaria, since it currently holds the EU presidency, and the meeting was introduced by Bulgarian president Rumen Radev, who expressed strong interest in SEEIIST and promised to support the initiative. Officials have underlined that a decision between the two scientific options should be taken as soon as possible – a task that we are now working towards.
SEEIIST wouldn’t be the first organisation to be inspired by the CERN model. The European Southern Observatory, European Molecular Biology Laboratory and the recently operational SESAME facility in Jordan – a third-generation light source governed by a council made up of representatives from eight members in the Middle East and surrounding region – each demonstrate the power of fundamental science to advance knowledge and bring people and countries together.
* This designation is without prejudice to positions on status and is in line with UNSC 1244/1999 and the ICJ opinion on the Kosovo Declaration of Independence.