The LHCb collaboration presented new results at the 8th International Workshop on Charm Physics (Charm 2016), which took place in Bologna from 5 to 9 September. Among various novelties, the collaboration reported the most precise measurements of the asymmetry between the effective lifetime of the D0 meson (composed of a charm quark and an up antiquark) and that of its antiparticle, the D̄0 meson, decaying to final states composed of two charged pions or kaons. Such an asymmetry, referred to as AΓ, differs from zero if and only if the effective lifetimes of these particular D0 and D̄0 decays are different, signalling the existence of CP-violating effects.
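For reference, AΓ is conventionally defined in terms of the effective lifetimes τ̂ of the D0 and D̄0 decays to the same CP-even final state f; the display below is a minimal sketch of that standard convention (assumed here, not quoted from the results themselves):

\[
% Lifetime asymmetry for a CP-even final state f = pi+pi- or K+K-
A_{\Gamma}(f) \;=\; \frac{\hat{\tau}(\bar{D}^{0}\to f) - \hat{\tau}(D^{0}\to f)}{\hat{\tau}(\bar{D}^{0}\to f) + \hat{\tau}(D^{0}\to f)},
\qquad f = \pi^{+}\pi^{-},\ K^{+}K^{-},
\]

so that AΓ vanishes exactly when the two effective lifetimes coincide.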
The invariant-mass distribution of D0→ K+K– decays from one of the two analyses.
CP violation has yet to be observed in the charm-quark sector, and its effects here are predicted by the Standard Model to be tiny (well below the 10⁻³ level in this specific case). Thanks to the unprecedented sample sizes that LHCb is accumulating, such a level of precision on CP-violating observables in charm-meson decays is only now becoming accessible.
Charm mesons are produced copiously at the LHC, either directly in the proton–proton collisions or in the decays of heavier beauty particles. Only the former production mechanism was used in this analysis. To determine whether the decaying meson is a D0 or a D̄0 (since they cannot be distinguished by the common π+π– or K+K– final state), LHCb reconstructed the decay chains D*+ → D0π+ and D*– → D̄0π–, so that the sign of the charged pion could be exploited to identify which D meson was involved in the decay. Two distinct analysis techniques were developed (see figure). The results of the two analyses are in excellent agreement and are consistent with no CP violation within about three parts in 10⁴. These constitute the most precise measurements of CP violation ever made in the charm sector, and the full Run 2 data set is expected to reduce the uncertainties even further.
One of the key goals in exploring the properties of QCD matter is to pin down the temperature dependence of the shear-viscosity to entropy-density ratio (η/s). In the limit of a weakly interacting gas, kinetic theory indicates that this ratio is proportional to the mean free path. Many different fluids exhibit a similar temperature dependence for η/s around a critical temperature Tc associated with a phase transition.
Heavy-ion collisions at the LHC create a state of hot and dense matter in which quarks and gluons become deconfined (the quark–gluon plasma, QGP). It exists only during the first instants of the collision; as the system cools, the quarks and gluons form a hadronic gas at Tc. The temperature dependence of η/s is expected to follow the trend of other fluids, with a minimum at Tc. The minimum value of η/s is of particular interest because weakly coupled QCD and AdS/CFT models predict different values.
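For orientation (the article itself does not quote a number), the AdS/CFT benchmark usually taken as a reference is the Kovtun–Son–Starinets value, often conjectured to be a lower bound:

\[
% KSS value from AdS/CFT; weakly coupled QCD estimates lie well above this
\frac{\eta}{s} \;=\; \frac{\hbar}{4\pi k_{B}} \;\approx\; 0.08\,\frac{\hbar}{k_{B}},
\]

i.e. η/s = 1/4π in natural units.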
Measurements of vn versus the pseudorapidity of produced charged particles, with lines indicating hydrodynamical calculations tuned with RHIC data.
The ALICE collaboration has recently released results from anisotropic-flow measurements, which provide new constraints on η/s(T). Anisotropic flow arises when spatial anisotropies in the initial state are converted into momentum anisotropies by pressure gradients during the evolution of the system. The magnitudes of the momentum anisotropies are quantified by the so-called vn coefficients, where v2 is generated by initial states with an elliptic shape, v3 by a triangular shape, and so on. The shape of the initial state fluctuates on an event-by-event basis.
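As a reminder of the notation (a standard definition, not taken from the measurement itself), the vn coefficients are the harmonics of a Fourier expansion of the azimuthal distribution of produced particles, with φ the particle azimuthal angle and Ψn the n-th-harmonic symmetry-plane angle:

\[
% Azimuthal distribution expanded in flow harmonics v_n
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_{n}\cos\!\left[n\left(\varphi - \Psi_{n}\right)\right],
\qquad
v_{n} = \left\langle \cos\!\left[n\left(\varphi - \Psi_{n}\right)\right] \right\rangle .
\]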
Our results show that the average temperature of the system decreases with the pseudorapidity magnitude (figure, above), which means that measurements at forward rapidities are more sensitive to the hadronic phase. Although the model calculations reproduce the general trends of the data, it is clear that other parameterisations of η/s(T) could be explored to better describe the RHIC and LHC data simultaneously.
The correlation of vn as a function of centrality, which is related to the amount of overlap of both lead ions at the time of the collision: solid red points are positive, while negative correlations are shown in blue.
We also measured event-by-event correlations of different vn coefficients in lead–lead collisions (figure, left). It is clear that the v2 and v4 correlations are rather sensitive to different parameterisations. By contrast, the correlation between v2 and v3 is not, and is largely sensitive to how the initial state is modelled. Subsequently, it was found that the agreement improved as the number of degrees of freedom in the initial model was increased. Whether the deviations between data and model for the v2 and v4 correlation are due to the η/s(T) parameterisations or the initial-state modelling will be the subject of future study.
The largest all-sky survey of celestial objects has been compiled by ESA’s Gaia mission. On 13 September, 1000 days after the satellite’s launch, the Gaia team published a preliminary catalogue of more than a billion stars, far exceeding the reach of ESA’s Hipparcos mission, which was completed two decades ago.
Astrometry – the science of charting the sky – has undergone tremendous progress over the centuries, from naked-eye observations in antiquity to Gaia’s sophisticated space instrumentation today. The oldest known comprehensive catalogue of stellar positions was compiled by Hipparchus of Nicaea in the 2nd century BC. His work, which was based on even earlier observations by Assyro-Babylonian astronomers, was transmitted 300 years later by Ptolemy in his 2nd-century treatise known as the Almagest. Although it listed the positions of 850 stars with a precision better than one degree, which is about twice the apparent diameter of the Moon, this work was significantly surpassed only in 1627 with the publication of a catalogue of about 1000 stars by the Danish astronomer Tycho Brahe, who achieved a precision of about 1 arcminute by using large quadrants and sextants.
The first stellar catalogue compiled with the aid of a telescope was published in 1725 by the English astronomer John Flamsteed, listing the positions of almost 3000 stars with a precision of 10–20 arcseconds. The precision increased significantly during the following centuries, with the use of photographic plates bringing the Yale Trigonometric Parallax Catalogue to 0.01 arcsecond in 1995. ESA’s Hipparcos mission, which operated from 1989 to 1993, was the first space telescope devoted to measuring stellar positions. The Hipparcos catalogue, released in 1997, provides the position, parallax and proper motion of 117,955 stars with a precision of 0.001 arcsecond. The “parallax” is the small apparent displacement of a star’s position when it is observed six months apart, from opposite sides of Earth’s annual orbit around the Sun, and it allows the star’s distance to be derived.
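The distance follows from the parallax angle through the standard small-angle relation; as a rough illustration (the 10% criterion below is an assumption chosen only to show the scale), Hipparcos’ 0.001-arcsecond precision corresponds to a 10% distance uncertainty at about 100 parsecs, roughly the 300 light-years quoted below:

\[
% Parallax-distance relation: p in arcseconds, d in parsecs (1 pc ~ 3.26 light-years)
d\,[\mathrm{pc}] \;=\; \frac{1}{p\,[\mathrm{arcsec}]},
\qquad
\frac{\Delta d}{d} \simeq \frac{\Delta p}{p} = \frac{0.001''}{0.01''} = 10\%\ \text{at}\ d = 100\ \mathrm{pc} \approx 326\ \text{light-years}.
\]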
While Hipparcos could probe the stars to distances of about 300 light-years, Gaia’s objective is to extend this to a significant fraction of the size of our Galaxy, which spans about 100,000 light-years. To achieve this, Gaia has an astrometric accuracy about 100 times better than Hipparcos. As a comparison, if Hipparcos could measure the angle that corresponds to the height of an astronaut standing on the Moon, Gaia would be able to measure the astronaut’s thumbnail.
Gaia was launched on 19 December 2013 towards the Lagrangian point L2, which is a prime location for observing the sky away from disturbances from the Sun, Earth and Moon. Although the first data release already comprises about a billion stars observed during the first 14 months of the mission, there was not enough time to disentangle the proper motion from the parallax. These quantities could be determined with high precision only for about two million stars previously observed by Hipparcos.
The new catalogue gives an impression of the great capabilities of Gaia. More observations are needed to make a dynamic 3D map of the Milky Way and to find and characterise possible brightness variations of all these stars. Gaia will then be able to provide the parallax distance of many periodic stars such as Cepheids, which are crucial in the accurate determination of the cosmic-distance ladder.
The Crystal Clear (CC) collaboration was approved by CERN’s Detector Research and Development Committee in April 1991 as experiment RD18. Its objective was to develop new inorganic scintillators that would be suitable for electromagnetic calorimeters in future LHC detectors. The main goal was to find dense and radiation-hard scintillating material with a fast light emission that can be produced in large quantities. This challenge required a large multidisciplinary effort involving world experts in different aspects of material sciences – including crystallography, solid-state physics, luminescence and defects in solids.
From 1991 to 1994, the CC collaboration carried out intensive studies to identify the most suitable scintillator material for the LHC experiments. Three candidates were identified and extensively studied: cerium fluoride (CeF3), lead tungstate (PbWO4) and heavy scintillating glass. In 1994, lead tungstate was chosen by the CMS and ALICE experiments as the most cost-effective crystal compliant with the operational conditions at the LHC. Today, 75,848 lead-tungstate crystals are installed in the CMS electromagnetic calorimeter and 17,920 in ALICE. The former contributed to the discovery of the Higgs boson, which was identified in 2012 by CMS and the ATLAS experiment via its decay, among others, into two photons. The CC collaboration’s generic R&D on scintillating materials has brought a deep understanding of cerium ions as scintillation activators and seen the development of lutetium and yttrium aluminium perovskite crystals for both physics and medical applications.
In 1997, the CC collaboration made its expertise in scintillators available to industry and society at large. Among the most promising sectors were medical functional imaging and, in particular, positron emission tomography (PET), due to its growing importance in cancer diagnostics and similarities with the functionality of electromagnetic calorimeters (the principle of detecting gamma rays in a PET scanner is identical to that in high-energy physics detectors).
Following this, CC collaboration members developed and constructed several dedicated PET prototypes. The first, which was later commercialised by Raytest GmbH in Germany under the trademark ClearPET, was a small-animal PET machine used for radiopharmaceutical research. At the turn of the millennium, five ClearPET prototypes characterised by a spatial resolution of 1.5 mm were built by the CC collaboration, which represented a major breakthrough in functional imaging at that time. The same crystal modules were also developed by the CC team at Forschungszentrum Jülich, Germany, to image plants in order to study carbon transport. A modified ClearPET geometry was also combined with X-ray single-photon detectors by CC researchers at CPPM Marseille, offering simultaneous PET and computed-tomography (CT) acquisition, and providing the first PET/CT simultaneous images of a mouse in 2015 (see image above). The simultaneous use of CT and PET allows the excellent position resolution of anatomic imaging (providing detailed images of the structure of tissues) to be combined with functional imaging, which is sensitive to the tissue’s metabolic activity.
After the success of ClearPET, in 2002 CC developed a dedicated PET camera for breast imaging called ClearPEM. This system had a spatial resolution of 1.3 mm and was the first PET imager based on avalanche photodiodes, which were initially developed for the CMS electromagnetic calorimeter. The machine was installed in Coimbra, Portugal, where clinical trials were performed. In 2005, a second ClearPEM machine combined with 3D ultrasound and elastography was developed with the aim of providing anatomical and metabolic information to allow better identification of tumours. This machine was installed in Hôpital Nord in Marseille, France, in December 2010 for clinical evaluations of 10 patients, and three years later it was moved to the San Gerardo hospital in Monza, Italy, to undertake larger clinical trials, which are ongoing.
In 2011, a European FP7 project called EndoTOFPET-US, a consortium of three hospitals, three companies and six institutes, began the development of a prototype for a novel bi-modal time-of-flight PET and ultrasound endoscope with a spatial resolution better than 1 mm and a time resolution of 200 ps. This is aimed at the detection of early-stage pancreatic or prostatic tumours and the development of new biomarkers for pancreatic and prostatic cancers. Two prototypes have been produced (one for pancreatic and one for prostate cancers) and the first tests of the prostate prototype on a phantom were performed in spring 2015 at the CERIMED centre in Marseille. Work is now ongoing to improve the two prototypes, with a view to preclinical and clinical operation.
In addition to the development of ClearPET detectors, members of the collaboration initiated the development of the Monte Carlo simulation software package GATE, a GEANT4-based tool that allows full PET detector systems to be simulated.
In 1992, the CC collaboration organised the first international conference on inorganic scintillators and their applications, which led to a global scientific community of around 300 people. Today, this community comes together every two years at the SCINT conferences, the next instalment of which will take place in Chamonix, France, from 18 to 22 September 2017.
To this day, the CC collaboration continues its investigations into new scintillators and understanding their underlying scintillation mechanisms and radiation-hardness characteristics – in addition to the development of detectors. Among its most recent activities is the investigation of key parameters in scintillating detectors that enable very precise timing information for various applications. These include mitigating the effect of “pile-up” caused by the high event rate at particle accelerators operating at high peak luminosities, and also medical applications in time-of-flight PET imaging. This research requires the study of new materials and processes to identify ultrafast scintillation mechanisms such as “hot intraband luminescence” or quantum-confined excitonic emission with sub-picosecond rise time and sub-nanosecond decay time. It also involves investigating the enhancement of the scintillator light collection by using various surface treatments, such as nano-patterning with photonic crystals. CC recently initiated a European COST Action called Fast Advanced Scintillator Timing (FAST) to bring together European experts from academia and industry to ultimately achieve scintillator-based detectors with a time precision better than 100 ps, which provides an excellent training opportunity for researchers interested in this domain.
Among other recent activities of the CC collaboration are new crystal-production methods. Micro-pulling-down techniques, which allow inorganic scintillating crystals to be grown in the shape of fibres with diameters ranging from 0.3 to 3 mm, open the way to attractive detector designs for future high-energy physics experiments by replacing a block of crystals with a bundle of fibres. A Horizon 2020 European RISE Marie Skłodowska-Curie project called Intelum has been set up by the CC collaboration to explore the cost-effective production of large quantities of fibres. More recently, the development of new PET crystal modules has been launched by CC collaborators. These make use of silicon-photomultiplier photodetectors and offer high spatial resolution (1.5 mm), depth-of-interaction capability (better than 3 mm) and fast timing resolution (better than 200 ps).
Future directions
For the past 25 years, the CC collaboration has actively carried out R&D on scintillating materials, and investigated their use in novel ionising radiation-detecting devices (including read-out electronics and data acquisition) for use in particle-physics and medical-imaging applications. In addition to significant progress made in the understanding of scintillation mechanisms and radiation hardness of different materials, the choice of lead tungstate for the CMS electromagnetic calorimeter and the realisation of various prototypes for medical imaging are among the CC collaboration’s highlights so far. It is now making important contributions to understanding the key parameters for fast-timing detectors.
The various activities of the CC collaboration, which today has 29 institutional members, have resulted in more than 650 publications and 72 PhD theses. The motivation of CC collaboration members and the momentum generated throughout its many projects open up promising perspectives for the future of inorganic scintillators and their use in HEP and other applications.
• An event to celebrate the 25th anniversary of the CC collaboration will take place at CERN on 24 November.
The Compact Linear Collider (CLIC) is a multi-TeV high-luminosity linear collider that has been under development since 1985, for which a conceptual design report was completed in 2012. With the discovery of the Higgs boson in July of that year, the compelling case for operating CLIC at a lower centre-of-mass energy (380 GeV) became evident. CLIC has recently published an update of its staged baseline scenario, which places the emphasis on an optimised first energy stage. This first stage is based on already demonstrated performances of CLIC’s novel acceleration technology, and it will be significantly cheaper than the one presented in the original conceptual design report.
One of CERN’s main options for a flagship accelerator in the post-LHC era is an electron–positron collider at the high-energy frontier. The Compact Linear Collider (CLIC) is a multi-TeV high-luminosity linear collider that has been under development since 1985 and currently involves 75 institutes around the world. Being linear, such a machine does not suffer the energy losses from synchrotron radiation that increase strongly with beam energy in circular machines. Another option for CERN is a very high-energy circular proton–proton collider, which is currently being considered as the core of the Future Circular Collider (FCC) programme. So far, CLIC R&D has principally focused on collider technology able to reach collision energies in the multi-TeV range. Based on this technology, a conceptual design report (CDR) including a feasibility study for a 3 TeV collider was completed in 2012.
With the discovery of the Higgs boson in July of that year, and the fact that the particle turned out to be relatively light with a mass of 125 GeV, it became evident that there is a compelling physics case for operating CLIC at a lower centre-of-mass energy. The optimal collision energy is 380 GeV because it simultaneously allows physicists to study two Higgs-production processes in addition to top-quark pair production. Therefore, to fully exploit CLIC’s scientific potential, the collider is foreseen to be constructed in several stages corresponding to different centre-of-mass energies: the first at 380 GeV would be followed by stages at 1.5 and 3 TeV, allowing powerful searches for phenomena beyond the Standard Model (SM).
While a fully optimised collider at 3 TeV was described in the CDR in 2012, the lower-energy stages were not presented at the same level of detail. In August this year, however, the CLIC and CLICdp (CLIC detector and physics study) collaborations published an updated baseline-staging scenario that places emphasis on an optimised first-energy stage compatible with an extension to high energies. The performance, cost and power consumption of the CLIC accelerator as a function of the centre-of-mass energy were addressed, building on experience from technology R&D and system tests. The resulting first-energy stage is based on already demonstrated performances of CLIC’s novel acceleration technology and will be significantly cheaper than the initial CDR design.
CLIC physics
An electron–positron collider provides unique opportunities to make precision measurements of the two heaviest particles in the SM: the Higgs boson (125 GeV) and the top quark (173 GeV). Deviations in the way the Higgs couples to the fermions, the electroweak bosons and itself are predicted in many extensions of the SM, such as supersymmetry or composite Higgs models. Different scenarios lead to specific patterns of deviations, which means that precision measurements of the Higgs couplings can potentially discriminate between different new-physics scenarios. The same is true of the couplings of the top quark to the Z boson and photon. CLIC would offer such measurements as the first step of its physics programme, and full simulations of realistic CLIC detector concepts have been used to evaluate the expected precision and to guide the choice of collision energy.
The principal Higgs-production channel, Higgsstrahlung (e+e– → ZH), requires a centre-of-mass energy of at least the sum of the Higgs- and Z-boson masses plus a few tens of GeV. For an electron–positron collider such as CLIC, Higgsstrahlung has a maximum cross-section at a centre-of-mass energy of around 240 GeV and decreases as a function of energy. Because the colliding electrons and positrons are elementary particles with a precisely known energy, Higgsstrahlung events can be identified by detecting the Z boson alone as it recoils against the Higgs boson. This can be done without looking at the decay of the Higgs boson, and hence the measurement is completely independent of possible unknown Higgs decays. This is a unique capability of a lepton collider and the reason why the first energy stage of CLIC is so important. The most powerful method with which to measure the Higgsstrahlung cross-section in this way is based on events where the Z boson decays into hadrons, and the best precision is expected at centre-of-mass energies around 350 GeV. (At lower energies it is more difficult to separate signal and background events, while at higher energies the measurement is limited by the smaller signal cross-section and the worse recoil-mass resolution.)
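The recoil technique mentioned above rests on a simple kinematic identity; the sketch below shows the standard relation (E_Z and p_Z are the measured energy and momentum of the reconstructed Z boson, √s the known collision energy):

\[
% Recoil mass against the Z: a peak near 125 GeV tags e+e- -> ZH independently of the Higgs decay
m_{\mathrm{recoil}}^{2} \;=\; \left(\sqrt{s} - E_{Z}\right)^{2} - \left|\vec{p}_{Z}\right|^{2}
\;=\; s + m_{Z}^{2} - 2\sqrt{s}\,E_{Z}.
\]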
The other main Higgs-production channel is WW fusion (e+e– → Hνeν̄e). In contrast to Higgsstrahlung, the cross-section for this process rises quickly with centre-of-mass energy. By measuring the rates for the same Higgs decay, such as H → bb̄, in both Higgsstrahlung and WW-fusion events, researchers can significantly improve their knowledge of the Higgs decay width – which is a challenging measurement at hadron colliders such as the LHC. A centre-of-mass energy of 380 GeV at the first CLIC energy stage is ideal for achieving a sizable contribution of WW-fusion events.
So far, the energy of electron–positron colliders has not been high enough to allow direct measurements of the top quark. At the first CLIC energy stage, however, properties of the top quark can be obtained via pair-production events (e+e– → tt̄). A small fraction of the collider’s running time would be used to scan the top pair-production cross-section in the threshold region around 350 GeV. This would allow us to extract the top-quark mass in a theoretically well-defined scheme, which is not possible at hadron colliders. The value of the top-quark mass has an important impact on the stability of the electroweak vacuum at very high energies.
With current knowledge, the achievable precision on the top-quark mass is expected to be of the order of 50 MeV, including systematic and theoretical uncertainties. This is about an order of magnitude better than the precision expected at the High-Luminosity LHC (HL-LHC).
The couplings of the top quark to the Z boson and photon can be probed using the top-production cross-sections and “forward-backward” asymmetries for different electron-beam polarisation configurations available at CLIC. These observables lead to expected precisions on the couplings which are substantially better than those achievable at the HL-LHC. Deviations of these couplings from their SM expectations are predicted in many new physics scenarios, such as composite-Higgs scenarios or extra-dimension models. It was recently shown, using detailed detector simulations, that although higher energies are preferred, this measurement is already feasible at an energy of 380 GeV, provided the theoretical uncertainties improve in the coming years. The expected precisions depend on our ability to reconstruct tt̄ events correctly, which is more challenging at 380 GeV compared to higher energies because both top quarks decay almost isotropically.
Combining all available knowledge therefore led to the choice of 380 GeV for the first-energy stage of the CLIC programme in the new staging baseline. Not only is this close to the optimal value for Higgs physics around 350 GeV, but it would also enable substantial measurements of the top quark. An integrated luminosity of 500 fb⁻¹ is required for the Higgs and top-physics programmes, which could take roughly five years. The top cross-section threshold scan, meanwhile, would be feasible with 100 fb⁻¹ collected at several energy points near the production threshold.
Stepping up
After the initial phase of CLIC operation at 380 GeV, the aim is to operate CLIC above 1 TeV at the earliest possible time. In the current baseline, two stages at 1.5 TeV and 3 TeV are planned, although the exact energies of these stages can be revised as new input from the LHC and HL-LHC becomes available. Searches for beyond-the-SM phenomena are the main goal of high-energy CLIC operation. Furthermore, additional unique measurements of Higgs and top properties are possible, including studies of double Higgs production to extract the Higgs self-coupling. This is crucial to probe the Higgs potential experimentally and its measurement is extremely challenging in hadron collisions, even at the HL-LHC. In addition, the full data sample with three million Higgs events would lead to very tight constraints on the Higgs couplings to vector bosons and fermions. In contrast to hadron colliders, all events can be used for physics and there are no QCD backgrounds.
Two fundamentally different approaches are possible to search for phenomena beyond the SM. The first is to search directly for the production of new particles, which in electron–positron collisions can take place almost up to the kinematic limit. Due to the clean experimental conditions and low backgrounds compared to hadron colliders, CLIC is particularly well suited for measuring new and existing weakly interacting states. Because the beam energies are tunable, it is also possible to study the production thresholds of new particles in detail. Searches for dark-matter candidates, meanwhile, can be performed using single-photon events with missing energy. Because lepton colliders probe the coupling of dark-matter particles to leptons, searches at CLIC are complementary to those at hadron colliders, which are sensitive to the couplings to quarks and gluons.
The second analysis approach at CLIC, which is sensitive to even higher mass scales, is to search for unexpected signals in precision measurements of SM observables. For example, measurements of two-fermion processes provide discovery potential for Z′ bosons with masses up to tens of TeV. Another important example is the search for additional resonances or anomalous couplings in vector-boson scattering. For both indirect and direct searches, the discovery reach improves significantly with increasing centre-of-mass energy. If new phenomena are found, beam polarisation might help to constrain the underlying theory through observables such as polarisation asymmetries.
The CLIC concept
CLIC will collide beams of electrons and positrons at a single interaction point, with the main beams generated in a central facility that would fit on the CERN site. To increase the brilliance of the beams, the particles are “cooled” (slowed down and reaccelerated continuously) in damping rings before they are sent to the two high-gradient main linacs, which face each other. Here, the beams are accelerated to the full collision energy in a single pass, and a magnetic telescope consisting of quadrupoles and other multipoles is used to focus the beams to nanometre sizes at the collision point inside the detector. Two additional complexes produce high-current (100 A) electron beams to drive the main linacs – this novel two-beam acceleration technique is unique to CLIC.
The CLIC accelerator R&D is focused on several core challenges. First, strong accelerating fields are required in the main linac to limit its length and cost. Outstanding beam quality is also essential to achieve a high rate of physics events in the detectors. In addition, the power consumption of the CLIC accelerator complex has to be limited to about 500 MW for the highest-energy stage; hence a high efficiency to generate RF power and transfer it into the beams is mandatory. CLIC will use high-frequency (X-band) normal-conducting accelerating structures (copper) to achieve accelerating gradients at the level of 100 MV/m. A centre-of-mass energy of 3 TeV can be reached with a collider of about 50 km length, while 380 GeV for CLIC’s first stage would require a site length of 11 km, which is slightly larger than the diameter of the LHC. The accelerator is operated using 50 RF pulses of 244 ns length per second. During each pulse, a train of 312 bunches is accelerated, which are separated by just 0.5 ns. To generate the accelerating field, each CLIC main-linac accelerating structure needs to be fed with an RF power of 60 MW. With a total of 140,000 structures in the 3 TeV collider, this adds up to more than 8 TW.
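As a quick back-of-the-envelope check using only the numbers quoted in this paragraph, the total peak RF power and the RF duty factor for the 3 TeV stage work out as:

\[
% Peak RF power and duty factor for the 3 TeV stage, from the figures above
P_{\mathrm{peak}} = 140\,000 \times 60\ \mathrm{MW} = 8.4\ \mathrm{TW},
\qquad
50\ \mathrm{pulses/s} \times 244\ \mathrm{ns} \approx 12\ \mu\mathrm{s}\ \text{of RF per second},
\]

which is why the challenge is one of enormous peak power rather than of average power.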
Because it is not possible to generate this peak power at reasonable cost with conventional klystrons (even for the short pulse length of 244 ns), a novel power-production scheme has been developed for CLIC. The idea is to run a drive beam with a current of 100 A parallel to the main beam and to pass it through power-extraction and transfer structures. In these structures, the beam induces electric fields, thereby losing energy and generating RF power that is transferred to the main-linac accelerating structures. The drive beam is produced as a long (146 μs), high-current (4 A) train of bunches, accelerated to an energy of about 2.4 GeV and then sent into a delay loop and combiner-ring complex, where sets of 24 consecutive sub-pulses are used to form 25 trains of 244 ns length with a current of about 100 A. Each of these bunch trains is then used to power one of the 25 drive-beam sectors, which means that the initial 146 μs-long pulse is effectively compressed in time by a factor of 600, and therefore its power is increased by the same factor.
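The numbers in this paragraph are mutually consistent, as a quick check shows (only figures quoted above are used):

\[
% 24 sub-pulses interleaved into each of 25 trains: 600 sub-pulses in total
24 \times 25 = 600,
\qquad
\frac{146\ \mu\mathrm{s}}{600} \approx 243\ \mathrm{ns} \approx 244\ \mathrm{ns},
\qquad
4\ \mathrm{A} \times 24 \approx 100\ \mathrm{A}.
\]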
To demonstrate this novel scheme, a test facility (CTF3) has been constructed at CERN in stages since 2001, reusing the LEP pre-injector building and components as well as adding many more. The facility now consists of a drive-beam accelerator, the delay loop and one combiner ring. CTF3 can produce a drive-beam pulse of about 30 A and accelerate the main beam with a gradient of up to 145 MV/m. A large range of components, feedback systems and operational procedures needed to be developed to make the facility a success, and by the end of 2016 it will have finished its mission. Further beam tests at SLAC, KEK and various light sources remain important. The CALIFES electron-beam facility at CERN, which is currently being evaluated for operation from 2017, can provide a testing ground for high-gradient structures and main-beam studies. More prototypes for CLIC’s main-beam and drive-beam components are being developed and characterised in dedicated test facilities at CERN and collaborating institutes. The resulting progress in X-band acceleration technology has also generated considerable interest in the free-electron laser (FEL) community, where it may allow for more compact facilities.
To achieve the required luminosities (6 × 10³⁴ cm⁻² s⁻¹ at 3 TeV), nanometre beam sizes are required at CLIC’s interaction point. This is several hundred times smaller than at the SLC, which operated at SLAC in the 1990s and was the first and only operational linear collider, and it therefore requires novel hardware and sophisticated beam-based alignment algorithms. A precision pre-alignment system has been developed and tested that can achieve an alignment accuracy in the range of 10 μm, while beam-based tuning algorithms have been successfully tested at SLAC and other facilities. These algorithms use beams of different energies to diagnose and correct the offsets of the beam-position monitors, reducing the effective misalignments to a fraction of a micron. Because ground motion due to natural and technical sources can cause the beam-guiding quadrupole magnets to move, knocking the beams out of focus, the magnets will be stabilised with an active feedback system that has been developed by a collaboration of several institutes, and which has already been demonstrated experimentally.
CLIC’s physics potential has been illustrated through the simulation and reconstruction of benchmark physics processes in two dedicated detector concepts. These are based on the SiD and ILD detector concepts developed for the International Linear Collider (ILC), an alternative machine currently under consideration for construction in Japan, and have been adapted to the experimental environment at the higher-energy CLIC. Because the high centre-of-mass energies and CLIC’s accelerator technology lead to relatively high beam-induced background levels for a lepton collider, the CLIC detector design and the event-reconstruction techniques are both optimised to suppress the influence of these backgrounds. A main driver for the ILC and CLIC detector concepts is the required jet-energy resolution. To achieve the required precision, the CLIC detector concepts are based on fine-grained electromagnetic and hadronic calorimeters optimised for particle-flow analysis techniques. A new study is almost complete, which defines a single optimised CLIC detector for use in future CLIC physics benchmark studies. The work by CLICdp was crucial for the new staging baseline (especially for the choice of 380 GeV) because the physics potential as a function of energy can only be estimated with the required accuracy using detailed simulations of realistic detector concepts.
The new staged design
To optimise the CLIC accelerator, a systematic design approach has been developed and used to explore a large range of configurations for the RF structures of the main linac. For each structure design, the luminosity performance, power consumption and total cost of the CLIC complex are calculated. For the first stage, different accelerating structures operated at a somewhat lower accelerating gradient of 72 MV/m will be used to reach the luminosity goal at a cost and power consumption similar to earlier projects at CERN – while also not inflating the cost of the higher-energy stages. The design should also be flexible enough to take advantage of projected improvements in RF technology during the construction and operation of the first stage.
When upgrading to higher energies, the structures optimised for 380 GeV will be moved to the beginning of the new linear accelerator and the remaining space filled with structures optimised for 3 TeV operation. The RF pulse length of 244 ns is kept the same at all stages to avoid major modifications to the drive-beam generation scheme. Data taking at the three energy stages is expected to last for periods of seven, five and six years, respectively. The stages are separated by two upgrade periods, each lasting two years, which means that the overall three-stage CLIC programme will last for 22 years from the start of operation. The duration of each stage is derived from integrated-luminosity targets of 500 fb⁻¹ at 380 GeV, 1.5 ab⁻¹ at 1.5 TeV and 3 ab⁻¹ at 3 TeV.
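A quick consistency check of the quoted schedule, using only the durations given in this paragraph:

\[
% Three data-taking stages plus two intervening upgrade periods
7 + 2 + 5 + 2 + 6 = 22\ \text{years from the start of operation.}
\]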
An intense R&D programme is yielding other important improvements. For instance, the CLIC study recently proposed a novel klystron design that can increase the efficiency significantly. To reduce the power consumption further, permanent magnets are being developed that are sufficiently tunable to replace normal-conducting magnets. The goal is to develop a detailed design of both the accelerator and the detector in time for the update of the European Strategy for Particle Physics towards the end of the decade.
Mature option
With the discovery of the Higgs boson, a great physics case exists for CLIC at a centre-of-mass energy of 380 GeV. Hence particular emphasis is being placed on the first stage of the accelerator, for which the focus is on reducing costs and power consumption. The new accelerating structure design will be improved and more statistics on the structure performance will be obtained. The detector design will continue to be optimised, driven by the requirements of the physics programme. Technology demonstrators for the most challenging detector elements, including the vertex detector and main tracker, are being developed in parallel.
Common studies with the ILC, which is currently being considered for implementation in Japan, are also important, both for accelerator and detector elements, in particular for the initial stage of CLIC. Both the accelerator and detector parameters and designs, in particular for the second- and third-energy stages, will evolve according to new LHC results and studies as they emerge.
CLIC is the only mature option for a future multi-TeV high-luminosity linear electron–positron collider. The two-beam technology has been demonstrated in large-scale tests and no show stoppers were identified. CLIC is therefore an attractive option for CERN after the LHC. Once the European particle-physics community decides to move forward, a technical design will be developed and construction could begin in 2025.
Our understanding of nature’s fundamental constituents owes much to particle colliders. Notable discoveries include the W and Z bosons at CERN’s Super Proton Synchrotron in the 1980s, the top quark at Fermilab’s Tevatron collider in the 1990s and the Higgs boson at CERN’s LHC in 2012. While colliding particles at ever higher energies is still one of the best ways to search for new phenomena, experiments at lower energies can also address fundamental-physics questions.
The Physics Beyond Colliders kick-off workshop, which was held at CERN on 6–7 September, brought together a wide range of physicists from the theory, experiment and accelerator communities to explore the full range of research opportunities presented by the CERN complex. The considered timescale for such activities reaches as far as 2040, corresponding roughly to the operational lifetime of the LHC and its high-luminosity upgrade. The study group has been charged with pulling together interested parties and exploring the options in appropriate depth, with the aim of providing input to the next update to the European Strategy for Particle Physics towards the end of the decade.
As the name of the workshop and study group suggests, a lot of interesting physics can be tested in experiments that are complementary to colliders. Ideas discussed at the September event ranged from searches for particles with masses from far below an eV up to more than 10¹⁵ eV, to prospects for dark-matter and even dark-energy studies.
Theoretical motivation
Searches for electric and magnetic dipole moments in elementary particles are a rich experimental playground, and the enormous precision of such experiments allows a wide range of new physics to be tested. The long-standing deviation of the muon magnetic moment (g-2) from the Standard Model prediction could indicate the presence of relatively heavy supersymmetric particles, but also the presence of relatively light “dark photons”, which are also a possible messenger to the dark-matter sector. A confirmation, or not, of the original g-2 measurement and experimental tests of other models will provide important input to this issue.
Electric dipole moments are inherently linked to the violation of charge–parity (CP) symmetry, which is a necessary ingredient to explain the origin of the baryon asymmetry of the universe. While CP violation has been observed in weak interactions, it is notably absent in strong interactions. For example, no electric dipole moment of the neutron has been observed so far. Eliminating this so-called strong-CP problem gives significant motivation for hypothesising the existence of a new elementary particle called the axion. Indeed, axion-like particles are not only natural dark-matter candidates but they turn out to be one of the features that are abundant in well-motivated extensions of the Standard Model, such as string theory. Axions could help to explain a number of astrophysical puzzles such as dark matter. They may also be connected to inflation in the very early universe and to the generation of neutrino masses, and potentially are even involved with the hierarchy problem.
Neutrinos are the source of a wide range of puzzles, but also of opportunities. Interestingly, essentially all current experiments and observations – including the observation of dark matter – can be explained by a very minimal extension of the Standard Model: the addition of three right-handed neutrinos. In fact, theorists’ ideas range far beyond that, motivating the existence of whole sectors of weakly coupled particles below the Fermi scale.
Ambitions may even lead to tackling one of the most challenging of questions: dark energy. While the effective couplings between ordinary matter and dark energy must be quite small, there is still significant room for observable effects in low-energy experiments, for example using atom interferometry.
Experimental opportunities
It is clear that CERN’s priority over the coming years is the full exploitation of the LHC – first in its present guise and then, from 2026, as the High-Luminosity LHC (HL-LHC). The HL-LHC places stringent demands on beam intensity and related characteristics, and a major upgrade of the LHC injectors is planned during Long Shutdown 2 (LS2), beginning in 2019, to provide the beams needed in the HL-LHC era. Despite this, the LHC itself consumes only a small fraction of the protons the complex can produce. This leaves the other facilities at CERN free to exploit the considerable beam-production capabilities of the accelerator complex.
CERN already has a diverse and far-sighted experimental programme based on the LHC injectors. This spans the ISOLDE radioactive-beam facility, the neutron time-of-flight facility (nTOF), the Antiproton Decelerator (AD), the High-Radiation to Materials (HiRadMat) facility, the plasma-wakefield experiment AWAKE, and the North and East experimental areas. CERN’s proton-production capabilities are already heavily used and will remain in high demand in the coming years. A preliminary forecast shows that there is potential capacity to support one more major SPS experiment after the injector upgrade.
The AD is a classic example of CERN’s existing non-collider-based facilities. This unique antimatter factory has several experiments studying the properties of antiprotons and anti-hydrogen atoms in detail. Here, in the experimental domain, the time constant for technological evolution is much shorter than it is for large high-energy detectors. The AD is currently being upgraded with the ELENA ring, which will increase by two orders of magnitude the trapping efficiency of anti-hydrogen atoms and will allow different experiments to operate in parallel. After LS2, ELENA will serve all AD experiments and will secure CERN’s antimatter research into the next decade. The ISOLDE and nTOF facilities also offer opportunities to investigate fundamental questions such as the unitarity of the quark-mixing matrix, parity violation or the masses of the neutrinos.
The three main experiments of the North Area – NA61, COMPASS and NA62 – have well-defined programmes until the time of LS2, and all have longer-term plans. After completing its search for a QCD critical point, NA61 plans to study QCD deconfinement further, with an emphasis on charm signals. It will also remain a unique facility for constraining hadron production in primary proton targets for future neutrino beams in the US and Japan. The Common Muon and Proton Apparatus for Structure and Spectroscopy (COMPASS) experiment, meanwhile, intends to further study hadron structure and spectroscopy with RF-separated beams of higher intensity, probing fundamental physics linked to quantum chromodynamics.
An independent proposal submitted to the workshop involved using muon beams from the SPS to make precision measurements of μ–e elastic scattering, which could reduce by a factor of two the present theoretical hadronic uncertainty on g-2 for future precision experiments. Once NA62 reaches its intended precision on its measurement of the rare decay K+ → π+νν̄, the collaboration plans comprehensive measurements in the kaon sector in addition to one year of operation in beam-dump mode to search for heavy neutral leptons such as massive right-handed neutrinos. In the longer term, NA62 aims to study the rare decay K0 → π0νν̄, which would require a similar but expanded apparatus and a high-intensity K0 beam. In general, rare decays might reveal deviations from the Standard Model that indicate the presence of new heavy particles altering the decay rate.
Fixed ambitions
The September workshop heard proposals for new ambitious fixed-target facilities that would complement existing experiments at CERN. A completely new development at CERN’s North Area is the proposed SPS beam-dump facility (BDF). Beam dump in this context implies a target that absorbs all incident protons and contains most of the cascade generated by the primary-beam interaction. The aim is for a general-purpose fixed-target facility, which in the initial phase will facilitate a general search for weakly interacting “hidden” particles. The Search for HIdden Particles (SHiP) experiment plans to exploit the unique high-energy, high-intensity features of the SPS beam to perform a comprehensive investigation of the dark sector in the few-GeV mass range (CERN Courier March 2016 p25). A complementary approach, based on observing missing energy in the products of high-energy interactions, is currently being explored by NA64 on an electron test beam, and the experiment team has proposed to extend its programme to muon and hadron beams in the future.
From an accelerator perspective, the BDF is a challenging undertaking and will involve the development of a new extraction line and a sophisticated target and target complex with due regard to radiation-protection issues. More generally, the foreseen North Area programme requires high intensity and slow extraction from the SPS, and this poses some serious accelerator challenges. A closer look at these reveals the need for a concerted programme of studies and improvements to minimise extraction beam loss and associated activation of hardware with its attendant risks.
Fixed-target experiments with LHC beams could be carried out using either crystal extraction or an internal gas jet, and initially these might operate in parasitic mode upstream from existing detectors (LHCb or ALICE). Combined with the high LHC beam energy, an internal gas target would open up a new kinematic range to hadron and heavy-ion measurements, while beam extraction using crystals was proposed to measure the magnetic moments of short-lived baryons.
New facilities to complement fixed-target experiments are also under consideration. A small all-electric storage ring would provide a precision measurement of the proton electric dipole moment (EDM) and could test for new physics at the 100 TeV scale, while a mixed electric/magnetic ring would extend such measurements to the deuteron EDM. The physics motivation for these facilities is strong, and from an accelerator standpoint such storage rings are an interesting challenge in their own right (CERN Courier September 2016 p27).
A dedicated gamma factory is another exciting option being explored. Partially stripped ions interacting with photons from a laser have the potential to provide a powerful source of gamma rays. Driven by the LHC, such a facility would increase by seven orders of magnitude the intensity currently achievable in electron-driven gamma-ray beams. The proposed nuSTORM project, meanwhile, would provide well-defined neutrino beams for precise measurements of the neutrino cross-sections and represent an intermediate step towards a neutrino factory or a muon collider.
Last but not least, there are several non-accelerator projects that stand to benefit from CERN’s technological expertise and infrastructure, in line with the existing CAST and OSQAR experiments. CAST (CERN Axion Solar Telescope) uses one of the LHC dipole magnets to search for axions produced in the Sun, while OSQAR attempts to produce axions in the laboratory. Researchers working on IAXO, the next-generation axion helioscope foreseen as a significantly more powerful successor to CAST, have expressed great interest in co-operating with CERN on the design and running of the experiment’s large toroidal magnet. The high-field magnets developed at CERN would also increase the reach of future axion searches in the laboratory as a follow-up of OSQAR at CERN or ALPS at DESY. DARKSIDE, a flagship dark-matter search to be sited in Gran Sasso, also has technological synergies with CERN in the cryogenics, liquid-argon and silicon-photomultiplier domains.
Next steps
Working groups are now being set up to assess the physics case of the proposed projects in a global context, and also their feasibility and possible implementation at CERN or elsewhere. A follow-up Physics Beyond Colliders workshop is foreseen in 2017, and the final deliverable is due towards the end of 2018. It will consist of a summary document that will help the European Strategy update group to define its orientations for non-collider fundamental-particle-physics research in the next decade.
Did you expect that gravitational waves would be discovered during your lifetime?
Yes, and I thought it quite likely it would come from two colliding black holes of just the sort that we did see. I wrote a popular book called Black Holes and Time Warps: Einstein’s Outrageous Legacy, published in 1994, and I wrote a prologue to this book during my honeymoon in Chile in 1984. In that prologue, I described the observation of two black holes, both weighing 25 solar masses, spiralling together and merging and radiating three solar masses of energy in gravitational waves, and that’s very close to what we’ve seen. So I was already targeting black holes in the 1980s as the most likely kind of source; for me this was not a surprise, it was a great satisfaction that everything came out the way I thought it probably would.
Can you summarise how an instrument such as LIGO could observe such a weak and rare phenomenon?
The primary inventor of this kind of gravitational-wave detector is Rai Weiss at MIT. He not only conceived the idea, in parallel with several other people, but he, unlike anybody else, identified all of the major sources of noise that would have to be dealt with in the initial detector and he invented ways to deal with each of those. He estimated how much noise would remain after the experiment did what he proposed to limit each noise source, and concluded that the sensitivity that could be reached would be good enough. There was a real possibility of seeing the waves that I as a theorist and colleagues were predicting. Weiss wrote a paper in 1972 describing all of this and it is one of the most powerful papers I’ve ever read, perhaps the most powerful experiment-related paper. Before I read it, I had heard about his idea and concluded it was very unlikely to succeed because the required sensitivities were so great. I didn’t have time to really study it in depth, but it turned out I was wrong. I was sceptical until I had discussions with Weiss and others in Moscow. I then became convinced, and decided that I should devote most of the rest of my career to helping them succeed in the detection of gravitational waves.
How will the new tool of “multi-messenger astronomy” impact on our understanding of the universe?
Concerning the colliding black holes that we’ve seen so far, astronomers who rely on electromagnetic signals have not seen anything coming from them. It’s conceivable that in the future something may be seen, because disturbances caused when two black holes collide and merge can lead to X-ray or perhaps optical emissions. We also expect to see many other sources of gravitational waves. Neutron stars orbiting each other are expected to collide and merge, which is thought to be a source of the gamma-ray bursts that have already been seen. We will see black holes tear apart and destroy a companion neutron star, again producing a very strong electromagnetic emission as well as neutrino emission. So co-ordinated gravitational, electromagnetic and neutrino observations will be very powerful. With all of these working together in “multi-messenger” astronomy, there’s a great richness of information. That really is the future of a large portion of this field. But part of this field will be things like black holes, where we see only gravitational waves.
Do gravitational waves give us a bearing on gravitons?
Although we are quite sure gravitational waves are carried by gravitons, there is no chance to see individual gravitons based on the known laws of physics. Just as we do not see individual photons in a radio wave because there are so many photons working together to produce the radio wave, there are even more gravitons working together to produce gravitational waves. In technical terms, the mean occupation number of the gravitational-wave field that is seen is absolutely enormous, close to 10⁴⁰. With so many gravitons there is no hope, unfortunately, to see individual gravitons.
Will we ever reconcile gravity with the three other forces?
I am quite sure gravity will be reconciled with the other three forces. I think it is quite likely this will be done through some version of string theory or M theory, which many theorists are now working on. When it does happen, the resulting laws of quantum gravity will allow us to address questions related to the nature of the birth of the universe. It would also tell us whether or not it is possible to build time machines to go backward in time, what is the nature of the interior of a black hole, and address many other interesting questions. This is a tremendously important effort, by far the most important research direction in theoretical physics today and recent decades. There’s no way I could contribute very much there.
Regarding future large-scale research infrastructures, such as those proposed within CERN’s Future Circular Collider programme, what are the lessons to be learnt from LIGO?
Maybe the best thing to learn is the importance of superb management of large physics budgets, which is essential to make such a project succeed. We’ve had excellent management, particularly with Barry Barish, who transformed LIGO and took over as director when we were just about ready to begin construction. (Robbie Vogt, who had helped us write the proposal to get the funding from the NSF and Congress, also got the two research teams at Caltech and MIT to work together in an effective manner.) Barry created the modern LIGO and he is an absolutely fantastic project director. Having him lead us through that transition into the modern LIGO was absolutely essential to our success, plus a very good experiment idea and a superb team, of course.
You were an adviser to the blockbuster film Interstellar. Do you have any more science and arts projects ahead?
I am 76. I was a conventional professor for almost 50 years, and I decided for my next 50 years that I want to do something different. So I have several different collaborations: one on a second film; collaborations on a multimedia concert about sources of gravitational waves with Hans Zimmer and Paul Franklin, who did the music and visual effects for Interstellar; and a collaboration with Chapman University art professor Lia Halloran on a book with her paintings and my poetry about the warped side of the universe. I am having great fun entering collaborations between scientists and artists and I think, at this point of my life, if I have a total failure with trying to write poetry, well, that’s alright: I’ve had enough success elsewhere.
As far back as 1939, the US educator Abraham Flexner penned a stirring paean to basic research in Harper’s Magazine under the title “The usefulness of useless knowledge.” Flexner, perhaps being intentionally provocative, pointed out that Marconi’s contribution to the radio and wireless had been practically negligible. He went on to argue that the 1865 work of James Clerk Maxwell on the theoretical underpinnings of electricity and magnetism, and the subsequent experimental work of Heinrich Hertz on the detection of electromagnetic waves, was done with no concern about the practical utility of the work. The knowledge they sought, in other words, was never targeted to a specific application. Without it, however, there could have been no radio, no television and no mobile phones.
The history of innovation is full of such examples. It is practically impossible to find a piece of technology that cannot be traced back to the work of scientists motivated purely by a desire to understand the world. But basic research goes further: there is something primordial about it. Every child is a natural scientist, imbued with curiosity, vivid imagination and a desire to learn. That curiosity is what sets us apart from any other species, and it is what has provided the wellspring of innovation since the harnessing of fire and the invention of the wheel. Children are always asking questions: why is the sky blue? What are we made of? It is by investigating questions like these that science has advanced, and it is the same curiosity that inspires children to grow up into future scientists or scientifically aware citizens.
Education and training are among CERN’s core missions. Over the years we have developed programmes that reach everyone from primary-school children to professional physicists, accelerator scientists and computer scientists. We also keep tabs on the whereabouts of young people passing through CERN, and it is enriching to follow their progress. Around 1000 people per year receive higher degrees from universities around the world for work carried out at CERN. Basic research therefore not only inspires young people to study science, it also provides a steady stream of qualified people for business and industry, where their high-tech, international experience allows them to make a positive impact.
Turning to the UN’s admirably ambitious Global Goals for Sustainable Development, which officially came into force on 1 January 2016 and will run for 15 years as part of the Agenda 2030 programme, their focus on science and technology is positive and encouraging. It testifies to a deeper understanding of the importance of science in driving progress that benefits all peoples and helps to overcome today’s most pressing development challenges. But Agenda 2030’s potential can only be fulfilled through sustained commitment and funding by governments. If we are to tackle issues ranging from eliminating poverty and hunger to providing clean and affordable energy, we need science and we need people who are scientifically aware.
Places like CERN are a vitally important ingredient in the innovation chain. We contribute to the kind of knowledge that not only enriches humanity, but also provides ideas that become the technologies of the future. Some of CERN’s technology has an immediate impact on society, such as the World Wide Web and the application of particle accelerators to cancer therapy and many other fields. We also train young people. All this is possible because governments support science, technology, engineering and mathematics (STEM) education and basic research, but we should do more. The scientific community, including CERN, has urged the architects of Agenda 2030 to consider setting a minimum percentage of GDP to be devoted by every nation to STEM education and basic research. This is particularly important in times of economic downturn, when private funding naturally concentrates on short-term payback and governments focus on domains that offer immediate economic return, at the expense of longer-term investment in fundamental science.
Useless knowledge, as Flexner called it, is at the basis of human development. Humankind’s continuing pursuit of it will make the UN’s development goals achievable.
The aim of this book is to show how software packages such as MATLAB can be extremely useful for studying cosmology problems by means of complex simulations. Thanks to the greatly improved accuracy of cosmological data and the increased computing power available, the calculation and graphic tools offered by this software can be profitably employed to study physics problems and compare different models.
A theory that successfully describes the universe and its evolution in terms of only six fundamental parameters, the standard ΛCDM model, has been developed. It accounts for the Big Bang (BB), cosmic microwave background (CMB) radiation and the evolution of matter to the present day. However, the model leaves some observations unexplained, such as the remarkable flatness and large-scale uniformity of the universe. The inflation hypothesis, which postulates the existence of a scalar field that drove an exponential expansion of the very early universe, can solve some of these open problems.
This book provides a basic exposition of BB cosmology and the inflationary model, using MATLAB tools for visualisation and for developing the reader’s understanding of how the observables depend on the model parameters. Different models are compared, including one that takes the Higgs field as the scalar inflationary field. In this way, readers gain experience with various MATLAB tools (including symbolic mathematics, numerical-solution methods and plots) and can apply them to other problems.
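To give a flavour of the kind of exercise involved, the sketch below integrates the Friedmann equation numerically and plots how the expansion history changes with the density parameters. It is written in Python rather than the book’s MATLAB, and the parameter values are purely illustrative.

```python
# Minimal sketch: scale factor a(t) from the Friedmann equation,
# H(a)^2/H0^2 = Or/a^4 + Om/a^3 + Ok/a^2 + OL, for illustrative parameters.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def expansion_history(omega_m, omega_lambda, omega_r=0.0):
    """Integrate da/dt = a*H(a) (H in units of H0); t is returned in units of 1/H0."""
    omega_k = 1.0 - omega_m - omega_lambda - omega_r

    def dadt(t, y):
        a = y[0]
        h2 = omega_r / a**4 + omega_m / a**3 + omega_k / a**2 + omega_lambda
        return [a * np.sqrt(max(h2, 0.0))]

    t_eval = np.linspace(0.0, 3.0, 300)
    sol = solve_ivp(dadt, (0.0, 3.0), [1e-4], t_eval=t_eval, rtol=1e-8)
    return sol.t, sol.y[0]

for om, ol, label in [(0.3, 0.7, "flat, with dark energy"),
                      (1.0, 0.0, "matter only")]:
    t, a = expansion_history(om, ol)
    plt.plot(t, a, label=label)

plt.xlabel("time [1/H0]")
plt.ylabel("scale factor a")
plt.legend()
plt.show()
```

Swapping in other density parameters, or adding further components, reproduces the kind of parameter studies that the book works through with MATLAB’s numerical and plotting tools.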
The international CREMA (Charge Radius Experiment with Muonic Atoms) collaboration has measured the radius of the deuteron more accurately than ever before, finding that it is significantly smaller than previously thought. The result, which was obtained using laser spectroscopy of muonic deuterium at the Paul Scherrer Institute (PSI) in Switzerland, is consistent with a 2010 measurement of the proton radius by the same group, which also showed a significantly smaller value than expected.
The 2010 result, which found a proton radius of 0.84087±0.00039 fm versus the CODATA value of 0.8751±0.0061 fm, formed the basis of what has been dubbed the proton-radius puzzle. The new measurement of the deuteron’s size gives rise to an analogous mystery. If the results hold firm, they could force physicists to adjust the Rydberg constant, which is currently known to the eleventh decimal place, and perhaps imply the existence of an as-yet-unknown force beyond the Standard Model.
Consisting of one proton and one neutron, the deuteron is the simplest compound nucleus. Its properties, such as the root-mean-square charge radius and polarisability, therefore serve as important benchmarks for understanding nuclear forces and structure. Using the most intense source of muons available, provided by the PSI proton accelerator, the CREMA team injected around 300 low-energy muons per second into an experimental chamber filled with gaseous deuterium molecules. Here, muons eject electrons from the molecules, which break up to form muonic deuterium. A complex pulsed laser system was then used to raise muonic-deuterium atoms from the metastable 2s state into the next excited state, 2p, after which the muons fall back to the ground state and emit an X-ray photon. Because the energy levels of the muonic atom strongly depend on the size of the nucleus, measuring the 2s–2p energy splitting in muonic deuterium by means of laser spectroscopy reveals the size of the deuteron with unprecedented precision.
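The underlying sensitivity can be illustrated with the textbook leading-order expression for the finite-size shift of an S level (a simplified sketch only; the CREMA analysis relies on a far more complete theory including QED and nuclear-structure corrections):

\[
\Delta E_{\text{fs}}(nS) \;=\; \frac{2\pi}{3}\, Z\alpha\, |\psi_{nS}(0)|^2 \,\langle r^2\rangle
\;=\; \frac{2}{3}\,\frac{(Z\alpha)^4}{n^3}\,\frac{(m_r c^2)^3}{(\hbar c)^2}\,\langle r^2\rangle .
\]

Only S states have a non-zero probability density at the origin, so the 2s level is shifted while the 2p level is essentially untouched, and the 2s–2p splitting therefore carries the information on the charge radius. Because the reduced mass m_r in muonic deuterium is roughly 200 times larger than in ordinary deuterium, the shift is enhanced by a factor of about 200³, i.e. nearly 10⁷, which is why the muonic measurement is so much more sensitive.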
Based on measurements of three 2s–2p transitions, the team found a value of 2.12562±0.00078 fm for the deuteron radius. This is 2.7 times more accurate but 7.5σ smaller than the CODATA-2010 value of 2.1424±0.0021 fm. The value is also 3.5σ smaller than the radius obtained by electronic deuterium spectroscopy. When combined with the electronic isotope shift, says the team, this yields a proton radius similar to the one measured from muonic hydrogen and thereby amplifies the proton-radius puzzle.
“You could say that the mystery has now doubly confirmed itself,” says lead author Randolf Pohl of the University of Mainz, Germany. “After our first study came out in 2010, I was afraid some veteran physicist would get in touch with us and point out our great blunder. But the years have passed, and so far nothing of the kind has happened.”
As to the possible cause of the discrepancy, physicists remain cautious. “Naturally, it can’t be that the deuteron – any more than the proton – has two different sizes,” says CREMA member Aldo Antognini of the PSI. The most likely explanation would be experimental imprecision, he says. For example, there could be an error with the hydrogen spectroscopy, which was used in some of the earlier measurements of both the proton’s and the deuteron’s size. “If it should actually turn out that the hydrogen spectroscopy is giving a false – that is, minimally shifted – value, that would mean that the Rydberg constant must be minimally changed,” he says.
Currently, research groups in Munich, Paris, Toronto and Amsterdam are working to obtain more accurate measurements via hydrogen spectroscopy, and their results are expected in the coming years. The CREMA collaboration has also recently studied muonic helium-3 and helium-4 ions, and expects at least a five-fold reduction in uncertainties in their charge radii compared with the electron-scattering results. Next, the team plans to target the magnetic properties of the proton by measuring the so-called Zemach radius, which is the limiting quantity when comparing experiment and theory of the 1s hyperfine splitting in regular hydrogen.
“If all of the relevant experiments are correct, there must be some physics beyond the Standard Model going on,” says Gerald A Miller of the University of Washington, who was not involved in the PSI study. “In particular, the muon–proton and electron–proton interactions differ in ways that cannot be accounted for by the electron–muon mass difference, and that statement is strengthened by the newly published result.”
Further experiments should show whether the proton-radius measurements based on hydrogen atoms are less accurate than originally stated, he adds. One is CREMA’s measurement of the helium-4 radius using muonic atoms, while another is the MUon proton Scattering Experiment (MUSE) at PSI, which compares muon– to electron–proton scattering for both charges. “Given enough experiments, the proton-radius puzzle will be solved in a few years,” says Miller. “We can all speculate about the final result, but it’s more scientific to wait for results.”