In December, the Laboratory for Underground Nuclear Astrophysics (LUNA) experiment reported the first direct observation of sodium production in red giant stars, one of the nuclear reactions fundamental to the formation of the elements that make up the universe.
LUNA is a compact linear accelerator for light ions (maximum energy 400 keV). A unique facility, it is installed in a deep-underground laboratory and shielded from cosmic rays. The experiment aims to study the nuclear reactions that take place inside stars, where elements that make up matter are formed and then driven out by gigantic explosions and scattered as cosmic dust.
For the first time, LUNA has observed three low-energy resonances in the neon-sodium cycle, the 22Ne(p,γ)23Na reaction, responsible for sodium production in red giants and energy generation. LUNA recreates the energy ranges of nuclear reactions and, with its accelerator, goes back in time to one hundred million years after the Big Bang, when the first stars formed and the processes that gave rise to the huge variety of elements in the universe started.
This result is an important piece in the puzzle of the origin of the elements in the universe, which LUNA has been studying for 25 years. Stars assemble atoms through a complex system of nuclear reactions. Only a very small fraction of these reactions has been studied at the energies found inside stars, and a large part of those few cases has been observed using LUNA.
A high-purity germanium detector with relative efficiency up to 130% was used for this particular experiment, together with a windowless gas target filled with enriched gas. The rock surrounding the underground facility at the Gran Sasso National Laboratory and additional passive shielding protected the experiment from cosmic rays and ambient radiation, making the direct observation of such a rare process possible.
Pluto was considered the ninth planet of the solar system until it was reclassified as a “dwarf planet” by the International Astronomical Union (IAU) in 2006, judged to be too small among many other trans-Neptunian objects to count as a real planet. Almost 10 years later, two astronomers have now found indications of the presence of a very distant heavy planet orbiting the Sun. While it has yet to be detected directly, it is already causing a great deal of excitement in the scientific community and beyond.
Pluto was discovered in 1930 by a young American astronomer, Clyde Tombaugh, who tediously looked at innumerable photographic plates to detect an elusive planet moving relative to background stars. With the progressive discovery – since the 1990s – of hundreds of objects orbiting beyond Neptune, Pluto is no longer alone in the outer solar system. It even lost its status as the heaviest trans-Neptunian object with the discovery of Eris in 2003. This forced the IAU to rethink the definition of a planet and led to the exclusion of Pluto from the strict circle of eight planets.
Eris is not the only massive trans-Neptunian object found by Mike Brown, an astronomer of the California Institute of Technology (Caltech), US, and colleagues. There are also Quaoar (2002), Sedna (2003), Haumea (2004) and Makemake (2005), all only slightly smaller than Pluto and Eris. Despite these discoveries, almost nobody during recent years would have thought that there could still be a much bigger real planet in the outskirts of our solar system. But this is what Mike Brown and one of his colleagues, the theorist Konstantin Batygin, now propose.
The two astronomers deduced the existence of a ninth planet through mathematical modelling and computer simulations, but have not yet observed the object directly. The evidence comes from an unexpected clustering of perihelion positions and orbital planes of a group of objects just outside of the orbit of Neptune, in the so-called Kuiper belt. All six objects with the most elongated orbits – with semi-major axes greater than 250 AU – share similar perihelion positions and pole orientations. The combined statistical significance of this clustering is 3.8σ, assuming that Sedna and the five other peculiar planetoids have the same observational bias as other known Kuiper-belt objects.
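For readers who prefer probabilities to σ values, a significance like 3.8σ converts to a chance probability via the Gaussian tail. A quick sketch (treating the significance as a one-sided Gaussian tail is an assumption about how it was defined):

```python
import math

def one_sided_p_value(sigma):
    """Probability of a Gaussian fluctuation at least `sigma` standard
    deviations above the mean (one-sided upper tail)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# A 3.8-sigma clustering corresponds to a chance probability of roughly
# 7e-5, i.e. about 0.007% -- rare, but short of the 5-sigma discovery
# convention used in particle physics.
p = one_sided_p_value(3.8)
```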
Batygin and Brown then show that a planet with more than about 10 times the mass of the Earth in a distant eccentric orbit anti-aligned with the six objects would maintain the peculiar configuration of their orbits. This possible ninth planet would orbit the Sun about 20 times further out than Neptune, therefore completing one full orbit only approximately once every 10,000 years. Batygin’s simulations of the effect of this new planet further predict the existence of a population of small planetoids in orbits perpendicular to the plane of the main planets. When Brown realised that such peculiar objects exist and have indeed already been identified, he became convinced about the existence of Planet Nine.
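The orbital period quoted above follows directly from Kepler's third law. A sketch, assuming a roughly circular orbit at about 20 times Neptune's semi-major axis (the proposed orbit is in fact highly eccentric, so this only sets the order of magnitude):

```python
def orbital_period_years(a_au):
    """Kepler's third law for a body orbiting the Sun:
    P [years] = a**1.5, with the semi-major axis a in astronomical units."""
    return a_au ** 1.5

A_NEPTUNE_AU = 30.1                    # Neptune's semi-major axis
a_nine = 20 * A_NEPTUNE_AU             # ~600 AU, per the estimate above
period = orbital_period_years(a_nine)  # ~15,000 years -- of order 10^4 yr
```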
Observers now know along which orbit they should look for Planet Nine. If it happens to be found, this would be a major discovery: the third planet to be discovered since ancient times after Uranus and Neptune and, as with the latter, it would have been first predicted to exist via calculations.
In recent years, evidence for the existence of dark matter from astrophysical observations has become indisputable. Although the nature of dark matter remains unknown, many theoretically motivated candidates have been proposed. Among them, the most popular ones are Weakly Interacting Massive Particles (WIMPs) with predicted masses in the range from a few GeV/c2 to TeV/c2 and with interaction strengths roughly on the weak scale.
WIMPs are being searched for using three complementary techniques: indirectly, by detecting the secondary products of WIMP annihilation or decay in celestial bodies; by producing WIMPs at colliders, foremost the LHC; and by direct detection, by measuring the energy of recoiling nuclei produced by collisions with WIMPs in low-background detectors.
On 11 November 2015, the most sensitive detector for the direct detection of WIMPs, XENON1T, was inaugurated at the Italian Laboratori Nazionali del Gran Sasso (LNGS) – the largest underground laboratory in the world. XENON1T, led by Elena Aprile of Columbia University, was built and is operated by a collaboration of 21 research groups from France, Germany, Italy, Israel, the Netherlands, Portugal, Sweden, Switzerland, the United Arab Emirates and the US. In total, about 130 physicists are involved.
XENON1T is the current culmination of the XENON programme of dark matter direct-detection experiments. Starting with the 25 kg XENON10 detector about 10 years ago, the second phase of the experiment, XENON100 (CERN Courier October 2013 p13) with 161 kg, has been tremendously successful: in the summer of 2012, the XENON collaboration published results from a search for spin-independent WIMP–nucleon interactions that provided the most stringent constraints on WIMP dark matter, until superseded by the LUX experiment (CERN Courier December 2013 p8) with a larger target.
XENON100 has since then also provided a series of other important results, such as constraints on the spin-dependent WIMP–nucleon cross-section, constraints on solar axions and galactic axion-like particles and, more recently, searches for annual rate modulations, which exclude WIMP–electron scattering that could have provided a dark-matter explanation of the signal observed by DAMA/LIBRA (CERN Courier November 2015 p10).
Low background is key
The new XENON1T detector has an estimated sensitivity that is a factor of 100 better than XENON100. This will be reached after about two years of data taking. With only one week of data taking, XENON1T will be able to reach the current LUX limit, opening up a new phase in the search for dark matter in early 2016.
The XENON detectors are dual-phase time-projection chambers (TPCs) filled with liquid xenon (LXe) as the target material. Interactions of particles in the liquefied xenon give rise to prompt scintillation light and ionisation. The ionisation electrons are drifted in a strong electric field and extracted into the gas above the liquid, where a secondary scintillation signal is produced. Both scintillation signals are read out by arrays of photomultiplier tubes (PMTs) placed above and below the target volume. The position of the interaction vertex can be reconstructed in 3D by using the hit pattern on the upper PMT array and the time delay between the prompt and secondary scintillation signals. The position reconstruction facilitates self-shielding by only selecting events that interact with the inner “fiducial” volume of the detector. Because of their small cross-section, WIMPs will interact at most once in the detector, so the background (e.g. from neutrons) can be reduced further by selecting single-scatter interactions. Beta and gamma backgrounds are reduced by selecting events with a ratio of secondary-to-prompt signal that is typical for nuclear recoils.
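The depth coordinate in this scheme follows from a simple product of the prompt-to-secondary time delay and the electron drift velocity. A minimal sketch, using a representative (not official XENON1T) drift velocity for liquid xenon:

```python
# Assumed order-of-magnitude drift velocity in liquid xenon at a field
# of ~1 kV/cm; the real value comes from the detector's calibration.
DRIFT_VELOCITY_MM_PER_US = 2.0

def interaction_depth_mm(s2_delay_us):
    """Depth of the interaction vertex below the liquid surface, from the
    delay (in microseconds) between the prompt (S1) and secondary (S2)
    scintillation signals."""
    return s2_delay_us * DRIFT_VELOCITY_MM_PER_US

# With these numbers, an event with a 250 us delay sits ~500 mm down,
# near the bottom of the 1 m-tall XENON1T TPC; the (x, y) coordinates
# come separately from the hit pattern on the top PMT array.
depth = interaction_depth_mm(250.0)
```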
The XENON1T detector is filled with about 3.5 tonnes of liquid xenon in total. Its TPC – 1 m high and 1 m in diameter in a cylindrical shape, laterally defined by highly reflective Teflon – is the largest liquid-xenon TPC ever built. Specially designed copper field-shaping electrodes ensure the uniformity of the drift field for the desired field strength of 1 kV/cm. The TPC’s active volume contains 2 tonnes of LXe viewed by two arrays of 3 inch PMTs – 121 at the bottom immersed in LXe and 127 on the top in the gaseous phase. The xenon gas is liquefied and kept at a temperature of about –95 °C by a system of pulse-tube refrigerators. The xenon gas is stored and can be recovered in liquid phase in a custom-designed stainless-steel sphere that can hold up to 7.6 tonnes of xenon in high-purity conditions. Figure 3 shows the XENON1T detector and service building situated in Hall B at LNGS. Figure 1 shows XENON collaborators active in assembling the TPC in a clean room above ground at LNGS.
The expected WIMP–nucleon interaction rate is less than 10 events in 1 tonne of xenon per year. Background rejection is therefore the key to success for direct-detection experiments. Externally induced backgrounds can be minimised by exploiting the self-shielding capabilities. In addition, the detector is surrounded by a cylindrical water vessel 10 m high and 9.6 m in diameter. It is equipped with PMTs to tag muons that could induce neutrons, with an efficiency of 99.9%.
For a detector the size of XENON1T, radioactive impurities in the detector materials and the xenon itself become the biggest challenge for background reduction. Extensive radiation-screening campaigns, using some of the world’s most sensitive germanium detectors, have been conducted, and high-purity PMTs have been specially developed by Hamamatsu in co-operation with the collaboration. Contamination of the xenon by radioactive radon (mainly 222Rn) and krypton (85Kr), which dominate the target-intrinsic background, led to the development of cryogenic-distillation techniques to suppress the abundance of these isotopes to unprecedented low levels.
The best scenario
After about two years of data taking, XENON1T will be able to probe spin-independent WIMP–nucleon cross-sections of 1.6 × 10–47 cm2 (at a WIMP mass of 50 GeV/c2), see figure 2. In popular scenarios involving supersymmetry, XENON1T will either discover WIMPs or will exclude most of the theoretically relevant parameter space. Following the inauguration, the first physics run is envisaged to start early this year.
Most of the infrastructure, for example the outer cryostat, the Cherenkov muon veto, the xenon cryogenics, the purification and storage systems and the data-acquisition system, has been dimensioned for a larger experiment, named XENONnT, which is designed to contain more than 7 tonnes of LXe. A new TPC, about 40% larger in diameter and height and equipped with about 400 PMTs, will replace the XENON1T TPC. The goal for XENONnT is to achieve another order of magnitude improvement in sensitivity within a few years of data taking. XENONnT is scheduled to start data taking in 2018.
As Run 2 at the LHC gains momentum, a combined analysis of data sets from Run 1 by the ATLAS and CMS collaborations has provided the sharpest picture yet on the Higgs boson properties (ATLAS 2015, CMS 2015). Three years after the announcement in July 2012 of the discovery of a new boson, the two collaborations are closing the books on measurements of Higgs properties by performing a combined Run 1 analysis, which includes data collected in 2011 and 2012 at centre-of-mass energies of 7 and 8 TeV, respectively. This analysis follows hot on the heels of the combined measurement of the Higgs boson mass, mH = 125.09±0.24 GeV, published in May by ATLAS and CMS (ATLAS and CMS 2015).
The new results are the culmination of one and a half years of joint work by the ATLAS and CMS collaborators involved in the activities of the LHC Higgs Combination Group. For this combined analysis, some of the original measurements dating back to 2013 were updated to account for the latest predictions from the Standard Model. A comprehensive review of all of the experimental systematic and theoretical uncertainties was also conducted to account properly for correlations. The analysis presented technical challenges, because the fits involve more than 4200 parameters that represent systematic uncertainties. The improvements that were made to overcome these challenges will now make their way into data-analysis tools, such as ROOT, that are widely used by the high-energy particle-physics community.
The results of the combination present a picture that is consistent with the individual results. The combined signal yield relative to the Standard Model expectation is measured to be 1.09±0.11, and the combination of the two experiments leads to an observation of the H → τ+τ– decay at the level of about 5.5σ – the first observation of the direct decay of the Higgs boson to fermions. Thanks to the combined power of the data sets from ATLAS and CMS, the analysis yields unprecedented measurements of the properties of the Higgs boson, with a precision that enables the search for physics beyond the Standard Model in possible deviations of the measurements from the model’s predictions. The figure shows clearly the increased precision obtained when combining the ATLAS and CMS analyses.
The combined analysis is performed for many benchmark models that the LHC Higgs Cross-Section Working Group proposed, so as to be able to explore the various different effects of physics models that go beyond the Standard Model. As Run 2 gains momentum, the two collaborations are looking forward to reaping the benefits of the increase in centre-of-mass energy to 13 TeV, which will make some of the most interesting processes, such as the production of Higgs bosons in association with top quarks, more accessible than ever. However, even with the first results from Run 2, this set of combined results from 7 and 8 TeV collisions in Run 1 will continue to provide the sharpest picture of the Higgs boson’s properties for some time to come.
After 15 years of measurement and another eight years of scrutiny and calculation, the H1 and ZEUS collaborations have published the most precise results to date about the innermost structure and behaviour of the proton. The two collaborations, which took data at DESY’s electron–proton collider, HERA, from 1992 to 2007, have combined nearly 3000 measurements of inclusive deep-inelastic cross-sections (H1, ZEUS 2015). With its completion, the paper secures the legacy of the HERA data.
Within the framework of perturbative QCD, the proton is described in terms of parton-density functions, which provide the probability of scattering from a parton, either a gluon or a quark. The H1 and ZEUS collaborations have also produced the first QCD analysis of the data, encompassed in the HERAPDF2.0 sets of parton-distribution functions (PDFs), which form a significant part of the paper. The combined data presented in the new publication will be the basis of all analyses of the structure of the proton for years to come.
As figure 1 depicts, in deep-inelastic scattering, a boson – γ, Z0 or W± – acts as a probe of the structure of the proton by interacting with its constituents, through neutral-current (γ, Z0) or charged-current (W±) reactions. Of course, this picture is simplified: the proton is a dynamic structure of quarks and gluons, but by measuring deep-inelastic scattering over a wide kinematic range, this internal structure can be mapped precisely. The variables used to do this are the squared four-momentum, Q2, of the exchanged boson, and Bjorken x, xBj, the fraction of the proton’s momentum carried by the struck quark.
A wealth of data
The data, taken over the 15-year lifetime of the HERA accelerator, correspond to a total luminosity of about 1 fb–1 of deep-inelastic electron–proton and positron–proton scattering. All of the data used were taken with an electron/positron beam energy of 27.5 GeV, with roughly equal amounts of data for electron–proton and positron–proton scattering being recorded. HERA initially operated with a proton-beam energy of 820 GeV, which was increased subsequently to 920 GeV; these data constitute the bulk of the combined measurements. Towards the end of HERA’s run, special data samples with a proton-beam energy of 575 GeV and 460 GeV were taken and are also included. The data were combined separately for the e+p and e–p runs and for the different centre-of-mass energies. Overall, 41 separate data sets were used in the combination, spanning 0.045 < Q2 < 50,000 GeV2 and 6 × 10–7 < xBj < 0.65, i.e. six orders of magnitude in each variable. The initial measurements consisted of 2937 published cross-sections in total, which were combined to produce 1307 final combined cross-section measurements. These results supersede the previous paper with combined measurements of deep-inelastic scattering cross-sections in which only data up to the year 2000 were combined (CERN Courier January/February 2008 p30).
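The kinematic reach quoted above is tied to the beam energies through the relation Q2 = s·xBj·y, where the inelasticity y runs between 0 and 1. A small sketch (particle masses neglected):

```python
import math

def cms_energy_gev(e_lepton, e_proton):
    """Centre-of-mass energy sqrt(s) ~ 2*sqrt(E_e * E_p) for head-on
    colliding beams, neglecting particle masses."""
    return 2 * math.sqrt(e_lepton * e_proton)

def q2_max_gev2(sqrt_s, x_bj):
    """Maximum accessible Q^2 at a given x_Bj, from Q^2 = s * x * y
    with the inelasticity y at its kinematic limit y = 1."""
    return sqrt_s ** 2 * x_bj

# HERA's main running: 27.5 GeV electrons/positrons on 920 GeV protons
sqrt_s = cms_energy_gev(27.5, 920.0)   # ~318 GeV
```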
The procedure for combining the data involved a careful treatment of the various uncertainties between all of the data sets. In particular, the correlations of the various sources were assessed, and those uncertainties deemed to be point-to-point correlated were accounted for as such in the averaging of the data based on a χ2 minimization method. The resulting χ2 is 1687 for 1620 degrees of freedom, demonstrating excellent compatibility of the multitude of data sets. Figure 2 illustrates the power of the data combination. It displays a selection of the data in bins of the photon virtuality, Q2, and for fixed values of xBj, showing separately individual data sets from several different analyses. A combined data point can be the combination of up to eight individual measurements. The improvement in precision is striking, as is seen more clearly in the close-up on some of the points. An indication of the precision of the combined data is that the total uncertainties are close to 1% for the bulk region of 3 < Q2 < 500 GeV2.
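For the simplest case of uncorrelated uncertainties, the χ2-minimising average reduces to inverse-variance weighting. A minimal sketch of that limiting case (the real HERA combination additionally treats the point-to-point correlated systematics, e.g. via nuisance parameters, which this toy ignores):

```python
def combine_measurements(values, sigmas):
    """Inverse-variance weighted average of independent measurements:
    the chi^2-minimising solution when uncertainties are uncorrelated.
    Returns (mean, uncertainty on the mean, chi^2 of the combination)."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    sigma = wsum ** -0.5
    chi2 = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    return mean, sigma, chi2

# Combining two hypothetical cross-section points with 2% and 3% errors:
mean, sigma, chi2 = combine_measurements([1.00, 1.04], [0.02, 0.03])
# The combined point is more precise than either input, and chi2 per
# degree of freedom near 1 signals that the inputs are compatible.
```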
As well as showing the precision of the data and power of the combination, the cross-section dependence for the different values of xBj demonstrates the dynamic structure of the proton in a striking way. For xBj = 0.08, the cross-section dependence is reasonably flat as a function of Q2. This is known as Bjorken scaling, and is expected from the simple parton model in which inelastic electron–proton scattering is viewed as a sum of elastic electron–parton scattering, where the partons are free point-like objects. At lower values of xBj, the cross-section rises increasingly steeply with increasing Q2 and decreasing xBj. This effect is known as scaling violation, and indicates that the density of gluons in the proton is increasing.
The increased density and rise of the cross-section can also be observed by considering the proton-structure function F2 (which is closely related to the cross-section) plotted versus xBj at fixed Q2, as in figure 3. The strong rise of F2 with decreasing xBj was one of the most important discoveries at HERA. Previous fixed-target experiments could not constrain this behaviour, because their data were at low values of Q2 and high values of xBj. The figure also shows how the rise towards low xBj is steeper with increasing Q2. At higher Q2, the exchanged boson effectively probes smaller distances, and so can see more of the inner structure of the proton and hence resolves more and more gluons.
Parton distributions
The proton structure of quarks and gluons is often parameterized in terms of the PDFs, which correspond to the probability of finding a gluon or a quark of a given flavour with momentum fraction x in the proton, given the scale μ of the hard interaction. The behaviour of the PDFs with scale is predicted by QCD, but the absolute values need to be determined from fits to data. Using the HERA data, the PDFs can be extracted, while at the same time the evolution as a function of the scale is tested. This analysis is performed at leading order, next-to-leading order (NLO) and next-to-next-to-leading order, yielding the HERAPDF2.0 family of PDFs.
Figure 3 compares the predictions of the PDF analysis at NLO with the measurements of the structure functions. In general, the QCD predictions describe the data well, although the description becomes poorer at low Q2, indicating inadequacies in the theory used at these low scales. Such precise knowledge of the PDFs is also of the highest importance for physics at the LHC at CERN, because the uncertainties stemming from the knowledge of the PDFs are amplified in proton–proton collisions compared with deep-inelastic scattering.
The QCD analysis can also be extended to include data from the production of charm quarks and jets at HERA. Charm production is measured again as a function of xBj and Q2, but with the additional requirement of a charm meson in the final state. Jet production is measured in the Breit frame, where jets with non-zero transverse momentum are expected from hard QCD processes only. By including the charm and jet data, the analysis becomes particularly sensitive to the strong-coupling constant, αs(MZ), whereas without jet data the coupling constant is strongly correlated with the normalization of the gluon density. The combined analysis of inclusive data, charm data and jet data at NLO results in an experimentally very precise measurement of the strong-coupling constant, αs(MZ) = 0.1183±0.0009 (exp.), with significantly larger uncertainties of +0.0039/–0.0033 related to the model and theory.
It is also interesting to look at data from HERA on neutral-current (NC) and charged-current (CC) scattering that is differential in Q2 but integrated over xBj, as shown in figure 4 both for e+p and e–p. At small Q2, the cross-sections for NC are much larger than for CC, whereas at large Q2, of the order of the vector-boson mass squared, they become similar in size. This is a direct visualization of electroweak unification: the CC process is mediated by the weak force, whereas photon exchange dominates the NC cross-section. Looking in more detail, the NC cross-sections for e+p and e–p are almost identical at small Q2 but start to diverge as Q2 grows. This is due to γ–Z0 interference, which has the opposite effect on the e+p and e–p cross-sections. The CC cross-sections also differ between e+p and e–p scattering, with two effects contributing: the helicity structure of the W± exchange and the fact that CC e–p scattering probes the u-valence quarks, whereas d-valence quarks are accessed in CC e+p.
In summary, the HERA collider experiments H1 and ZEUS have combined their precision data on deep-inelastic scattering, reaching a precision of almost 1% in the double-differential cross-section measurements. It is the largest coherent data set on proton structure, spanning six orders of magnitude in the kinematic variables xBj and Q2. A QCD analysis of the HERA data alone results in a set of parton-density functions, HERAPDF2.0, without the need for data from other experiments. Also, using HERA jet and charm data, the strong-coupling constant is measured together with proton PDFs. QCD and electroweak effects are probed at high precision in the same data set, providing beautiful demonstrations of the validity of the Standard Model.
The Japanese/German BASE collaboration at CERN’s Antiproton Decelerator (AD) has compared the charge-to-mass ratios of the antiproton and proton with a fractional precision of 69 parts in a trillion (ppt). This high-precision measurement was achieved by comparing the cyclotron frequencies of antiprotons and negatively charged hydrogen ions in a Penning trap. The result is consistent with charge–parity–time-reversal (CPT) invariance, which is one of the cornerstones of the Standard Model of particle physics, and constitutes the most precise test comparing baryons and antibaryons performed to date.
In their experiment, the BASE collaboration has profited from techniques pioneered in the 1990s by the TRAP collaboration at the Low Energy Antiproton Ring at CERN. The advanced cryogenic Penning-trap system used in BASE consists of four traps, two of which were used in this measurement – a measurement trap and a reservoir trap (figure 1). When the experiment receives a pulse of 5.3 MeV antiprotons from the AD, they strike a degrader structure designed to slow them down; the impact also releases hydrogen, and negatively charged hydrogen ions (H–) can form in the process, producing a composite cloud with the antiprotons that is shuttled to the reservoir trap. BASE has developed techniques to extract single antiprotons and negative hydrogen ions from this cloud whenever needed. Moreover, the reservoir has a lifetime of more than a year, making the BASE experiment almost independent from AD cycles.
Using this extraction technique, and taking the timing from the AD cycle, BASE prepares a single antiproton in the measurement trap, while an H– ion is held in the downstream park electrode, as shown in figure 1. The cyclotron frequency of the antiproton is then measured in exactly 120 s, which corresponds to one AD cycle. The particles are subsequently exchanged by performing appropriate potential ramps, and the cyclotron frequency of the H– ion is measured. Thus, a single comparison of the charge-to-mass ratios takes only 240 s. This is much faster than in previous experiments, enabling BASE to perform about 6500 ratio comparisons in 35 days of measurement time (figure 2). The result of the ratio comparison is (q/m)p-/(q/m)p – 1 = 1(69) × 10–12, confirming CPT invariance at the ppt level.
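The underlying observable is the cyclotron frequency f = qB/(2πm). A sketch of why the H– ion is such a convenient proxy for the proton in the same trap field (the field value is an assumption for illustration, and electron binding energies are neglected):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
M_P = 1.67262192369e-27      # proton mass, kg
M_E = 9.1093837015e-31       # electron mass, kg

def cyclotron_frequency_hz(q, m, b_tesla):
    """Cyclotron frequency f = q*B / (2*pi*m) of a particle with
    charge q and mass m in a magnetic field B."""
    return q * b_tesla / (2 * math.pi * m)

B = 1.9  # T -- an assumed field strength, for illustration only
# If CPT holds, the antiproton has exactly the proton's mass; the H-
# ion is heavier by two electron masses (binding neglected here), so
# the two cyclotron frequencies differ by a precisely known factor:
f_pbar = cyclotron_frequency_hz(E_CHARGE, M_P, B)
f_hminus = cyclotron_frequency_hz(E_CHARGE, M_P + 2 * M_E, B)
ratio = f_pbar / f_hminus  # ~1.0011, the factor BASE corrects for
```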
The high sampling rate has also enabled the first high-resolution study of diurnal variations in a baryon/antibaryon comparison, which could be introduced by Lorentz-violating cosmic-background fields. The measurement sets constraints on such variations at the level of less than 720 ppt. In addition, by assuming that CPT invariance holds, the measurement can be interpreted as a test of the weak equivalence principle using baryonic antimatter. If matter respects weak equivalence while antimatter experiences an anomalous coupling to the gravitational field, this gravitational anomaly would contribute to a possible difference in the measured cyclotron frequencies. Thus, by following these assumptions, the result from BASE can be used to set a limit on the gravitational anomaly parameter, αg: |αg – 1| < 8.7 × 10–7.
The main goal for the BASE experiment, which was approved in June 2013, is to measure the magnetic moment of the antiproton with a precision of parts per billion. Using the double Penning trap system, the collaboration recently performed the most precise measurement of the magnetic moment of the proton.
The STAR collaboration at the Brookhaven National Laboratory (BNL) has published new evidence indicative of a “chiral magnetic wave” rippling through the quark–gluon plasma created in high-energy gold–gold collisions at the Relativistic Heavy Ion Collider (RHIC).
Heavy-ion collisions at RHIC and the LHC involve many spectators – nucleons that are not involved in any direct collision. The charged spectators – protons – have an important influence because they can produce a magnetic field of some 1014 T. In principle, this can lead to a collective excitation in the hot dense matter produced, the chiral magnetic wave. It results from the separation both of electric charge and of chiral charge, that is, right or left “handedness”, but only in a chirally symmetric phase. The phenomenon is predicted to manifest itself as an electric quadrupole moment of the collision system, where the “poles” and “equator” of the system acquire, respectively, additional positive and negative particles. This in turn influences differently the elliptic flow of positive and negative particles, decreasing the former and increasing the latter.
To look for this effect, STAR measured the elliptic flow, v2, of π+ and π– produced in gold–gold collisions at mid-rapidity, as a function of the event-by-event charge asymmetry, ACH, over a range of energies. The team found that v2 increased linearly with ACH for π–, but decreased for π+. At the highest energy, √sNN = 200 GeV, the slope of the difference in v2 between the π+ and π– as a function of ACH depends on the centrality of the collision in a manner consistent with calculations that incorporate the chiral magnetic wave. The team also found a similar result for energies down to √sNN = 27 GeV, with no obvious dependence on beam energy. The researchers note that none of the conventional models they have considered appear to explain the observations.
At the Flavor Physics and CP violation (FPCP) conference in Nagoya, the LHCb collaboration presented a measurement of the rate of B0 → D*+τ–ντ relative to the related decay B0 → D*+μ–νμ. The first measurement of any B → τX decay at a hadron collider, it also indicates a tantalizing anomaly.
In the Standard Model, the ratio of these two branching fractions differs from unity only as a result of effects related to the mass of the much heavier τ lepton. The ratio R(D*) = BR(B0 → D*+τ–ντ)/BR(B0 → D*+μ–νμ) is therefore precisely calculable in the Standard Model as equal to 0.252±0.003.
Lepton universality dictates that the electroweak coupling strength of the electron, muon and tau are identical, with the three flavours distinguished only by their respective masses. So the observation of decays with differing rates to each lepton flavour, after accounting for mass effects, would be a clear sign of physics beyond the Standard Model. Owing to the large τ mass, the semitauonic B0 → D*+τ–ντ decay rate is particularly sensitive to the charged Higgs bosons predicted by many extensions of the Standard Model. Previous measurements have consistently been above predictions, making new results hotly anticipated.
LHCb has analysed 3 fb–1 of data from Run 1 of the LHC to measure R(D*) using the τ → μνμντ decay, which allows both the semitauonic and semimuonic modes to be reconstructed in the same final state. The two decays are distinguished via a fit to the decay kinematics, reconstructed using the visible decay products and an approximation for the rest frame of the B (see figure). In addition to the B0 → D*+τ–ντ and B0 → D*+μ–νμ decays, the D*+μ–X final state also receives large contributions from several background processes. The modelling of these backgrounds in LHCb is constrained using control samples in data, limiting the uncertainties due to theoretical modelling. The result of 0.336±0.027±0.033 is in close agreement with a result from BaBar in 2012, and lies 2.1σ away from the Standard Model prediction.
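The quoted tension with the Standard Model can be roughly reproduced by combining the quoted uncertainties in quadrature. A sketch (this naive symmetric-error treatment gives close to 2σ; the published 2.1σ reflects the full error treatment):

```python
import math

def tension_sigma(measured, err_stat, err_syst, predicted, err_pred):
    """Deviation, in standard deviations, between a measurement and a
    prediction, adding the (symmetrised) uncertainties in quadrature."""
    total_err = math.sqrt(err_stat ** 2 + err_syst ** 2 + err_pred ** 2)
    return abs(measured - predicted) / total_err

# LHCb's R(D*) = 0.336 +- 0.027 +- 0.033 vs the Standard Model value
# 0.252 +- 0.003 quoted above:
t = tension_sigma(0.336, 0.027, 0.033, 0.252, 0.003)  # ~2 sigma
```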
Between the results from LHCb, BaBar and the Belle collaboration – which also presented updated results at the conference – a tantalizing picture is emerging in this channel. LHCb already has plans for complementary measurements in the decays B– → D0τ–ντ and Λ0b → Λ+c τ–ντ with the LHC Run 1 data set, and data from Run 2 is expected to allow for exciting improvements.
Studies of the production of top quarks in the forward region at the LHC are potentially of great interest in terms of new physics. Not only does the process have an enhanced sensitivity to physics beyond the Standard Model (owing to sizable contributions from quark–antiquark and gluon–quark scattering), but measurements of the forward production of top-quark pairs (tt) can be used to constrain the gluon parton distribution function (PDF) at large momentum fraction. Reducing the uncertainty on this PDF will increase the precision of many Standard Model predictions, especially those that serve as backgrounds to searches for new high-mass particles.
Top quarks decay almost exclusively to a W boson and a b-quark jet. The LHCb collaboration has already made high-precision measurements of W-boson production, and recently demonstrated the ability to identify, or tag, jets originating from b and c quarks (LHCb 2015a). Now, the collaboration has combined these two abilities in a study of W-boson production in association with b and c jets (LHCb 2015b), using a subset of these data samples to observe top-quark production for the first time in the forward region (LHCb 2015c). The data show a large excess of events compared with the Standard Model’s W+b-jet prediction in the absence of top-quark production (see figure).
LHCb measured the top-quark production cross-sections in a reduced fiducial region chosen to enhance the relative top-quark content of the W+b-jet final state. Within this region, the inclusive top-quark production cross-sections, which include contributions from both tt and single-top production, are σ(top) [7 TeV] = 239±53(stat.)±38(syst.) fb and σ(top) [8 TeV] = 289±43(stat.)±46(syst.) fb. These values agree with the Standard Model predictions of 180 +51/−41 fb at 7 TeV and 312 +83/−68 fb at 8 TeV, obtained at next-to-leading order using MCFM, the Monte Carlo for FeMtobarn processes.
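The level of agreement can be illustrated with a rough compatibility check: add the quoted uncertainties in quadrature, taking the side of the asymmetric theory error that faces the measurement. This is a simplification for illustration only (the published comparison treats the asymmetric errors more carefully), but it shows the pulls are well below one standard deviation:

```python
import math

def naive_pull(meas, stat, syst, pred, pred_err):
    """Pull of a measurement from a prediction, combining all quoted
    uncertainties in quadrature (a simplification for illustration)."""
    total = math.sqrt(stat**2 + syst**2 + pred_err**2)
    return (meas - pred) / total

# 7 TeV: measured 239±53(stat)±38(syst) fb vs predicted 180 (+51/-41) fb;
# the measurement lies above the prediction, so use the upper theory error.
p7 = naive_pull(239, 53, 38, 180, 51)

# 8 TeV: measured 289±43(stat)±46(syst) fb vs predicted 312 (+83/-68) fb;
# the measurement lies below, so use the lower theory error.
p8 = naive_pull(289, 43, 46, 312, 68)

print(f"7 TeV pull: {p7:+.1f} sigma")  # well under 1 sigma
print(f"8 TeV pull: {p8:+.1f} sigma")
```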
In the LHC’s Run 2, the higher beam energy should lead to a greatly increased cross-section and acceptance for top-quark production. This will allow LHCb to measure precisely both tt and single-top production, and so provide important constraints on the gluon PDF as well as potential signs for physics beyond the Standard Model.
Over the past decade and more, cosmology on one side and particle physics on the other have approached what looks like a critical turning point. The theoretical models that for many years have been the backbone of research in both fields – the Standard Model for particle physics and the Lambda cold dark matter (ΛCDM) model for cosmology – are proving insufficient to describe more recent observations, including those of dark matter and dark energy. Moreover, the most important “experiment” that ever happened, the Big Bang, remains unexplained. Physicists working at both extremes of the scale – the infinitesimally small and the infinitely large – face the same problem: they know that there is much left to find, but their reach falls short. So, while researchers in the two fields maintain their specific interests and continue to build on their respective areas of expertise, they are also looking increasingly at each other’s findings to piece together the common mosaic.
Studies on the nature of dark matter are the most natural common ground between cosmology and particle physics. Run 2 of the LHC, which has just begun, is expected to shed some light on this area. Indeed, while the main outcome of Run 1 was undoubtedly the widely anticipated discovery of a Higgs boson, Run 2 is opening the door to uncharted territory. In practical and experimental terms, exploring the properties and behaviour of nature at high energy consists largely of interpreting possible signals that include “missing energy”. In the Standard Model, this energy discrepancy is associated with neutrinos, but beyond the Standard Model the missing energy could also be the signature of many undiscovered particles, including the weakly interacting massive particles (WIMPs) that are among the leading candidates for dark matter. If WIMPs exist, the LHC’s collisions at 13 TeV may reveal them, which would be another huge breakthrough. Because supersymmetry has not yet been ruled out, the high-energy collisions might also eventually unveil the supersymmetric partners of the known particles, at least the lighter ones. Missing energy could also arise from the escape of a graviton into extra dimensions, or from a variety of other processes. Thanks to the LHC’s Run 1 and other recent studies, the Standard Model is so well understood that a future observation of an unknown source of missing energy could be confidently linked to new physics.
Besides the search for dark matter, another area where cosmology and particle physics meet is neutrino physics. The most recent result that collider experiments have published for the number of standard (light) neutrino types is Nν = 2.984±0.008 (ALEPH et al. 2006). While the search for a fourth, right-handed neutrino continues with ground-based experiments, satellite experiments have shown that they can also have their say. Indeed, recent results from ESA’s Planck mission yield Neff = 3.04±0.18 for the effective number of relativistic degrees of freedom, and the sum of neutrino masses is constrained to Σmν < 0.17 eV. These values, derived from Planck data on CMB temperature and polarization anisotropies combined with data from baryonic-acoustic-oscillation experiments, are consistent with standard cosmological and particle-physics predictions in the neutrino sector (Planck Collaboration 2015a). Although these values do not completely rule out a sterile neutrino, especially one thermalized at a different background temperature, its existence is disfavoured by the Planck data (figure 1).
Working out absolute neutrino masses is no easy task. Ground-based experiments have observed the oscillation of neutrinos directly, which proves that these elusive particles have a nonzero mass. However, no measurement of the absolute masses has been performed yet, and the strongest upper limit on their sum (about one order of magnitude more stringent than direct-detection measurements) comes from cosmology. Because neutrinos are the most abundant massive particles in the universe, their absolute mass shapes the formation of large-scale structure just as it shapes many physics processes observed at small scales. The present Standard Model might suggest (perhaps naively) that the mass distribution among the neutrinos could resemble the mass distribution among the other particles and their families, but only experiments such as KATRIN – the Karlsruhe Tritium Neutrino experiment – are expected to shed light on this topic.
In recent years, cosmologists and particle physicists have shown a common interest in testing Lorentz and CPT invariance. The topic seems to be particularly relevant for theorists working on string theories, which sometimes involve mechanisms that lead to a spontaneous breaking of these symmetries. To find possible clues, satellite experiments are probing the cosmic microwave background (CMB) to investigate the universe’s birefringence, which would be a clear signature of Lorentz-invariance violation and, therefore, of CPT violation. So far, the CMB experiments WMAP, QUaD and BICEP1 have found values of α – the rotation angle of the photon-polarization plane – consistent with zero. Results from Planck based on the full set of observations are expected later this year.
Since its discovery in 2012, the Higgs boson found at the LHC has been in the spotlight for physicists studying both extremes of the scale. Indeed, in addition to its confirmed role in the mass mechanism, recent papers have discussed its possible role in the inflation of the universe. Could a single particle be the Holy Grail for cosmologists and particle physicists alike? It is a fascinating question, and many studies have been published about the particle’s possible role in shaping the early history of the universe, but the theoretical situation is far from clear. On one side, the Higgs boson and the inflaton share some basic features; on the other, the Standard Model interactions do not seem sufficient to generate inflation unless there is an anomalously strong coupling between the Higgs boson and gravity. Such a strong coupling is a highly debated point among theoreticians. Here too, CMB data could help to rule out or disentangle models. Recent full-mission data from Planck clearly disfavour natural inflation compared with models that predict a smaller tensor-to-scalar ratio, such as the Higgs inflationary model (Planck Collaboration 2015b). However, the question remains open, subject to new information from the LHC’s future runs and from new cosmological missions.
In the meantime, astroparticle physics is positioning itself as the area where both cosmology and particle physics could find answers to the open questions. An event at CERN in April provided a showcase for experiments on cosmic rays and dark matter, in particular the latest results from the Alpha Magnetic Spectrometer (AMS) collaboration on the antiproton-to-proton ratio in cosmic rays and on the proton and helium fluxes. Following earlier measurements by PAMELA – the Payload for Antimatter Matter Exploration and Light nuclei Astrophysics – which took data in 2006–2011, AMS now has results based on more than 6 × 1010 cosmic-ray events (electrons, positrons, protons and antiprotons, as well as nuclei of helium, lithium, boron, carbon, oxygen…) collected during the first four years of AMS-02 on board the International Space Station. With events at energies up to many tera-electron-volts, and with unprecedented accuracy, the AMS data provide detailed information on the nature of cosmic rays. The antiproton-to-proton ratio measured by AMS over the energy range 0–500 GeV shows a clear discrepancy with existing models (figure 2). Anomalies are also visible in the behaviour of the fluxes of electrons, positrons, protons, helium and other nuclei. However, although a large part of the scientific community tends to interpret these observations as a new signature of dark matter, the origin of such unexpected behaviour cannot be easily identified, and discussions are still ongoing within the community.
It may seem that the universe is playing hide-and-seek with cosmologists and particle physicists alike as they probe both ends of the distance scale. However, the two research communities have a smart move up their sleeve to unveil its secrets: collaboration. Bringing together the two ends of the scale probed by the LHC and by Planck will soon bear fruit. Watch this space!