
Topics

ATLAS reveals more strangeness in the proton

The excellent theoretical understanding of the production of electroweak W and Z gauge bosons in proton–proton collisions at the LHC makes these “standard-candle” processes ideal for studying the detailed performance of the ATLAS detector, and thus for improving the precision of measurements. Specifically, differences in the couplings of the W+, W–, Z and γ* bosons to quarks and antiquarks appear as differences in rapidity distributions that reveal additional information about the structure of the proton.

Protons are often considered to be composed of two up quarks and one down quark, but when probed at small distances they reveal additional content. This includes a “sea” of up and down quarks, strange quarks from the heavier second generation of particles, and the gluons that bind the quarks together into the proton.


The ATLAS collaboration has now shed light on the least-known component of the proton – its content of strange quarks – based on sub-per-cent measurements of the kinematic dependencies of the W and Z boson cross-sections using LHC data recorded in 2011 at an energy of 7 TeV. Previous determinations of the strange-quark content of the proton were based on neutrino scattering, in which charm quarks produced in charged-current interactions were detected via the muons from their fragmentation and decay. Contrary to theoretical expectations, these data revealed a suppression of strange quarks relative to the up and down quarks.

Gaining further insight into the proton structure using inclusive W and Z boson production required significant experimental improvements, with painstaking calibration efforts determining detection efficiencies in real and simulated data at the per-mille level in both the electron and muon channels. Indeed, thanks to these studies, the ATLAS data provided a new test of electron–muon universality in the weak-interaction sector that is in excellent agreement with the Standard Model at the sub-per-cent level.

The combined electron and muon data, including the correlations of systematic uncertainties, were compared to predictions performed at next-to-next-to-leading order (NNLO) in QCD and next-to-leading order in electroweak theory. Using various parton distribution functions, the comparisons revealed significant tensions between measurement and theory. Interpreting the HERA inclusive deep-inelastic-scattering data together with the ATLAS data in an NNLO QCD fit pointed to a new sensitivity to the strangeness suppression factor Rs = (s + s̄)/(d̄ + ū), as shown in the figure. The data confirm with significantly improved precision the previous ATLAS determination of an unsuppressed strange-quark content (shown as ATLAS-epWZ12) based on 2010 data.

The result may have important implications for further precision measurements of Standard Model parameters, in particular the mass of the W boson and the weak-mixing angle, since these are affected by the second generation of quarks. The ATLAS measurement challenges the current paradigm of a suppressed strange-quark distribution relative to the other light-quark distributions, but the quest continues.

ALICE measures shape of the QGP fireball at freeze-out

Heavy-ion collisions at LHC energies create a hot and dense medium of deconfined quarks and gluons, known as the quark–gluon plasma (QGP). The QGP fireball first expands, cools and then freezes out into a collection of final-state hadrons. Correlations between the free particles carry information about the space–time extent of the emitting source, and are imprinted on the final-state spectra due to a quantum-mechanical interference effect. To measure these correlations and to determine the space–time parameters of the source, physicists utilise Hanbury Brown and Twiss (HBT) interferometry, a technique first used in astronomy for determining the angular sizes of stars. Using azimuthally differential HBT interferometry, the ALICE collaboration has recently measured the shape of the fireball at freeze-out.


In a non-central collision, the nuclear overlap region is almond shaped with the longer axis oriented perpendicular to the reaction plane (defined by the impact parameter and the beam direction). The spatial anisotropies in the initial state are converted, via pressure gradients, to momentum anisotropies, leading to anisotropic particle flow. The magnitudes of the momentum anisotropies are quantified by the so-called vn coefficients, where the second harmonic coefficient (v2) is generated from the system’s approximately elliptic shape. This is usually called elliptic flow, and the direction of the strongest component of elliptic flow is defined as the elliptic-flow plane.
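As a guide, the vn coefficients are defined through the standard Fourier decomposition of the particle azimuthal distribution with respect to the flow planes Ψn (textbook notation, not specific to this analysis):

\[ \frac{dN}{d\varphi} \propto 1 + 2\sum_{n\ge 1} v_n \cos\big[n(\varphi - \Psi_n)\big], \]

so that v2 quantifies the strength of the elliptic modulation relative to Ψ2.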

The HBT radius, measured as a function of the pair-emission azimuth relative to the elliptic-flow plane, exhibits oscillations and thus provides information on the eccentricity of the source at freeze-out, when the particles cease to interact. The source eccentricity at freeze-out can be estimated from oscillations of the HBT radius at low pion-pair transverse momentum. ALICE has measured the pion HBT-radius oscillations for different transverse-momentum ranges as a function of centrality in lead–lead collisions at an energy of 2.76 TeV per nucleon pair and plotted the results as a function of the initial eccentricity (see figure on previous page).
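Schematically, and in the standard azimuthal-HBT notation rather than anything quoted from the ALICE paper, the side-ward radius is expanded in harmonics of the pair emission angle Φ relative to the elliptic-flow plane, and the relative second-harmonic amplitude is the commonly used estimator of the freeze-out eccentricity:

\[ R_{\mathrm{side}}^2(\Phi) \simeq R_{\mathrm{side},0}^2 + 2R_{\mathrm{side},2}^2\cos(2\Phi), \qquad \varepsilon_{\mathrm{final}} \approx \frac{2R_{\mathrm{side},2}^2}{R_{\mathrm{side},0}^2}. \]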

The final eccentricities are significantly below the initial eccentricities due to a larger expansion in the in-plane direction. The freeze-out eccentricities measured by ALICE are smaller than those measured at RHIC energies, likely reflecting the longer lifetime of the system at the LHC. Hydrodynamic calculations performed for centralities and pair transverse-momentum ranges similar to those of the ALICE measurement show a similar trend, but predict a smaller final-source eccentricity, corresponding to a more spherical source.

The final-state source eccentricity remains positive for all the pair transverse-momentum ranges, indicating that even after a stronger expansion in the in-plane direction, the pion source at freeze-out is still elongated in the out-of-plane direction. In the future, the ALICE collaboration intends to measure the azimuthal dependence of the HBT radii relative to the higher-harmonic (n ≥ 3) flow planes, which is directly sensitive to anisotropies in the system’s collective velocity fields.

Rare decay puts Standard Model on the spot

The decay rate of the B0s meson to two muons is a flagship measurement in flavour physics. It is extremely rare and well predicted in the Standard Model (SM), with a branching fraction of (3.65±0.23) × 10–9. It proceeds via a loop diagram that involves the heaviest known particles: the Z and W bosons and the top quark. Any unknown heavier particles that exist are likely to also contribute to this decay, which makes it a very sensitive probe of physics beyond the SM. After three decades of unsuccessful searches, the observation of the decay was first announced in a joint paper in Nature in 2015 by the CMS and LHCb collaborations using LHC data from Run 1.


Recently the LHCb collaboration reported an improved analysis of this decay with data from 2015 and 2016 added to the Run-1 sample. Work during the long shutdown allowed significant improvements to be made in background rejection, which increased the experiment’s sensitivity. The B0s → μ+μ– peak is clearly visible in the resulting mass plot, with a small bump to its left possibly due to the B0 meson (see figure, top). The significance of the former is 7.8σ, corresponding to the first observation of this decay by a single experiment. At just 1.6σ, the B0 peak is not significant.

Using the well-known decays B0 → K+π– and B+ → J/ψK+ to calibrate and normalise the efficiencies, the B0s → μ+μ– branching fraction is measured to be (3.0±0.6) × 10–9, which is the most precise measurement to date. Although consistent with the SM, the experimental precision still has to improve before it matches the present theoretical accuracy.

For the first time, LHCb also measured the effective lifetime of the B0s → μ+μ– decay. The Bs meson system has much in common with that of the K0 meson, in that it exhibits a heavier long-lived state and a lighter shorter-lived state. Only the former is allowed to decay into μ+μ– in the SM, but that may not be the case in other scenarios. The relative contributions of the two states can be probed by fitting a single exponential to the decay-time distribution and comparing the resulting effective lifetime with the two hypotheses (figure, below). The fitted effective lifetime is consistent within 1σ with the hypothesis that only the heavier state contributes, and within 1.4σ with the hypothesis that only the lighter state does. While this result does not yet tell us anything about new physics, it allows the sensitivity to be extrapolated to larger data samples. With the 300 fb–1 integrated-luminosity target of the LHCb phase-II upgrade, the two states could be disentangled at the 5σ level and thus provide a new and important test of the SM.
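As a sketch of the method, in standard notation (with AΔΓ the observable that distinguishes the two states, equal to +1 in the SM) rather than formulas taken from the LHCb paper, the untagged decay-time distribution and the effective lifetime extracted from the single-exponential fit are

\[ \Gamma(t) \propto e^{-\Gamma_s t}\left[\cosh\frac{\Delta\Gamma_s t}{2} + A_{\Delta\Gamma}\sinh\frac{\Delta\Gamma_s t}{2}\right], \qquad \tau_{\mathrm{eff}} = \frac{\int_0^\infty t\,\Gamma(t)\,dt}{\int_0^\infty \Gamma(t)\,dt}, \]

so that τeff tends to the heavy-state lifetime 1/ΓH for AΔΓ = +1 and to the light-state lifetime 1/ΓL for AΔΓ = –1.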

BaBar casts further doubt on dark photons

Dark photons are hypothetical low-mass spin-1 particles that couple to dark matter but have vanishing couplings with normal matter. Such a boson, which may be associated with a U(1) gauge symmetry in the dark sector and mix kinetically with the Standard Model photon, offers an explanation for puzzling astrophysical observations such as the positron abundance in cosmic rays reported by the PAMELA satellite. Dark photons have also been invoked as a possible explanation of the muon g–2 anomaly.

Based on single-photon events in 53 fb–1 of e+e– collision data collected at the PEP-II B factory at SLAC, California, the BaBar collaboration has now completed a thorough search for these particles (Aʹ) via the process e+e– → γAʹ. The search was based on the assumption that the dark photon decays almost entirely to dark-matter particles, and therefore that no energy would be deposited in the BaBar detector by its decay products. Finding no evidence for such processes, the analysis places 90% confidence-level upper limits on the coupling strength of Aʹ to e+e– for dark photons lighter than 8 GeV. In particular, the BaBar limits exclude values of the Aʹ coupling suggested by the dark-photon interpretation of the muon g–2 anomaly, as well as a broad range of parameters for dark-sector models (see figure).

“This paper is the final word from BaBar on a search where the dark photon decays invisibly,” says BaBar spokesperson Michael Roney. “But we are continuing to search for dark photons and other dark-sector particles that have visible decay modes.”

The BaBar result follows another direct search for sub-GeV dark photons carried out recently by CERN’s NA64 experiment, in which electrons incident on an active target probe the process e–Z → e–Z Aʹ. Again, no evidence for such decays was found, and NA64 was able to exclude dark photons with a mass less than around 0.1 GeV.

“The thing is, there are dark photons and dark photons,” says theorist  Sean Carroll of Caltech, who has worked on dark-photon models. “In contrast to massless dark photons, which are analogous to ordinary photons, this experiment constrains a slightly different idea of dark force-carrying particles that are associated with a broken symmetry, which therefore get a mass and then can decay. They are more like ‘dark Z bosons’ than dark photons.”

Gravitational lens challenges cosmic expansion

Using galaxies as vast gravitational lenses, an international group of astronomers has made an independent measurement of how fast the universe is expanding. The newly measured expansion rate is consistent with earlier findings in the local universe based on more traditional methods, but intriguingly remains higher than the value derived by the Planck satellite – a tension that could hint at new physics.

The rate at which the universe is expanding, defined by the Hubble constant, is one of the fundamental quantities in cosmology and is usually determined by techniques that use Cepheid variables and supernovae as points of reference. A group of astronomers from the H0LiCOW collaboration, led by Sherry Suyu of the Max Planck Institute for Astrophysics in Germany, ASIAA in Taiwan and the Technical University of Munich, used gravitational lensing to provide an independent measurement of this constant. The gravitational lens is a galaxy that deforms space–time and hence bends the light travelling from a background quasar – an extremely luminous and variable galaxy core. This bending results in multiple images of the same quasar, as seen from Earth, that are almost perfectly aligned with the lensing galaxy (see image).

While being simple in theory, in practice the new technique is rather complex. A straightforward equation relates the Hubble constant to the length of the deflected light rays between the quasar and Earth. Since the brightness of a quasar changes over time, astronomers can see the different images of the quasar flicker at different times, and the delays between them depend on the lengths of the paths the light has taken. Deriving the Hubble constant therefore depends on very precise modelling of the distribution of the mass in the lensing galaxy, as well as on several hundred accurate measurements of the multiple images of the quasar to derive its variability pattern over many years.
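Schematically, in the standard language of time-delay cosmography (not quoted from the H0LiCOW papers), the delay between two images at angular positions θi and θj scales inversely with H0 through the time-delay distance:

\[ \Delta t_{ij} = \frac{D_{\Delta t}}{c}\left[\phi(\theta_i,\beta) - \phi(\theta_j,\beta)\right], \qquad D_{\Delta t} \equiv (1+z_{\mathrm{d}})\,\frac{D_{\mathrm{d}}D_{\mathrm{s}}}{D_{\mathrm{ds}}} \propto \frac{1}{H_0}, \]

where φ is the Fermat potential of the lens model, β the source position, and Dd, Ds and Dds are angular-diameter distances to the deflector, to the source and between the two.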


This complexity explains why the measurement of the Hubble constant – reported in a separate publication by H0LiCOW collaborator Vivien Bonvin from the EPFL in Switzerland and co-workers – relies on a total of four papers by the H0LiCOW collaboration. The obtained value of H0 = 71.9±2.7 km s–1 Mpc–1 is in excellent agreement with other recent determinations in the local universe using classical cosmic-distance ladder methods. One of these, by Adam Riess and collaborators, finds an even higher value of the Hubble constant (H0 = 73.2±1.7 km s–1 Mpc–1) and has therefore triggered a lot of interest in recent months.

The reason is that such values are in tension with the precise determination of the Hubble constant by the Planck satellite. Assuming standard “Lambda Cold Dark Matter” cosmology, the Planck collaboration derived from the cosmic-microwave-background radiation a value of H0 = 67.9±1.5 km s–1 Mpc–1 (CERN Courier May 2013 p12). The discrepancy between Planck’s probe of the early universe and local values of the Hubble constant could be an indication that we are missing a vital ingredient in our current understanding of the universe.

A possible explanation of this discrepancy, according to Riess and colleagues, could involve an additional source of dark radiation in the early universe, corresponding to a significant increase in the effective number of neutrino species. It will be interesting to follow this debate in the coming years, when new observing facilities and also new parallax measurements of Cepheid stars by the Gaia satellite will reduce the uncertainty of the Hubble constant determination to a per cent or less.

The two-loop explosion

Studying matter at the highest energies possible has transformed our understanding of the microscopic world. CERN’s Large Hadron Collider (LHC), which generates proton collisions at the highest energy ever produced in a laboratory (13 TeV), provides a controlled environment in which to search for new phenomena and to address fundamental questions about the nature of the interactions between elementary particles. Specifically, the LHC’s main detectors – ATLAS, CMS, LHCb and ALICE – allow us to measure the cross-sections of elementary processes with remarkable precision. A great challenge for theorists is to match the experimental precision with accurate theoretical predictions. This is necessary to establish the Higgs sector of the Standard Model of particle physics and to look for deviations that could signal the existence of new particles or forces. Pushing our current capabilities further is key to the success of the LHC physics programme.

Underpinning the prediction of LHC observables at the highest levels of precision are perturbative computations of cross-sections. Perturbative calculations have been carried out since the early days of quantum electrodynamics (QED) in the 1940s. Here, the smallness of the QED coupling constant is exploited to allow the expressions for physical quantities to be expanded in terms of the coupling constant – giving rise to a series of terms with decreasing magnitude. The first example of such a calculation was the one-loop QED correction to the magnetic moment of the electron, which was carried out by Schwinger in 1948. It demonstrated for the first time that QED was in agreement with the experimental discovery of the anomalous magnetic moment of the electron, ge–2 (the latter quantity was dubbed “anomalous” precisely because, prior to Schwinger’s calculation, it did not agree with predictions from Dirac’s theory). In 1957, Sommerfield and Petermann computed the two-loop correction, and it took another 40 years until, in 1996, Laporta and Remiddi computed analytically the three-loop corrections to ge–2; 10 years later, even the four- and five-loop corrections were computed numerically by Kinoshita et al. The calculation of QED corrections is supplemented with predictions for electroweak and hadronic effects, and makes ge–2 one of the best-known quantities today. Since ge–2 is also measured with remarkable precision, it provides the best determination of the fine-structure constant, with an error of about 0.25 ppb. This determination agrees with other determinations, which reach an accuracy of 0.66 ppb, showcasing the remarkable success of quantum field theory in describing material reality.
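For orientation, the textbook form of this series for the electron anomaly ae = (ge – 2)/2 begins with Schwinger’s one-loop term, each further loop order being suppressed by an additional power of α/π:

\[ a_e = \frac{\alpha}{2\pi} - 0.328\,478\,965\ldots\left(\frac{\alpha}{\pi}\right)^2 + \ldots, \]

where the two-loop coefficient is the Sommerfield–Petermann result mentioned above.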

In the case of proton–proton collisions at the LHC, the dominant processes involve quantum chromodynamics (QCD). Although in general the calculations are more complex than in QED due to the non-abelian nature of this interaction, i.e. the self-coupling of gluons, the fact that the QCD coupling constant is small at the high energies relevant to the LHC means that perturbative methods are possible. In practice, all of the Feynman diagrams that correspond to the lowest-order process are drawn by considering all possible ways in which a given final state can be produced. For instance, in the case of Drell–Yan production at the LHC, the only lowest-order diagram involves an incoming quark and an incoming antiquark from the proton beams, which annihilate to produce a Z, γ* or a W boson, which then decays into leptons. Using the Feynman rules, such pictorial descriptions can be turned into quantum-mechanical amplitudes. The cross-section can then be computed as the square of the amplitude, integrated over the phase space and appropriately summed and averaged over quantum numbers.
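As an illustrative sketch in standard leading-order notation (not a formula taken from this article), the lowest-order Drell–Yan cross-section for lepton-pair production via a virtual photon is the partonic annihilation cross-section convolved with the parton distribution functions fq:

\[ \sigma_{\mathrm{LO}} = \sum_q \int_0^1\! dx_1\, dx_2 \left[f_q(x_1)f_{\bar q}(x_2) + f_{\bar q}(x_1)f_q(x_2)\right] \hat\sigma(\hat s = x_1 x_2 s), \qquad \hat\sigma_{q\bar q\to\gamma^*\to\ell^+\ell^-} = \frac{4\pi\alpha^2 e_q^2}{3N_c\hat s}, \]

with Nc = 3 accounting for the average over the colours of the incoming quark and antiquark.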

This lowest-order description is very crude, however, since it does not account for the fact that quarks tend to radiate gluons. To incorporate such higher-order quantum corrections, next-to-leading order (NLO) calculations that describe the radiation of one additional gluon are required. This gluon can either be real, giving rise to a particle that is recorded by a detector, or virtual, corresponding to a quantum-mechanical fluctuation that is emitted and reabsorbed. Both contributions are divergent because they become infinite in the limit when the energy of the gluon is infinitesimally small, or when the gluon is exactly collinear to one of the emitting quarks. When real and virtual corrections are combined, however, these divergences cancel out. This is a consequence of the so-called Kinoshita–Lee–Nauenberg theorem, which states that low-energy (infrared) divergences must cancel in physical (measurable) quantities.

Even if divergences cancel in the final result, a procedure to handle divergences in intermediate steps of the calculations is still needed. How to do this at the level of NLO corrections has been well understood for a number of years. The first successes of NLO QCD calculations came in the 1990s with the comparison of Drell–Yan particle-production data recorded by CERN’s SPS and Fermilab’s Tevatron experiments to leading-order and NLO QCD predictions, which had first been computed in 1979 by Altarelli, Ellis and Martinelli. The comparison revealed unequivocally that NLO corrections are required to describe Drell–Yan data, and marked the first great success of perturbative QCD (figure 1).
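Schematically, the subtraction approach that underlies most NLO codes adds and subtracts a counterterm dσ^A that mimics the divergent limits of the real emission; this is the generic textbook structure rather than a formula from this article:

\[ \sigma_{\mathrm{NLO}} = \int_{m+1}\left[d\sigma^{R} - d\sigma^{A}\right] + \int_{m}\left[d\sigma^{V} + \int_1 d\sigma^{A}\right], \]

where each bracket is separately finite and can be integrated numerically over the (m+1)- and m-particle phase spaces.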

Things have changed a lot since then. Today, NLO corrections have been calculated for a large class of processes relevant to the LHC programme, and several tools have been developed to even compute them in a fully automated way. As a result, the problem of NLO QCD calculations is considered solved and comparing these to data has become standard in current LHC data analysis. Thanks to the impressive precision now being attained by the LHC experiments, however, we are now being taken into the complex realm of higher-order calculations.

The NNLO explosion

The new frontier in perturbative QCD is the calculation of next-to-next-to-leading order (NNLO) corrections. At the level of diagrams, the picture is once again pretty simple: at NNLO level, it is not just one extra particle emission but two extra emissions that are accounted for. These emissions can be two real partons (quarks or gluons), a real parton and a virtual one, or two virtual partons.

The first NNLO computation for a collider process concerned “inclusive” Drell–Yan production, by Hamberg, van Neerven and Matsuura in 1991. Motivated by the SPS and Tevatron data, and also by the planned LHC and SSC experiments, this was a pioneering calculation that was performed analytically. The second NNLO calculation, in 2002, was for inclusive Higgs production in gluon–gluon fusion by Harlander and Kilgore. Inclusive calculations refer only to the total cross-section for producing a Higgs boson or a Drell–Yan pair, without any restriction on where these particles end up – a quantity that is not directly measurable because detectors do not cover the entire phase space, such as the region close to the beam.

The first “exclusive” NNLO calculations, which allow kinematic cuts to be applied to the final state, started to appear in 2004 for Drell–Yan and Higgs production. These calculations were motivated by the need to predict quantities that can be directly measured, rather than relying on extrapolations to describe the effects of experimental cuts. The years 2004–2011 saw more activity, but limited progress: all calculations were essentially limited to “2 → 1” scattering processes, in essence Higgs and Drell–Yan production, as well as Higgs production in association with a Drell–Yan pair. From a QCD point of view, the latter process is simply off-shell Drell–Yan production in which the vector boson radiates a Higgs. A few 2 → 2 calculations started to appear in 2012, most notably top-pair production and the production of a pair of vector bosons. It is only in the past two years, however, that we have witnessed an explosion of NNLO calculations (figure 2). Today, all 2 → 2 Standard Model LHC scattering processes are known to NNLO, thanks to remarkable progress in the calculation of two-loop integrals and in the development of procedures to handle intermediate divergences.

Compared to NLO calculations, NNLO calculations are substantially more complex. Two main difficulties must be faced: loop integrals and divergences. Two-loop integrals have been calculated in the past by explicitly performing the multi-dimensional integration, in which each loop gives rise to a “D-dimensional” integration. For simple cases, analytical expressions can be found, but in many cases only numerical results can be obtained for these integrals. The complexity increases with the number of dimensions (i.e. the number of loops) and with the number of Lorentz-invariant scales involved in the process (i.e. the number of particles involved, and in particular the number of massive particles).

Recently, new approaches to these loop integrals have been suggested. In particular, it has been known since the late 1990s that integrals can be treated as variables entering a set of differential equations, but solutions to those equations remained complicated and could be found only on a case-by-case basis. A revolution came about just three years ago when it was realised that the differential equations can be organised in a simple form that makes finding solutions, i.e. finding expressions for the wanted two-loop integrals, a manageable problem. Practically, the set of multi-loop integrals to be computed can be regarded as a set of vectors. Decomposing these vectors in a convenient set of basis vectors can lead to significant simplifications of the differential equations, and concrete criteria were proposed for finding an optimal basis. The very important NNLO calculations of diboson production have benefitted from this technology.
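In schematic terms (generic notation, not quoted from the original papers), the canonical form mentioned above turns the system of differential equations for the vector of master integrals f into one in which the dimensional-regularisation parameter ε factorises,

\[ \partial_{x_i} f(x;\epsilon) = \epsilon\, A_i(x)\, f(x;\epsilon), \]

so that the solution can be written order by order in ε as iterated integrals over the matrices Ai.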

Currently, when only virtual massless particles are involved and up to a total of four external particles are considered, the two-loop integral problem is considered solved, or at least solvable. However, when massive particles circulate in the loop, as is the case for a number of LHC processes, the integrals give rise to a new class of functions, elliptic functions, and it is not yet understood how to solve the associated differential equations. Hence, for processes with internal masses we still face a conceptual bottleneck. Overcoming this will be very important for Higgs studies at large transverse momentum, where the top loop to which the Higgs couples is resolved. The calculation of these integrals is today an area with tight connections to more formal and mathematical areas, leading to close collaborations between the high-energy physics and the mathematical/formal-oriented communities.

The second main difficulty in NNLO calculations is that, as at NLO, individual contributions  are divergent in the infrared region, i.e. when particles have a very small momentum or become collinear with respect to one another, and the structure of these singularities is now considerably more complex because of the extra particle radiated at NNLO. All singularities cancel when all contributions are combined, but to have exclusive predictions it is necessary to cancel the singularities before performing integrations over the phase space. Compared to NLO, where systematic ways to treat these intermediate divergences have been known for many years, the problem is more difficult at NNLO because there are more divergent configurations and different divergences overlap. The past few years have seen remarkable developments in the understanding and treatment of infrared singularities in NNLO computations of cross-sections, and a range of methods based on different physical ideas have been successfully applied.

Beyond NNLO

Is the field of precision calculations close to coming to an end? The answer is, of course, no. First, while the problem of cancelling singularities is in principle solved in a generic way, in practice all methods have been applied to 2 → 2 processes only, and no 2 → 3 cross-section calculation is foreseen in the near future. For instance, the very important processes of three-jet production or Higgs production in association with a top-quark pair are known to NLO accuracy only. Similarly, the two-loop pentagon integrals required for the calculation of 2 → 3 scatterings are at the frontier of what can be done today. Furthermore, most of the existing NNLO computer codes require extremely long runs on large computer farms, with typical run times of several CPU years. It could be argued that this is not an issue in an age of large computer farms and parallel processing, when CPU time is expected to become cheaper over the years. However, the number of phenomenological studies that can be done with a theory prediction is much larger when calculations can be performed quickly on a single machine. Hence, in the coming years NNLO calculations will be scrutinised and compared in terms of their performance. Ultimately, only one or a few of the many existing methods to perform integrals and to treat intermediate divergences are likely to prevail.

Given how hard and time-consuming NNLO calculations are, we should also ask whether they are worth the effort. A comparison of data for the diboson (WZ) production process at different LHC beam energies with NLO and NNLO calculations (figure 3) provides an indication of the answer. The LHC data already show a clear preference for the NNLO QCD predictions and, once more data are accumulated, NLO will likely be insufficient. While it is early days for NNLO phenomenology, the same conclusion applies to other measurements examined so far.

In the past, precise measurements have provided a strong motivation to push the precision of theoretical predictions. Conversely, very precise theory predictions have stimulated even more precise measurements. Today, the accuracy reached by LHC measurements is far better than anybody could have predicted when the LHC was designed. For instance, the Z transverse-momentum spectrum reaches an accuracy of better than a per cent over a large range of transverse-momentum values, which will be important to further constrain parton-distribution functions, and the mass of the W boson, which enters precision tests of the Standard Model, is measured with better than 20 MeV accuracy. In the future, one should expect that high-precision theoretical predictions will push the experimental precision beyond today’s foreseeable boundaries. This will usher in the next phase in perturbative QCD calculations: next-to-next-to-next-to-leading order, or N3LO.

Today we have two pioneering calculations beyond NNLO: the N3LO calculation of inclusive Higgs production (in the large top-mass approximation), and the N3LO calculation of inclusive vector-boson-fusion Higgs production. Both calculations are inclusive over radiation, exactly in the same way that the first NNLO calculations were. These calculations now suggest a good convergence of the perturbative expansion: the N3LO correction is very small and the N3LO result lies well within the theoretical uncertainty band of the NNLO result. Turning these calculations into fully exclusive predictions is the next theoretical challenge.

Looking forward to photon–photon physics

As its name suggests, the Large Hadron Collider (LHC) at CERN smashes hadrons into one another – protons, to be precise. The energy from these collisions gets converted into matter, producing new particles that allow us to explore matter at the smallest scales. The LHC does not fire protons into one another individually; instead, they are circulated in approximately 2000 bunches each containing around 100 billion protons. When two bunches are focused magnetically to cross each other in the centre of detectors such as CMS and ATLAS, only 30 or so protons actually collide. The rest continue to fly through the LHC unimpeded until the next time that two bunches cross.

Occasionally, something very different happens. If two protons travelling in opposite directions pass very close to one another, photons radiated from each proton can collide and produce new particles. The two parent protons remain completely intact, continuing their path in the LHC, but the photon–photon interaction removes a fraction of their initial energy and causes them to be slightly deflected from their original trajectories. By identifying the deflected protons, one can determine whether such photon interactions took place and effectively turn the LHC into a photon collider. It is also possible for the two protons to exchange pairs of gluons, which is another interesting process.

The idea of tagging deflected protons has been pursued at previous colliders, and also at the LHC back in 2012 and 2015 using only low-intensity beams. The proposal to pursue this type of physics with the LHC’s CMS and/or ATLAS experiments was first presented many years ago, but the project (under the name FP420) did not materialise.

A new project called the CMS-TOTEM Precision Proton Spectrometer (CT-PPS) has now taken up the challenge of making photon–photon physics possible at the LHC when operating at nominal luminosity. While CMS is a general-purpose detector for LHC physics, CT-PPS uses two sets of detectors placed 200 m either side of the CMS interaction point to measure protons in the forward direction. A parallel project called ATLAS Forward Physics (AFP) is also being developed by ATLAS, and both experiments aim to be in operation throughout this year’s LHC proton–proton run.

Light collisions

Despite photons being electrically neutral, the Standard Model (SM) allows two photons to interact via the exchange of virtual charged particles. Several final states are possible (figure 1), including a pair of photons. The latter process (γγ → γγ, or “light-by-light scattering”) has been known since the development of quantum electrodynamics (QED) and tested indirectly in several experiments, but the first direct evidence came last year from ATLAS in low-luminosity measurements of lead–lead collisions (CERN Courier December 2016 p9). Since the probability of emitting photons scales with the square of the electrical charge, the cross-section for lead–lead collisions is significantly higher than for proton–proton collisions. The ATLAS search selected events with two photons and nothing else in the central detector, using kinematic cuts to suppress backgrounds; the invariant mass of the photon pairs was in the region of 10 GeV. The measured cross-section was compatible with the QED prediction and, since no deviations are expected in this low-mass range, the ATLAS result was interesting but somewhat expected.

In forward experiments such as CT-PPS and AFP, however, the high-luminosity proton collisions allow a much higher mass region to be probed – between 300 GeV and 2 TeV in the case of CT-PPS. Proton tagging is possible because centrally produced high-mass systems cause the protons to lose enough energy to be deflected into the CT-PPS detectors. The study of photon interactions in this region could therefore provide new insights about the electroweak interaction, in particular the quartic gauge couplings predicted by the SM. These are interactions in which two photons annihilate upon collision to produce two W bosons, implying four particles at the same vertex in a Feynman diagram (figure 1). Deviations from the SM prediction would point to new physics, in the same way that deviations from the quartic (four-fermion) coupling of Fermi’s 1930s beta-decay theory were the forerunner to the discovery of the W boson 50 years later.

If there are new particles with masses above 300 GeV, CT-PPS could also improve CMS’s general discovery potential. For example, diphoton resonances at high mass have a very clean signature almost free of any background. Thus, in addition to precision electroweak tests, forward experiments such as CT-PPS provide an important cross-check of “bumps” in invariant mass distributions by offering complementary information about the production mechanism, coupling and quantum numbers of a possible new resonance. An example of this complementarity concerns the now-infamous 750 GeV bump in the diphoton invariant-mass distributions from the LHC’s 2015 data set. Although the bump turned out to be a statistical effect, it provided strong motivation to advance the CT-PPS physics programme at the time. Were similar bumps to be observed by CMS and ATLAS in future, CT-PPS and AFP will play an important role in determining whether a real resonance is responsible for the excesses seen in the data.

Forward thinking

Given their potential for revealing new physics, photon–photon collisions have been a topic of some interest for many decades. For example, photon–photon collisions were studied at CERN’s Large Electron–Positron collider (LEP), while studies at DESY’s HERA and Fermilab’s Tevatron colliders concentrated on interactions of protons through the exchange of gluons to probe quantum chromodynamics in the non-perturbative regime. The LHC achieves a much higher energy and luminosity than LEP, but at the price of colliding particles that are not elementary. Therefore, the elementary interactions between gluons and quarks do not have well-defined energies, and the interaction products include the remnants of the two protons, making physics analyses more difficult in general.

Proton-tagged photon collisions at the LHC, on the other hand, are very clean. Since photons are elementary particles and there are no proton remnants, the photon–photon collision energy at the LHC is precisely defined by the kinematics of the two tagged protons. In conjunction with CT-PPS, CMS can therefore probe anomalous quartic couplings with much better sensitivity than before.

The physics we are interested in corresponds to the process pp → ppX, where the “pp” part is measured by the CT-PPS detectors and the system “X” is measured in the other CMS sub-detectors. In the case of the quartic coupling γγWW, for instance, the process is pp → ppWW. The two photons that merge into the two W bosons are not measured directly, but energy–momentum conservation allows all of the kinematic properties of the WW pair to be deduced much more precisely from the CT-PPS proton measurements than could be achieved from the measurements of the W decay products with the CMS detector alone.
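As a simple sketch of the kinematics (standard central-exclusive relations, not taken from the CT-PPS documentation), if ξ1 and ξ2 denote the fractional momentum losses of the two tagged protons, the mass and rapidity of the centrally produced system X follow directly from energy–momentum conservation:

\[ M_X = \sqrt{\xi_1\,\xi_2\, s}, \qquad y_X = \tfrac{1}{2}\ln\frac{\xi_1}{\xi_2}, \]

where √s is the proton–proton collision energy.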

The CT-PPS detectors are located on either side of CMS, 200 m from the interaction point. They rely on objects called Roman Pots (RP), which are cylinders that allow small detectors to be moved into the LHC beam pipe so that they sit a mere few mm from the beam. The RPs of TOTEM are designed to operate under special LHC runs with a small number of collisions per second. However, the physics goals of CT-PPS require the RPs to operate during normal CMS data-taking, when the LHC provides a much higher number of collisions per second. The first and most important goal of the CT-PPS project was therefore to demonstrate that the detectors could operate successfully only a few millimetres from the LHC’s high-intensity beams. The final demonstration happened between April and May 2016, and the green light for CT-PPS operation in regular high-luminosity LHC running was given the following month.

Success so far

The CT-PPS project redesigned the RPs to suit these harsh operating conditions. In collaboration with LHC teams, it also conducted a thorough programme of RP insertions at increasingly close distances to the beam, measuring their impact with beam monitors. Great care must be taken not to disrupt the beam, since if the protons start to scrape the RPs there would be an increase in secondary particles that would trigger a beam dump. In 2016, CT-PPS used non-final detectors to collect 15.2 fb–1 of data integrated in the CMS data set. CT-PPS has proven for the first time the feasibility of operating a near-beam proton spectrometer at high luminosity on a regular basis and has paved the way for other such spectrometers.

CT-PPS is also facing big challenges in the development of the final detectors. The tracking detectors have a surface area of just 2 cm2 and reside in two RPs located 10 m apart on either side of the collision point (for a total of four stations). Six planes of silicon pixels on each station will detect the track of the flying protons to provide direction information, and the magnetic field of the LHC’s magnets will serve as the proton-deflecting field. The devices themselves have to sustain exceedingly high radiation fluxes given their proximity to the beam: a proton fluence in excess of 5 × 1015 particles/cm2 is expected after an integrated luminosity of 100 fb–1. CMS’s own tracker will not face these radiation conditions until the HL-LHC enters operation in the mid 2020s.

From 2017 onwards, CT-PPS will be using new 3D pixel technology that has been developed in view of upgrades to the CMS tracker, and will therefore provide valuable experience with the new sensors. The project also relies on high-precision timing detectors. CT-PPS matches the primary vertex of the collision measured in the central detector with the vertex position obtained from the difference in the arrival times of the two protons, so that it can reject the background from spurious collisions piling up in the same bunch crossing. A time precision of 20 ps makes it possible to estimate the z-vertex with the 3 mm accuracy needed to reduce the background sufficiently. The timing detectors used diamond sensors in 2016 and will add silicon low-gain avalanche diodes this year. Again, the experience acquired in a high-rate and high-radiation environment will be most valuable for the CMS upgrades for the HL-LHC.
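The quoted figures are consistent with the simple estimate (a back-of-the-envelope check, not a number from the CT-PPS design documents) that the longitudinal vertex position follows from the arrival-time difference of the two protons:

\[ z_{\mathrm{vtx}} = \frac{c\,(t_1 - t_2)}{2}, \qquad \sigma_z \approx \frac{c\,\sigma_{\Delta t}}{2} \approx \frac{3\times10^{8}\,\mathrm{m\,s^{-1}} \times 20\,\mathrm{ps}}{2} \approx 3\,\mathrm{mm}. \]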

Meanwhile, the ATLAS collaboration installed one arm of the AFP experiment in early 2016 and has taken data in special low-luminosity runs to study diffraction. The second AFP arm, with horizontal RP stations similar to those of CT-PPS, has also since been installed and its four-layer 3D silicon pixel detectors and new Cherenkov-based time-of-flight detectors are being assembled. They will be installed and commissioned before the LHC restarts in May this year. Like CT-PPS, AFP aims to participate in high-luminosity running throughout the year, with both operating in tandem to enhance the LHC’s search for new physics.

A 30-year adventure with heavy ions

Collision in ALICE

Heavy-ion and proton–proton collisions at ultrarelativistic energies provide a unique system with which to investigate the dynamics of matter in the early universe. By generating an incredibly hot and dense “fireball” of fundamental particles, such collisions allow us to recreate the extreme conditions of the universe during its first tens of microseconds of existence.

Given that the universe did not become transparent until roughly 370,000 years after the Big Bang, this epoch in our history lies completely out of reach of observational astronomy. According to the Standard Model of particle physics, the emergence of elementary particles and forces took place via a succession of symmetry-breaking mechanisms at different energy scales as the universe expanded and cooled. In the early universe, matter was made of freely roaming quarks and gluons – which formed the quark–gluon plasma (QGP) – in addition to leptons and gauge bosons. The QGP cooled down until hadrons, including baryons such as neutrons and protons, were formed. Photons continued interacting with charged particles until most of the matter became bound in neutral atoms, after which they were set free to form today’s cosmic microwave background.

During the past 30 years, a succession of collider experiments and impressive theoretical achievements have driven immense progress in the field of high-energy heavy-ion physics. Not only do these results shed new light on the dynamics of matter in the early universe, they probe fundamental predictions about the strong nuclear force governed by quantum chromodynamics (QCD).

Surprises galore

We have come a long way from the early belief in the 1970s that this early phase in the universe, recreated by colliding heavy ions at continuously increasing energies, comprised a gas of quarks and gluons. This is what was expected following asymptotic freedom, a feature of QCD that explains how the interaction between two quarks becomes asymptotically weaker as the distance between them decreases. But it took three major machines on both sides of the Atlantic to find out what was really going on during these extreme initial moments.
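For reference, asymptotic freedom is encoded in the running of the strong coupling, shown here in its textbook one-loop form (not specific to this article), which decreases logarithmically with the momentum transfer Q, i.e. with decreasing distance:

\[ \alpha_s(Q^2) = \frac{12\pi}{(33 - 2n_f)\,\ln(Q^2/\Lambda_{\mathrm{QCD}}^2)}, \]

where nf is the number of active quark flavours and ΛQCD, of a few hundred MeV, sets the scale at which the interaction becomes strong.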

The first big result came from CERN in 2000, when it was announced that heavy-ion collisions generated by the Super Proton Synchrotron (SPS) had created a new state of matter. CERN’s then Director-General, Luciano Maiani, worded the discovery as follows: “From the combined data presented by the seven CERN experiments dedicated to the heavy-ion programme has emerged the clear picture that a new state of colour-deconfined matter has been created in the early stage of the collision that develops into a collective expansion of the fireball in the later stages.”

This finding confirmed a fundamental prediction of QCD: above a critical temperature, quarks are no longer confined in hadrons. The CERN announcement was, however, only the beginning of our exploration into strongly interacting matter. The same year, the baton was passed to the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in the US. Just five years after the CERN announcement, the remarkable data collected at RHIC demonstrated that a change of paradigm for strongly interacting matter was needed. The QGP that had been created in RHIC’s STAR and PHENIX experiments did not have the properties of a perfect gas. Rather, it showed all the properties of a perfect liquid: a strongly interacting fluid with minimal mean free path.

With RHIC continuing to produce data, in 2010 CERN rejoined the heavy-ion programme with the newly operational Large Hadron Collider (LHC) and the dedicated heavy-ion experiment ALICE, as well as ATLAS, CMS and, more recently, LHCb. This machine marked a factor-of-25 jump in collision energy compared with RHIC, and its experiments confirmed with unprecedented precision the STAR and PHENIX findings. The LHC also offered new opportunities to explore deconfined matter in great detail, with the goal of understanding how the dynamics of matter emerge from the fundamental properties of the strong interaction and from the quark substructure of particles. More recently, and surprisingly, LHC data are pointing to unexpected similarities between observables measured in heavy-ion collisions and those measured in proton–lead or in high-multiplicity proton–proton collisions, perhaps hinting at yet another change of paradigm.

The past 30 years have been an arduous path, on which every step has revealed more knowledge while simultaneously generating new riddles. To mark the important achievements so far and to discuss the long and thrilling future of heavy-ion physics, more than 400 physicists met at CERN on 9 November last year to review what can be considered one of the most vigorous fields at the forefront of the high-energy physics programme.

A fitting celebration

Although accelerators had been working with electrons and protons for many decades, it was in 1974 that the Bevalac at Lawrence Berkeley Laboratory accelerated the first ions to relativistic energies (approximately 2 GeV per nucleon) and led to further programmes at BNL and CERN. The Bevalac beams were not energetic enough to create the necessary energy densities for the QGP to form, and it required the ingenuity of accelerator physicists and the remarkable development of electron cyclotron resonance (ECR) sources during the 1980s to take the decisive step toward “ultrarelativistic” energies.

The idea to launch an experimental heavy-ion programme at CERN came shortly after the SPS had enabled the discovery of the W and Z bosons in 1983. As the then CERN Director-General Herwig Schopper recalled at the November workshop, the 1980s were not the best time to initiate new projects. The CERN budget was severely cut and the laboratory was very much focused on the construction of the Large Electron–Positron Collider (LEP). Despite this, Schopper bravely decided to give heavy-ion physics a chance. He was motivated by arguments put forth by Reinhard Stock, Hans Specht, Rudolf Bock, William Willis and several other leading physicists, but the main arguments that convinced him came from Tsung-Dao Lee during a VIP lunch at CERN. Schopper recalled: “I knew [Lee] from the parity-violation experiment. He had no direct personal interest and his physics motivation sounded convincing. The main argument he put forward was to find the theoretically predicted quark–gluon plasma, which played an important role in the development of the universe.”

When, in October 1986, oxygen-16 ions were successfully accelerated by the SPS and fired into a fixed target of gold, the heavy-ion programme began with a disparate ensemble of detectors recuperated from earlier high-energy experiments. Six different experiments were hatched, each with a different profile adapted to hunt the variety of observables predicted to accompany the QGP phase transition: WA80 “Plastic Ball”; the NA34/2 HELIOS; the NA35 streamer chamber; NA36; WA85/94; and the NA38 muon-pair spectrometer.

In 1987, together with the increase of energy and the acceleration of sulphur beams, second-generation experiments containing innovative detector technologies were launched. Among these were: NA49 with an ambitious time projection chamber; CERES and its double ring imaging Cherenkov detectors; NA57 and its silicon tracking; and NA44, which contained a focusing spectrometer that made use of cesium-iodide photocathodes for the first time. The number of aficionados of this new and intriguing field of investigation grew rapidly from a few hundred initial physicists to the several thousand from all over the world who work on today’s LHC, SPS and RHIC heavy-ion facilities.

State of the art

Today, heavy-ion science is a thriving field of research, and it is notable that it is the common denominator in the physics programmes of all four major LHC experiments. On one hand, we have entered a phase of precision measurements of the QGP properties; on the other, the surprising similarities between proton–proton and proton–nucleus collisions observed at the LHC lead us to question whether the same dynamics are at work in light and heavy systems. As demonstrated at the Quark Matter 2017 conference (see “Highlights from Quark Matter 2017” in Faces and Places), many new results are generating discussion. On the experimental side, these are based on a wealth of high-quality data collected both at the LHC and at RHIC for a variety of collision systems and energies, coupled with inventive analysis tools. On the theory side, particular progress has been made in relativistic hydrodynamics calculations. Among many new and creative theoretical concepts is the non-perturbative formulation of string theory, which along with the “AdS/CFT” correspondence provides tools to perform calculations for the QGP in the strongly coupled regime.

Macroscopic properties of the QGP such as its density and viscosity can now be determined with increasing precision by studying how the QGP, modelled by hydrodynamics, transports a perturbation. Measurements include the values of high-order flow coefficients and nonlinear mode mixing, while the value of η/s (shear viscosity over entropy density) and its temperature dependence have been pinned down to within a factor of two or less of 1/4π – the conjectured minimal value for a perfect quantum fluid.
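The value quoted refers to the bound conjectured from AdS/CFT arguments (standard form, given here only for orientation):

\[ \frac{\eta}{s} \ge \frac{\hbar}{4\pi k_B}, \]

which equals 1/4π in natural units.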

The microscopic structure of the QGP remains to be established, with the help of hard probes to provide the required resolving power. Here, jet quenching has already become a routine probe with which to study the content and dynamics of the QGP. In turn, the same studies also hint at the ability of the QGP to resolve the partonic shower. Quarkonia states, another hard probe, have also revealed rich dynamics. Their production as a function of collision energy and transverse momentum can be understood in terms of two competing mechanisms: suppression due to resonance melting by colour screening, and regeneration due to coalescence of free heavy-flavour quarks, both providing evidence for deconfinement. In addition, a flow signal has been measured for open and hidden charm mesons, raising the question of whether charm quarks participate in the collective dynamics of the medium.

In general, the composition of the final hadronic state of the collision is quite well explained, assuming hadrons are formed in a thermalised state with a temperature that closely matches the temperature predicted for the QGP phase transition to the hadronic phase. Surprisingly, fragile objects such as light nuclei appear to be produced and to survive at temperatures several times larger than their binding energy. The possibility that nuclei were formed at the phase transition to hadrons, and not later via coalescence, would be an interesting complement to baryogenesis.

Strong future

As far as the next decades are concerned, in view of the achievements realised in the past years and the remaining open questions, it is a safe bet that heavy-ion physics will continue to be a vigorous field of research on both sides of the Atlantic. What are the relevant degrees of freedom of the QGP: perturbative partons, pseudo-particles, or collective excitations of colour fields? Which dynamics drive the collision towards the formation of the QGP on timescales of a trillionth of a trillionth of a second, and in systems as small as a proton–proton collision? At which energy and system size do collectivity and statistical behaviour step in, and is chiral symmetry restored in the QGP?

These are some of the unanswered questions in the heavy-ion field. Existing and planned facilities that offer varying collision systems combined with ever more sophisticated detectors and strong collaborations between the theory and experiment communities are key to answering them. Based on what we are seeing currently, heavy-ion veteran Reinhard Stock commented that we could be about to enter a new paradigm with impact across high-energy physics. That would perhaps reveal QCD to be some sort of low-energy limit to a more fundamental theory.

Accelerating gender equality

When I started working at CERN in 1976, women were a relatively rare sight. The few women who did work here generally held administrative roles, many having started with the incongruous job title of “scanning girls”, regardless of the age at which they had been recruited. Back then it was quite normal to walk into a workshop and find pictures of naked females on the walls, and everyday sexism was common. I recall once being told that women couldn’t possibly do night shifts in the control room. The reason, a male colleague explained, was to avoid mysterious calls in the middle of the night: “What if there was a problem and she has to call a physicist? What would his wife think?”

Such attitudes were not just true of CERN, of course, and things have changed significantly since then. Even as recently as 1995, less than three per cent of CERN research and applied physicists were female, whereas today that number is around 18 per cent. Similar increases have been seen across engineering and technician roles, and CERN now has its first female Director-General.   

It was in 1996 that CERN launched its equal opportunities (EO) programme. I was appointed as the first EO officer, and the following year an EO advisory panel was created. Many a meeting was taken up by educating male colleagues about the lasting effects of sexist behaviour through the personal experiences of their female counterparts. The EO programme adopted a four-pronged strategy focusing on recruitment, career development, work environment and harassment. On recruitment we took a firm stand against quotas, recommending instead thorough monitoring that would ensure reasonable proportions of qualified women were shortlisted for interview.

Equitable recruitment practices that we take for granted today were then the subject of much debate. The multicultural nature of CERN brought added complexity, as people’s notions of acceptable behaviour varied greatly. We were often accused of exaggerating the need for gender-neutral language or reproached for no longer having a sense of humour. Although some women colleagues found themselves in the uncomfortable situation of wishing to support EO initiatives while not wishing to risk the perception of tokenism or positive discrimination, many became vital allies in moving the EO agenda forward. Whether it was a question of work–life balance or simply accepting women in all job categories, a great deal of effort was invested to overcome resistance born of years of habit. It has only been over time that the proportion of female scientists at CERN has risen to match the numbers in society, as reflected by our world-wide user community.

CERN’s EO programme itself has also evolved into today’s diversity programme, which was launched in 2010 together with a newly created ombudsperson function and a formal harassment investigation panel. The CERN code of conduct was also produced at this time. The growing numbers of female colleagues in all fields at CERN is living proof that we have come a long way in the last two decades. But gender equality means more than just gender parity. While continuing our efforts to encourage female students to pursue science and to employ our colleagues through equitable recruitment practices, we should ask if we are doing everything possible to promote a mindset that enables all our colleagues to contribute as equals.

The last six years have seen approximately equal numbers of male and female visitors to the ombud office. However, when mapped against the corresponding staff-member populations, proportionally three to four times more women than men consult the ombudsperson. A similar pattern is seen in other international organisations where women are a minority, and is mirrored by the proportionally higher number of women who participate in CERN’s “diversity in action” workshops. Although the issues raised by women are essentially the same as those faced by their male colleagues, a closer examination reveals examples of stereotyping and unconscious bias that suggest ours is not yet a completely level playing field.

Not only is it difficult for the majority to recognise the insidious barriers of organisational culture faced by minority groups, it is sometimes equally difficult for those within the minority to bring these aspects to light. If we are to ensure that our work environment is equally supportive of all, the experience of women needs to be shared with a wider audience, including their male colleagues. We all need to join forces to assure CERN’s ongoing commitment to diversity.

Exotic hadrons bend the rules

Fifty years have passed since Dick Dalitz presented his explicit constituent-quark model at the 1966 International Conference on High Energy Physics in Berkeley, US. Murray Gell-Mann and George Zweig independently introduced the quark concept in 1964, and the idea had been anticipated by André Petermann in a little-known paper received by Nuclear Physics in 1963. But it was Dalitz who developed the model and considered excitations of quarks by analogy with the behaviour of nucleons in atomic nuclei. His primary focus was on the spectroscopy of baryons, which were interpreted as bound states of three quarks. Dalitz realised that the restrictions enforced by the Pauli exclusion principle led to a distinct pattern of supermultiplets. Today, this simple model remains in excellent agreement with experiments, in particular for mesons, which comprise a quark–antiquark pair.

Despite its success in matching empirical data, the theoretical underpinning of this non-relativistic model for light hadrons has always been unclear. One of the remarkable features of hadron spectroscopy is that, half a century after the invention of the constituent-quark model, the particle data tables are filled with states that fit a non-relativistic spectrum almost to the exclusion of anything else. The up and down quarks have masses of only a few MeV, and are therefore surely relativistic when confined within the 1 fm radius of a proton, yet the constituent-quark model treats them as if relativity plays no role.
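A rough estimate, not from the article itself but using only the uncertainty principle and standard values, makes the tension explicit:

\[ pc \;\sim\; \frac{\hbar c}{r} \;\approx\; \frac{197\ \mathrm{MeV\,fm}}{1\ \mathrm{fm}} \;\approx\; 200\ \mathrm{MeV} \;\gg\; m_{u,d}\,c^{2} \approx \text{a few MeV}, \]

so a light quark confined within a proton-sized volume moves at essentially the speed of light, yet the constituent-quark description works as though it did not.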

In the case of mesons, which fit the quark model arguably even better than baryons do, this incongruity is especially striking. When Dalitz spoke in 1966, it made sense to emphasise baryons because they outnumbered the known mesons at that time. Following the discovery of charm in 1974 and of heavier flavours later that decade, however, the spectroscopy of mesons flourished, and the correlations among a meson’s spin (J), parity (P) and charge conjugation (C) were also found to be in accord with those of a non-relativistic system.

Following Dalitz’s description of the baryon spectrum, Greenberg, Nambu, Lipkin and others noted that reconciling the model’s pattern of baryon spins with the constraints of the Pauli principle required some novel degree of freedom, which we now call “colour”. The advent of quantum chromodynamics (QCD) in the 1970s provided the rationale for this concept, explaining the existence of quark–antiquark and three-quark combinations in terms of colour-singlet clusters. But QCD did not explain the non-relativistic pattern of states. Feynman, who devoted attention to this issue in his final years, asserted: “The [non-relativistic] quark model is correct as it explains so much data. It is for theorists to explain why.” Today, physicists still await this explanation. Yet the empirical guidance of the quark model is so well established that hadrons falling outside this straitjacket are deemed “exotic”.

Although the restriction to colour singlets within QCD explains the existence of qq̄ and qqq hadrons, it raises the question of why the observed spectroscopy is so meagre. Colour singlets also allow combinations of two quarks and two antiquarks (“tetraquark” mesons) and of four quarks and an antiquark (“pentaquark” baryons), in addition to states composed solely of gluons (“glueballs”). Furthermore, “hybrids”, in which the gluonic fields entrapping the quark and antiquark are themselves excited, are also theoretically possible within QCD (figure 1). Glueballs, tetraquarks and hybrid mesons, predicted in the late 1970s, can carry combinations of the J, P and C quantum numbers that are forbidden for a non-relativistic quark–antiquark pair. Indeed, it is the lack of any empirical evidence for such exotic states in the meson spectrum that helped to establish the constituent-quark model in the first place. It is therefore ironic that searches for such states at modern experiments are now being used to establish the dynamical role of gluonic excitations in hadron spectroscopy.
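To make the counting concrete, here is the standard quark-model argument (a textbook result rather than anything specific to the searches discussed here): a quark–antiquark pair with orbital angular momentum L and total spin S has

\[ P = (-1)^{L+1}, \qquad C = (-1)^{L+S}, \]

so combinations such as \(J^{PC} = 0^{--},\ 0^{+-},\ 1^{-+}\) and \(2^{+-}\) can never be reached by a qq̄ state. A meson observed with any of these assignments would be manifestly exotic, which is why such quantum numbers are prime targets for glueball and hybrid searches.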

Although QCD is well tested to high precision in the perturbative regime, where it is now an essential tool in the planning and interpretation of experiments, its implications for the strong-interaction limit are far less understood. Forty years after its discovery, and notwithstanding the advent of lattice QCD, hadron physics is still led by empirical data, from which clues to novel properties in the strong interactions may emerge. The search for exotic hadrons is an essential part of this strategy, and in recent years several new hadrons have been discovered that do not fit well within the traditional quark model.

Strange sightings

With hindsight, one of the first clues to the existence of quarks came in the 1950s from measurements of cosmic-ray interactions in the atmosphere, which revealed hadrons with unusual production and decay properties. These “strange” hadrons, we now know, contain one or more strange quarks or antiquarks, yet history has left us with a perverse convention whereby strange quarks are deemed to carry negative strangeness and strange antiquarks positive. Thus a meson can carry strangeness +1 or –1, while baryons can have strangeness –1, –2 or –3 (and antibaryons, correspondingly, positive strangeness).

A baryon with positive strangeness (or an antibaryon with negative strangeness) is therefore classed as exotic. The minimal configuration for such a baryon would involve four quarks together with a strange antiquark, giving a total of five constituents and the technically inaccurate name “pentaquark” (one of the five being an antiquark). A claim to have found such a state – the θ(1540) – made headlines just over a decade ago but is now widely disregarded. The scepticism was not that a pentaquark could exist, since QCD can accommodate such a state, but that it appeared to be anomalously stable. More recently, the LHCb experiment at CERN’s Large Hadron Collider (LHC) reported decays of the Λb baryon that revealed pentaquark-like structures with masses of around 4.4 GeV (CERN Courier September 2015 p5). These have normal strong-interaction lifetimes and have been interpreted as clusters of three quarks plus a charm–anticharm pair. Whether they are genuinely compact pentaquarks, bound states of a charmed baryon and a meson, or some other dynamical artefact, they do appear to qualify as “exotic” in that they do not fit easily into a traditional three-constituent picture.

There have also been interesting meson sightings at lepton colliders in recent decades. In numerous experiments, electron–positron annihilation reveals a series of peaks in the total cross-section at energies up to and beyond 4 GeV that are consistent with excitations of the fundamental cc̄ J/ψ meson: the ψ(2S), ψ(4040), ψ(4160) and ψ(4415) are non-exotic and fit within the non-relativistic spectrum. Evidence for exotic mesons has instead come from data on specific final states, notably those containing a J/ψ with one or more pions, which have revealed several novel states. Historically, the first clue to an exotic charmonium meson of this type above a mass of 4 GeV came around a decade ago from the BaBar experiment at SLAC in the US. Analysing the process e+e− → J/ψππ, researchers there found a clear resonance-like structure, dubbed the Y(4260), which has no place in the qq̄ spectrum because its mass lies between those of the ψ(4160) and ψ(4415) cc̄ states. More remarkably, this state decays into charmonium and pions with a standard strong-interaction width of the order of 100 MeV, rather than the 100 keV that is more typical for such a channel.

The clue to the nature of this meson appears to be that the mass of the Y meson (4260 MeV) is close to the threshold for the production of DD̄1 – the combination of a pseudoscalar (D) and an axial-vector (D1) charmed meson (figure 2). This is the first channel in e+e− annihilation in which charmed-meson pairs can be produced with no orbital angular momentum (i.e. via S-wave processes). At threshold there is therefore no angular-momentum barrier against a DD̄1 pair being created essentially at rest and rearranging its constituents into a J/ψ and light flavours (the latter then seeding pions). The structure could thus simply be a threshold effect rather than a true resonance, or an exotic “molecule” made of D and D̄1 charmed mesons.
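A back-of-the-envelope check, using rounded charmed-meson masses inserted here for illustration rather than quoted from the measurements above, shows how close the coincidence is:

\[ m_{D} + m_{D_{1}} \approx 1.87\ \mathrm{GeV} + 2.42\ \mathrm{GeV} \approx 4.29\ \mathrm{GeV}, \]

only a few tens of MeV above the Y(4260) itself, so a DD̄1 pair produced in this region carries essentially no kinetic energy and faces no angular-momentum barrier.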

The decay of the Y(4260) into J/ψππ reveals a manifestly exotic structure. The J/ψπ± subsystem is electrically charged and shows a pronounced peak, the Z(3900), reported by both the BESIII experiment in China and Belle in Japan in 2013. Another sharp peak observed by BESIII – the Z(4020) – appears in a further flavour-exotic channel containing a pion and a charmonium meson. Since it can carry electric charge, such a state must contain ud̄ (or dū) in addition to its cc̄ content, and therefore cannot be explained as a bound state of a single quark and antiquark. In principle, these states should also be accessible in decays of B mesons, but there is no sign of them there so far.

Nonetheless, B decays are a source of further exotic structures. For example, the decay B → Kπ±ψ(2S) contains a structure, the Z(4430), observed by Belle and LHCb in the ψ(2S)π± invariant-mass spectrum; this state carries both hidden charm and isospin, and hence must contain (at least) two quarks and two antiquarks. Such features first need to be established as genuine and not artefacts associated with some specific production process. Their appearance and decay in other channels would help in this regard, while the observation of analogous signals for other combinations of flavour may also signpost the underlying dynamics. If real, these states are built from charmonium cc̄ combined with light-quark degrees of freedom (a summary of charmonium candidates is shown in figure 3).

Proceed with caution

It is clear that peaks are being found that cannot be interpreted as qqq or qq̄ clusters. But one should not leap to the conclusion that we have discovered some fundamentally novel state built from, say, diquarks and antidiquarks or, for baryons, a pentaquark. A [qq][q̄q̄] “tetraquark”, for example, looks less exotic when trivially rewritten as (qq̄)(qq̄), which is suggestive of two bound conventional mesons. Indeed, these could simply be the two mesons in whose invariant-mass distribution the peak was seen. Unless a peak is seen in different channels, and ideally in different production mechanisms, one should be cautious.

For example, when three or more hadrons are produced in a single decay it is common to discover peaks in invariant-mass spectra just above the two-body thresholds. These are not resonances, although papers on the arXiv preprint server are full of models built on the assumption that they are. Instead, the peaks likely arise due to competition between two effects. First, phase space opens up for the production of the two-body channel, but as the invariant mass increases, the chance of this exclusive two-body mode dies off because the probability for the wavefunctions of the two hadrons to overlap decreases. Any peak seen within a few hundred MeV of such a threshold is most likely to be the accidental result of this phenomenon. Such “cusps” have been proposed as explanations of several recent exotic candidates, such as the Z(3900) and Z(10610) spotted at BESIII and Belle, among others. Whether the tetraquark candidates X(4274), X(4500) and X(4700) recently observed at LHCb, in addition to the X(4140) found by the CDF experiment at Fermilab in 2009, herald the birth of a new QCD spectroscopy or are examples of more mundane dynamics such as cusps, is also the subject of considerable debate. In short, if a peak occurs above a two-body threshold in a single channel: beware.
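The competition described above can be illustrated with a toy lineshape: two-body phase space, which rises from zero at threshold, multiplied by a falling form factor that mimics the decreasing wavefunction overlap. The short sketch below is purely schematic; the masses are loosely inspired by the D*D̄ threshold and the Gaussian form-factor scale is an arbitrary choice, not a fit to any data.

import numpy as np

# Toy cusp lineshape: two-body phase space times a falling form factor.
m1, m2 = 2.007, 1.865   # GeV; approximate D* and D masses (illustrative values)
beta = 0.15             # GeV; hypothetical Gaussian form-factor scale

def breakup_momentum(m, ma, mb):
    """Momentum of the two daughters in their rest frame, in GeV."""
    s = m * m
    lam = (s - (ma + mb) ** 2) * (s - (ma - mb) ** 2)
    return np.sqrt(np.maximum(lam, 0.0)) / (2.0 * m)

def toy_lineshape(m):
    """Two-body phase space (2k/m) damped by a Gaussian overlap form factor."""
    k = breakup_momentum(m, m1, m2)
    return (2.0 * k / m) * np.exp(-(k ** 2) / (2.0 * beta ** 2))

threshold = m1 + m2
masses = np.linspace(threshold, threshold + 0.5, 2000)
rates = toy_lineshape(masses)
peak = masses[np.argmax(rates)]
print(f"threshold = {threshold:.3f} GeV, toy peak at {peak:.3f} GeV")

Running it places the maximum of the distribution a little above threshold even though nothing resonant has been put in, which is precisely why a peak seen in a single channel just above a two-body threshold calls for caution.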

Enter the deuson

More interesting for exotic-hadron studies are peaks that lie just below threshold. Such states are well known in the baryon sector, the deuteron (a proton–neutron bound state lying just below the two-nucleon threshold) being a good example. The pion-exchange force that binds neutrons and protons inside the atomic nucleus should also act between pairs of mesons, at least for those that are stable on the timescale of the strong interaction. On purely phenomenological and conservative grounds, therefore, we should anticipate meson molecules (or, by analogy with the deuteron, “deusons”), which would take us beyond simple quark-model spectroscopy. The Y(4260) could be an example of such a state, since both the DD̄1 and D*D̄0 S-wave thresholds lie in this region, and pion exchange may play a role in linking the two channels (figure 4). If these states are indeed deusons, then there should also be partners with isospin. Establishing whether such structures are singletons or have siblings is therefore another important step in identifying their dynamical origins.

The first sign of deusons may be expected in the axial-vector channel formed from a pseudoscalar and a vector charmed (or bottom) meson. This is because pion exchange can occur between a pair of vector mesons, or as an exchange force in a pseudoscalar–vector combination, but not within a state of two pseudoscalars, as this would violate parity conservation. The enigmatic X(3872), which was first observed in B decays by Belle in 2003 and sits at the D0D̄*0 (plus charge-conjugate) threshold, has long been a prime candidate for a deuson. If so, there should be analogous states at the BB̄* threshold, as well as charm–bottom flavour mixtures and perhaps siblings with two units of charm or bottom. Whether these states have charged partners is one of many model-dependent details. That some such states should occur seems unavoidable, however, and if doubly charmed states exist they should be produced at the LHC.
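The numerical coincidence behind this molecular interpretation can be seen from approximate measured masses (values quoted here only to show how close the numbers are):

\[ m_{D^{0}} + m_{\bar D^{*0}} \approx 1864.8\ \mathrm{MeV} + 2006.9\ \mathrm{MeV} \approx 3871.7\ \mathrm{MeV}, \]

essentially degenerate with the X(3872) itself, so any binding energy is at most of the order of an MeV, comparable to or smaller than that of the deuteron and tiny by hadronic standards.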

Whereas for baryons the attractive forces arise in the exchange or “t channel”, for pairs of mesons there can also be contributions due to qq̄ annihilation in the direct s-channel. In QCD this can also mask the search for glueballs: for example, the scalar glueball of lattice QCD predicted at a mass of around 1.5 GeV mixes with the nonet of scalar qq̄ states in this very region. The pattern of these scalars empirically is consistent with such dynamics.

Scalar mesons are interesting not least because the theoretical interest in multiquark or molecular states originated with such particles 40 years ago, after Robert Jaffe noticed that the chromomagnetic forces of QCD are powerfully attractive in the nonet of light-flavoured scalar mesons. Intriguingly, this idea has remained consistent with the observed nonet of scalars below 1 GeV ever since. The main unresolved question is to what extent these states are dominantly formed from coloured diquarks and antidiquarks, or are better described as molecular states formed from colour-singlet π and K mesons.

LHCb in particular has shown that it is possible to identify light scalars among the decay debris of heavy-flavoured mesons, offering a new opportunity to investigate their nature and dynamics. Indeed, the kinematic reach of the LHC potentially gives access to a wealth of information about heavy-flavoured mesons in both conventional and exotic combinations. We might therefore hope that knowledge of exotic mesons will be extended into different flavour sectors, helping to identify the source of the binding.

Remarkably robust

In general, the simple qq̄ picture of mesons appears to remain remarkably robust so long as there are no prominent nearby S-wave channels for the pair production of hadrons. “Exotic” mesons and baryons seem to correlate with some S-wave channel that shares quantum numbers with a nominal qq̄ state and causes the appearance of a state near the corresponding S-wave threshold. In some of these cases, but not all, the familiar forces of conventional nuclear physics play a role, and the multi-particle events at the LHC have the kinematic reach to cover all combinations of non-strange, strange, charm and bottom mesons. How many of these can in practice be identified is the challenge, but identifying the dynamics of states “beyond qq̄” may depend on it.

In conclusion, these exotic states need to be studied in different production mechanisms and in a variety of decay channels. A genuine resonant state should appear in different modes, whereas a structure that appears in a single production mechanism and a unique decay channel is suggestive of some dynamical feature that is not truly resonant. While interesting in its own right, such a state is not “exotic” in the sense of hadron spectroscopy.

As for truly exotic states, there are different levels of exoticity. For flavoured hadrons, the least exotic are meson analogues of nuclei: “deusons” driven by pion exchange between pairs of mesons. Next come “hybrids”, states anticipated in QCD in which the gluonic degrees of freedom are excited in the presence of quarks and/or antiquarks. Finally, the most exotic of all would be colour-singlet combinations of compact diquarks, which are allowed in principle by QCD and would lead to a rich spectroscopy. At present their status is like that of the search for extraterrestrial life: while one feels that in the richness of nature such entities must exist, they seem reluctant to reveal themselves.
