The CDF and DØ collaborations at Fermilab have found evidence for the production of a Higgs-like particle decaying into a pair of bottom and antibottom quarks, independent of the recently announced Higgs-search results from the LHC experiments. The result, accepted for publication in Physical Review Letters, will help in determining whether the new particle discovered at the LHC is the long-sought Higgs particle predicted in the Standard Model.
Fermilab’s Tevatron produced proton–antiproton collisions until its shutdown in 2011; the LHC produces proton–proton collisions. In their analyses, the teams at both colliders search for all potential Higgs decay modes to ensure that no Higgs-boson event is missed. The Standard Model does not predict the mass of the Higgs boson, but it does predict that a Standard Model Higgs boson with a mass below 135 GeV favours decaying into a pair of b quarks, whereas a heavier Higgs would decay most often into a pair of W bosons.
The CDF and DØ teams have analysed the full Tevatron data set – accumulated over the past 10 years. Both collaborations developed substantially improved signal and background separation methods to optimize their search for the Higgs boson, with hundreds of scientists from 26 countries actively engaged in the search.
After careful analysis and multiple verifications, on 2 July CDF and DØ announced a substantial excess of events in the data beyond the background expectation in the mass region between 120 GeV and 135 GeV, which is consistent with the predicted signal from a Standard Model Higgs boson. Two days later, the ATLAS and CMS collaborations announced the observation in collisions at the LHC of a new boson with a mass of about 125 GeV.
At both the Tevatron and the LHC, b jets are produced in large amounts, drowning out the signal expected when a Standard Model Higgs boson decays to two b quarks. At the Tevatron, the most successful way to search for a Higgs boson in this final state is to look for those produced in association with a W or Z boson. The small signal and large background require that the analysis include every event that is a candidate for a Higgs produced with a W or Z boson. Furthermore, the analysis must separate the events that are most signal-like from the rest.
In the past two years, the CDF and DØ Higgs-search analysis teams improved the expected Higgs sensitivity of these experiments by almost a factor of two by separating the analysis into multiple search channels, adding acceptance for final decay products and developing innovative ways of improving particle-identification methods. Combined with a Tevatron data set of 10 fb⁻¹, these efforts led to the extraction of about 20 Higgs-like events that are not compatible with background-only predictions. These events are consistent with the production and decay of Higgs bosons at the Tevatron. The signal has a statistical significance of 3.1 σ.
Ultra-high-energy cosmic-ray particles constantly bombard the atmosphere at energies far beyond the reach of the LHC. The Pierre Auger Observatory was constructed with the aim of understanding the nature and characteristics of these particles using precise measurements of cosmic-ray-induced extensive air showers up to the highest energies. These studies allow Auger to measure basic particle interactions, recently in an energy range equivalent to a centre-of-mass energy of √s = 57 TeV.
The structure of an air shower is complex and depends in a critical way on the features of hadronic interactions. Detailed observations of air showers in combination with astrophysical interpretations can provide specific information about particle physics up to √s = 500 TeV. This corresponds to an energy of 10^20 eV for a primary proton in the laboratory system.
The depth in the atmosphere at which a cosmic-ray air shower reaches its maximum size, Xmax, correlates with the atmospheric depth at which the primary cosmic-ray particle interacted. The distribution of the measured Xmax values for the most deeply penetrating showers exhibits an exponential tail, the slope of which can be directly related to the interaction length of the initiating particle. This, in turn, provides the inelastic proton–air cross-section. The proton–proton (pp) cross-section is then inferred using an extended Glauber calculation with parameters derived from accelerator measurements that have been extrapolated to cosmic-ray energies. This Auger analysis is an extension of a method first used in the Fly’s Eye experiment in Utah (Baltrusaitis et al. 1984).
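The logic of the tail-to-cross-section step can be sketched numerically. This is only an illustration: the slope value, the Xmax threshold and the direct identification of the tail slope with the interaction length are simplifying assumptions (the real analysis corrects for shower fluctuations and then uses a Glauber calculation to reach the pp cross-section).

```python
import random

random.seed(1)

# Toy X_max tail: an exponential with slope LAMBDA_TRUE (g/cm^2), mimicking
# the deeply penetrating showers described in the text.  The value 55 g/cm^2
# and the 750 g/cm^2 threshold are illustrative assumptions, not Auger numbers.
LAMBDA_TRUE = 55.0
X_CUT = 750.0
xmax = [X_CUT + random.expovariate(1.0 / LAMBDA_TRUE) for _ in range(20000)]

# For an exponential tail, the maximum-likelihood estimate of the slope is
# simply the mean excess above the threshold.
lam = sum(x - X_CUT for x in xmax) / len(xmax)

# Naive conversion: sigma(p-air) = <m_air> / Lambda, treating the tail slope
# directly as the proton-air interaction length.
M_AIR_G = 14.45 * 1.6605e-24   # mean mass of an "air" nucleus, in grams
sigma_cm2 = M_AIR_G / lam      # Lambda is in g/cm^2, so this is in cm^2
sigma_mb = sigma_cm2 / 1e-27   # 1 mb = 1e-27 cm^2

print(f"fitted slope  = {lam:.1f} g/cm^2")
print(f"sigma(p-air) ~ {sigma_mb:.0f} mb")
```

The conversion step shows why a shallower tail slope (shorter interaction length) corresponds to a larger cross-section.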
The composition of the highest-energy cosmic rays – whether they are protons or heavier nuclei – is not known and the purpose of the Auger analysis is to help in understanding it. The analysis targets the most deeply penetrating showers and so is rather insensitive to the nuclear mix: as long as there are at least some primary protons, it is their cross-section that is measured. Moreover, to minimize systematic uncertainties, the Pierre Auger collaboration has chosen the cosmic-ray energy range of 10^18–10^18.4 eV (√s ≈ 57 TeV), in which protons appear to make a significant contribution to the overall flux. The largest uncertainty arises from a possible helium contamination, which would tend to yield too large a proton inelastic cross-section.
The figure shows the experimental result, which is to be published in Physical Review Letters (Abreu et al. 2012). It confirms the cross-section extrapolations implemented in interaction models that predict a moderate growth of the cross-section beyond LHC energies and is in agreement with the ln²(s) rise of the cross-section expected from the Froissart bound.
The studies of central heavy-ion collisions at the LHC by the ALICE, ATLAS and CMS experiments show that partons traversing the produced hot and dense medium lose a significant fraction of their energy. At the same time, the structure of the jet from the quenched remnant parton is essentially unmodified. The radiated energy reappears mainly at low and intermediate transverse momentum, pT, and at large angles with respect to the centre of the jet cone. The ALICE collaboration has studied this pT region in PbPb collisions at a centre-of-mass energy √sNN = 2.76 TeV by using two-particle angular correlations, with some interesting results.
In the analysis, the associated particles are counted as a function of their difference in azimuth (Δφ) and pseudorapidity (Δη) with respect to a trigger particle in bins of trigger transverse momentum, pT,trig, and associated transverse momentum, pT,assoc. With the aim of studying potential modifications of the near-side peak, correlations independent of Δη are subtracted by an η-gap method: the correlation found in 1 < |Δη| < 1.6 (as a function of Δφ) is subtracted from the region in |Δη| < 1. Figure 1 shows an example in one pT bin: only the near-side peak remains, while by construction the away-side (not shown) is flat.
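The η-gap subtraction can be illustrated with a toy model, assuming a Gaussian near-side peak confined to small |Δη| on top of a Δη-independent cos(2Δφ) "flow" modulation; all of the numbers below are hypothetical, chosen only to show how the Δη-independent component cancels.

```python
import math

# Toy per-trigger yield C(dphi, deta): a Gaussian near-side peak that exists
# only at small |deta|, plus a deta-independent flow-like modulation.
def corr(dphi, deta):
    peak = 0.8 * math.exp(-(dphi**2 + deta**2) / (2 * 0.4**2))
    flow = 1.0 + 0.2 * math.cos(2 * dphi)
    return peak + flow

dphis = [-math.pi + (i + 0.5) * 2 * math.pi / 72 for i in range(72)]

def avg_over_eta(dphi, lo, hi, n=50):
    """Average C over lo < |deta| < hi (both signs, midpoint rule)."""
    etas = [lo + (i + 0.5) * (hi - lo) / n for i in range(n)]
    return sum(corr(dphi, e) + corr(dphi, -e) for e in etas) / (2 * n)

# eta-gap method: subtract the large-|deta| (1 < |deta| < 1.6) dphi
# distribution from the small-|deta| (|deta| < 1) one.
signal = [avg_over_eta(p, 0.0, 1.0) - avg_over_eta(p, 1.0, 1.6) for p in dphis]

near = max(signal)   # near-side peak at dphi ~ 0 survives
away = signal[0]     # away-side region at dphi ~ -pi is flat by construction
print(f"near-side residual = {near:.3f}, away-side residual = {away:.3f}")
```

Because the flow term does not depend on Δη, it drops out of the difference exactly, leaving only the near-side peak.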
ALICE studies the shape of the near-side peak by extracting both its rms value (which is a standard deviation, σ, for a distribution centred at zero) in the Δη and Δφ directions and the excess kurtosis (a statistical measure of the “peakedness” of a distribution). The near-side peak shows an interesting evolution towards central collisions: it becomes eccentric.
Figure 2 presents the rms as a function of centrality in PbPb collisions as well as the one for pp collisions (shown at a centrality of 100). Towards central collisions the σ in Δη (lines) increases significantly, while the σ in Δφ (data points) remains constant within uncertainties. This is found for all of the pT bins studied, from 1 < pT,assoc < 2 GeV/c, 2 < pT,trig < 3 GeV/c to 2 < pT,assoc < 3 GeV/c, 4 < pT,trig < 8 GeV/c (Grosse-Oetringhaus 2012).
The observed behaviour is qualitatively consistent with a picture where longitudinal flow distorts the jet shape in the η-direction (Armesto et al. 2004). The extracted rms and also the kurtosis (not shown here) are quantitatively consistent (within 20%) with Monte Carlo simulations using the AMPT (A Multi-Phase Transport) model (Lin et al. 2005). This Monte Carlo correctly reproduces collective effects such as “flow” at the LHC, which in the model stem from parton–parton and hadron–hadron rescattering.
This observation suggests an interplay of the jet with the flowing bulk in central heavy-ion collisions at the LHC. The further study of the low and intermediate pT region is a promising field for the understanding of jet quenching at the LHC, which in turn is a valuable probe of the fundamental properties of quark–gluon plasma.
The XENON collaboration has announced the result of analysis of data taken with the XENON100 detector during 13 months of operation at INFN’s Gran Sasso National Laboratory. It provides no evidence for the existence of weakly interacting massive particles (WIMPs), the leading candidates for dark matter. The two events observed are statistically consistent with one expected event from background radiation. Compared with their previous result from 2011, the sensitivity has again been improved by a factor of 3.5. This constrains models of new physics with WIMP candidates even further and it helps to target future WIMP searches.
XENON100 is an ultrasensitive device. It uses 62 kg of ultrapure liquid xenon as a WIMP target and simultaneously measures ionization and scintillation signals that are expected from rare collisions between WIMPs and the nuclei of xenon atoms. The detector is operated deep underground at the Gran Sasso National Laboratory, to shield it from cosmic rays. To avoid false events occurring from residual radiation from the detector’s surroundings, only data from the inner 34 kg of liquid xenon are taken as candidate events. In addition, the detector is shielded by specially designed layers of copper, polyethylene, lead and water to reduce the background noise even further.
In 2011, the XENON100 collaboration published results from 100 days of data-taking. The achieved sensitivity already pushed the limits for WIMPs by a factor of 5 to 10 compared with results from the earlier XENON10 experiment. During the new run, a total of 225 live days of data were accumulated in 2011 and 2012, with lower background and hence improved sensitivity. Again, no signal was found.
The two events observed are statistically consistent with the expected background of one event. The new data improve the upper bound on the elastic-scattering cross-section to 2.0 × 10⁻⁴⁵ cm² for a WIMP with a mass of 50 GeV. This is a further factor of 3.5 compared with the earlier results and cuts significantly into the expected WIMP parameter region. Measurements are continuing with XENON100 and a still more sensitive tonne-scale experiment, XENON1T, is currently under construction.
The XENON collaboration consists of scientists from 15 institutions in China, France, Germany, Israel, Italy, the Netherlands, Portugal, Switzerland and the US.
MoEDAL, the “magnificent seventh” LHC experiment, held its first Physics Workshop in CERN’s Globe of Science and Innovation on 20 June. This youngest LHC experiment is designed to search for the appearance of new physics signified by highly ionizing particles such as magnetic monopoles and massive long-lived electrically charged particles from a number of theoretical scenarios.
Philippe Bloch of CERN commenced the meeting, stressing CERN’s support for the MoEDAL programme. He spoke of the key role that smaller, well motivated “high-risk” experiments such as MoEDAL play in expanding the physics reach of the LHC and reminded the audience that “one cannot predict with certainty where the next discovery will be made”.
Nobel laureate Gerard ’t Hooft began the morning’s theory talks with a reprise of his work on the monopole in grand unified theories (GUTs), elegantly showing how the beautiful monopole mathematics plays an important role in QCD and other fundamental theories. Arttu Rajantie of Imperial College London deftly recounted the story of “Monopoles from the Cosmos and the LHC”, concentrating on more recent theoretical scenarios, such as that of the electroweak “Cho-Maison” monopole, which are detectable at the LHC because they involve particles that are much lighter than the GUT monopole, with masses of around 1 TeV/c².
John Ellis and Nikolaos Mavromatos of King’s College London then changed the emphasis from magnetic to electric charge. Ellis described supersymmetry (SUSY) scenarios with massive stable particles (MSPs), such as sleptons, stops, gluinos and R-hadrons, which should be observable by MoEDAL. Mavromatos characterized the numerous non-SUSY scenarios that could give rise to MSPs, such as D-particles, Q-balls, quirks, doubly charged Higgs etc., all of which MoEDAL could detect.
In the afternoon, Albert de Roeck of CERN and Philippe Mermod of the University of Geneva laid out the significant progress made by CMS and ATLAS, respectively, in the quest for new physics revealed by highly ionizing particles. James Pinfold, of the University of Alberta and MoEDAL spokesperson, made the physics case for MoEDAL. He pointed out how its often-superior sensitivity to monopoles and massive slowly moving charged particles expanded the physics reach of the LHC in a complementary way. The MoEDAL collaboration, with 18 institutes from 10 countries, is still a “David” compared with the LHC “Goliaths” but its potential physics impact is second to none.
No workshop dealing with magnetic monopoles would be complete without an account of the search for cosmic monopoles. The two main experiments in this arena – MACRO, installed underground at the Gran Sasso National Laboratory in Italy, and the SLIM experiment, at the high-altitude Mount Chacaltaya Laboratory in Bolivia – were presented by Zouleikha Sahnoun of the SLIM collaboration. These two experiments still have the world’s best limits for GUT and intermediate-mass monopoles. Returning to Earth, David Milstead of Stockholm University described a project to search for trapped monopoles at the LHC. Importantly, this initiative is complementary to that of both MoEDAL and the main LHC experiments.
Why has the monopole not been seen in previous searches at accelerators? Vicente Vento of the University of Valencia offered an ingenious explanation: monopoles are hiding in monopolium, a bound state of a monopole and an antimonopole – a suggestion that Paul Dirac made in his 1931 paper. Vento went on to describe a couple of ways in which MoEDAL might detect monopolium.
In the last talk of the workshop, John Swain of Northeastern University presented the remarkable speculation that at the LHC the neutral Higgs boson could predominantly decay into a nucleus–antinucleus pair. He sketched, and nimbly defended, a theoretical justification for this surprising suggestion. Certainly, such a decay mode would be easily detectable by MoEDAL.
The clear message of the workshop is that MoEDAL has a potentially revolutionary physics programme aimed exclusively at the search for new physics, with the minimum of theoretical prejudices and the maximum exploitation of experimental search techniques. After all, in the words of J B S Haldane: “… the universe is not only queerer than we suppose, but queerer than we can suppose.”
When atomic nuclei collide at high energies, they are expected to “melt” into a quark–gluon plasma (QGP) – a hot and dense medium made out of partons (quarks and gluons). At the LHC, many of the observed properties of the produced matter are consistent with this picture, similar to earlier findings by experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and at CERN’s Super Proton Synchrotron. The quantitative characterization of this medium is still far from complete, but with more than an order of magnitude increase in the collision energy, the LHC is providing a tremendous opportunity to extend the studies. In particular, the higher energy collisions create much greater abundances of rare probes of the hot matter – such as jets (groups of high transverse-momentum (pT) particles emitted within a narrow cone), or bound states of heavy quark–antiquark pairs.
With its flawless performance in the heavy-ion run, the LHC exceeded the projected luminosity for lead–lead (PbPb) collisions in 2011, allowing the CMS experiment to record an integrated luminosity of 150 μb⁻¹ at a nucleon–nucleon centre-of-mass energy of √sNN = 2.76 TeV, about 20 times more than in 2010. This new data set gives the CMS collaboration the opportunity to perform a detailed investigation of the medium using probes that are available for the first time in heavy-ion collisions, in a physics programme that partially overlaps but largely complements and extends the range of heavy-ion research conducted by the ALICE and ATLAS collaborations at the LHC. This article describes some of the heavy-ion results that CMS has obtained so far, with an emphasis on unique findings from the high-luminosity data.
The CMS heavy-ion programme is multifaceted, based on the diverse capabilities of the CMS detector and the broad interests and expertise of the members of the collaboration. The key to its success lies in the careful planning, support, expertise and hard work of the entire CMS collaboration. A well optimized triggering strategy with robust algorithms was in place for the 2011 run, allowing CMS to take maximum advantage of the delivered luminosity. A detailed inspection of each heavy-ion event was performed by the level-1 and high-level trigger systems, and the most interesting events containing rare signals were written to tape.
Properties of the bulk medium
Using the 2010 data, the LHC experiments were able to characterize the bulk properties of the partonic medium. The CMS collaboration performed detailed studies of soft-particle production by measuring the charged-particle multiplicity, transverse energy flow, azimuthal asymmetry in charged-particle and neutral-pion production, and two-particle correlations. The number of produced particles changes by orders of magnitude depending on whether the collision is “head-on” (central) or peripheral. The centrality of the PbPb collisions is characterized by the energy deposited in the forward calorimeters of the CMS detector, covering small polar angles with respect to the beamline (i.e. the pseudorapidity interval 3 < |η| < 5.2), with the most central collisions leaving the largest amount of energy in the detector. The events are then categorized based on this energy into percentile intervals of the total inelastic hadronic PbPb cross-section (the 0–20% centrality class meaning the 20% most central collisions, etc). Quantitatively, the centrality is usually characterized by the number of nucleons participating in the actual collision (i.e. those in the overlap zone of the two nuclei) denoted by Npart.
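The percentile classification described above can be sketched in a few lines. The exponential toy distribution for the forward-calorimeter energy is purely illustrative; a real calibration uses the measured minimum-bias distribution.

```python
import random

random.seed(2)

# Toy forward-calorimeter energy sums for a sample of events (arbitrary units).
energies = [random.expovariate(1 / 500.0) for _ in range(10000)]

# The most central collisions deposit the most forward energy, so the 0-20%
# centrality class is the top 20% of the energy distribution.
cut_20 = sorted(energies, reverse=True)[int(0.20 * len(energies)) - 1]

def centrality_class(e_forward):
    """Return '0-20%' for the 20% highest-energy (most central) events."""
    return "0-20%" if e_forward >= cut_20 else "20-100%"

frac = sum(1 for e in energies if centrality_class(e) == "0-20%") / len(energies)
print(f"fraction of events in the 0-20% class: {frac:.3f}")
```

By construction the selected fraction matches the percentile boundary; finer classes (0–5%, 5–10%, …) follow the same recipe with more cuts.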
As experiments at RHIC had previously observed, the hot matter produced at the LHC exhibits strong collective-flow behaviour. In off-centre collisions the initial nuclear overlap zone is spatially asymmetrical with an approximately ellipsoidal shape. This asymmetry leads to instantaneous pressure gradients that are more effective in pushing particles out from the collision zone along the minor axis of the ellipse, rather than perpendicular to it. As a result, the matter produced in the collision undergoes anisotropic expansion, which is observed as a collective flow of particles with distinct azimuthal asymmetry. A Fourier analysis of the azimuthal angular distribution of the final-state particles reveals important aspects of the collision dynamics, and provides constraints to the equation of state and the viscosity (resistance to flow) of the medium.
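A minimal sketch of such a Fourier analysis follows, assuming for simplicity that the event-plane angle Ψ is known; the elliptic-flow coefficient v2 = 0.08 and Ψ = 0.5 are illustrative values, not measurements.

```python
import math
import random

random.seed(3)

# Toy azimuthal distribution with elliptic flow:
#   dN/dphi ~ 1 + 2*v2*cos(2*(phi - Psi))
V2_TRUE, PSI = 0.08, 0.5

def sample_phi():
    # Accept-reject sampling of the modulated distribution.
    while True:
        phi = random.uniform(-math.pi, math.pi)
        bound = 1 + 2 * V2_TRUE
        if random.uniform(0, bound) < 1 + 2 * V2_TRUE * math.cos(2 * (phi - PSI)):
            return phi

phis = [sample_phi() for _ in range(50000)]

# With a known event plane, the second Fourier coefficient is the average
# of cos(2*(phi - Psi)) over the emitted particles.
v2 = sum(math.cos(2 * (p - PSI)) for p in phis) / len(phis)
print(f"extracted v2 = {v2:.3f}")
```

In a real analysis the event-plane angle must itself be estimated from the data, which introduces a resolution correction omitted here.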
For head-on PbPb collisions, CMS estimates the energy density to be about 14 GeV/fm³ at a time of 1 fm/c after the collision, which is about 100 times larger than the density of normal nuclear matter and 2.6 times greater than that obtained at the highest RHIC energy. A significant increase of the mean transverse energy per particle is similarly observed. Despite this increase, the trends in the collective flow and correlation measurements show relatively modest changes compared with RHIC, indicating that the general properties of the matter produced at the LHC, as observed through the study of soft particles, are consistent with a strongly interacting partonic medium.
Jet quenching
A key diagnostic tool that provides information about the density and composition of the medium produced in high-energy heavy-ion collisions comes from the measurements of high transverse-momentum jets. These “hard probes” result from relatively rare violent scatterings of the quarks and gluons that comprise the incoming nuclei. Since the production cross-sections of these energetic partons are calculable using the well established techniques of perturbative QCD, they have long been recognized as particularly useful “tomographic” probes of the hot medium.
The majority of the produced jets originate in the scattering of gluons or light quarks (up, down or strange), which are expected to lose energy while propagating through the medium. Less frequently the outgoing parton is a heavy charm or bottom quark that may also interact – although possibly less strongly – with the medium. Of particular interest are the events that produce hard-scattering probes that do not interact strongly, such as prompt photons or weak bosons, as they provide precise constraints on the energy of the recoiling parton and enable a controlled measurement of the parton energy loss in the medium. Multiple complementary measurements involving different probes can be performed using CMS, because of the detector’s high resolution, granularity, large acceptance, high-rate read-out capability and triggering.
The enormous energy loss of the partons propagating through the hot and dense medium became immediately apparent in the online event displays of the first PbPb collisions in the LHC, which revealed strikingly unbalanced dijet events and photon–jet events (figure 1). Subsequently, both ATLAS and CMS published detailed studies of the dijet transverse-momentum asymmetry. CMS expanded on this initial observation with a comprehensive set of measurements aiming not only to quantify the amount of lost energy, but also to answer the question: “Where does the lost energy go?”
The data from jet-track correlations indicate that the large energy lost by the partons is transferred to soft hadrons, which are scattered relatively far away in rapidity from the jet axis. To investigate the possible modifications of the jet structure, measurements of the jet shapes and fragmentation functions are also pursued. The high-luminosity data set collected in 2011 allows for further characterization of the dijet momentum imbalance, by studying jets up to unprecedented values of transverse momentum. CMS has recently published a paper on this dijet momentum imbalance, which is found to persist in central collisions up to the highest values of leading-jet transverse momenta studied – even the most energetic jets do not escape the medium unaltered.
Further tests of the jet-quenching hypothesis use control measurements involving probes that do not interact strongly, such as photons, Z and W bosons. The transverse-momentum spectra of charged hadrons and isolated photons are compared with their equivalents in pp collisions. Figure 2 shows the suppression factor, RAA, of the production rates of high transverse-momentum particles, scaled to be unity if nuclear collisions are a simple superposition of pp collisions. As expected, a strong suppression is observed for charged particles (RAA < 1), but the yields of electroweak probes appear unaffected by the medium (RAA ≈ 1). The measurement of b decays to J/ψ particles shows clearly that b quarks are also strongly suppressed.
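The construction of RAA can be made concrete with a small arithmetic sketch. Both the per-event yields and the number of binary nucleon–nucleon collisions below are invented for illustration; they are not CMS measurements.

```python
# R_AA compares the PbPb yield with the pp yield scaled by the average
# number of binary nucleon-nucleon collisions, N_coll, for the chosen
# centrality class.  All numbers here are hypothetical.
n_coll = 1660.0        # average N_coll for some central class (assumed)
yield_pbpb = 3.2e-6    # per-event yield at some pT in PbPb (assumed)
yield_pp = 1.1e-8      # per-event yield at the same pT in pp (assumed)

raa = yield_pbpb / (n_coll * yield_pp)
print(f"R_AA = {raa:.2f}")
```

A value below unity, as here, signals suppression relative to a simple superposition of pp collisions; RAA ≈ 1, as seen for photons and weak bosons, signals no medium effect.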
Having seen that isolated photons do not suffer suppression in the medium, CMS took the study to the next level using the high-luminosity data from 2011. The first measurement of a photon–jet imbalance was performed by examining events containing an isolated photon (γ) with pT > 60 GeV/c and an associated jet with pT > 30 GeV/c. The transverse momenta of the jet and the photon are compared by forming the ratio xjγ = pT(jet)/pT(γ). Figure 3 shows the centrality dependence of the average momentum imbalance, as well as the fraction of isolated photons with an associated jet partner Rjγ. The measurements in PbPb collisions are compared with those in pp collisions at the same energy and simulations that do not include the jet-quenching effect. A significant decrease in <xjγ> and Rjγ compared with the simulation is observed for more central PbPb collisions, indicating a larger parton energy loss in the collisions where the volume of the medium is larger.
While the jet-quenching phenomenon is undoubtedly established from the data, a complete theoretical understanding of the underlying parton energy-loss mechanism is still lacking. The data sample with 150 μb⁻¹ of integrated PbPb luminosity allows the study of the azimuthal anisotropy of charged-particle production up to high pT, providing additional information on the path-length dependence of the in-medium parton energy loss. Since the initial nuclear overlap zone for off-centre collisions is azimuthally asymmetrical with an approximately ellipsoidal shape, partons propagating in the direction of the minor axis of the ellipse are expected to lose less energy than those propagating along the major axis. This leads to a final particle distribution (at any given transverse momentum) that is not cylindrically symmetrical, but has a cosine-shaped modulation as a function of azimuthal angle (that is, the rotation around the beamline). Figure 4 shows the half-amplitude of this cosine modulation at different transverse momenta and collision centralities. A nonzero elliptic anisotropy is observed even at high pT (up to pT ≈ 40 GeV/c), where most charged particles originate from the fragmentation of jets. These measurements are thus indirectly related to the amount of energy loss (and its dependence on the path length) of energetic partons inside the hot QCD medium.
Quarkonium suppression
The ultimate proof for the formation of QGP in heavy-ion collisions would be a measurement that demonstrates the presence of deconfined quarks and gluons. In the plasma state, the quark and gluon colour charges would be neutralized (or screened), similarly to the Debye screening of the electric charges of electrons and ions in an electromagnetic plasma. The colour-charge screening can be studied experimentally through the measurement of quarkonia, which consist of bound heavy quark–antiquark pairs (charm or beauty). In the QGP, the attractive force binding the pair together would be reduced, hindering the formation of the quarkonium states. Thus, observation of suppression in the production rate of these particles in comparison with the production rate in pp collisions is a signature of deconfinement, although other processes may obscure the effect.
CMS has excellent capabilities for muon detection and has measured the production rates of several particles (J/ψ, ψ(2S), ϒ(1S,2S,3S)) that have different radii and probe colour screening at different distance scales. The various quarkonium states are expected to “melt” in the QGP at different temperatures, corresponding to their respective binding energies. The measurement of the suppression pattern of several of these particles is thus needed to constrain the initial temperature in the collision and to demonstrate deconfinement.
The suppression of the excited states of the ϒ family was already seen in the 2010 data, albeit with limited statistical precision. With the high-luminosity data from 2011 the effect has been confirmed and studied in much more detail. Figure 5 shows the dimuon invariant-mass distribution obtained in PbPb collisions compared with the distribution measured in pp collisions, and clearly reveals the strong suppression of the excited ϒ states. To quantify the effect, the yields of the ϒ(2S) or ϒ(3S) states are compared with the yield of the ϒ(1S) state in PbPb and pp collisions by forming a double ratio: [ϒ(nS)/ϒ(1S)]PbPb/[ϒ(nS)/ϒ(1S)]pp. The values of these double ratios are determined to be 0.21 ± 0.07 (stat.) ± 0.02 (syst.) for n = 2 and less than 0.17 at 95% confidence level for n = 3. The individual ϒ(1S) and ϒ(2S) states are suppressed, compared with pp collisions, by factors of about 2 and 10, respectively. CMS thus finds the expected melting pattern, with the suppression being ordered according to the binding energy of the respective quarkonium states. The ψ(2S) state at high pT also falls into this picture, although a hint of an intriguing opposite trend was recently observed at low pT, but with limited statistical significance. More data are needed to confirm this observation, in particular a larger pp reference data set at the matching energy of 2.76 TeV.
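The double ratio itself is simple arithmetic, as the following sketch shows; the raw yields are invented numbers, chosen only so that the result lands near the quoted central value of 0.21.

```python
# Hypothetical raw dimuon yields, for illustration only (not CMS numbers).
y1s_pbpb, y2s_pbpb = 800.0, 44.0     # Y(1S), Y(2S) counts in PbPb
y1s_pp, y2s_pp = 2000.0, 520.0       # Y(1S), Y(2S) counts in pp

# Double ratio: [Y(2S)/Y(1S)]_PbPb / [Y(2S)/Y(1S)]_pp.
# Many detector effects (acceptance, efficiency) cancel in this ratio,
# which is why it is the preferred observable.
ratio_pbpb = y2s_pbpb / y1s_pbpb
ratio_pp = y2s_pp / y1s_pp
double_ratio = ratio_pbpb / ratio_pp
print(f"[Y(2S)/Y(1S)]_PbPb / [Y(2S)/Y(1S)]_pp = {double_ratio:.2f}")
```

A double ratio well below 1 means the excited state is suppressed more strongly than the ground state, as expected from its weaker binding.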
By July 2012, the CMS heavy-ion programme had produced 16 papers submitted to refereed journals, of which 11 have already been published. More analyses are under way, and the collaboration is also preparing for the upcoming proton–lead collisions in the LHC, which will serve as a reference for normal nuclear effects. Such a control measurement, together with short, high-luminosity pp runs at 2.76 TeV and 5 TeV (to match the proton–lead collision energy) requested by CMS will complete the first round of pioneering investigations into the extremes of strongly interacting matter. With these data, the first steps of a journey back to the state of the universe just a few microseconds after the Big Bang are being taken at the LHC, and the CMS experiment is one of the “time-machines” making this exciting journey possible.
Under extreme conditions of temperature and/or density, hadronic matter “melts” into a plasma of free quarks and gluons – the so-called quark–gluon plasma (QGP). To create these conditions in the laboratory, heavy ions (e.g. lead nuclei) are accelerated and made to collide head on, as was done at the LHC for two dedicated periods in 2010 and 2011. A key design consideration of the ALICE experiment at the LHC is the ability to study QCD and quark (de)confinement under these extreme conditions. This is done by using particles – created inside the hot volume as it expands and cools down – that live long enough to reach the sensitive detector layers located around the interaction region. The physics programme at ALICE relies on being able to identify all of them – i.e. to determine if they are electrons, photons, pions, etc – and to determine their charge. This involves making the most of the (sometimes slightly) different ways that particles interact with matter. This article gives an overview of the methods used for particle identification (PID) and their implementations in ALICE and describes how new technologies were used to push the state of the art.
Penetrating muons
Muons can be identified by the fact that they are the only charged particles able to pass almost undisturbed through any material. This is because muons with momenta below a few hundred GeV/c do not suffer from radiative energy losses and so do not produce electromagnetic showers. Also, being leptons, they are not subject to strong interactions with the nuclei of the material that they traverse. This behaviour is exploited in muon spectrometers in high-energy-physics experiments by installing muon detectors either behind the calorimeter systems or behind thick absorber materials. All other charged particles are completely stopped, producing electromagnetic (and hadronic) showers.
The muon spectrometer in the forward region of ALICE features a thick, complex front absorber and an additional muon filter comprising an iron wall 1.2 m thick. Muon candidates selected from tracks penetrating these absorbers are measured precisely in a dedicated set of tracking detectors. Pairs of muons are used to observe the full spectrum of heavy-quark vector-meson resonances (J/ψ, …). Their production rates can be analysed as a function of transverse momentum and collision centrality to investigate dissociation arising from colour screening. In addition, muons from the semileptonic decay of open charm and open beauty can also be studied with the muon spectrometer.
Weighing particles
Hadron identification can be crucial for heavy-ion physics. Examples are open charm and open beauty, which allow the investigation of the mechanisms for the production, propagation and hadronization of heavy quarks in the hot and dense medium formed in the heavy-ion collisions. The most promising channel is the process D0 → K– π+, which requires efficient hadron identification owing to the small signal-to-background ratio.
Charged hadrons (in fact, all stable charged particles) are unambiguously identified if their mass and charge are determined. The mass can be deduced from measurements of the momentum and of the velocity. Momentum and the sign of the charge are obtained by measuring the curvature of the particle’s track in a magnetic field. To obtain the particle velocity there are four methods based on measurements of time-of-flight (TOF) and ionization, and on the detection of transition radiation (TR) and Cherenkov radiation. Each method works well in different momentum ranges or for specific types of particle. They are combined in ALICE to measure, for instance, particle spectra. Figure 1, for example, shows the abundance of pions in lead–lead (PbPb) collisions as a function of transverse momentum and collision centrality.
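The kinematics behind this is compact: in natural units (c = 1), the measured momentum and velocity determine the mass directly:

```latex
p = \gamma m \beta, \qquad \gamma = \frac{1}{\sqrt{1-\beta^2}}
\quad\Longrightarrow\quad
m = \frac{p}{\gamma\beta} = p\,\sqrt{\frac{1}{\beta^2} - 1}
```

So any of the four velocity measurements, combined with the momentum from the track curvature, closes the system and yields the mass.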
Kicking electrons from atoms
The characteristics of the ionization process caused by fast, charged particles passing through a medium can be used for PID. The velocity dependence of the ionization strength is described by the Bethe–Bloch formula, which gives the average energy loss of charged particles through inelastic Coulomb collisions with the atomic electrons of the medium. Multiwire proportional counters (MWPCs) or solid-state counters are often used as the detection medium because they provide signals with pulse heights proportional to the ionization strength. Because energy-loss fluctuations can be considerable, many pulse-height measurements are generally performed along the particle track to optimize the resolution of the ionization measurement.
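For reference, the formula in its standard form (as given by the Particle Data Group) reads:

```latex
-\left\langle \frac{dE}{dx} \right\rangle
= K z^2 \frac{Z}{A}\,\frac{1}{\beta^2}
\left[ \frac{1}{2}\ln\frac{2 m_e c^2 \beta^2 \gamma^2 T_{\max}}{I^2}
- \beta^2 - \frac{\delta(\beta\gamma)}{2} \right]
```

where z is the charge of the incident particle, Z and A characterize the medium, I is its mean excitation energy, T_max is the maximum energy transfer in a single collision and δ is the density-effect correction. The steep 1/β² rise at low velocity is what makes ionization such a powerful discriminator for slow particles.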
In ALICE this technique is used for PID in the large time-projection chamber (TPC) and in four layers of the silicon inner tracking system (ITS). A TPC is a large volume filled with a gas as the detection medium. Almost all of this volume is sensitive to the traversing charged particles but it features a minimum material budget. The straightforward pattern recognition (continuous tracks) makes TPCs the perfect choice for high-multiplicity environments, such as in heavy-ion collisions, where thousands of particles have to be tracked simultaneously. Inside the ALICE TPC, the ionization strength of all tracks is sampled up to 159 times, resulting in a resolution of the ionization measurement as good as 5%. Figure 2 shows the TPC ionization signal as a function of the particle rigidity for negative particles, indicating the different characteristic bands for various types of particle. A particle is identified when the corresponding point in the diagram can be associated with only one such band within the measurement errors. The method works well, especially for particles with low momenta up to several hundred MeV/c.
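A standard way to combine many such samples while suppressing the long Landau tail of the energy-loss distribution is a truncated mean, which keeps only the lowest fraction of the measurements. A minimal sketch (the 60% cut fraction is illustrative, not the exact ALICE value):

```python
def truncated_mean(samples, keep=0.6):
    """Estimate the ionization strength from many dE/dx samples.

    Sorting and keeping only the lowest `keep` fraction discards the
    long Landau tail, which would otherwise dominate a plain average.
    """
    ordered = sorted(samples)
    k = max(1, int(len(ordered) * keep))
    return sum(ordered[:k]) / k

# A handful of well-behaved samples plus one large Landau-tail outlier:
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 6.0]
# The truncated mean stays close to 1, while a plain mean is pulled up to ~1.7.
estimate = truncated_mean(signal)
```

With 159 samples per track, as in the ALICE TPC, the same idea turns highly fluctuating individual measurements into a few-per-cent estimate of the ionization strength.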
TOF measurements yield the velocity of a charged particle by measuring the flight time over a given distance along the track trajectory. Provided the momentum is also known, the mass of the particle can then be derived from these measurements. The ALICE TOF detector is a large-area detector based on multigap resistive plate chambers (MRPCs) that cover a cylindrical surface of 141 m2, with an inner radius of 3.7 m. The MRPCs are parallel-plate detectors built of thin sheets of standard window glass to create narrow gas gaps with high electric fields. These plates are separated using fishing lines to provide the desired spacing; 10 gas gaps per MRPC are needed to arrive at a detection efficiency close to 100%.
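As a numerical illustration of the method (the numbers below are illustrative, using the TOF inner radius as a stand-in for the flight path):

```python
import math

C = 0.299792458  # speed of light in m/ns

def tof_mass(p_gev, length_m, time_ns):
    """Mass (GeV/c^2) from momentum, flight path and flight time.

    beta = L / (c t);  m = p * sqrt(1/beta^2 - 1)  (natural units, c = 1).
    """
    beta = length_m / (C * time_ns)
    if beta >= 1.0:
        raise ValueError("unphysical: beta >= 1")
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# At p = 1 GeV/c over 3.7 m, a pion arrives after ~12.46 ns and a kaon
# after ~13.76 ns -- a 1.3 ns gap, comfortably above an 80 ps resolution.
m_pi = tof_mass(1.0, 3.7, 12.46)  # close to the pion mass, ~0.14 GeV
m_k = tof_mass(1.0, 3.7, 13.76)   # close to the kaon mass, ~0.49 GeV
```

The ~1 ns separation between species at these momenta is what makes an 80 ps timing resolution sufficient for clean identification up to a few GeV/c.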
The simplicity of the construction allows a large system to be built with an overall TOF resolution of 80 ps at a relatively low cost. This performance allows the separation of kaons, pions and protons up to momenta of a few GeV/c. Combining such a measurement with the PID information from the ALICE TPC has proved useful in improving the separation between the different particle types, as figure 3 shows for a particular momentum range.
Detecting additional photons
The identification of electrons and positrons in ALICE is achieved using a transition radiation detector (TRD). In a similar manner to the muon spectrometer, this system enables detailed studies of the production of vector-meson resonances, but with extended coverage down to the light vector-meson ρ and in a different rapidity region. Below 1 GeV/c, electrons can be identified via a combination of PID measurements in the TPC and TOF. In the momentum range 1–10 GeV/c, the fact that electrons may create TR when travelling through a dedicated “radiator” can be exploited. Inside such a radiator, fast charged particles cross the boundaries between materials with different dielectric constants, which can lead to the emission of TR photons with energies in the X-ray range. The effect is tiny and the radiator has to provide many hundreds of material boundaries to achieve a high enough probability to produce at least one photon. In the ALICE TRD, the TR photons are detected just behind the radiator using MWPCs filled with a xenon-based gas mixture, where they deposit their energy on top of the ionization signals from the particle’s track.
The ALICE TRD was designed to derive a fast trigger for charged particles with high momentum and can significantly enhance the recorded yields of vector mesons. For this purpose, 250,000 CPUs are installed right on the detector to identify candidates for high-momentum tracks and analyse the energy deposition associated with them as quickly as possible (while the signals are still being created in the detector). This information is sent to a global tracking unit, which combines all of the information to search for electron–positron track pairs within only 6 μs.
Measuring an angle
Cherenkov radiation is a shock wave resulting from charged particles moving through a material faster than the velocity of light in that material. The radiation propagates with a characteristic angle with respect to the particle track, which depends on the particle velocity. Cherenkov detectors make use of this effect and in general consist of two main elements: a radiator in which Cherenkov radiation is produced and a photon detector. Ring-imaging Cherenkov (RICH) detectors resolve the ring-shaped image of the focused Cherenkov radiation, enabling a measurement of the Cherenkov angle and thus the particle velocity. This, in turn, is sufficient to determine the mass of the charged particle.
If a dense medium (large refractive index) is used, only a thin radiator layer of a few centimetres is required to emit a sufficient number of Cherenkov photons. The photon detector is then located at some distance (usually about 10 cm) behind the radiator, allowing the cone of light to expand and form the characteristic ring-shaped image. Such a proximity-focusing RICH is installed in the ALICE experiment. The High-Momentum Particle IDentification (HMPID) detector is a single-arm array with a reduced geometrical acceptance. Like the ALICE TOF, it can identify individual charged hadrons up to momenta of a few GeV/c, but with slightly higher precision.
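In code, the reconstruction chain is one line of kinematics: cos θc = 1/(nβ) gives the velocity, which together with the momentum gives the mass. A sketch (n = 1.29 is roughly the refractive index of a liquid radiator such as the HMPID uses; the angles below are illustrative):

```python
import math

def cherenkov_mass(p_gev, n, theta_c):
    """Mass (GeV/c^2) from momentum and measured Cherenkov angle.

    cos(theta_c) = 1 / (n * beta)  ->  beta, then m = p * sqrt(1/beta^2 - 1).
    """
    beta = 1.0 / (n * math.cos(theta_c))
    if beta >= 1.0:
        raise ValueError("unphysical: beta >= 1")
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# At p = 2 GeV/c in a radiator with n = 1.29, a pion emits at ~0.681 rad
# and a kaon at a visibly smaller angle, ~0.644 rad.
m_pi = cherenkov_mass(2.0, 1.29, 0.6808)  # near the pion mass
m_k = cherenkov_mass(2.0, 1.29, 0.644)    # near the kaon mass
```

The mass is quite sensitive to the measured angle, which is why resolving the ring image precisely matters so much.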
Completing the picture
The ALICE detector also contains other components that can identify particles. A high-resolution electromagnetic calorimeter, the PHOS, which covers a limited acceptance domain at central rapidity, provides data to test the thermal and dynamical properties of the initial phase of the collision by measuring photons emerging directly from the collision. Finally, a pre-shower detector, the PMD, studies the multiplicity and spatial distribution of such photons in the forward region.
Each method described in this article provides a different piece of information. However, only by combining them in the analysis of the data produced by ALICE can the particles produced in the collisions be measured in the most complete way possible. In this way they can reveal the whole picture of what happens in the collisions.
By the end of 2011, hopes for the discovery of the Higgs boson during 2012 were riding high on the back of tantalizing hints in the 5 fb–1 data sample. The aim was to quadruple the data set this year, with the added benefit that increasing the centre-of-mass energy from 7 TeV to 8 TeV brings a higher predicted rate of Higgs production. The first planned checkpoint was for the ICHEP 2012 conference and in the weeks preceding it the LHC performed better than ever, resulting in a total delivered luminosity of more than 6 fb–1 at 8 TeV. Thanks to the expertise and continued dedication of many people, the ATLAS detector was in great shape, and 90% of the delivered data were recorded and passed the strict quality requirements to go forward for analysis.
The strategy
The ATLAS strategy in preparation for the early ICHEP milestone was to focus first on the most sensitive decay modes: the decay of the Higgs boson to two photons (γγ), to two Z bosons or to two W bosons. The W and Z bosons are identified through their cleanest final states. The two Zs decay to four leptons (llll), electrons or muons, and the W pair is identified in the mixed-flavour final state with an electron, a muon and two neutrinos: WW→eνμν. The γγ and ZZ→llll modes have excellent mass resolution because the Higgs boson decays entirely into visible, well measured particles. However, they have quite different signal-to-background rates and features, requiring appropriate analysis strategies. By contrast, the presence of two invisible neutrinos means that the WW mode has low mass resolution.
For each final state, the approach was not to look in the signal region of the 2012 data until the analysis procedure was frozen, to avoid any bias in tuning the event selection criteria. The selections were optimized using simulated samples and control regions in the data. These are samples of events with configurations that cannot come from a Higgs signal but which allow salient features of the data to be compared with simulation.
For the γγ final state, the mass distribution of the photon pair in events with two energetic photons is shown in figure 1a. The background to the Higgs signal is dominated by genuine γγ events from known processes, plus events with one or two hadronic jets misidentified as photons. This background forms a smoothly falling spectrum on top of which there is a visible bump around 126 GeV. However, this distribution tells only part of the story. The potential significance of the signal is higher in subsets of the data that have better mass resolution. The resolution depends on whether the photons are in the central or forward parts of the detector, and also on whether one or both photons have “converted” by the process γ→e+e–. Furthermore, the signal-to-background ratio also changes according to the number of additional hadronic jets in the event because this characterizes different Higgs-production mechanisms. The data were divided into 10 subsets, for each of which the background shape was derived by fitting the data themselves. By evaluating the probability that fluctuations of the smooth background could create the bump, the local significance at 126 GeV is found to be equivalent to 4.5 standard deviations (σ).
Comparing the mass distribution from the two-photon sample with the distribution in figure 1b, where the mass is calculated from the four leptons in ZZ→llll events, the situation is quite different. The predicted signal to background in the interesting mass range between 120 and 130 GeV is much larger for the ZZ final state, with about half of the background coming from genuine ZZ events and half from other processes. The background shape is more complicated than in the γγ case but the expected features are well reproduced by simulation. The small peak in the distribution at 125 GeV has a local significance of 3.4σ.
Combining the ZZ→llll result with the γγ result and with all of the channels measured in 2011 brings the local significance to the pivotal threshold of 5.0 σ, as was announced to cheers at the 4 July seminar. Moreover, the signal masses measured in these two high-resolution channels are consistent, with an overall best-fit mass of 126.0 ± 0.4 (stat.) ± 0.4 (syst.) GeV.
The WW→eνμν analysis was ready a few days after the seminar and is included in the publication. Although the mass cannot be calculated, a transverse-mass variable mT can be formed from the measured electron and muon and the missing transverse energy in the event that arises from the unobserved neutrinos. Figure 1c shows the distribution of mT, with the predicted broad signal from a 125 GeV Higgs boson superimposed on the known backgrounds. The visible excess of events over background lends further evidence for the presence of a signal, bringing the overall significance to 5.9σ, corresponding to a one in 600 million chance that the known background processes could fluctuate to give such a convincing excess.
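The translation between "σ" and the odds quoted here is the one-sided Gaussian tail probability; a quick check of the quoted numbers:

```python
import math

def tail_probability(z):
    """One-sided probability that a Gaussian background fluctuation
    reaches at least z standard deviations."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# 5.0 sigma (the conventional discovery threshold) is about 1 in 3.5 million;
# 5.9 sigma works out to roughly 1 in 550 million, the same order of
# magnitude as the "one in 600 million" quoted for the combined result.
odds_5sigma = 1.0 / tail_probability(5.0)
odds_59sigma = 1.0 / tail_probability(5.9)
```

The convention matters: these are local significances, quoted before any correction for the number of mass points searched.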
It came as something of a shock that the discovery threshold was reached so early in 2012. After more than 20 years of development, the detector has proved that it is capable of measuring leptons, photons, jets and missing energy with excellent precision, and it is operating with remarkable efficiency. This performance has been maintained even though the LHC is delivering higher luminosity than ever, with more proton–proton interactions per bunch crossing than foreseen. The trigger menus have been fine-tuned to select the most interesting events. The intricate process of reconstructing and distributing millions of events across the worldwide LHC Computing Grid in a matter of days runs smoothly; the ability to go from recording the last data to announcing a discovery just a couple of weeks later was incredible. In all aspects of the endeavour, people were prepared to work without sleep to ensure that the next step went without a hitch. The excitement as the data were revealed for the first time was tangible, and the thrill of the announcement on 4 July was shared by the collaboration around the world, from the lucky few in the CERN auditorium to collaborators at their home institutions and the attendees at the ICHEP conference in Melbourne.
The celebratory champagne has been drunk and the next stage of the work is beginning. The question on everyone’s lips now is whether this new particle has the features of the Standard Model Higgs boson. Undoubtedly, it is a brand-new boson, and we look forward to getting to know it better.
9.35 a.m., 4 July 2012. In front of an expectant crowd packing CERN’s main auditorium, Joseph Incandela shows a slide on behalf of the CMS collaboration; its subject, the combination of the two search channels with the best mass resolution, H→γγ and H→ZZ→4 leptons (llll). The slide shows a clear excess that corresponds to 5 σ above the expected background, signalling the discovery of a new particle. The audience erupts into applause. These decay modes not only give a measure of the mass of the new particle as 125 ± 0.6 GeV but also reveal that it is, indeed, a boson, meaning a particle with integer spin; the two-photon decay mode further implies that its spin must be different from 1 (figure 1).
The search for the Standard Model Higgs boson, the missing keystone of the current framework for describing elementary particles and forces, has been going on for some 40 years. The ideas that led to the 4 July announcement were seeded more than 20 years ago: in 1990, at the Aachen workshop where people first heard the term “Compact Muon Solenoid”, and where people such as Michel Della Negra and Tejinder Virdee – the founding fathers of the CMS collaboration – presented quantitative ideas on how the Higgs boson, if it existed, could be found at the LHC. They aimed to provide coverage down to the region of low mass, which required precision tracking and electromagnetic calorimetry.
The performance of CMS – its hardware, software, distributed computing and analysis systems, and the inventiveness of the people doing the analysis – can be gauged by the fact that the discovery of a Higgs-like boson has been made at half of the design energy of the LHC, using one-third of the integrated luminosity and under fiercer “pile-up” conditions than were foreseen in the pre-data-taking estimates for reaching such a significance. This success is a real tribute to the thousands of CMS physicists and several generations of students who have turned CMS from a proposal on paper into a scientific instrument, hors du commun, producing frontier physics.
On 4 July the CMS collaboration presented searches for the Standard Model Higgs boson in five distinct decay modes: γγ, ZZ→llll, WW→lνlν, ττ and bb, the so-called high-priority analyses. The 2012 data-taking campaign and physics analyses had been under preparation since the end of 2011. The CMS collaboration had been pushing to go to 8 TeV collision energy and, assuming that this would happen, started the data simulation at 8 TeV in December. The collaboration identified 21 high-priority analyses, including the ones for the Higgs searches. The reconstruction software was improved and the trigger menus prepared to select with high efficiency the events necessary for the search. The software and computing resources were for the most part dedicated to the high-priority analyses.
The limits on the Higgs boson mass, established by experiments at CERN’s Large Electron–Positron collider and Fermilab’s Tevatron, and by the LHC campaign in 2011, showed that the Standard Model Higgs boson, if it existed, would most likely inhabit the mass range 114.4–127 GeV. Another important strategic decision was to re-optimize and improve the analyses using the expected sensitivity as the driving criterion. The entire analysis procedure in each individual analysis was assessed on the basis of maximizing sensitivity without looking into the above-mentioned mass region – in other words, they were “blind”. This would inevitably lead to a day of high drama when the “unblinding” was to take place, on 15 June.
The unblinding procedure, defined before 2012 data-taking, was to proceed in two steps:
• The performance of the analyses would be evaluated and pre-approved by the collaboration based on the first 3 fb–1 of data that had been collected and fully certified. On 15 June, the results in the blinded region would be shown. The deadline of 15 June arrived and all analyses were declared ready by the analysis review committees and on seeing the results from the high mass-resolution channels most of the hundreds present at CERN or connected via videoconferencing were astounded – there were the first clear signs that a new particle could be coming into view. The indications seen in the 2011 data not only remained, but were strengthened. A day of excitement indeed!
• From 15 June onwards the analyses would be – and were – simply topped-up, once the data quality-certification process was completed. They would eventually include all of the data available up until the technical stop of the LHC planned for late June.
Expectations started to increase, especially when observing the fantastic performance of the LHC, which was delivering collisions at a record rate. At the same time, the considerable increase in sensitivity of all five analyses, compared with those of 2011, meant that a discovery became a real possibility. In particular, the H→ττ channel had improved in sensitivity by more than a factor of two and H→bb was also starting to contribute. All of the analyses had integrated multivariate analysis methods for selection and/or reconstruction to optimize use of the full event information, leading to improved sensitivity. The channels with high mass-resolution, H→γγ and H→ZZ→llll, achieved close-to-design resolutions, e.g. for the best categories of events, 1.1 GeV and <1 GeV for diphoton and four-lepton states, respectively (figures 2 and 3). The anticipated number of standard deviations (σ) for the expected significance came out close to 6 σ (median) using 5 fb–1 from each of the 7 TeV and 8 TeV data sets (figure 1). A higher (lower) observed significance would indicate an upwards (downwards) fluctuation of this expectation.
All of the five high-priority analyses were performed independently at least twice. Furthermore, improvements in the definition and selection of the physics objects were subjected to scrutiny and formal approval before deployment.
As every new batch of certified data was added, the analysts eagerly looked forward to updates. The final word would belong to the team responsible for combining the results from the five high-priority analyses, the combination procedure having been validated before the unblinding.
The combination of these five analyses reveals an excess of events above the expected background, with a maximum local significance of 5.0 σ at a mass of 125.5 GeV. The expected significance for a Standard Model Higgs boson of that mass is 5.8 σ. The signal strength σ/σSM was measured to be 0.87 ± 0.23, where σ/σSM denotes the production cross-section multiplied by the relevant branching fraction, relative to the Standard Model expectation.
Having clearly seen a new particle, considerable attention was then devoted to measuring properties such as mass, spin if possible, and its couplings to bosons and fermions. All in all, the results presented by CMS are consistent, within uncertainties, with expectations for a Standard Model Higgs boson. With the recent decision to extend the 2012 data-taking by 55 days, the collaboration is now eager to accumulate up to three times more data, which should enable a more significant test of this conclusion and an investigation of whether the properties of the new particle imply physics beyond the Standard Model.
This will prove to be the discovery of a particle sans precedent. If it is confirmed to be a fundamental scalar (spin 0) then it is likely to have far-reaching consequences on physicists’ thinking about nature. It would be the first fundamental scalar boson. It is known that fundamental scalar fields play an important role not only in the presumed inflation in the early instants of the universe but also in the recently observed acceleration of its expansion. There can be no doubt that exciting times lie ahead.
Never before did the International Conference on High-Energy Physics (ICHEP) start with such a bang. Straight after registering on 4 July in Melbourne, participants at ICHEP2012, the 36th conference in the series, were invited to join a seminar at CERN via video link, where they would see the eagerly anticipated presentations of the latest results from ATLAS and CMS. The excitement generated by the evidence for a new boson sparked a wind of optimism that permeated the whole conference. Loud cheers and sustained applause were appropriately followed by the reception to welcome more than 700 participants from around the world, where they could discuss the news over a glass of delicious Australian wine or beer.
As usual, ICHEP consisted of three days of six parallel sessions followed by a rest day and then three days of plenary talks to cover the breadth and depth of particle physics around the world. This article presents only a personal choice of the highlights.
All eyes on the new boson
Talks on the search for the Higgs boson drew huge crowds in the parallel sessions. The discovery of something that looks very much like the Higgs boson raises two pressing questions: what kind of boson was found; and what kind of limits does this discovery impose on the existing models? While the answer to the first question will come only when more data are available, it is already possible to start answering the second question.
Sara Bolognesi of Johns Hopkins University presented some interesting preliminary work based on a recently published study in which helicity amplitudes are used to reveal the spin and parity of the new boson. These can be measured through the angular correlations in the decay products. For example, in H → WW decays, when both Ws decay into leptons, the angular separation in the transverse plane can help not only to reduce the background but also to distinguish between spin-parity 0+ and 2+. Likewise, the parity of a spin-0 boson can be inferred from the distribution of the decay angles of H → ZZ → llll. Bolognesi and her colleagues developed a Monte Carlo generator that allows the comparison of any hypothesized spin with data and have made the full analytical computation of the angular distributions that describe the decays H → WW, ZZ and γγ. All that is needed is more data – and the nature of the new particle will be revealed.
Another approach to determining the spin of the new boson consists of studying which decay modes are observed. A Standard Model Higgs boson has spin 0, so it should couple to fermions and vector bosons. Spin 1 is already excluded for the new boson because it could not produce two photons (γγ), each with spin 1. A spin-2 boson could decay into bb̄ (with an extra spin-1 gluon on board) but not to two τ leptons. So, it was puzzling to hear from Joshua Swanson of the University of Wisconsin–Madison that CMS does not observe H → ττ after having analysed the 10 fb–1 at hand from 2011 and 2012. The current analysis is consistent with the background-only hypothesis, yielding an exclusion limit that is 1.06 times the Standard Model production cross-section for mH = 125 GeV. Needless to say, this will be closely monitored as soon as more data become available.
Meanwhile, many theorists and experimentalists are already speculating on the possible impact of the discovery on the current theoretical landscape. Several people showed the effect of all known measurements in flavour physics, direct limits and the new boson mass on existing models. Nazila Mahmoudi of CERN and Clermont-Ferrand University reminded the audience that there is more to supersymmetry (SUSY) than the constrained minimal supersymmetric model (CMSSM). She showed that, assuming that the new particle is a Higgs boson, its mass has a huge impact on the allowed parameter space. Already, several constrained models such as the mSUGRA, mGMSB, no-scale and cNMSSM are severely limited or even ruled out. This impact is, in fact, complementary to direct searches for SUSY. Mahmoudi stressed the importance of going back to unconstrained SUSY models, pointing out that there is still plenty of space for the MSSM model.
So many searches, so little luck
In the search for direct detection of new phenomena, both for exotics and for SUSY, the results were humbling despite the numerous attempts. In the parallel sessions, more than 30 talks were given on SUSY alone, sometimes covering up to five different analyses. Andy Parker of the University of Cambridge, who reviewed this field, showed how these searches have already covered all of the most obvious places. However, as he reminded the audience, there are still two big reasons to believe in SUSY. First, it provides a candidate for dark matter that has just the right cross-section to be consistent with today’s relic abundance. Second, a light Higgs particle needs this kind of new physics to stabilize its mass. Parker also pointed out that only the third generation of SUSY particles, namely stops and staus, need to be light, a point that Riccardo Barbieri of Scuola Normale Superiore and INFN also stressed in his conference review. For these particles, the current model-independent limits are still rather low, well below 1 TeV, but should improve rapidly with more data.
SUSY could also be hidden if the mass-splitting between gluinos and neutralinos is rather small. In that case there would be very little missing transverse energy (MET), when most analyses have been looking towards large MET. This is the idea behind various scenarios with compressed mass spectra. Or it might be that the SUSY particles are so long lived that they require an adapted trigger strategy because they decay beyond the first layers of the detectors. Searches have been made in all of these directions but without any success so far.
Nevertheless, with the discovery of a new boson, there is much more optimism than a year ago during the European Physical Society conference on High-Energy Physics (EPS HEP 2011), where Guido Altarelli had commented that given no sign for SUSY yet, it was too early for despair but enough for depression. The word of caution that Parker raised, echoing Mahmoudi, provides room for optimism. It is of utmost importance to stay away from the hypotheses of constrained models and to aim instead for the broadest possible scope. SUSY is far from being dead yet and there is plenty of unexplored parameter space, with much of it still containing particles of low mass. As Raman Sundrum of the University of Maryland remarked: “We must not only look for what’s left but rather, what’s right.”
Testing the consistency of the Standard Model with the so-called electroweak fit has been a tradition at all major conferences for the past decade or two, and this ICHEP proved no different – except for a major twist. For the first time, the mass of the newly found boson was used to test whether all electroweak measurements (the W and Z boson masses, the top-quark mass, single and diboson production cross-sections, lepton universality etc.) fit together. All of these measurements were reviewed by Joao Barreiro Guimaraes da Costa of Harvard University, culminating in the overall electroweak fit in terms of W-mass versus top-mass space in figure 1. This allows testing of how consistently all of these parameters fit together under the hypothesis that the new boson is the Standard Model Higgs boson (thin blue line) or one associated with the MSSM (green band). The blue ellipse shows the current status of the experimental measurements of mt and mW, whereas the black ellipse depicts what will happen if the LHC brings the uncertainty on mW down to 5 MeV – although this will be a great challenge. If the central value remains unchanged, it would bring the Standard Model into difficulty, whereas there would still be plenty of room for the MSSM parameters. One noteworthy result of this global fit is the prediction of the Standard Model Higgs boson mass at 125.2 GeV when all electroweak parameters are taken into account; only the direct exclusion limits from the Large Electron–Positron collider and the Tevatron were included.
Dark matter, light neutrinos
“No theories, just guesses for dark matter.” These were the words with which Neal Weiner of New York University summarized the situation on the theory front for dark matter. He explained that, unlike for the Higgs boson, there is currently no theory that allows predictions experimentalists could try to verify. The field is faced with a completely open slate.
If weakly interacting massive particles (WIMPs), a generic class of dark-matter candidates, exist with a mass of around 100 GeV, then some 10 million would go through a person’s hand every second, as Lauren Hsu of Fermilab pointed out during her comprehensive review of direct searches for dark matter. Nevertheless, in contrast to the clarity of Hsu’s presentation, the situation remains extremely confusing. She first reminded the audience of the basics. A WIMP could scatter elastically off a nucleon and the scattering cross-section can be broken into two terms: a spin-independent term (SI), which grows as the square of the atomic mass A; and a spin-dependent term (SD) that scales with the spin of the nucleon.
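Hsu's striking number follows from back-of-envelope arithmetic with standard halo assumptions (a local density of about 0.3 GeV/cm³ and a velocity of about 220 km/s are conventional values; the palm area is a pure guess):

```python
# Back-of-envelope WIMP flux through a hand, under assumed halo parameters.
rho_gev_per_cm3 = 0.3   # assumed local dark-matter density
m_wimp_gev = 100.0      # WIMP mass considered in the talk
v_cm_per_s = 220.0e5    # typical halo velocity, 220 km/s
hand_area_cm2 = 100.0   # rough palm area (assumption)

number_density = rho_gev_per_cm3 / m_wimp_gev        # ~0.003 WIMPs per cm^3
rate_per_s = number_density * v_cm_per_s * hand_area_cm2
# rate_per_s comes out at a few million per second -- order 10 million.
```

The estimate is only meant to set the scale; the hard part, of course, is that almost none of these particles interact on their way through.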
Currently, xenon-based and cryogenic germanium experiments dominate the field for the SI measurements, while superheated liquid detectors such as Picasso and COUPP are competitive for SD measurements. The XENON100 collaboration’s results for 2011 exceed the sensitivity of other experiments over a range of WIMP masses (new results with 3.5 times better sensitivity appeared just after the conference). SuperCDMS, a germanium-based detector, started operation in March 2012 but first results have still to be released.
Several inconsistencies remain unexplained. In 2008, the DAMA/LIBRA collaboration first reported an annual modulation in event rate that was consistent with dark matter with a statistical significance that now reaches 8.9 σ. This modulation peaks in summer and is at its lowest in winter, making some people suspect backgrounds that are modulated by seasonal changes. COUPP and KIMS, two experiments that use iodine as DAMA/LIBRA does, have now been running for some time. However, their data are not consistent with elastic scattering of WIMPs off iodine, so the mystery continues in terms of what DAMA/LIBRA is seeing. Finally, DM-ICE is a new effort underway in which about 200 kg of sodium-iodide crystals will be deployed within the IceCube detector at the South Pole. One interesting point is that any background tied to seasonal effects will modulate with a different phase in the southern hemisphere.
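The annual modulation that DAMA/LIBRA reports is conventionally parametrized as a cosine on top of a constant rate (a generic sketch, not the collaboration's exact fit function):

```latex
R(t) \;\approx\; R_{0} \;+\; R_{m}\,\cos\!\left(\frac{2\pi\,(t - t_{0})}{1\,\mathrm{yr}}\right)
```

In the standard halo model, the Earth's velocity through the dark-matter halo peaks in early June, fixing the phase t0. A background driven by seasonal changes would carry an environmental phase instead – and, as noted for DM-ICE, would modulate differently in the southern hemisphere.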
This is not the only ongoing discrepancy. Two collaborations, CoGeNT and CRESST-II, announced the observation of an annual modulation in low-energy events, with the CRESST-II excess being at 4.2 σ. This contradicts many other results (from CDMS, XENON100, EDELWEISS, ZEPLIN, etc.) where no modulation is observed in low-energy data. The CoGeNT observation was particularly hard to reconcile with the CDMS results because both CoGeNT and CDMS are germanium-based detectors. However, now that the CoGeNT collaboration has modified its background estimates, the data from these two experiments are no longer in conflict. The CRESST team is working on reducing its background, which could help resolve this discrepancy.
The fact that neutrinos have mass proves that there is physics beyond the Standard Model.
Takashi Kobayashi
Moving on to the field of neutrino physics, Takashi Kobayashi of KEK reminded the audience that the mere fact that neutrinos have mass proves that there is physics beyond the Standard Model. These masses induce mixing – in that the different flavours of neutrinos are linear combinations of mass eigenstates. Neutrino mixing is now described by the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix and until recently it remained to be seen if all three flavours participated in the mixing.
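Concretely, each flavour eigenstate is a superposition of the three mass eigenstates, weighted by elements of the PMNS matrix U:

```latex
|\nu_{\alpha}\rangle \;=\; \sum_{i=1}^{3} U^{*}_{\alpha i}\,|\nu_{i}\rangle,
\qquad \alpha = e,\ \mu,\ \tau
```

The question of whether all three flavours participate in the mixing amounts to asking whether every element of U – in particular those governed by the angle θ13 – is non-zero.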
Without a doubt, the biggest news in neutrino experiments this year came from the Daya Bay experiment’s measurement of the mixing angle between the first and third neutrino-mass eigenstates, θ13. This is the last ingredient needed to allow future tests of CP violation in the neutrino sector. T2K had reported the first evidence of νe appearance in 2011, which already implied a non-zero value for θ13. Jun Cao of the Institute of High-Energy Physics, Beijing, showed how T2K was then followed by MINOS, Double Chooz, Daya Bay and now RENO, with a first result showing a 4.9 σ deviation from zero for θ13. The Daya Bay group has achieved the best measurement, now with sin²2θ13 = 0.089 ± 0.010, a 7.7 σ deviation from zero.
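Reactor experiments such as Daya Bay extract θ13 from the survival probability of electron antineutrinos over a kilometre-scale baseline, which – neglecting the small solar-driven term – reads:

```latex
P(\bar{\nu}_{e} \to \bar{\nu}_{e}) \;\approx\;
1 - \sin^{2}2\theta_{13}\,\sin^{2}\!\left(\frac{\Delta m^{2}_{31}\,L}{4E}\right)
```

Here L is the baseline and E the antineutrino energy; comparing rates in near and far detectors cancels much of the reactor-flux uncertainty, which is what makes the sub-degree precision possible.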
One big remaining area of questions concerns the neutrino-mass hierarchy. Which mass eigenstate is the lightest? Do we have a normal or an inverted hierarchy (figure 2)? As always, much still remains to be done in neutrino physics but, as in the past, it is bound to bring interesting or even surprising results.
Great theoretical developments
Unnoticed by most experimentalists, there have been tremendous developments over the past eight years in scattering-amplitude theory. “While experimentalists were busy building the LHC experiments, theorists were improving their understanding of perturbative scattering amplitudes,” said Lance Dixon of SLAC in his overview talk. This has allowed them to “break the dam”, leaving impossibly complex Feynman-diagram-based calculations behind when performing computations at next-to-leading order (NLO). The unprecedented precision achieved has led to the description of complex multijet events, such as those observed in collisions at the Tevatron and the LHC.
These new techniques were developed within the context of the maximally supersymmetric Yang–Mills theory (N = 4 SYM), an exotic cousin of QCD. With Feynman diagrams, complete calculations are currently possible only for events producing W/Z + jets that involve at most 110 one-loop diagrams; the new techniques, however, can tackle the equivalent of 256,265 diagrams. Feynman diagrams are still used, but only for the simpler, tree-level processes. One major advantage of the scattering-amplitude methods is that they can recycle tree processes into loops, bringing much simplification to the calculations. This is bringing amazing precision into the new calculations, as Dmitry Bandurin of Florida State University revealed. Now, QCD-inclusive jet cross-sections agree with recent theoretical calculations over 8–9 orders of magnitude and up to jet momenta of 2 TeV, as measurements by the CMS collaboration show (figure 3).
ATLAS showed the first inclusive jet data at 8 TeV, confirming the expected increase in jet-production rates and reach in transverse momentum. The current level of understanding in jet identification, systematics and jet-energy scale leads in many cases to experimental uncertainties similar to or lower than the theoretical ones. The sensitivity of these data to parton density functions (PDFs) makes them the strongest constraint on the gluon PDF and a key input to the extraction of the strong coupling constant αs; they also test the running of αs up to 400 GeV. The inclusive Z and W results extensively cross-check perturbative QCD calculations, marking a triumph for NLO, matrix-element and parton-shower Monte Carlo predictions. Studies of multiple parton interactions at the Tevatron and the LHC are leading to improved phenomenological nucleon models. All of these results are important for searches for new physics at high energies. The participants witnessed the impressive amount of work accomplished in measuring PDFs, cross-sections, diffractive processes and deep-inelastic scattering, all of which are much-needed building blocks for the groundwork that underpins discoveries.
Last summer at EPS-HEP 2011, the LHCb and CMS collaborations created a stir when they presented their first precise search for Bs → μμ decay – a channel that is sensitive to new physics. Now, combining all of the 2011 data for CMS, ATLAS and LHCb, the 95% CL upper limit on this branching fraction is 4.2 × 10–9, closing in on the Standard Model prediction of (3.2 ± 0.2) × 10–9. This new LHC result increases the tension with the result from the CDF experiment at the Tevatron of 13 +9/–7 × 10–9, a central value that has come down slightly after including all 10 fb–1 of data.
The LHCb collaboration reported the first observation of a decay with a b → d transition involving a penguin diagram, which makes B+ → π+μ+μ– the rarest B decay ever observed. In the Standard Model, it is 25 times smaller than similar decays involving b → s transitions. With 1 fb–1 of collision data, the LHCb experiment obtained 25.3 +6.7/–6.4 signal events – a result that is 5.2 σ above background and consistent with the predictions of the Standard Model. The collaboration also reported on the first measurement of CP violation in charmless decays.
At the Tevatron, both the CDF and DØ experiments still see a significant forward–backward asymmetry in tt̄ production in all channels, with a strong dependence on mtt̄, which conflicts with the Standard Model. No such asymmetry is seen by either ATLAS or CMS at the LHC, where it is defined as the asymmetry in the widths of the t and t̄ rapidity distributions.
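For reference, the quantities compared here can be written out explicitly (standard definitions; Δy is the rapidity difference, Δ|y| the difference of absolute rapidities):

```latex
A_{FB} \;=\; \frac{N(\Delta y > 0) - N(\Delta y < 0)}{N(\Delta y > 0) + N(\Delta y < 0)},
\qquad \Delta y = y_{t} - y_{\bar{t}}
```

```latex
A_{C} \;=\; \frac{N(\Delta|y| > 0) - N(\Delta|y| < 0)}{N(\Delta|y| > 0) + N(\Delta|y| < 0)},
\qquad \Delta|y| = |y_{t}| - |y_{\bar{t}}|
```

The symmetric proton–proton initial state at the LHC forbids a forward–backward asymmetry, which is why ATLAS and CMS instead measure the charge asymmetry AC based on the widths of the rapidity distributions.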
Looking towards the future
CERN’s director-general, Rolf Heuer, concluded the conference by reviewing the future for high-energy-physics accelerators, stating how the LHC results will guide the way at the energy frontier. The current plans for CERN include a long shutdown in 2013–2014 to increase the centre-of-mass energy, possibly to the design value of 14 TeV. This will be followed by two other shutdowns: one in 2018, for upgrades to the injector and the LHC to go to the ultimate luminosity; and one in 2022 for new focusing magnets and crab cavities for high luminosity with levelling, with the humble goal of accumulating about 3000 fb–1 by 2030.
Numerous other plans are in the air, such as a linear collider, for which Heuer stressed the importance of the international community joining forces on a single project. “We need to have accelerator laboratories in all regions of the globe planned in an international context, and maintain excellent communication and outreach to show the benefits of basic science to society,” he stressed.
There was not a dull moment at the ICHEP conference in Melbourne, thanks to the efforts of the organizers and their crew. Everyone who joined one of the many possible conference tours on Sunday was treated to views of incredibly beautiful coastlines and native wildlife. The overall experience was well worth the journey.