As Run 2 at the LHC gains momentum, a combined analysis of data sets from Run 1 by the ATLAS and CMS collaborations has provided the sharpest picture yet of the Higgs boson’s properties (ATLAS 2015, CMS 2015). Three years after the announcement in July 2012 of the discovery of a new boson, the two collaborations are closing the books on measurements of Higgs properties by performing a combined Run 1 analysis, which includes data collected in 2011 and 2012 at centre-of-mass energies of 7 and 8 TeV, respectively. This analysis follows hot on the heels of the combined measurement of the Higgs boson mass, mH = 125.09±0.24 GeV, published in May by ATLAS and CMS (ATLAS and CMS 2015).
The new results are the culmination of one and a half years of joint work by the ATLAS and CMS collaborators involved in the activities of the LHC Higgs Combination Group. For this combined analysis, some of the original measurements dating back to 2013 were updated to account for the latest predictions from the Standard Model. A comprehensive review of all of the experimental systematic and theoretical uncertainties was also conducted to account properly for correlations. The analysis presented technical challenges, because the fits involve more than 4200 parameters that represent systematic uncertainties. The improvements that were made to overcome these challenges will now make their way into data-analysis tools, such as ROOT, that are widely used by the high-energy particle-physics community.
The results of the combination present a picture that is consistent with the individual results. The combined signal yield relative to the Standard Model expectation is measured to be 1.09±0.11, and the combination of the two experiments leads to an observation of the H → τ+τ– decay at the level of about 5.5σ – the first observation of the direct decay of the Higgs boson to fermions. Thanks to the combined power of the data sets from ATLAS and CMS, the analysis yields unprecedented measurements of the properties of the Higgs boson, with a precision that enables the search for physics beyond the Standard Model in possible deviations of the measurements from the model’s predictions. The figure shows clearly the increased precision obtained when combining the ATLAS and CMS analyses.
The combined analysis is performed for many benchmark models that the LHC Higgs Cross-Section Working Group proposed, so as to be able to explore the various effects of physics models that go beyond the Standard Model. As Run 2 gains momentum, the two collaborations are looking forward to reaping the benefits of the increase in centre-of-mass energy to 13 TeV, which will make some of the most interesting processes, such as the production of Higgs bosons in association with top quarks, more accessible than ever. However, even with the first results from Run 2, this set of combined results from 7 and 8 TeV collisions in Run 1 will continue to provide the sharpest picture of the Higgs boson’s properties for some time to come.
After nearly 13 years as editor of CERN Courier, I am stepping down as I head off into retirement. I would like to thank the many contributors and also the team at IOP Publishing who bring such a professional standard to the magazine. Most importantly, I must thank the enthusiastic readers for their continued support, and ask everyone to join me in welcoming the new editor, Antonella Del Rosso. Christine Sutton, CERN.
Heavy quarks are important probes of the quark–gluon plasma that is produced when relativistic heavy ions collide. Because of a mass effect, it has been argued that heavy quarks lose less energy through gluon radiation than light quarks as they traverse the medium. However, studying heavy quarks in a particle-dense environment is challenging. Moreover, the physics interest is in bulk behaviour of charm quarks, so it is important to study charmed hadron production over the full range of momentum. At low momenta, multiple scattering is very important, and this places strict constraints on the amount of material in the detector.
The STAR Heavy Flavour Tracker (HFT) was built to meet these challenges. Installed in the STAR detector at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory early last year, it took data during the 2014 and 2015 running periods. It was specifically designed and constructed to extend the STAR physics programme to include fully reconstructed charmed hadrons, by allowing the direct topological reconstruction of heavy-flavour decay vertices such as the D0 (decay distance cτ around 120 μm), tracking the decay particles through four layers of silicon detectors.
To do this, the tracker incorporated a number of novel features. First, in addition to the two outermost layers of standard-technology silicon strips and pad sensors, the innermost two layers of the HFT – the pixel (PXL) detector (figure 1) – are constructed using monolithic active-pixel sensors (MAPS). This is the first large-scale use at a collider experiment of MAPS technology, which integrates the silicon of the detector and the signal processing on a single silicon die. Second, its novel design, with a low-mass carbon-fibre support structure, aluminium conductor read-out cables (instead of copper) and air cooling (instead of water), gives the PXL a sleek footprint with a very low radiation length – 0.4% per layer – to minimize multiple Coulomb scattering. These features give the detector, which was conceived and built by the Relativistic Nuclear Collisions group at the Lawrence Berkeley National Laboratory (LBNL), excellent pointing capabilities, with a resolution for its distance of closest approach of only 40 μm for 750 MeV/c kaons.
In addition, the detector-support mechanics are designed to allow for very fast insertion and detector replacement. The PXL detector can be inserted, cabled and working in 12 hours. This allows for quick changes if the detector suffers radiation damage.
The MAPS chips were developed by the microelectronics group at the Institut Pluridisciplinaire Hubert Curien in Strasbourg, in collaboration with LBNL, and are the result of a 10-year development process. The sensor design is highly optimized for the RHIC environment. A single sensor features 890,000 pixels, each measuring 20.7 μm × 20.7 μm. The detector integration time is 186 μs, allowing the detector to function at RHIC with a very low occupancy. The fast read-out is achieved with binary output using column-level discriminators and on-chip zero-suppression/data-compression circuitry. The detector’s initial performance is in line with expectations: figure 2 shows a D0 → K–π+ (and charge conjugate) invariant-mass peak, shown at the recent Quark Matter 2015 conference.
In a study reported in Nature, a team working at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC has shown that the high electric-field gradients possible in plasma can be harnessed to accelerate positrons just as well as electrons.
In 2014, an experiment at FACET, which uses the first 2 km of the famous SLAC linac, was able to demonstrate plasma-wakefield acceleration of electrons, with both a high gradient and a high energy-transfer efficiency – a crucial combination that had not previously been achieved (CERN Courier January/February 2015 p9). However, for positrons, plasma-wakefield acceleration is much more challenging, and it was thought that no matter where a trailing positron bunch was placed in a wake, it would lose its compact, focused shape or even slow down.
In the new study, the team demonstrated a new regime for plasma-wakefield acceleration where particles in the front of a single positron bunch transfer their energy to a substantial number of those in the rear of the same bunch by exciting a wakefield in the plasma. In the process, the accelerating field is altered – “self-loaded” – so that in the tests about a billion positrons gained 5 GeV in energy with a narrow energy spread over a distance of just 1.3 m. Moreover, the positrons extract about 30% of the wake’s energy and form a spectrally distinct bunch with a root-mean-square energy spread as low as 1.8%.
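The numbers quoted above imply a remarkable accelerating gradient. A quick back-of-the-envelope check, using only the figures in the text (this is illustrative arithmetic, not part of the published analysis):

```python
# Accelerating gradient implied by the FACET positron result:
# about 5 GeV gained over 1.3 m of plasma (values from the text above).
energy_gain_GeV = 5.0   # energy gained by the trailing positrons
distance_m = 1.3        # length of the plasma accelerating stage

gradient_GeV_per_m = energy_gain_GeV / distance_m
print(f"{gradient_GeV_per_m:.1f} GeV/m")  # prints "3.8 GeV/m"
```

For comparison, conventional radio-frequency accelerating structures typically reach gradients of order 0.01–0.1 GeV/m, which is what makes plasma wakefields so attractive for compact machines.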
This ability to transfer energy efficiently from the front to the rear within a single positron bunch makes the scheme highly attractive as an energy booster for a future electron–positron collider.
The 2015 edition of the European Physical Society Conference on High Energy Physics (EPS-HEP 2015), which took place in Vienna in July (“Vienna hosts a high-energy particle waltz”), provided an opportunity for the ALICE collaboration to present the latest results from analysis of data from Run 1 of the LHC. While many of the presentations centred on the properties of the quark–gluon plasma (QGP) as produced in the collisions of heavy ions, there was also an interesting glimpse of other kinds of physics that ALICE can investigate.
Once in a while in the heavy-ion collisions, a few protons and neutrons are created close enough in phase space such that they coalesce into a nucleus. The heavier the nucleus (the larger the number of nucleons), the lower the probability that it is created, but about once in 10 thousand events, for example, a 3He nucleus can be created and detected within ALICE’s tracking and particle-identification set-up. Moreover, the lead–ion collisions at the LHC also provide a copious source of antiparticles, such that nuclei and the corresponding antinuclei are produced at nearly equal rates.
This allows ALICE to make a detailed comparison of the properties of the nuclei and antinuclei that are most abundantly produced. At EPS-HEP 2015, the collaboration presented a new limit on the conservation in nucleon–nucleon interactions of CPT symmetry – the fundamental symmetry that implies that all of the laws of physics are the same under the simultaneous reversal of charges (charge conjugation, C), reflection of spatial co-ordinates (parity transformation, P) and time inversion (T). The new test of CPT invariance was extracted from measurements of the mass-to-electric-charge ratios of the deuteron/antideuteron and the 3He/anti-3He nuclei. The combined results of the difference of the mass-to-charge ratio for each pair of nucleus/antinucleus species allowed the extraction of differences in their relative binding energies. The measurements, published in Nature Physics, confirm CPT invariance to an unprecedented precision in the sector of light nuclei (ALICE Collaboration 2015).
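The quantity compared in such a test is simple: the relative difference of the mass-to-charge ratios of a nucleus and its antinucleus, which CPT symmetry requires to vanish. A minimal sketch of that comparison (function and variable names are illustrative, not ALICE analysis code):

```python
def relative_mz_difference(m_nuc, z_nuc, m_anti, z_anti):
    """Relative difference of mass-to-charge ratios for a
    nucleus/antinucleus pair; CPT symmetry predicts exactly zero.

    Masses in any consistent unit, charges in units of e (the
    antinucleus charge is negative, hence the absolute values).
    """
    r_nuc = m_nuc / abs(z_nuc)
    r_anti = m_anti / abs(z_anti)
    return (r_nuc - r_anti) / ((r_nuc + r_anti) / 2.0)

# Identical ratios give zero, as CPT requires:
print(relative_mz_difference(1875.6, 1, 1875.6, -1))  # prints 0.0
```

In practice, ALICE measures these ratios from the curvature and specific energy loss of tracks in its detectors, and the published limits on this difference are at the per-mille level or better.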
The strongly interacting hot and dense matter, the QGP, produced in heavy-ion collisions is characterized by the smallest ratio of shear viscosity to entropy density of all known materials – a substance that flows almost as a perfect liquid. This QGP is a system of quarks and gluons where the mean free path is very short – a so-called strongly coupled system. A parton traversing such a medium, even a highly energetic one, is exposed to the medium and loses part of its energy. The new measurements by ALICE presented at EPS-HEP 2015 indicate that the heavier charm and beauty quarks also lose a significant part of their energy in the dense QGP. For relatively low quark momenta, the interaction with the bulk of the partons in the medium may proceed exclusively through elastic scattering. For high-energy quarks, a number of soft gluons can be radiated, carrying a fraction of the quark’s energy into the medium. These processes are a QCD analogue of phenomena known from QED: the physics of a parton traversing a droplet of QGP resembles the scenario of an electrically charged particle traversing ordinary matter.
In other measurements, the ALICE collaboration has compared data on the production of D mesons (containing a charm quark) with data from CMS on non-prompt J/ψ mesons (the decay products of heavier mesons containing a beauty quark). The comparison shows that the heavier the quark, the less energy it loses inside the medium. Indeed, this was one of the most striking predictions of theoretical models describing strongly coupled QCD matter – the plasma is less opaque to heavy quarks than to light quarks and gluons. These new measurements provide the first confirmation of these predictions.
The nature of the interactions between the heavy quarks and the medium can also be deduced from the azimuthal asymmetry of the production of heavy-flavour hadrons: the magnitude of the asymmetry is proportional to the collective flow of the medium. Measurements of the asymmetry presented by ALICE confirm that the heavy quarks participate in the collective flow of QGP. These results are critical to establishing the focus of future theoretical work on the transport properties of the plasma, while from the experimental point of view, the ALICE collaboration is looking forward to the improved precision from the measurements in LHC Run 2.
The droplet of QGP produced in heavy-ion collisions constantly expands, and lasts at most about 10 fm/c (30 × 10−24 s). After that time, the temperature drops below the critical temperature (about 155 MeV) and the energy density falls below a critical density of about 0.5 GeV/fm3. At that point, the distances between the quarks become large and, owing to the nature of the strong force, the partons are re-confined/combined into colour-neutral hadrons. Following this hadronization process, the system becomes a gas of hadrons and, while the gas is still hot, the hadrons may still interact. The most useful messengers from this phase of the collision are the short-lived hadronic resonances. At the conference, ALICE presented extensive studies of the short-lived mesons and baryons. Their production rates provide sensitive information on the strength of the hadron–hadron interactions, and thus are a vital source for understanding the properties of the hadron gas. Knowing the equation of state of the hadron gas allows the genuine QGP signals to be unravelled in greater detail.
Finally, ALICE presented signatures of collective particle production in an extended pseudorapidity range in proton–lead collisions (“ALICE goes forward with the ridge in pPb collisions”). Such collective behaviour, known from heavy-ion collisions, was not initially expected for the smaller proton–lead system. The new measurement provides qualitatively new constraints to theoretical models attempting to explain the novel phenomena.
In an improved analysis of 8 TeV collision events at the LHC, the CMS experiment has made the first observation of the production of a top quark–antiquark pair together with a Z boson, ttZ, as well as the most precise cross-section measurements of ttZ and ttW to date.
Since the top quark was discovered 20 years ago, its mass, width and other properties have been measured with great precision. However, only recently have experiments been able to study directly the top quark’s interactions with the electroweak bosons. Its coupling to the W boson has been tightly constrained using single top events in proton–antiproton collisions at Fermilab’s Tevatron and proton–proton collisions at the LHC. Direct measurements of the top quark’s couplings to the photon (γ) and the Z or Higgs boson are currently most feasible in LHC collisions that produce a tt pair and a coupled boson: ttγ, ttZ and ttH. However, studying these processes (and the related ttW) is challenging because their expected production rates are hundreds of times smaller than the tt cross-section.
The CMS and ATLAS experiments at CERN have previously observed ttγ, found evidence for ttZ, and conducted searches for ttW and ttH in 7 and 8 TeV proton–proton collisions. Deviations from the predicted cross-sections could hint at non-Standard Model physics such as anomalous top-quark–boson couplings or new particles decaying into multiple charged leptons and bottom quarks.
The decays ttW and ttZ both produce two b quarks, and are most easily distinguished from tt, WZ, and ZZ backgrounds when they produce two to four charged leptons and up to four additional quarks. However, signal events can be identified even more precisely when the reconstructed leptons and quarks are matched to particular top, W or Z decays. Leptons of the same flavour and opposite charge, with an invariant mass near 91 GeV, are assigned to Z decays. The remaining leptons and quarks are compared with top and W decays using the charge and b-quark identification of single objects, together with the combined mass of multiple objects. Every possible permutation of objects matched to decays is tested, and the best matching is taken as the reconstruction of the entire ttW or ttZ event. Background events with fewer top quarks or W or Z bosons are typically worse matches to ttW and ttZ than signal events.
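The Z-assignment step described above – pairing same-flavour, opposite-charge leptons whose invariant mass lies near 91 GeV – can be sketched in a few lines. This is a toy illustration of the matching idea, not the CMS analysis code; the lepton representation and function names are assumptions:

```python
# Toy sketch: pick the opposite-sign, same-flavour lepton pair whose
# invariant mass is closest to the Z mass (91 GeV). Illustrative only.
import math
from itertools import combinations

Z_MASS = 91.0  # GeV

def pair_mass(p1, p2):
    """Invariant mass of two particles given as (E, px, py, pz) in GeV."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    m2 = e * e - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

def best_z_pair(leptons):
    """leptons: list of (flavour, charge, (E, px, py, pz)) tuples.
    Returns (i, j, mass) for the best Z candidate, or None."""
    best = None
    for (i, a), (j, b) in combinations(enumerate(leptons), 2):
        if a[0] != b[0] or a[1] + b[1] != 0:
            continue  # require same flavour and opposite charge
        m = pair_mass(a[2], b[2])
        if best is None or abs(m - Z_MASS) < abs(best[2] - Z_MASS):
            best = (i, j, m)
    return best
```

In the full reconstruction, the remaining leptons and quarks are then tested against top and W hypotheses in the same spirit, with every permutation scored and the best overall match kept.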
The figure shows the best match score in events with three charged leptons and four reconstructed quarks in data, along with estimates of ttZ, WZ and tt, as well as tt and single Z with a non-prompt lepton from quark decay. The hashed area indicates the 68% uncertainty in the signal-plus-background prediction. The matching scores are combined with quark and lepton momenta and other distinguishing variables in so-called boosted decision trees (BDTs), which separate signal from background events. The BDTs are used to compare data events with signal and background models, and so estimate the number of signal events contained in the data. This estimate makes it possible to measure the cross-sections.
The ttW cross-section is measured in events with two same-charge leptons or three leptons, and is found to be 382+117–102 fb, somewhat larger than the 203+20–22 fb predicted by the Standard Model. This higher-than-expected value is driven by an excess of signal-like data events with two same-charge leptons. The data overall exclude the zero-signal hypothesis with a significance of 4.8σ. Events with two opposite-charge leptons, three leptons, or four leptons are used in the ttZ search. The measured ttZ cross-section is 242+65–55 fb, quite close to the Standard Model prediction of 206+19–24 fb. The zero-signal hypothesis is rejected with a significance of 6.4σ, making this measurement the first observation of the ttZ process.
The measured cross-sections are also used to place the most stringent limits to date on models of new physics employing any of four different dimension-six operators, which would affect the rates of ttW or ttZ production. Further studies in 13 TeV collisions should provide an even more detailed picture of these interesting processes and may reveal the first hints of new physics at the LHC.
After demonstrating a good understanding of the detector and observing most of the Standard Model particles using the first data of LHC Run 2 collected in July (CERN Courier September 2015 p8), the ATLAS collaboration is now stepping into the unknown, open to the possibility that dimensions beyond the familiar four could make themselves known through the appearance of microscopic black holes.
Relative to the other fundamental forces, gravity is weak. In particular, why is the natural energy scale of quantum gravity, the Planck mass MPl, roughly 17 orders of magnitude larger than the scales of electroweak interactions? One exciting solution to this so-called hierarchy problem exists in “brane” models, where the particles of the Standard Model are mainly confined to a three-plus-one-dimensional brane and gravity acts in the full space of the “bulk”. As gravity escapes into the hypothesized extra dimensions, it therefore “appears” weak in the known four-dimensional world.
With enough large, additional dimensions, the effective Planck mass, MD, is reduced to a scale where quantum gravitational effects become important within the energy range of the LHC. Theory suggests that microscopic black holes will form more readily in this higher-dimensional universe. With the increase of the centre-of-mass energy to 13 TeV at the start of Run 2, the early collisions could already produce signs of these systems.
If produced by the LHC, a black hole with a mass near MD – a quantum black hole – will decay faster than it can thermalize, predominantly producing a pair of particles with high transverse momentum (pT). Such decays would appear as a localized excess in the dijet mass distribution (figure 1). This signature is also consistent with theories that predict parton scattering via the exchange of a black hole – so-called gravitational scattering.
A black hole with a mass well above MD will behave as a classical thermal state and decay through Hawking emission to a relatively large number of high-pT particles. The frequency at which Standard Model particles are expected to be emitted is proportional to the number of charge, spin, flavour and colour states available. ATLAS can therefore perform a robust search for a broad excess in the scalar sum of jet pT (HT) in high-multiplicity events (figure 2), or in similar final states that include a lepton. The requirement of a lepton (electron or muon) helps to reduce the large multijet background.
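The HT variable used in this search is conceptually simple: the scalar sum of the transverse momenta of the jets in an event. A minimal sketch (the jet representation is an assumption, not ATLAS software):

```python
# Minimal sketch of the H_T variable: the scalar sum of jet transverse
# momenta in an event. Illustrative only, not the ATLAS implementation.
import math

def ht(jets):
    """jets: list of (px, py, pz, E) four-momenta in GeV.
    Returns the scalar sum of transverse momenta, sum_i sqrt(px^2 + py^2)."""
    return sum(math.hypot(px, py) for px, py, _pz, _e in jets)

# Two jets with pT of 5 GeV each give H_T = 10 GeV:
print(ht([(3.0, 4.0, 0.0, 5.0), (0.0, 5.0, 0.0, 5.0)]))  # prints 10.0
```

Because a thermally decaying black hole sprays many hard particles, signal events pile up at large HT and high multiplicity, where Standard Model backgrounds fall steeply.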
Even though the reach of these analyses extends beyond the previous limits, they have so far revealed no evidence for black holes or any of the other signatures to which they are potentially sensitive. Run 2 is just underway and with more luminosity to come, this is only the beginning.
One of the hottest debates at the LHC is the potential emergence of collective effects in proton–lead (pPb) collisions, prompted by the discovery of double ridge structures in angular correlations of charged particles (CERN Courier March 2013 p6), and the dependence of the azimuthal asymmetry, characterized by its second Fourier coefficient v2, on particle mass (CERN Courier September 2013 p10). The experimental findings in pPb are qualitatively the same as those in PbPb collisions, and they are usually interpreted as hydrodynamic signatures of a strongly coupled, nearly perfect quantum liquid. However, QCD calculations, which invoke the colour-glass condensate (CGC) formalism for the gluon content of a high-energy nucleus in the saturation regime, can also describe several features of the data.
Thus, one of the key questions to answer is whether the ridge is a result of final-state effects, driven by the density of produced particles, or of initial-state effects, driven by the gluon density at low x. In the former case, v2 could be expected to be larger in the Pb-going direction, while in the latter case it would be larger in the p-going direction.
The ALICE collaboration has recently completed a measurement to address this question in analysis of pPb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. Muons reconstructed in the muon spectrometer at forward (p-going) and backward (Pb-going) rapidities (2.5 < |η| < 4.0) were correlated with associated charged particles reconstructed in the central (|η| < 1.0) tracking detectors. In high-multiplicity events, this revealed a pronounced near-side ridge at forward- and backward-going rapidities, ranging over about five units in Δη, similar to the case of two-particle angular correlations at mid-rapidity. An almost symmetric double ridge structure emerged when, as in previous analyses, jet-like correlations from low-multiplicity events were subtracted.
The v2 for muons, vμ2, in high-multiplicity events was obtained by dividing out the v2 of charged particles measured at mid-rapidity from the second-order two-particle Fourier coefficient, under the assumption that it factorizes into a product of muon v2 and charged-particle v2. The vμ2 coefficients were found to have a similar dependence on transverse momentum (pT) in p-going and Pb-going directions, with the Pb-going coefficients larger by about 16±6%, more or less independent of pT within the uncertainties of the measurement. The dominant contribution to the uncertainty arose from the correction for jet-like correlations affecting the extraction of v2.
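The factorization step described above amounts to a single division: the measured two-particle Fourier coefficient is assumed to be the product of the muon v2 and the charged-particle v2, so dividing by the latter isolates the former. A minimal sketch under that assumption (names are illustrative, not ALICE analysis code):

```python
# Sketch of the factorization assumption: the second-order two-particle
# coefficient V2Delta(mu, h) = v2(mu) * v2(h), so dividing by the
# charged-particle v2 at mid-rapidity gives the muon v2. Illustrative only.

def muon_v2(v2_delta_mu_h, v2_charged):
    """v2_delta_mu_h: second-order two-particle Fourier coefficient
    for muon-hadron correlations; v2_charged: charged-particle v2
    measured at mid-rapidity."""
    if v2_charged == 0:
        raise ValueError("charged-particle v2 must be non-zero")
    return v2_delta_mu_h / v2_charged

# Example with made-up numbers:
print(muon_v2(0.005, 0.1))  # prints 0.05
```

The same factorization ansatz underlies the standard two-particle v2 extractions at mid-rapidity; its validity is itself a subject of study, which is one reason the jet-like correlation subtraction dominates the uncertainty.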
The results add further support to the hydrodynamic picture, and are in qualitative agreement with model calculations incorporating final-state effects. At high pT (> 2 GeV/c), the measurement is sensitive to a contribution from heavy-flavour decays, and hence may be used to constrain the v2 of D mesons from calculations.
LHCb has significantly improved the trigger for the experiment during Run 2 of the LHC. The detector is now calibrated in real time, allowing the best possible event reconstruction in the trigger, with the same performance as the Run 2 offline reconstruction. The improved trigger allows event selection at a higher rate and with better information than in Run 1, providing a significant advantage in the hunt for new physics in Run 2.
The trigger consists of two stages: a hardware trigger that reduces the 40 MHz bunch-crossing rate to 1 MHz, and a two-stage high-level software trigger, HLT1 and HLT2 (figure 1). In HLT1, a quick reconstruction is performed before further event selection. Here, dedicated inclusive triggers for heavy-flavour physics use multivariate approaches. HLT1 also selects an inclusive muon sample, and exclusive lines select specific decays. This trigger typically takes 35 ms/event and writes out events at about 150 kHz.
In Run 1, 20% of events were deferred and processed with the HLT between fills. For Run 2, all events that pass HLT1 are deferred while a real-time alignment is run, so minimizing the time spent using sub-optimal conditions. The spatial alignments of the vertex detector – the VELO – and the tracker systems are evaluated in a few minutes at the beginning of each fill. The VELO is reinserted for stable collisions in each fill, so the alignment could vary from one fill to another; figure 2 shows the variation for the first fills of Run 2. In addition, the calibration of the Cherenkov detectors and the outer tracker are evaluated for each run. The quality of the calibration allows the offline performance, including the offline track reconstruction, to be replicated in the trigger, thus reducing systematic uncertainties in LHCb’s results.
The second stage of the software trigger, HLT2, now writes out events for offline storage at about 12.5 kHz (compared to 5 kHz in Run 1). There are nearly 400 trigger lines. Beauty decays are typically found using multivariate analysis of displaced vertices. There is also an inclusive trigger for D* decays, and many lines for specific decays. Events containing leptons with a significant transverse momentum are also selected.
A new trigger stream – the “turbo” stream – allows candidates to be written out without further processing. Raw event data are not stored for these candidates, reducing disk usage. All of this enables a very quick data analysis. LHCb has already used data from this stream for a preliminary measurement of the J/ψ cross-section in √s = 13 TeV collisions (CERN Courier September 2015 p11).
This is an event view of the highest energy neutrino detected so far by the IceCube experiment based at the South Pole (CERN Courier December 2014 p30). Each sphere is one optical sensor; the coloured spheres show those that observed light from this event. The sizes show how many photons each module observed, while the colour gives some idea of the arrival time of the first photon, from red (earliest) to blue (latest). It is easy to see that the track is going slightly upward (by about 11.5°), so the muon cannot be from a cosmic-ray air shower; it must be from a neutrino. The event, detected on 11 June 2014, was in the form of a through-going muon, which means that the track originated and ended outside of the detector’s volume. So, IceCube cannot measure the total energy of the neutrino, but rather the muon’s specific energy loss (dE/dx). While the team is still working on estimating the neutrino energy, the total energy loss visible in the detector was 2.6±0.3 PeV.