The Antihydrogen TRAP (ATRAP) experiment at CERN’s Antiproton Decelerator has reported a new measurement of the antiproton’s magnetic moment, made with an unprecedented uncertainty of 4.4 parts per million (ppm) – a result 680 times more precise than previous measurements. This dramatic gain in precision stems from the experiment’s ability to trap individual protons and antiprotons, as well as from the use of a large magnetic gradient to gain sensitivity to the tiny magnetic moment.
By applying its single-particle approach to the study of antiprotons, the ATRAP experiment has been able to make precise measurements of the charge, mass and magnetic moment of the antiproton. Using a Penning trap, the antiproton is suspended at the centre of an iron ring-electrode that is sandwiched between copper electrodes. Thermal contact with liquid helium keeps the electrodes at 4.2 K, providing a nearly perfect vacuum that eliminates the stray matter atoms that could otherwise annihilate the antiproton. Static and oscillating voltages applied to the electrodes allow the antiproton to be manipulated and its properties to be measured.
The result is part of an attempt to understand the matter–antimatter imbalance of the universe. In particular, a comparison of the antiproton’s magnetic moment with that of the proton tests the Standard Model and its CPT theorem at high precision. The ATRAP team found that the magnetic moments of the antiproton and proton are “exactly opposite”: equal in strength but opposite in direction with respect to the particle spins, consistent with the prediction of the Standard Model and the CPT theorem to 5 parts per million.
However, the potential for much greater measurement precision puts ATRAP in a position to test the Standard Model prediction much more stringently. Combining the single-particle methods with new quantum methods that make it possible to observe individual antiproton spin flips should make it feasible to compare an antiproton and a proton to 1 part per billion or better.
The “winter” conferences earlier this year saw the LHCb collaboration present three important results from its increasingly precise search for new physics.
One fascinating area of study is the quantum-mechanical process in which neutral mesons such as the D⁰, B⁰ and B⁰s can oscillate between their particle and antiparticle states. The B⁰s mesons oscillate with by far the highest frequency – about 3 × 10¹² times per second, on average about nine times during their lifetime. In an updated study, the collaboration looked at the decays of B⁰s mesons into D⁻s π⁺, with the D⁻s decays reconstructed in five different channels. While the B⁰s oscillation frequency Δms has been measured before, the oscillations themselves had previously been seen only by folding the decay-time distribution onto itself at the period of the measured oscillation. In this updated analysis the oscillation pattern is spectacularly visible over the full decay-time distribution, as figure 1 shows. The measured value of the oscillation frequency is Δms = 17.768 ± 0.023 ± 0.006 ps⁻¹, the most precise in the world (LHCb collaboration 2013a).
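A quick back-of-envelope check shows how these numbers fit together: Δms is an angular frequency, so the cycle rate is Δms/2π. In Python (the B⁰s lifetime of about 1.5 ps assumed below is a standard value, not a number quoted here):

```python
# Back-of-envelope check of the quoted oscillation numbers.
import math

dm_s = 17.768   # measured Delta m_s in ps^-1 (an angular frequency)
tau = 1.5       # assumed B0s mean lifetime in ps (standard value, not from the article)

freq_hz = dm_s / (2 * math.pi) * 1e12   # oscillation cycles per second
cycles = dm_s * tau / (2 * math.pi)     # full cycles per mean lifetime
print(f"frequency ~ {freq_hz:.1e} Hz")                      # ~2.8e12, i.e. ~3e12
print(f"cycles per lifetime ~ {cycles:.1f}")                # ~4.2
print(f"particle<->antiparticle flips ~ {2 * cycles:.1f}")  # ~8.5, i.e. ~9
```

The factor of two between cycles and flips suggests that the “about nine times” quoted above counts particle–antiparticle transitions, i.e. half-periods.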
CP violation can occur in the B⁰s sector – in the interference between the oscillation and decay of the meson – but it is expected to be a small effect in the Standard Model. Knowledge of such CP-violating parameters is important because they set the scale of the difference between the properties of matter and antimatter; they may also reveal effects of physics beyond the Standard Model. LHCb had previously reported on a study of B⁰s decays into J/ψ φ and J/ψ π⁺π⁻ final states; now the analysis has been finalized. One important improvement is in the flavour tagging, which determines whether the initial state was produced as a B⁰s or anti-B⁰s meson. This decision was previously based on “opposite-side” tagging, i.e. on measuring the particle/antiparticle nature of the other b quark produced in conjunction with the B⁰s. The collaboration has now achieved improved sensitivity by including “same-side” tagging, based on the charge of a kaon produced close to the B⁰s as a result of the anti-s quark produced in conjunction with it. This increases the statistical power of the tagging by about 40%. The values of the CP-violating parameter φs, together with the difference in width of the heavy and light B⁰s mass states, ΔΓs, are shown in figure 2, which also indicates the small allowed region for these two parameters, corresponding to φs = 0.01 ± 0.07 ± 0.01 rad and ΔΓs = 0.106 ± 0.011 ± 0.007 ps⁻¹ (LHCb collaboration 2013b).
Lastly, the collaboration has opened a door to important future measurements with a first study of the time-dependent CP-violating asymmetry in hadronic B⁰s meson decays into a φφ pair, a process that is mediated by a so-called penguin diagram in the Standard Model. Both φ mesons decay in turn into a K⁺K⁻ pair. The invariant-mass spectrum of the four-kaon final state shows a clean signal of about 880 B⁰s → φφ decays. A first measurement of the CP-violating phase φs for this decay indicates that it lies in the interval (–2.46, –0.76) rad at 68% confidence level. This is consistent, at the level of 16% probability, with the small value predicted in the Standard Model. Although the current precision is limited, this will become a very interesting measurement with the increased statistics from further data-taking (LHCb collaboration 2013c).
These results represent the most precise measurements to date, based on data corresponding to the 1 fb⁻¹ of integrated luminosity that LHCb collected in 2011. They are in agreement with the Standard Model predictions and significantly reduce the parameter region in which signs of new physics can still hide.
In a striking and unexpected observation from new studies aimed at understanding the anomalous Y(4260) particle, the international team that operates the Beijing Spectrometer (BESIII) experiment at the Beijing Electron–Positron Collider (BEPCII) has reported that the Y(4260) decays to a new, and perhaps even more mysterious, particle that they have named the Zc(3900).
The Y(4260) has mystified researchers since its discovery by the BaBar collaboration at SLAC in 2005. While other particles with certain similarities have long been successfully explained as bound states of a charmed quark and an anticharmed quark, attempts to incorporate the Y(4260) into this model have failed and its underlying nature remains unknown. In December 2012, the BESIII team embarked on a programme to produce large numbers of Y(4260) particles by annihilating electrons and positrons with a total energy tuned to the particle’s mass. Previous studies had used electron–positron collisions at a higher energy, where the Y(4260) mesons were produced via the relatively rare process in which either the original electron or positron first radiated a high-energy photon, thereby lowering the total annihilation energy to the mass region of the Y(4260). By tuning the beam energies to the particle’s mass, BEPCII can instead produce the Y(4260) directly and more efficiently. During the first two weeks of the programme, BESIII already collected the world’s largest sample of Y(4260) decays and by the end of the first month there was strong evidence pointing to the existence of the Zc(3900).
The anomalous charmonium particles – such as the Y(4260) and, now, the Zc(3900) – appear to be members of a new class of recently discovered particles. Called the XYZ mesons, they are adding new dimensions to the study of the strong force. QCD, the theory of the strong force, allows more possibilities for charmonium mesons than simply a charmed quark bound to an anticharmed quark. One possibility is that gluons may exist inside mesons in an excited state, a configuration referred to as “hybrid charmonium”. An alternative is that more than just a charmed and anticharmed quark may be bound together to form a “tetraquark” or a molecule-like meson.
Some progress has been made recently in using lattice QCD to account for the existence of the Y(4260) as a state of hybrid charmonium. However, the hybrid picture cannot explain the newly discovered Zc(3900), which decays into a charged pion plus a neutral J/ψ. To decay in this way, the Zc(3900) must contain a charmed quark and an anticharmed quark (to form the J/ψ) together with something that carries charge – and which therefore cannot be a gluon. To have nonzero charge, the Zc(3900) must also contain lighter quarks and so cannot be a hybrid. Different theoretical models have been proposed that attempt to explain how this could come about. The positively charged Zc(3900) could be a tightly bound four-quark composite of a charmed and anticharmed quark pair plus an additional up quark and antidown quark. Or, perhaps, the Zc(3900) is a molecule-like structure comprising two mesons, each of which contains a charmed quark (or anticharmed quark) bound to a lighter antiquark (or quark). Another scenario is that the Zc(3900) is an artefact of the interaction between these two mesons.
Whatever the explanation, the appearance of such an exotic state in the decay of another exotic state was not anticipated by most researchers. Now, the ball is clearly in the experimenters’ court and there is much hope – among theorists and experimenters alike – that with more data, the veil that continues to shroud these mysterious particles can be lifted.
• The Beijing Spectrometer (BESIII) collaboration has some 350 members from 50 institutions in 11 countries.
The international Borexino collaboration has released results from a new measurement of geoneutrinos, corresponding to 1352.60 live days and about 187 tonnes of liquid scintillator after all selection criteria have been applied (3.7 × 10³¹ proton × year). This is 2.4 times the exposure of the measurement made in 2010.
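As a rough plausibility check on the exposure figure, the live time and target mass can be combined in a couple of lines; the proton density assumed below (roughly 6 × 10²⁸ free-proton targets, i.e. hydrogen nuclei, per tonne of pseudocumene-based scintillator) is an assumption for illustration, not a number from the article:

```python
# Rough plausibility check of the quoted Borexino exposure.
livetime_years = 1352.60 / 365.25   # live days -> years
free_protons = 187 * 6e28           # 187 tonnes x assumed free protons per tonne
print(f"exposure ~ {free_protons * livetime_years:.1e} proton x year")
# -> ~4.2e31, the same ballpark as the quoted 3.7e31 (the fiducial mass
#    actually varied over the data-taking period)
```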
Borexino is a liquid-scintillator detector located underground at the INFN Gran Sasso National Laboratory in central Italy and built principally to detect solar neutrinos. However, because of its high level of radiopurity – unmatched elsewhere in the world – it can also detect rare events such as the interactions of geoneutrinos. These are electron-antineutrinos produced in the decays of long-lived radioactive elements (⁴⁰K, ²³⁸U and ²³²Th) in the Earth’s interior.
From the data collected, 46 electron-antineutrino candidates have been found, about 30% of them geoneutrinos. Borexino has also detected electron-antineutrinos from nuclear power plants around the world. These latter antineutrinos give a signal of about 31 events, in good agreement with the number expected from the 446 nuclear cores operating during the period of interest (December 2007 to August 2012) and from current knowledge of the parameters of neutrino oscillations. The total expected background for electron-antineutrinos in Borexino is about 0.7 events – a result of the high level of radiopurity of the liquid scintillator. For the current measurement, the null geoneutrino hypothesis has a probability of 6 × 10⁻⁶.
The detection of geoneutrinos offers a unique tool to probe uranium and thorium abundances within the mantle. By considering the contribution from the local crust (around the Gran Sasso region) and the rest of the crust to the geoneutrino signal, the signal from the radioactivity of uranium and thorium in the mantle can be extracted. The latest results from Borexino, together with the measurement by the KamLAND experiment in Japan, indicate a signal from the mantle of 14.1 ± 8.1 TNU (1 TNU = 1 event/year/10³² protons).
These new results mark a breakthrough in understanding the origin and thermal evolution of the Earth. The good agreement between the thorium-to-uranium ratios determined from geoneutrino signals and the value obtained from chondritic meteorites has fundamental implications for cosmochemical models and the processes of planetary formation in the early Solar System.
By measuring the geoneutrino flux at the surface, the contribution of radioactive elements to the Earth’s heat budget can be explored. The radiogenic heat is of great interest for understanding a number of geophysical processes, such as mantle convection and plate tectonics. For the first time, two independent geoneutrino detectors – Borexino and KamLAND, located at different sites around the planet – are providing the same constraints on the radiogenic heat power of the Earth set by the decays of uranium and thorium. With these latest results, fitting the data to a possible georeactor, the Borexino collaboration constrains its output power to less than 4.5 TW at 95% confidence level.
The OPERA experiment at Gran Sasso has observed a third instance of neutrino oscillation in appearance mode: a muon-neutrino produced at CERN detected as a τ neutrino in the Gran Sasso laboratory. This extremely rare type of event had been observed only twice before.
OPERA, which is run by an international collaboration involving 140 physicists from 28 research institutes in 11 countries, was set up for the specific purpose of discovering neutrino oscillations of this kind. A beam of neutrinos produced at CERN travels towards the INFN Gran Sasso National Laboratory some 730 km away. Because they interact only weakly, the neutrinos arrive almost unperturbed at the giant OPERA detector, which consists of more than 4000 tonnes of material, has a volume of some 2000 m³ and contains nine million photographic plates. After the first neutrinos arrived at Gran Sasso in 2006, the experiment gathered data for five consecutive years, from 2008 to 2012. The first τ neutrino was observed in 2010, the second in 2012.
The arrival of the τ neutrino is an important confirmation of the two previous observations. Statistically, the observation of three τ neutrinos enables the collaboration to claim confidently that muon neutrinos oscillate to τ neutrinos. Data analysis is set to continue for another two years.
In the history of particle physics, July 2012 will feature prominently as the date when the ATLAS and CMS collaborations announced that they had discovered a new particle with a mass near 125 GeV in studies of proton–proton collisions at the LHC. The discovery followed just over a year of dedicated searches for the Higgs boson, the particle linked to the Brout-Englert-Higgs mechanism that endows elementary particles with mass. At this early stage, the phrase “Higgs-like boson” was the recognized shorthand for a boson whose properties were yet to be fully investigated. The outstanding performance of the LHC in the second half of 2012 delivered four times as much data at 8 TeV in the centre of mass as were used in the “discovery” analyses. Thus equipped, the experiments were able to present new results at the 2013 Rencontres de Moriond in March, giving the particle-physics community enough evidence to name this new boson “a Higgs boson”.
At the Moriond meeting, in addition to a suite of final results from the experiments at Fermilab’s Tevatron on the same subject, the ATLAS and CMS collaborations presented preliminary new results that further elucidate the nature of the particle discovered just eight months earlier. The collaborations find that the new particle is looking more and more like a Higgs boson. However, it remains an open question whether this is the Higgs boson of the Standard Model of particle physics, or one of several such bosons predicted in theories that go beyond the Standard Model. Finding the answer to this question will require more time and data.
This brief summary provides an update of the measurements of the properties of the newly discovered boson using, in most cases, the full proton–proton collision data sample recorded by the ATLAS and CMS experiments in 2011 and 2012 for the H→γγ, H→ZZ(*)→4l, H→WW(*)→lνlν, H→τ⁺τ⁻ and H→bb channels, corresponding to integrated luminosities of up to 5 fb⁻¹ at √s = 7 TeV and up to 21 fb⁻¹ at √s = 8 TeV. In the intervening time, CMS and ATLAS have also developed searches for rarer decays – such as H→Zγ or H→μ⁺μ⁻ – and for invisible or undetectable decays expected in theories beyond the Standard Model.
Whether or not the new particle is a Higgs boson is revealed by how it interacts with other particles, as well as by its own quantum properties. For example, a Higgs boson is postulated to have no spin and, in the Standard Model, its parity – a measure of how its mirror image behaves – should be positive. ATLAS and CMS have compared a number of alternative spin-parity (JP) assignments for this particle and, in pairwise hypothesis tests, the hypothesis of zero spin and positive parity (0⁺) is consistently favoured, as summarized in table 1.
In CMS, the presence of a signal has been established in each of several expected decay channels. The H→γγ and H→ZZ(*)→4l channels point to a mass between 125.4 GeV and 125.8 GeV. For mH = 125 GeV, an excess of 4.1σ is observed in the H→WW(*)→lνlν channel and there are remarkable positive results in the decays to b quarks (2.2σ) and τ leptons (2.9σ), an important hint that this Higgs boson also couples to fermions. As expected in the Standard Model, the search for H→Zγ has not yielded a signal – nevertheless constraining the possibilities of models beyond the Standard Model.
Apart from exploiting the larger set of 8 TeV data, the CMS analyses have benefited from many improvements since the discovery announcement, from revised calibration constants to more sensitive analysis methods. In the H→γγ and H→ZZ(*)→4l channels, the largest difference is the use of event classes with specific topologies to exploit the associated production modes. In these channels, the mass measurement has also benefited from improved energy and momentum resolution. Figure 1 shows the data entering the H→ZZ(*)→4l analysis and gives a sense of how individual events build up to a 6.7σ excess and how their mass resolution (also depicted) allows a measurement of the mass of the new boson at 125.8 ± 0.5(stat.) ± 0.2(syst.) GeV, the precision being dominated by statistics and already better than 0.5%. This mass measurement is in remarkable agreement with the value of 125.4 ± 0.5(stat.) ± 0.6(syst.) GeV measured in the H→γγ channel, where the excess has a significance of 3.2σ. The updated CMS analysis of H→γγ, which takes advantage of the improved detector calibration, yields a result close to that expected for the Standard Model Higgs boson in terms of signal strength, μ = σ/σSM = 0.78 +0.28/−0.26.
In figure 2, an overview of the main decays studied in CMS shows how evidence for a Higgs boson can be seen in each channel with individual significances ranging from 2.2σ to 6.7σ. With respect to the results presented by CMS last July, there are slight differences in the individual signal strengths: smaller in the H→γγ channel and larger in the H→bb and H→τ+τ– channels. These results strongly indicate that it is a Higgs boson. Overall, the results continue to be fully compatible with the expectation for a Standard Model Higgs boson, while within the current uncertainties many scenarios of physics beyond the Standard Model are still allowed.
For ATLAS, the combined signal strength for H→γγ, H→ZZ(*)→4l, H→WW(*)→lνlν and H→τ⁺τ⁻ has been determined to be μ = 1.30 ± 0.13(stat.) ± 0.14(syst.) at the new mass measurement of 125.5 ± 0.2(stat.) +0.5/−0.6(syst.) GeV. The collaboration has also measured the ratio of the cross-sections for vector-boson mediated and (predominantly) gluon-initiated processes for producing a Higgs boson, as shown in figure 3. Measurements of relative branching-fraction ratios between the H→γγ, H→ZZ(*)→4l and H→WW(*)→lνlν channels, as well as combined fits testing the fermion and vector coupling sector, couplings to W and Z and loop-induced processes of the Higgs-like boson, show no significant deviation from the Standard Model expectation, as figure 4 shows.
Figure 3 compares a summary of the combined results for Higgs production with the Standard Model expectation and demonstrates an overall consistency. Here, a common signal-strength scale factor, μggF+ttH, has been assigned to the gluon-fusion (ggF) and the small ttH production modes because they both scale predominantly with the Yukawa coupling of the top quark in the Standard Model. For the combination, vector-boson-fusion-like events and gluon-fusion-like events are distinguished within the individual analyses based on the kinematic properties of the event. The combined measured ratio of production scaling factors, μVBF/μggF+ttH = 1.2 +0.7/−0.5, driven by the H→γγ channel measurement, gives more than 3σ evidence for Higgs-boson production through vector-boson fusion.
Having demonstrated overall consistency in terms of production, five tests of the observed coupling scale factors are summarized in figure 4. This shows the overall consistency with the Standard Model hypothesis and places limits on various model extensions for the produced Higgs boson. These tests are implemented according to recommendations from the Higgs Cross-Section Working Group. The ATLAS results assume a single, narrow CP-even Higgs resonance at mH = 125.5 GeV with coupling strengths that may depart from the Standard Model in various prescribed ways. For example, the relative vector boson and fermion coupling strengths (labelled κV, κF) are allowed to vary, giving the experimental constraints on the relative deviation of these quantities shown in the upper section of figure 4. The current results are not powerful enough to resolve the ambiguity in the relative sign of κV and κF. Considering κV > 0, κF has a double minimum leading to the observed structure in the intervals allowed by the data. Figure 4 also shows the results for other benchmark parameterizations where no assumption is made on the total width for the fermion-to-boson coupling-strength ratio (labelled λFV) and where the ratio of W-to-Z couplings is tested (labelled λWZ). Scenarios for physics beyond the Standard Model contributions via loops (labelled κg, κγ) and via invisible or undetectable decays (labelled Bi,u) can similarly be compared with the intervals allowed by the data.
After eight months, and thanks to the extraordinary performance of the LHC, the ATLAS and CMS collaborations have revealed more of the true nature of a new boson that is unique in the Standard Model. The more detailed picture that ATLAS and CMS have put together on this newborn boson since July 2012 remains unfailingly consistent with expectations drawn from the Standard Model, with the spin, parity, relative couplings, production and decay mechanisms all consistent at the current level of precision. Using the latest data, alternative hypotheses have been tested but none of them is found to be preferred over the Standard Model; rare decays have been searched for but, as expected in the Standard Model, no evidence for a signal has been found. The more similar this Higgs boson is to the Standard Model expectation, the more time, data and ingenuity will be required in the analyses of the LHC data to provide hints of physics at work beyond the Standard Model. Ultimately, upgraded and new accelerators will be needed to understand the interactions of the Higgs boson at a deeper level but for now it is clear that this boson is a precious thread with which we can hope to unravel more of the remaining mysteries of the universe.
Heavy-ion collisions are used at CERN and other laboratories to re-create conditions of high temperature and high energy density, similar to those that must have characterized the first instants of the universe, after the Big Bang. Yet heavy-ion collisions are not all equal. Because heavy ions are extended objects, the system created in a central head-on collision is different from that created in a peripheral collision, where the nuclei just graze each other. Measuring just how central such collisions are at the LHC is an important part of the studies by the ALICE experiment, which specializes in heavy-ion physics. The centrality determination provides a tool to compare ALICE measurements with those of other experiments and with theoretical calculations.
Centrality is a key parameter in the study of the properties of QCD matter at extreme temperature and energy density because it is related directly to the initial overlap region of the colliding nuclei. Geometrically, it is defined by the impact parameter, b – the distance between the centres of the two colliding nuclei in a plane transverse to the collision axis (figure 1). Centrality is thus related to the fraction of the geometrical cross-section that overlaps, which is proportional to πb²/π(2RA)², where RA is the nuclear radius. It is customary in heavy-ion physics to characterize the centrality of a collision in terms of the number of participants (Npart), i.e. the number of nucleons that undergo at least one collision, or in terms of the number of binary collisions among nucleons from the two nuclei (Ncoll). The nucleons that do not participate in any collision – the spectators – essentially keep travelling undeflected, close to the beam direction.
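The geometric relation can be made concrete in a few lines of Python; the nuclear-radius parameterization RA ≈ 1.2 fm × A^(1/3) is a standard assumption used here for illustration, not a value from the article:

```python
# Illustrative sketch of the geometric centrality definition above:
# centrality fraction ~ pi*b^2 / (pi*(2*R_A)^2) = (b / (2*R_A))^2.
A = 208                    # mass number of lead
R_A = 1.2 * A ** (1 / 3)   # assumed nuclear radius in fm (~7.1 fm)

def centrality_fraction(b_fm):
    """Geometric centrality (0 = head-on, 1 = grazing) at impact parameter b."""
    return (b_fm / (2 * R_A)) ** 2

for b in (0.0, 3.5, 7.0, 14.0):
    print(f"b = {b:4.1f} fm -> centrality ~ {100 * centrality_fraction(b):3.0f}%")
```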
However, neither the impact parameter nor the number of participants, spectators or nucleon–nucleon collisions is directly measurable. This means that experimental observables are needed that can be related to these geometrical quantities. One such observable is the multiplicity of the particles produced in the collision in a given rapidity range around mid-rapidity; this multiplicity decreases monotonically as the impact parameter increases, i.e. it is highest for the most central collisions. A second useful observable is the energy carried by the spectators close to the beam direction and deposited – in the case of the ALICE experiment – in the Zero Degree Calorimeter (ZDC); this decreases for more central collisions, as shown in the upper part of figure 2.
Experimentally, centrality is expressed as a percentage of the total nuclear interaction cross-section, e.g. the 10% most central events are the 10% that have the highest particle multiplicity. But how much of the total nuclear cross-section is measured in ALICE? Are the events detected only hadronic processes or do they include something else?
ALICE collected data during the LHC’s periods of lead–lead running in 2010 and 2011 using interaction triggers with an efficiency high enough to explore the entire sample of hadronic collisions. However, because of the strong electromagnetic fields generated as the relativistic heavy ions graze each other, the event sample is contaminated by background from electromagnetic processes, such as pair-production and photonuclear interactions. These processes, which are characterized by low-multiplicity events with soft (low-momentum) particles close to mid-rapidity, produce events that resemble peripheral hadronic collisions and must be rejected to isolate hadronic interactions. Part of the contamination is rejected by requiring that both nuclei break up in the collision, producing a coincident signal on both sides of the ZDC. The remaining contamination is estimated using events generated by a Monte Carlo simulator of electromagnetic processes (e.g. STARLIGHT). This shows that for about 90% of the hadronic cross-section, the purity of the event sample and the efficiency of the event selection are 100%. Nevertheless, the most peripheral events – the remaining 10% of the total – are contaminated by electromagnetic processes and affected by trigger inefficiency, and must be used with special care in the physics analyses.
The centrality of each event in the sample of hadronic interactions can be classified using the measured particle multiplicity and the spectator energy deposited in the ZDC. Various detectors in ALICE measure quantities that are proportional to the particle multiplicity, with different detectors covering different regions in pseudo-rapidity (η). Several of these – e.g. the time-projection chamber (covering |η| < 0.8), the silicon pixel detector (|η| < 1.4), the forward multiplicity detector (1.7 < η < 5.0 and –3.4 < η < –1.7) and the V0 scintillators (2.8 < η < 5.1 and –3.7 < η < –1.7) – are used to study how the centrality resolution depends on the acceptance and other possible detector effects (saturation, energy cut-off, etc.). The percentiles of the hadronic cross-section are determined for any value of the measured particle multiplicity (or of something proportional to it, e.g. the V0 amplitude) by integrating the measured distribution, which can then be divided into classes by defining sharp cuts that correspond to well defined percentile intervals of the cross-section, as indicated in the lower part of figure 2 for the V0 detectors.
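A minimal sketch of this slicing into percentiles, with an invented toy distribution standing in for the real V0 amplitude, might look as follows:

```python
# Toy version of the percentile-slicing procedure described above.
import numpy as np

rng = np.random.default_rng(0)
mult = rng.gamma(shape=0.7, scale=3000.0, size=100_000)  # invented "V0 amplitude"

# The 0-10% most central class is, by definition, the 10% of events with the
# highest multiplicity, so class boundaries are upper quantiles of the data.
c10, c20, c30 = np.quantile(mult, [0.9, 0.8, 0.7])
print(f"0-10%  most central: multiplicity > {c10:.0f}")
print(f"10-20% : {c20:.0f} to {c10:.0f}")
print(f"20-30% : {c30:.0f} to {c20:.0f}")
```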
Alternatively, measuring the energy deposited in the ZDC by the spectator particles in principle allows direct access to the number of participants (all of the nucleons minus the spectators). However, some spectator nucleons are bound into light nuclear fragments that, with a charge-to-mass ratio similar to that of the beam, remain inside the beam-pipe and are therefore undetected by the ZDC. This effect becomes quantitatively important for peripheral events because they have a large number of spectators, so the ZDC cannot be used alone to give a reliable estimate of the number of participants. Consequently, the information from the ZDC needs to be correlated to another quantity that has a monotonic relation with the participants. The ALICE collaboration uses the energy of the secondary particles (essentially photons produced by pion decays) measured by two small electromagnetic calorimeters (ZEM). Centrality classes are defined by cuts on the two-dimensional distribution of the ZDC energy as a function of the ZEM amplitude for the most central events (0–30%) above the point where the correlation between the ZDC and ZEM inverts sign.
So how can the events be partitioned? Should the process be based on 0–1% or 0–10% classes? And what is the best way to estimate the centrality? These questions relate to the issue of centrality resolution. The number of centrality classes that can be defined is connected to the resolution achieved by the centrality estimation. In general, centrality classes are defined so that the separation between the central values of the participant distributions for two adjacent classes is significantly larger than the resolution for the variable used for the classification.
The real resolution
In principle, the resolution is given by the difference between the true centrality and the value estimated using a given method. In reality, the true centrality is not known, so how can the resolution be measured? ALICE tested its procedure on simulations using the event generator HIJING, which is widely used and tested on hadronic processes, together with a full-scale simulation of the detector response based on the GEANT toolkit. In HIJING events the impact parameter – and hence the true centrality – is known for every event. The full GEANT simulation yields the signals in the detectors for each event, from which the centrality estimators can be calculated. The real centrality resolution for a given event is then the difference between the estimated and the true centrality.
In the real data, the true centrality is approximated with an iterative procedure that evaluates, event by event, the average centrality measured by all of the estimators. The correlation between the various estimators is excellent, resulting in a high centrality resolution. Since the resolution depends on the rapidity coverage of the detector used, the best result – achieved with the V0 detector, which has the largest pseudo-rapidity coverage in ALICE – ranges from 0.5% in central collisions to 2% in peripheral ones, in agreement with the estimate from simulations. This high resolution is confirmed by the analysis of elliptic flow and two-particle correlations, where the results, which address geometrical aspects of the collisions, change visibly between 1% centrality bins (figure 3).
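The logic behind comparing estimators can be illustrated with a toy model in which several estimators see the same true centrality smeared by independent noise, and the event-by-event average across estimators serves as a proxy for the truth; the detector names and noise levels below are invented for illustration:

```python
# Toy illustration of the multi-estimator resolution study described above.
import numpy as np

rng = np.random.default_rng(1)
true_c = rng.uniform(0, 100, 50_000)                     # true centrality (%)
noise = {"V0": 0.6, "SPD": 1.0, "TPC": 1.2, "FMD": 1.5}  # invented resolutions (%)
estimates = {d: true_c + rng.normal(0, s, true_c.size) for d, s in noise.items()}

avg = np.mean(list(estimates.values()), axis=0)          # proxy for the truth
for det, est in estimates.items():
    print(f"{det}: r.m.s. of (estimate - average) = {np.std(est - avg):.2f}%")
```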
So much for the experimental classification of the events in percentiles of the hadronic cross-section. This leaves one issue remaining: how to relate the experimental observables (particle multiplicity, zero-degree energy) to the geometry of the collision (impact parameter, Npart, Ncoll). What is the mean number of participants in the 10% most central events?
To answer this question requires a model. HIJING is not used in this case, because the simulated particle multiplicity does not agree with the measured one. Instead, ALICE uses a much simpler approach, the Glauber model. This is a simple technique, widely used in heavy-ion physics from the Alternating Gradient Synchrotron at Brookhaven, to CERN’s Super Proton Synchrotron, to Brookhaven’s Relativistic Heavy-Ion Collider. It uses a few assumptions to describe heavy-ion collisions and to couple the collision geometry to the detector signals. First, the two colliding nuclei are described by a realistic distribution of the nucleons inside the nucleus, as measured in electron-scattering experiments (the Woods-Saxon distribution). Second, the nucleons are assumed to follow straight-line trajectories. Third, two nucleons from different nuclei are assumed to collide if their transverse distance is less than that corresponding to the inelastic nucleon–nucleon cross-section. Last, the same cross-section is used for all successive collisions. The model, which is implemented in a Monte Carlo calculation, takes random samples from a geometrical distribution of the impact parameter and for each collision determines Npart and Ncoll.
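A compressed illustration of how such a Glauber Monte Carlo can be coded is sketched below. The Woods-Saxon parameters for lead and the inelastic nucleon–nucleon cross-section are standard values assumed here for illustration, not numbers quoted in the article, and the impact parameters are fixed by hand rather than sampled geometrically:

```python
# Bare-bones Monte Carlo Glauber sketch following the four assumptions above.
import numpy as np

rng = np.random.default_rng(42)
A, R0, a = 208, 6.62, 0.546   # lead: mass number, Woods-Saxon radius/diffuseness (fm)
sigma_nn = 6.4                # assumed inelastic NN cross-section in fm^2 (~64 mb)
d2_max = sigma_nn / np.pi     # two nucleons collide if (transverse distance)^2 < this

def sample_nucleus():
    """Sample A nucleon (x, y) positions from a Woods-Saxon density by rejection."""
    xy = np.empty((0, 2))
    while len(xy) < A:
        r = 3 * R0 * rng.random(4000)
        keep = rng.random(4000) < r**2 / (1 + np.exp((r - R0) / a)) / (2 * R0) ** 2
        r = r[keep]
        cos_t, phi = rng.uniform(-1, 1, r.size), rng.uniform(0, 2 * np.pi, r.size)
        sin_t = np.sqrt(1 - cos_t**2)
        xy = np.vstack((xy, np.column_stack((r * sin_t * np.cos(phi),
                                             r * sin_t * np.sin(phi)))))
    return xy[:A]             # straight-line trajectories: only (x, y) matter

def one_event(b):
    """Return (Npart, Ncoll) for one collision at impact parameter b (in fm)."""
    na, nb = sample_nucleus(), sample_nucleus() + [b, 0.0]
    d2 = ((na[:, None, :] - nb[None, :, :]) ** 2).sum(axis=2)
    hits = d2 < d2_max        # same cross-section for all successive collisions
    return int(hits.any(1).sum() + hits.any(0).sum()), int(hits.sum())

for b in (0.0, 7.0, 12.0):
    n_part, n_coll = one_event(b)
    print(f"b = {b:4.1f} fm -> Npart = {n_part:3d}, Ncoll = {n_coll:4d}")
```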
The Glauber model can be combined with a simple model for particle production to simulate a multiplicity distribution that is then compared with the experimental one. The particle production is simulated in two steps. First, employing a simple parameterization, the number of participants and the number of collisions are used to determine the number of “ancestors”, i.e. independently emitting sources of particles. In the second step, each ancestor emits particles according to a negative binomial distribution (chosen because it describes particle multiplicity in nucleon–nucleon collisions). The simulated distribution describes the experimental one over about 90% of the cross-section, as figure 2 shows.
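Continuing the sketch (and reusing rng and one_event() from the Glauber code above), the two-step production model might be coded as follows; the ancestor fraction f and the negative-binomial parameters are invented here, whereas the real analysis fixes them by fitting the measured multiplicity distribution:

```python
# Toy version of the two-step "ancestor" particle-production model.
f, mu, k = 0.8, 25.0, 1.6   # invented: ancestor mix, NBD mean and shape

def simulated_multiplicity(n_part, n_coll):
    """Total multiplicity from ancestors that each emit an NBD multiplicity."""
    n_ancestors = int(round(f * n_part + (1 - f) * n_coll))
    # numpy's negative_binomial(n, p) has mean n*(1-p)/p, so n=k and
    # p=k/(k+mu) give each ancestor a mean multiplicity of mu
    return int(rng.negative_binomial(k, k / (k + mu), n_ancestors).sum())

n_part, n_coll = one_event(5.0)
print(f"Npart = {n_part}, Ncoll = {n_coll}, "
      f"multiplicity = {simulated_multiplicity(n_part, n_coll)}")
```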
Fitting the measured distribution (e.g. the V0 amplitude) with the distribution simulated using the Glauber model creates a connection between an experimental observable (the V0 amplitude) and the geometrical model of nuclear collisions employed in the model. Since the geometry information (b, Npart, Ncoll) for the simulated distribution is known from the model, the geometrical properties for centrality classes defined by sharp cuts in the simulated multiplicity distribution can be calculated.
The high-quality results obtained in the determination of centrality are directly reflected in the analyses that ALICE performs to investigate the properties of the system that depend strongly on its geometry. Elliptic flow, for example, is a fundamental measurement of the degree of collectivity of the system at an early stage of its evolution, since it directly reflects the initial spatial anisotropy, which is largest at the beginning of the evolution. The quality of the centrality determination allows access to the geometrical properties of the system with very high precision. To remove non-flow effects, which are predominantly short-ranged in rapidity, as well as artefacts of track-splitting, two-particle correlations are calculated in 1% centrality bins with a one-unit gap in pseudo-rapidity. Using these correlations, as well as the multi-particle cumulants (4th, 6th and 8th order), ALICE can extract the elliptic-flow coefficient v2 (figure 3), i.e. the second harmonic coefficient of the azimuthal Fourier decomposition of the momentum distribution (ALICE collaboration 2011). Such measurements have allowed ALICE to demonstrate that the hot and dense matter created in heavy-ion collisions at the LHC behaves like a fluid with almost zero viscosity (CERN Courier April 2011 p7) and to pursue further the hydrodynamic features of the quark–gluon plasma that is formed there.
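The pair-correlation idea behind such measurements can be illustrated with a toy: for independently emitted particles, the pair average of cos 2(φi − φj) equals v2², so the flow coefficient can be recovered without knowing the reaction plane. The sample and input v2 below are invented, and the pseudo-rapidity gap and cumulant machinery of the real analysis are omitted:

```python
# Toy extraction of v2 from two-particle azimuthal correlations.
import numpy as np

rng = np.random.default_rng(7)
v2_in, n = 0.08, 20_000

# Sample angles from dN/dphi ~ 1 + 2*v2*cos(2*phi) by rejection sampling.
phi = np.empty(0)
while phi.size < n:
    x = rng.uniform(0, 2 * np.pi, 2 * n)
    keep = rng.uniform(0, 1 + 2 * v2_in, x.size) < 1 + 2 * v2_in * np.cos(2 * x)
    phi = np.concatenate((phi, x[keep]))[:n]

i, j = rng.integers(0, n, (2, 500_000))   # random particle pairs
mask = i != j                             # drop self-pairs
c2 = np.mean(np.cos(2 * (phi[i[mask]] - phi[j[mask]])))
print(f"v2 from pair correlations: {np.sqrt(max(c2, 0.0)):.3f}")   # ~0.08
```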
Contrary to the stereotype, advances in science are not typically about shouting “Eureka!”. Instead, they are about results that make a researcher say, “That’s strange”. This is what happened 30 years ago, when the European Muon collaboration (EMC) at CERN compared its data on per-nucleon deep-inelastic muon scattering off iron with the corresponding data for the much smaller nucleus of deuterium.
The data were plotted as a function of Bjorken-x, which in deep-inelastic scattering is interpreted as the fraction of the nucleon’s momentum carried by the struck quark. The binding energies of nucleons in the nucleus are several orders of magnitude smaller than the momentum transfers of deep-inelastic scattering, so, naively, such a ratio should be unity except for small corrections for the Fermi motion of nucleons in the nucleus. What the EMC experiment discovered was an unexpected downwards slope to the ratio (figure 1) – as revealed in CERN Courier in November 1982 and then published in a refereed journal the following March (Aubert et al. 1983).
This surprising result was confirmed by many groups, culminating with the high-precision electron- and muon-scattering data from SLAC (Gomez et al. 1994), Fermilab (Adams et al. 1995) and the New Muon collaboration (NMC) at CERN (Amaudruz et al. 1995 and Arneodo et al. 1996). Figure 2 shows representative data. The conclusions from the combined experimental evidence were that: the effect had a universal shape; was independent of the squared four-momentum transfer, Q2; increased with nuclear mass number A; and scaled with the average nuclear density.
A simple picture
The primary theoretical interpretation of the EMC effect – the region x > 0.3 – was simple: quarks in nuclei move throughout a larger confinement volume and, as the uncertainty principle implies, they carry less momentum than quarks in free nucleons. The reduction of the ratio at lower x, named the shadowing region, was attributed either to the hadronic structure of the photon or, equivalently, to the overlap in the longitudinal direction of small-x partons from different nucleons. These notions gave rise to a host of models: bound nucleons are larger than free ones; quarks in nuclei move in quark bags with 6, 9 and even up to 3A quarks, where A is the total number of nucleons. More conventional explanations, such as the influence of nuclear binding, enhancement of pion-cloud effects and a nuclear pionic field, were successful in reproducing some of the nuclear deep-inelastic scattering data.
It was even possible to combine different models to produce new ones; this led to a plethora of models that reproduced the data (Geesaman et al. 1995), causing one of the authors of this article to write that “EMC means Everyone’s Model is Cool”. It is interesting to note that none of the earliest models were that concerned with the role of two-nucleon correlations, except in relation to six-quark bags.
The initial excitement was tempered as deep-inelastic scattering became better understood and the data became more precise. Some of the more extreme models were ruled out by their failure to match well known nuclear phenomenology. Moreover, inconsistency with the baryon and momentum sum rules led to the downfall of many other models. Because some of them predicted an enhanced nuclear sea, the nuclear Drell-Yan process was suggested as a way to disentangle the various possible models. In this process, a quark from a proton projectile annihilates with a nuclear antiquark to form a virtual photon, which in turn becomes a leptonic pair (Bickerstaff et al. 1984). The experiment was done and none of the existing models provided an accurate description of both sets of data – a challenge that remains to this day (Alde et al. 1984).
A significant shift in the experimental understanding of the EMC effect occurred when new data on 9Be became available (Seely et al. 2009). These data changed the experimental conclusion that the EMC effect follows the average nuclear density and instead suggested that the effect follows local nuclear density. In other words, even in deep-inelastic kinematics, 9Be seemed to act like two alpha particles with a single nearly free neutron, rather than like a collection of nucleons whose properties were all modified.
This led experimentalists to ask if the x > 1 scaling plateaux that have been attributed to short-range nucleon–nucleon correlations – a phenomenon that is also associated with high local densities – could be related to the EMC effect. Figure 3 shows the kinematic range of the EMC effect together with the x > 1 short-range correlation (SRC) region. While the dip at x = 1 has been shown to vary rapidly with Q2, the EMC effect and the magnitude of the x > 1 plateaux are basically constant within the Q2 range of the experimental data. Plotting the slope of the EMC effect for 0.3 < x < 0.7 against the magnitude of the x > 1 scaling plateaux for all of the available data, as shown in figure 4, revealed a striking correlation (Weinstein et al. 2011). This phenomenological relationship has led to renewed interest in understanding how strongly correlated nucleons in the nucleus may be affecting the deep-inelastic results.
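The “slope of the EMC effect” here is simply that of a straight-line fit to the per-nucleon cross-section ratio over 0.3 < x < 0.7. A sketch, with invented placeholder points standing in for the published ratios:

```python
# Sketch of the slope extraction; the data points are invented placeholders.
import numpy as np

x = np.array([0.325, 0.375, 0.425, 0.475, 0.525, 0.575, 0.625, 0.675])
ratio = np.array([0.98, 0.97, 0.95, 0.94, 0.92, 0.91, 0.89, 0.88])  # toy EMC ratio

slope, intercept = np.polyfit(x, ratio, 1)
print(f"|dR/dx| = {abs(slope):.2f}")  # this magnitude is what figure 4
                                      # correlates with the x > 1 plateau heights
```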
In February 2013, close to the 30th anniversary of the EMC publication, experimentalists and theorists came together at a special workshop at the Institute for Nuclear Theory at the University of Washington to review understanding of the EMC effect, discuss recent advances and plan new experimental and theoretical efforts. In particular, an entire series of EMC and SRC experiments is planned for the new 12 GeV electron beam at Jefferson Lab, and analysis is underway of new Drell-Yan experimental data from Fermilab.
A new life
Although the EMC effect is now 30 years old, the recent experimental results have given new life to this old puzzle; no longer is Every Model Cool. Understanding the EMC effect implies understanding how partons behave in the nuclear medium. It thus has far-reaching consequences not only for the extraction of neutron information from nuclear targets but also for understanding effects such as the NuTeV anomaly or the excesses in the neutrino cross-sections observed by the MiniBooNE experiment.
Using two independent analyses, the LHCb collaboration has updated its measurement of ΔACP, the difference between the CP asymmetries in the decays D⁰→K⁺K⁻ and D⁰→π⁺π⁻. This helps to cast light on whether – and to what extent – CP violation occurs in interactions involving particles, such as the D⁰, that contain a charm quark.
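In the notation standard in the literature (the definition is not spelled out in the article itself), the observable is

ΔACP = ACP(K⁺K⁻) – ACP(π⁺π⁻), with ACP(f) = [Γ(D⁰→f) – Γ(anti-D⁰→f)] / [Γ(D⁰→f) + Γ(anti-D⁰→f)]

for a final state f; taking the difference of the two asymmetries cancels most production and detection effects, which is what makes ΔACP such a robust observable.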
The new results represent a significant improvement in the measurement of ΔACP, which has emerged as an important means to probe the charm sector. A previous measurement from LHCb was 3.5σ from zero and constituted the first evidence for CP violation in the charm sector (LHCb 2012). Subsequent results from the CDF and Belle collaborations, at Fermilab and KEK, respectively, further strengthened the evidence but not to the 5σ gold standard. Because the size of the effect was larger than expected, the result provoked a flurry of theoretical activity, including new physics models that could enhance such asymmetries and ideas for measurements that could elucidate the origin of the effect.
Both of the new measurements by LHCb use the full 2011 data set, corresponding to an integrated luminosity of 1.0 fb⁻¹ of proton–proton collisions at 7 TeV in the centre of mass. The first uses the same “tagging” technique as all previous measurements, in which the initial flavour of the D meson (D⁰ or anti-D⁰) is inferred from the charge of the pion in the decay D*⁺→D⁰π⁺. The second uses D mesons produced in semimuonic B decays, where the charge of the associated muon provides the tag. The two methods allow for useful cross-checks, in particular for biases that have different origins in the two analyses.
Compared with LHCb’s previous publication on ΔACP, the new pion-tagged analysis uses more data, fully reprocessed with improved alignment and calibration constants (LHCb 2013a). The most important change in the analysis procedure is the application of a vertex constraint, which improves the background suppression by a factor of 2.5. The result, ΔACP = (–0.34 ± 0.15 (stat.) ± 0.10 (syst.))%, is closer to zero than the previous measurement, which it supersedes. Detailed investigations reveal that the shift caused by each change in the analysis is consistent with a statistical fluctuation.
To add to the picture, the muon-tagged analysis also measures a value that is consistent with zero: ΔACP = (+0.49 ± 0.30 (stat.) ± 0.14 (syst.))% (LHCb 2013b). In both analyses, the control of systematic uncertainties around the per mille level is substantiated by numerous cross-checks. As the figure shows, the two new results are consistent with each other and with other results at the 2σ level but do not confirm the previous evidence of CP violation in the charm sector.
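A naive cross-check of that consistency statement – adding statistical and systematic uncertainties in quadrature and ignoring any correlations between the analyses – goes as follows:

```python
# Naive compatibility check of the two new LHCb Delta A_CP results.
import math

pion = (-0.34, math.hypot(0.15, 0.10))   # pion-tagged value (%) and total error
muon = (+0.49, math.hypot(0.30, 0.14))   # muon-tagged value (%) and total error

diff = muon[0] - pion[0]
err = math.hypot(pion[1], muon[1])
print(f"difference = {diff:.2f}% +- {err:.2f}% -> {diff / err:.1f} sigma")
# -> about 2.2 sigma, matching the "consistent at the 2 sigma level" above
```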
Theoretical work has shown that several well motivated models could induce large CP-violation effects in the charm sector. These new results constrain the parameter space of such models. Further updates to this and to related measurements will be needed to discover whether – and at what level – nature distinguishes between charm and anticharm. The full data sample recorded by LHCb before the start of the long shutdown contains more than three times the number of charm decays used in these new analyses, so progress can be anticipated during the shutdown itself.
The TOTEM collaboration has published the first luminosity-independent measurement of the total proton–proton cross-section at a centre-of-mass energy of 7 TeV. This is based on the experiment’s simultaneous studies of both inelastic and elastic scattering in proton collisions at the LHC.
The TOTEM (TOTal cross-section, Elastic scattering and diffraction dissociation Measurement) experiment, which co-habits the intersection region at Point 5 (IP5) with the CMS experiment, is optimized for making precise measurements of particles that emerge from collisions close to the non-interacting beam particles. To study elastic proton–proton (pp) collisions, in which the interacting protons simply change direction slightly, the experiment uses silicon detectors in Roman Pots, which can bring the detectors close to the beam line. For inelastic collisions, where new particles are created, two charged-particle telescopes, T1 and T2, come into play. T1 is based on cathode-strip chambers in two “arms” at about 9 m from IP5; T2 employs gas electron-multiplier (GEM) chambers, in this case in two arms at around 13.5 m from IP5.
The measurements at 7 TeV in the centre of mass are based on data recorded in October 2011 with a special setting of the LHC in which the beams were not squeezed for high luminosity but were left relatively wide and straight. With the Roman Pot detectors moved close to the beam, the TOTEM collaboration measured the differential elastic cross-section, dσ/dt, down to values of the four-momentum transfer squared of |t| = 0.005 GeV². Using the luminosity at IP5 as measured by CMS then gave a value for the elastic pp cross-section, σel = 25.4 ± 1.1 mb (TOTEM collaboration 2013a). Using the optical theorem, which relates dσ/dt at t = 0 to σtot, the measurement of dσ/dt also provided a value for the total cross-section, σtot, and, indirectly, for the inelastic cross-section, since σinel = σtot – σel. This yielded σinel = 73.2 ± 1.4 mb.
To measure σinel more directly, the collaboration has analysed events that have at least one charged particle in the T2 telescope. After applying several corrections and, again, using the luminosity measured by CMS, it arrives at a final result of σinel = 73.7 ± 3.4 mb (TOTEM collaboration 2013b).
The excellent agreement between this value for σinel and the one determined from dσ/dt confirms that the collaboration understands well the systematic uncertainties and corrections used in the analysis, and it allows still more information to be extracted from the data. In particular, because the elastic and inelastic data were collected simultaneously, the optical theorem allows the rates to be combined without any need to know the luminosity. This gives luminosity-independent values of σel = 25.1 ± 1.1 mb, σinel = 72.9 ± 1.5 mb and σtot = 98.0 ± 2.5 mb (TOTEM collaboration 2013c). Using the optical theorem in a complementary way also allows TOTEM to determine the luminosity; the values found are in excellent agreement with those measured by CMS.
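Schematically, the luminosity-independent combination works as sketched below; the ρ parameter (about 0.14 at 7 TeV) and the conversion constant are external inputs assumed here, not numbers from the article:

```python
# Sketch of the luminosity-independent logic: the optical theorem plus the
# measured elastic and inelastic *rates* fix sigma_tot without any
# luminosity input; numerically, the quoted values close neatly.
sigma_el, sigma_inel = 25.1, 72.9                      # mb (quoted above)
print(f"sigma_tot = {sigma_el + sigma_inel:.1f} mb")   # 98.0 mb, as quoted

# With measured event counts N_el, N_inel and the t=0 elastic slope dN_el/dt|0,
# and with (hbar*c)^2 ~ 0.389 mb GeV^2 and rho ~ 0.14 as assumed inputs:
#   sigma_tot = 16*pi*(hbar*c)**2 / (1 + rho**2) * (dN_el/dt|0) / (N_el + N_inel)
#   L         = (N_el + N_inel) / sigma_tot
# The luminosity cancels in sigma_tot and is then itself recovered from the
# total rate, which is how TOTEM cross-checks the CMS luminosity.
```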